Abstract
Background:
Fraudulent research participants undermine the rigor and soundness of research.
Aims:
A case study is presented from a qualitative study in which the research team believed several fraudulent participants fabricated information during interviews about being caregivers for persons living with dementia and chronic wounds.
Materials & Methods:
Participants were recruited through a free online research registry. Individual semi-structured interviews were held virtually.
Results:
The study was paused after the nurse scientist with qualitative methodology experience identified that participants were giving illogical and repetitive responses across interviews. The team developed a revised screening tool to help prevent fraudulent participants from enrolling in the study. None of the data collected were used for analysis.
Discussion:
Information is provided on how the team dealt with the situation, lessons learned for future studies, and recommendations for gerontological nurse researchers.
Conclusion:
Researchers should be aware that some participants misrepresent themselves for financial incentives, which can compromise the soundness of findings. Thorough screening tools are one way to identify and prevent fraud.
Introduction
How often do you conduct qualitative interviews and question whether your research participants are accurately representing themselves?
We know that in our daily lives we must be vigilant so that we don’t fall prey to scammers who creatively try to trick us into revealing personal information, sending money, or clicking on malicious links. Around the world, many of us complete training from our workplace information technology departments on detecting phishing attempts and maintaining cybersecurity, both to prevent the consequences of fraud and to avoid inappropriately compromising research participants’ personal information. However, our research training has often omitted education on how to identify and handle research participants who misrepresent themselves.
Over the last decade there has been some literature describing deception by research participants, particularly around internet recruitment and subsequent survey responses. This goes beyond cases in which respondents are simply careless in filling out surveys, as when Chandler and colleagues (2020) found that 11% of participants erroneously claimed to have been abducted by aliens when the question was concealed among other survey items. Research deception involves impostors gaining access to studies, and the associated financial incentives, by falsely self-reporting that they meet inclusion criteria. Researchers have found that some participants changed their self-reported sexual orientation, biological sex, or birthdate, as well as concealed or pretended to have a health condition, to be eligible for surveys or clinical trials (Chandler & Paolacci, 2017; Devine et al., 2021; Glazer et al., 2021).
Some of these potentially fraudulent participants have been identified by comparing their responses to previous surveys and by identifying participants who exited a survey and then reattempted it with different information that met eligibility criteria (Chandler & Paolacci, 2017; Glazer et al., 2021). Additionally, one research team, with Institutional Review Board approval, used an internet service to confirm participants’ identities and found that 7% of applicants were fraudulent; two people even supplied information from deceased individuals (Glazer et al., 2021). Another research team recruited subjects with a history of deception and found that they openly admitted to concealment, fabrication, and falsification of information to enter studies (Devine et al., 2021; Devine et al., 2013).
Fraudulent participants have negative consequences for clinical research (Chandler et al., 2020). In quantitative research, respondents who lie about their characteristics to gain access to studies pose multiple threats to data integrity: true treatment effects may go undetected, individual differences may be skewed, and correlations may be suppressed, inflated, or reversed in sign (Chandler et al., 2020). Quantitative methods provide guidance on dealing with outliers; qualitative methodology does not. In qualitative research, investigators must decide how to manage data when the authenticity of a person’s words is called into question by inconsistencies in their storyline (Flicker, 2004). Three options exist for the data of a potentially fraudulent participant: exclude the data from analysis, tentatively include the data while keeping in mind that the person’s story may involve misrepresentation, or fully include the person in the analysis (Flicker, 2004).
Aligned with qualitative methodology, as researchers we should strive to create a safe, non-judgmental space for our participants so they are open to sharing the details and truth of their lives, which can help answer our research questions. However, when designing our research we need to be aware that participants may not represent themselves authentically. This goes beyond informants declining to answer specific questions or withholding details about themselves; some may simply be dishonest to obtain the financial incentives associated with research participation. At a time when most research around the world has moved online because of the coronavirus (COVID-19) pandemic, researchers need to be aware of the possibility of fraudulent participants. The convenience, cost savings, and time savings associated with virtual research will likely ensure that it remains a predominant method moving forward (Sah et al., 2020). Below we present a case study from our research in which we suspected that several fraudulent participants had enrolled and participated in virtual qualitative interviews, describe how we dealt with the situation, and share lessons learned for future studies.
Case Study
Our qualitative descriptive study sought caregivers of individuals with a diagnosis of Alzheimer’s disease (AD) or Alzheimer’s disease related dementia (ADRD) and a history of a chronic leg wound (currently or within the past 12 months) to understand their experience and get their input on a wound healing device for the home environment. Our initial recruitment efforts included contacting, by phone, participants of the parent study and patients treated at a memory clinic with documented histories of chronic wounds. We then expanded recruitment to include an email invitation and flyer shared with our college community partners. These efforts yielded only one caregiver over two months during the COVID-19 pandemic. To increase recruitment, we decided to use ResearchMatch, a free platform previously used successfully by members of the research team. ResearchMatch is a nonprofit program funded by the National Institutes of Health (NIH) that helps connect people interested in research studies with researchers from top medical centers across the U.S. Study information is sent through the registry via email, and interested individuals can agree to provide their contact information to research teams for screening and recruitment.
With Institutional Review Board (IRB) approval, we began recruiting participants for Zoom interviews because of social distancing requirements related to the COVID-19 pandemic; this also allowed us to expand our recruitment efforts nationally. The initial recruitment email from ResearchMatch was sent to 1,499 people. Of those, 22 volunteers indicated interest and their contact information was provided to the team for screening. Two of those screened were eligible and enrolled in the study, four were not eligible, and the remaining 16 did not respond after three attempts at contact. The project manager (second author) asked about eligibility criteria over email, then used the responses to complete the screening tool. He asked individuals to confirm each of our two inclusion criteria (being a caregiver of a person with AD/ADRD, and that person having a history of chronic wounds) and our one exclusion criterion (being under 18 years of age). Notes were made on whether the person met the eligibility criteria, passed screening, and was recruited, consented, and enrolled.
After completing his Zoom interview, one of the participants emailed the project manager stating that he knew “a few people who are dealing with similar issues” and wondered if more participants were needed. Since our IRB-approved protocol indicated that we would use a snowball sampling technique, the project manager emailed the participant the study flyer to share. That same day, the project manager began receiving emails from potential participants and was surprised that so many people had reached out. The first five people who reached out and completed screening were scheduled for back-to-back individual interviews on the Friday of that week. The next four were scheduled for the start of the following week, on Monday.
The first author, a nurse scientist with extensive qualitative experience, was to conduct all the interviews. By the second scheduled interview that Friday morning, she began to question the authenticity of the participants and their responses. However, she completed interviewing all scheduled participants and then planned to convene the team to discuss. Several things made her question the legitimacy of the responses. One was hearing similar responses across participants, which also resembled the answers of the original interviewee who had referred them. For example, the original participant and subsequent participants all talked about going to a pharmacy and asking a pharmacist what to put on their care recipient’s wound. Details were vague, such as receiving “medicine” or “some drugs,” and participants could not expand on the specific treatments they were using. This was a common theme across interviews and was not consistent with the known practices and language of caregivers managing chronic wounds.
Across the interviews, it appeared to the interviewer that contradictory statements were being made and then retracted when she probed further based on her nursing experience. In one example, a participant stated that because of money issues they had asked a pharmacist for medicine to put on the wound, but later in the interview stated that they employed a medical doctor to come to the house three times a week to care for the wound. In some cases, participants seemed stumped and unprepared to answer probing questions. During some interviews, it sounded as if participants were shuffling paper to find answers to interview questions. Furthermore, two participants who were scheduled back-to-back explained that they were caregivers for teenage brothers with dementia and chronic wounds; many of their responses were similar to each other and illogical. Additionally, none of the participants turned their cameras on (although this was not a requirement for the study), and most had poor audio quality. Even after becoming suspicious of fraudulent activity, the interviewer completed all interviews in a professional manner, though she kept the conversations brief when answers echoed those of previous participants. All participants were paid the financial incentive for participation as indicated in the consent form.
After completing all the Friday interviews and identifying potential fraudulent participants, the project manager notified the volunteers scheduled for Monday that the interviews were being paused and that they would receive a follow-up email. The team met to discuss next steps. To screen out fraudulent participants, we implemented a revised screening that included questions about the care recipient’s date of birth, the caregiver’s relationship to the care recipient, how long they had been caring for the individual, the stage of dementia, how long the care recipient had had wounds and received wound care, the frequency of dressing changes, and how the caregiver had learned to do wound care. These questions were already part of the demographic tool and the interview guide and served to improve the rigor of the screening process; we did not introduce any new questions. We found that all volunteers reported a date of birth that placed the care recipient under 65 years of age. The caregivers were informed that they did not meet the inclusion criteria and were not eligible to participate. With these findings, the research team decided not to continue recruiting or screening participants referred by the two participants from the registry. None of the data collected were used for analysis.
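For teams that log screening responses electronically, the date-of-birth check we ran by hand is straightforward to automate. Below is a minimal sketch in Python; the field names and data layout are hypothetical illustrations, and our actual screening was conducted by a team member over email rather than programmatically.

```python
# A minimal sketch (not our actual tool): flag screening responses in which
# the reported care recipient is younger than the typical dementia population.
from datetime import date

def age_on(dob: date, as_of: date) -> int:
    """Whole years elapsed between dob and as_of."""
    return as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))

def flag_atypical_age(responses, typical_min_age=65, as_of=None):
    """Return volunteer IDs whose reported care recipient falls under the
    typical minimum age: a prompt for closer human review, not an
    automatic exclusion."""
    as_of = as_of or date.today()
    return [r["volunteer_id"] for r in responses
            if age_on(r["care_recipient_dob"], as_of) < typical_min_age]

# Example: a reported 16-year-old care recipient with dementia is flagged.
responses = [
    {"volunteer_id": "V01", "care_recipient_dob": date(2005, 3, 14)},
    {"volunteer_id": "V02", "care_recipient_dob": date(1940, 7, 2)},
]
print(flag_atypical_age(responses, as_of=date(2022, 1, 1)))  # ['V01']
```

Because younger-onset dementia does exist, a flag of this kind is best treated as a trigger for follow-up questions rather than a hard eligibility rule.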
Discussion and Implications
Fraudulent research participation may be even more prevalent now that researchers around the world have moved to virtual recruitment and data collection because of the COVID-19 pandemic and social distancing, and virtual research may remain the norm due to its convenience and cost savings. This case study represents a situation in which an experienced qualitative interviewer was surprised to find participants giving illogical, inconsistent, and repetitive responses across interviews. As qualitative researchers, we have an innate desire to take participants at their word. However, the patterns in the respondents’ answers became clear and concerning red flags. The team decided to exclude all of the participants’ data from analysis. With multiple interviewers, less experienced interviewers, or interviewers without a background in nursing, the red flags might not have been as clear.
We realized in hindsight that the rapid influx of people interested in participating in the study was a warning sign for fraud, particularly since everyone reaching out to the project manager had been referred by one person. An unusually swift increase in interested parties has been reported by others as a time to be on alert for fraudsters (Pozzar et al., 2020; Salinas, 2023). When a study team has had a dry spell in recruitment, it can be exciting to have people interested in enrolling; however, steps should be taken to confirm the authenticity of anyone claiming to meet study inclusion criteria. A simple check of this kind is sketched below.
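As an illustration of how such an alert might be automated in a recruitment log, the sketch below flags any referrer linked to a burst of sign-ups within a short window. The two-day window and five-person threshold are arbitrary assumptions for illustration, not validated cutoffs.

```python
# A minimal sketch: flag referrers linked to a burst of sign-ups.
# The window and threshold below are illustrative, not validated cutoffs.
from collections import defaultdict
from datetime import timedelta

def flag_signup_waves(signups, window=timedelta(days=2), threshold=5):
    """signups: iterable of (timestamp, referrer) pairs.
    Returns referrers with >= threshold sign-ups inside any single window."""
    by_referrer = defaultdict(list)
    for stamp, referrer in signups:
        by_referrer[referrer].append(stamp)
    flagged = []
    for referrer, stamps in by_referrer.items():
        stamps.sort()
        # Slide a window: compare each sign-up to the one (threshold - 1) later.
        for i in range(len(stamps) - threshold + 1):
            if stamps[i + threshold - 1] - stamps[i] <= window:
                flagged.append(referrer)
                break
    return flagged
```

In our case, five sign-ups in a single day traced to one referrer would have tripped such an alert before any interviews were scheduled.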
For our study, screening took place over email. One way to help mitigate fraud in the future is to conduct two rounds of screening. Follow-up screening questions asked over the phone could help prevent impostors from enrolling, because a call offers an opportunity to identify inconsistencies between what was initially entered online and what is verbalized. Additionally, screening could go into greater depth to gauge whether replies sound logical from the outset. For instance, in our study the participants told us that the person living with dementia was younger than the typical population experiencing the syndrome, had difficulty describing what stage of dementia the person they cared for was in, and, from the nurse interviewer’s perspective, gave illogical responses about caring for the person’s chronic wound. We therefore added screening questions around these three topics, which helped us identify potentially fraudulent participants and decline to interview them.
Implementing this expanded screening tool eliminated all of the additional people scheduled for interviews who had been referred by the initial participant. It was clear to our team that the respondents were part of a research scam group. We therefore recommend that qualitative researchers who use an initial online screening schedule follow-up phone calls during which a study team member completes a thorough screening with the interested person. This prevents people from having the opportunity to look up, while filling out an online screening tool, the answers that would make them eligible for a study. Overall, this will save the study team time and resources by reducing the number of scheduled interviews with participants fabricating their experiences.
When study teams are recruiting participants online from a specific geographical area, they may also want to consider working with an information technology specialist who can track internet protocol (IP) addresses and confirm that reported locations are truthful. Examining IP addresses can estimate a respondent’s likely location and alert research teams to respondents possibly logging in from the same computer (Kramer et al., 2014). This method of monitoring location would need to be approved by the IRB, since IP addresses are considered protected health information (Gunn et al., 2004). Limitations do exist with this method, however: the same IP address may be reported when respondents use public computers (e.g., at libraries or educational sites), and sophisticated users may be able to hide their IP addresses (Kramer et al., 2014).
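As a concrete illustration of the duplicate-IP screen, here is a minimal standard-library Python sketch. The data layout is hypothetical, and the geolocation lookup itself is omitted because it would depend on whichever IRB-approved service the team and its IT specialist select.

```python
# A minimal sketch of a duplicate-IP screen; geolocation is intentionally
# omitted (it would require an external, IRB-approved service).
from collections import Counter
import ipaddress

def flag_suspicious_ips(submissions):
    """submissions: list of (volunteer_id, ip_string) pairs.
    Returns (volunteer_id, reason) pairs that merit a closer look."""
    counts = Counter(ip for _, ip in submissions)
    flags = []
    for vid, ip in submissions:
        try:
            addr = ipaddress.ip_address(ip)
        except ValueError:
            flags.append((vid, f"malformed address: {ip!r}"))
            continue
        if counts[ip] > 1:
            # Shared addresses may indicate fraud, or simply a public computer.
            flags.append((vid, f"{counts[ip]} submissions share {ip}"))
        if addr.is_private:
            flags.append((vid, f"{ip} is a private-range address"))
    return flags
```

Consistent with the limitations above, a shared or private-range address is a reason for follow-up screening, not proof of fraud.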
Additionally, the situation described above reinforced for us the importance of tracking how participants learn about a study. We had several recruitment strategies in place, and we were fortunate that we could easily trace who was responding because of the snowball sampling from the ResearchMatch respondent. We therefore recommend that others carefully track how research participants are recruited so the information can be reviewed and analyzed as needed. We also recommend that teams use one experienced interviewer as much as possible, so that problems can be identified right away across interviews. If this is not realistic, teams should plan frequent meetings to discuss participants’ cases and begin initial data analysis as soon as possible. This may aid in detecting fraud early, so teams can put a plan in place before they are too deep into analysis.
Qualitative researchers may also want to consider requiring participants to keep their cameras on during virtual interviews. This could be built into the research protocol and consent forms. Such an approach would help capture facial expressions and body language, which are integral to qualitative research. Cameras that are off, or video of poor quality, prevent the interviewer from reading body language and nonverbal cues to guide probing and the direction of the interview (Seitz, 2016). Furthermore, a camera requirement may deter prospective participants who plan to fabricate answers to interview questions and would feel uncomfortable being seen and recorded while doing so.
A final recommendation that we offer is to increase education on the topic. Academic research program directors should take steps to incorporate education on fraudulent research participants into training. Course directors could review case studies from literature or personal experiences. Discussions could focus on how to handle fraudulent participants, enhancing screening tools, and reviewing how to handle the data. Covering these topics could help prepare our next generation of researchers. This information could also be shared within professional organizations to alert current researchers of the growing worldwide problem of fraudulent research participants.
Conclusion
In summary, we must be attentive to identifying research participants who misrepresent themselves, as this can compromise the rigor of our research and the soundness of our findings. The case study presented here describes our experience with a group of potentially fraudulent participants claiming to be caregivers of people living with dementia, whose interview responses raised red flags for the experienced interviewer. Having thorough screening tools in place is one way to identify and prevent fraud, and a consistent, experienced interviewer may facilitate early identification of potentially fraudulent participants. When conducting virtual interviews, research teams may want to consider requiring in their protocol that participants keep their cameras on; this would help the team read facial expressions and body language to guide the interview, and would deter those who find it easier to fabricate answers when only their voice is heard. Research programs and professional organizations should incorporate information about fraudulent research participants, and about how to handle their data, to prepare the next generation of researchers and to alert current researchers to this growing problem.
Acknowledgements:
Research reported in this commentary was supported in part by the National Institute of Nursing Research and the National Institute on Aging of the National Institutes of Health under Award Number 3R01NR015995-04S1, NOT-AG-18-039. Justine S. Sefcik is supported by the National Institute of Nursing Research of the National Institutes of Health (K23NR018673). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
Conflict of Interest: None.
Contributor Information
Justine S. Sefcik, Drexel University College of Nursing and Health Professions.
Zachary Hathaway, Drexel University College of Nursing and Health Professions.
Rose Ann DiMaria-Ghalili, Drexel University College of Nursing and Health Professions.
Data Sharing:
Data sharing not applicable to this article as no datasets were analysed during the current study.
References
- Chandler J, Sisso I, & Shapiro D (2020). Participant carelessness and fraud: Consequences for clinical research and potential solutions. Journal of Abnormal Psychology, 129(1), 49.
- Chandler JJ, & Paolacci G (2017). Lie for a dime: When most prescreening responses are honest but most study participants are impostors. Social Psychological and Personality Science, 8(5), 500–508.
- Devine EG, Pingitore AM, Margiotta KN, Hadaway NA, Reid K, Peebles K, & Hyun JW (2021). Frequency of concealment, fabrication and falsification of study data by deceptive subjects. Contemporary Clinical Trials Communications, 21, 100713. 10.1016/j.conctc.2021.100713
- Devine EG, Waters ME, Putnam M, Surprise C, O’Malley K, Richambault C, Fishman RL, Knapp CM, Patterson EH, Sarid-Segal O, Streeter C, Colanari L, & Ciraulo DA (2013). Concealment and fabrication by experienced research subjects. Clinical Trials, 10(6), 935–948. 10.1177/1740774513492917
- Flicker S (2004). “Ask me no secrets, I’ll tell you no lies”: What happens when a respondent’s story makes no sense. The Qualitative Report, 9(3), 528–537.
- Glazer JV, MacDonnell K, Frederick C, Ingersoll K, & Ritterband LM (2021). Liar! Liar! Identifying eligibility fraud by applicants in digital health research. Internet Interventions, 25, 100401. 10.1016/j.invent.2021.100401
- Gunn PP, Fremont AM, Bottrell M, Shugarman LR, Galegher J, & Bikson T (2004). The Health Insurance Portability and Accountability Act Privacy Rule: A practical guide for researchers. Medical Care, 42(4), 321–327. 10.1097/01.mlr.0000119578.94846.f2
- Kramer J, Rubin A, Coster W, Helmuth E, Hermos J, Rosenbloom D, Moed R, Dooley M, Kao YC, & Liljenquist K (2014). Strategies to address participant misrepresentation for eligibility in Web-based research. International Journal of Methods in Psychiatric Research, 23(1), 120–129.
- Pozzar R, Hammer MJ, Underhill-Blazey M, Wright AA, Tulsky JA, Hong F, Gundersen DA, & Berry DL (2020). Threats of bots and other bad actors to data quality following research participant recruitment through social media: Cross-sectional questionnaire. Journal of Medical Internet Research, 22(10), e23021.
- Sah LK, Singh DR, & Sah RK (2020). Conducting qualitative interviews using virtual communication tools amid COVID-19 pandemic: A learning opportunity for future research. Journal of the Nepal Medical Association, 58(232), 1103–1106. 10.31729/jnma.5738
- Salinas MR (2023). Are your participants real? Dealing with fraud in recruiting older adults online. Western Journal of Nursing Research, 45(1), 93–99. 10.1177/01939459221098468
- Seitz S (2016). Pixilated partnerships, overcoming obstacles in qualitative interviews via Skype: A research note. Qualitative Research, 16(2), 229–235.