AMIA Summits on Translational Science Proceedings. 2021 May 17;2021:122–131.

A Contextual Inquiry: FDA Investigational New Drug Clinical Review

Jonathan Bidwell a,*, Kara Whelply a,*, Sophia Shepard b, John Hariadi a
PMCID: PMC8378631  PMID: 34457126

Abstract

The U.S. Food and Drug Administration (FDA) is modernizing its IT infrastructure and investigating software requirements to address increased regulator workload and application complexity during Investigational New Drug (IND) reviews. We conducted a mixed-methods Contextual Inquiry (CI) study to establish a detailed understanding of daily IND-related research, writing, and decision-making tasks. Individual reviewers faced notable challenges when searching, transferring, comparing, consolidating, and referencing content across multiple documents. The review process would likely benefit from software tools that both address these problems and foster existing knowledge-sharing behaviors in individual and group settings.

1. Introduction

The FDA is modernizing its IT infrastructure to enable safer and more expedient drug approval (Administration, 2019). New data sources and the increasing volume and complexity of drug approval applications have prompted the agency to investigate existing work practices in order to establish software requirements.

In this study, we investigated daily software requirements during the FDA's Investigational New Drug (IND) review process. The study included review team members from the FDA Center for Drug Evaluation and Research (CDER), Department of Psychiatry (DP), and examined how work gets done within typical group and individual workplace settings.

To gain a fresh perspective, we used a mixed-methods study design that included a Contextual Inquiry (CI), semi-structured interviews, and an online survey. The results highlighted important software requirements that we would likely have missed had we used traditional qualitative methods alone. Moreover, we established a user-driven consensus for prioritizing our subsequent software development efforts and showed that our study design was feasible to conduct within a large organization that handles proprietary information.

1.1. FDA Review Process

The mission of the U.S. Food and Drug Administration (FDA) is to protect public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices (Administration, 2018).

The FDA CDER Office of New Drugs (OND) reviews sponsor IND applications and offers guidance to encourage safe and expedient drug approval [2]. IND applications include proposed protocols for clinical testing with human subjects along with relevant animal pharmacology studies, toxicology studies, and manufacturing information.

Each IND is delegated to a specific review division, such as the Department of Psychiatric Products, and is assigned to a review team. The review team consists of a regulatory project manager (RPM) and clinical and non-clinical team members who are coordinated by team leaders (TLs). The clinical TL assigns the IND to a clinical reviewer (CR), also known as a medical officer. The non-clinical TLs assign the IND to non-clinical reviewers (NCRs) such as pharmacotoxicologists, and individual discipline TLs assign chemists, statisticians, and other disciplines as needed. Meanwhile, the RPM schedules meetings, handles communications with the Sponsor, and organizes resources for the team, including past IND information and a SharePoint website. NCRs and CRs analyze and review the submitted IND materials to write safety reports and identify areas of concern. Each review team member typically has multiple active IND applications at the same time.

After analysis, a Supplemental Release Date (SRD) meeting is held with the division head (DH) to discuss safety issues, propose safety guidance and finalize a hold/non-hold letter for the Sponsor to ensure that research subjects will not be subject to unreasonable risk (Administration, 2020).

1.2. Contextual Inquiry

Contextual inquiry (CI) is a user-centered research methodology that seeks to capture and understand the context of users' work by immersing researchers in the user environment through participatory observation sessions (Beyer & Holtzblatt, 1998; Wixon, 1990). CIs often require fewer resources than focus groups (Guest et al., 2017; Smithson, 2000) and are less sensitive to peer pressure (Guest et al., 2017). CIs have been used widely within industry, government, and academic organizations (Coble et al., 1995; Pritchard, 2019) for developing suitable IT solutions (Beyer & Holtzblatt, 1998).

Much like an apprentice learning a skill, researchers go where the work is being conducted and ask questions to clarify what users are doing as they work (Beyer & Holtzblatt, 1998; Wixon, 1990). Instead of strictly observing as in shadowing (Daae, 2015) or asking direct questions as in interviews, researchers observe and probe at the same time to better understand how the work is accomplished. For this reason, CIs are well suited to collect tacit knowledge that can be difficult to ascertain with other qualitative methods.

The collection of tacit knowledge is essential because many of our daily routines have become second nature to us. Important aspects of these daily routines are often challenging for us to recall without being engaged in the work. Instead of asking users to explain a hypothetical work process, we joined them when and where they worked during the 30-day IND process to identify workflow breakdowns and problem-solving strategies.

2. Related Work

Establishing accurate user requirements is critical for developing software that successfully addresses user needs. The FDA has conducted interviews, surveys (Berndt, 2006), focus groups (Parenky, 2014), and usability tests (Fitzpatrick, 1999) in the past to identify these needs; however, these approaches require establishing questions in advance. By contrast, a CI focuses on understanding the work rather than approaching requirements gathering with an initial set of questions (Beyer & Holtzblatt, 1998).

The CI methodology offers several notable advantages over these more traditional qualitative methods. For example, CIs focus on understanding work from the ground up (Beyer & Holtzblatt, 1998) without making assumptions in the form of initial questions. CIs offer greater ecological validity because researcher observations occur within the same cultural and social context as the user's everyday activities (Schmuckler, 2001). Most importantly, CIs are well suited for capturing tacit knowledge and other nuances that may otherwise go unaddressed within healthcare (Coble et al., 1995), academic (Notess, 2005), and government settings.

In our case, we selected the CI methodology for the following two reasons:

First, we needed to gain a fresh perspective. FDA review teams are structured as matrix organizations where users work within a traditional hierarchy that is overlaid by some form of lateral authority (Kuprenas, 2003). CIs are well suited for documenting collaboration between these different organizational roles. Interviews and focus groups tend to be less ecologically valid because they often do not occur in the user's environment and are more susceptible to peer pressure (Guest et al., 2017). We adopted the CI methodology, created CI flow models to document workflow, and used direct quotes to preserve meaning and provide a systematic, detailed, and reliable understanding of work practices.

Second, we needed to establish a broader consensus and buy-in across the review team. CIs are often supplemented with additional methods. For example, Maffitt et al. created CI models and conducted affinity diagram sessions to identify user requirements among physicians (Coble et al., 1995). The models consolidated multiple sets of notes to show how the workflow occurred, while the affinity diagram helped to identify user requirements (Coble et al., 1995). In addition to conducting our CI, we also conducted semi-structured interviews (Notess, 2005) and administered an affinity ranking survey (Coble et al., 1995) to encourage stakeholder participation and to better prioritize our subsequent design and software development efforts.

3. Methods

The study had two parts. The first part included three sessions with the entire review team during SRD meetings, which focused on understanding roles and responsibilities. The second part included six sessions with individual CRs, NCRs, and RPMs, which focused on understanding daily work practice and individual roles.

Each user research session included an observational period, a semi-structured interview, and a debrief session where we organized our notes and created CI flow models. In each case, at least two researchers were present. No recording devices were allowed due to strict confidentiality rules at the FDA. We conducted affinity diagramming sessions to establish a broader set of user requirement themes across our user research sessions. Then, we administered an affinity ranking survey in which we asked participants to vote on these themes so we could better prioritize our future development efforts.

3.1. User Research Sessions

We studied participants within the context of three groups. Group #1 included three separate SRD meetings. Groups #2 and #3 each included three individual sessions, one each with a CR, an NCR, and an RPM. Each user research session included 60 minutes of direct observation, during which we captured hand-written notes and asked clarifying questions as needed.

Next, we conducted a semi-structured interview that asked, "what applications do you use the most," "what type of resources do you use the most," "how do you collaborate with your co-workers" and "what was the most difficult aspect of your last review." Each interview lasted 15 minutes.

Then we created CI flow models for highlighting user roles, responsibilities, and how artifacts and information were exchanged between stakeholders (Beyer & Holtzblatt, 1998). Each flow modeling session was conducted within 48 hours of each user research session and lasted 1-2 hours.

3.2. Affinity Diagramming Sessions

The affinity diagramming sessions enabled us to consolidate our findings across multiple user research sessions. In total, we created three affinity diagrams following each round of observation sessions.

The first diagram included our observations from the SRD meetings (Group #1), while the second and third diagrams included our observations from Group #2 and Group #3. In each case, we transcribed summarized sentences and direct quotes from our notes onto sticky note labels. We grouped the sticky notes in a "bottom-up" manner to identify overarching themes and relationships. Then we transcribed significant themes from each diagram into a list of user requirement themes that we later sent to participants during our affinity ranking survey. Figure 1 shows us creating our first affinity diagram.

Fig. 1. (Left) Affinity diagram creation for Group #1. (Right) Affinity diagram for Group #3.

3.3. Affinity Ranking Survey

The affinity ranking survey included a list of eleven user requirement themes generated from our three affinity diagrams. Participants were asked via email to select the three themes most relevant to them to help us prioritize future development.

4. Results

The study included twenty-two rotating team members during SRD meetings and six individual team members from FDA CDER's DP. In total, we conducted nine user research sessions. Sessions with Group #1 SRD provided insight into sponsor communication at the beginning and end of the IND process as team members finalized different IND applications. By contrast, sessions with Groups #2 and #3 provided insight into daily research, writing, and scheduling tasks. The user research sessions were conducted during different stages of the FDA's IND review process, as shown in Table 1.

Table 1. Review team roles and # user research sessions during each phase of the IND review process.


4.1. Models

We created nine flow models. Breakdowns are indicated with lightning bolts. Most reviewers experienced significant search and information retrieval breakdowns while using the Document Archiving, Reporting and Regulatory Tracking System (DARRTS) and Mercado. DARRTS is the FDA's record-keeping system for drug applications, and Mercado is an analytics and visualization platform for regulatory data.

Figure 3 shows an NCR searching Mercado for an IND application that contains no data. She then searches DARRTS for the same application but mistypes a number, so no results are returned. She attempts to search scanned paper documents, but search is not available for them.

Fig. 3. Non-clinical Reviewer Flow Model

Figure 4 shows an example of the entire group's workflow during an SRD meeting. The SRD meeting included additional roles, such as the division head and team leader, that we could not observe during our user research sessions with individual reviewers.

Fig. 4. SRD Flow Model

Figure 5 shows the workflow of a CR in the role of medical officer. Significant breakdowns occurred in this workflow around copy-and-paste functionality and difficulties with specific software. Additionally, this flow model highlights workarounds for recall, such as bolding a stopping point and creating individual folders for each IND.

Fig. 5. CR Flow Model

4.2. Affinity Diagrams

We created three affinity diagrams. Each affinity diagram consisted of bottom-up requirements that we structured into a hierarchy of higher-level themes. For example, we grouped "rotates through pdfs," "searches for a euthanized subject," and "checking cover letter" into the minor theme of checking for omissions, which was later organized into the theme of conducting the individual review. Table 2 highlights our affinity diagram themes. Please see Appendix A for an example of one of our affinity diagrams.

Table 2. Affinity diagrams showing major themes derived from bottom-up requirements (* denotes a duplicate theme).

Session 1: Group #1 (SRD meetings), 117 requirements. Major themes: meeting preparation*, information retrieval, creating external deliverables.

Session 2: Group #2 (CR, NCR, and RPM), 113 requirements. Major themes: workflow, writing process*, meeting preparation*, sponsor communication.

Session 3: Group #3 (CR, NCR, and RPM), 124 requirements. Major themes: team collaboration, workarounds, technical issues, recall issues, conducting individual review.

4.3. Semi-structured interviews

We conducted nine semi-structured interviews. NCR and CR respondents most often used Microsoft Word (3 of 4 responses), CRs (medical officers) most commonly used DARRTS and GS Review (2 of 2 responses), and Regulatory Project Managers (RPMs) most often used Outlook. Most participants used colleagues or supervisors as information resources (5 of 6 responses).

Similarly, all participants reported using email (Microsoft Outlook) or face-to-face communication for collaboration (6 of 6 responses). All NCRs and CRs noted that the most challenging part of their last review was interpreting and validating information between multiple sources. Interestingly, all RPMs reported that scheduling meetings was the most challenging task despite the availability of the Outlook scheduling assistant.

4.4. Affinity Ranking Survey

Table 3 shows our affinity diagram user requirement themes sorted by the number of votes.

Table 3. Affinity ranking survey themes sorted by the number of respondent votes.

Affinity ranking survey themes # Votes
1 Information Retrieval: how you find information, i.e., command find, Google, Mercado 4
2 Conducting Individual Review: your individual process to write your review including research and analysis 3
3 Team Collaboration: how you work with your colleagues and share resources, e.g., sharing templates 2
4 Work Arounds: creative solutions to accomplish your work, i.e., pdf to excel for tables, linking OneNote to Outlook 2
5 Recall Issues: when you cannot remember where something is in a document or where you left off 1
6 Writing Process: how you write your review, i.e., rewording statements, proofing 1
7 Communication with Sponsor: creating and sending information to the Sponsor 1
8 Meeting Preparation: how you prepare for meetings 1
9 Creating External Deliverables: developing and collaborating on the documents to send to a sponsor, i.e., placing your hold or non-hold comments 0
10 Technical Issues: when an application does not work, i.e., when copy and paste fails 0
11 Workflow: how you do things, i.e., coordinating meetings, analysing sponsor data 0

5. Discussion

The findings provided us with a fresh perspective for supporting the IND review process. Being able to observe daily IND tasks first-hand enabled us to better appreciate how the work happens and to establish design requirements that we likely would not have identified using traditional qualitative methods alone. Notable shortcomings and strengths can be addressed in a follow-on project. The bulk of these shortcomings directly impact review team members through repetitive, time-consuming, and tedious work. The shortcomings fall into the categories of individual tasks, collaborative work, information retrieval, data extraction, proofing, and cross-checking documents. Most notably, reviewers often needed to search, compare, transfer, consolidate, and reference multiple documents while writing safety reports, yet existing software required them to perform extensive workarounds to complete these tasks. Strengths included a culture of knowledge sharing and of building upon established user consensus. For example, reviewers shared physical items such as templates and other general knowledge to assist others with finding documents. Software could be developed to address these shortcomings and promote these strengths.

The key findings from our user research sessions are as follows:

5.1. Information retrieval and interruptions were problematic during individual and group tasks.

Information retrieval was voted the most important priority in our affinity ranking survey (4 out of 5 respondents). FDA's Document Archiving, Reporting and Regulatory Tracking System (DARRTS) had numerous shortcomings, including slow response times when searching, limited filtering capabilities, and an inability to handle typos. Information that was archived and not available in DARRTS required tracking down prior reviewers, who might no longer work in the same department, and/or requesting paper documents. A CR stated that "DARRTS is too difficult to identify team members" and instead relied on the search capabilities within her Outlook email. A CR expressed frustration after missing a digit when searching for an IND. An NCR blamed herself for these difficulties, telling us "I am new here" and "I do not even know how to search," despite having been a reviewer for over three years. Significant breakdowns in our flow models highlighted the importance of information retrieval. These breakdowns centered on review team members being unable to find IND-related information, such as team assignments, sponsor documents, and text content within documents, in DARRTS, SharePoint, and Mercado. In our flow models, CRs failed to find documents in DARRTS.

Team collaboration was impacted by legacy software limitations and interruptions. Notably, certain steps in the review process were contingent on document edits; however, there was no fool-proof way to keep track of when edits occurred. Each review team member was responsible for adding comments to a hold/non-hold letter on Microsoft SharePoint 2010; however, only one person could edit the document at a time. RPMs had to first remember to enable tracked changes before inviting reviewers to edit and then follow up with them separately via email to confirm that they had finished editing the letter. In our flow model, we noted that this process breaks down when team members forget to enable tracked changes within Microsoft Word. Interruptions further impacted individual tasks. Each review team member (NCR, CR, RPM) was assigned several existing IND applications. In our flow model (Figure 5), incoming emails and requests for in-person meetings often interrupted important CR writing and research tasks. A CR marked a sentence in bold to indicate where to resume work in anticipation of being pulled away. An NCR kept alerts on only for emails from her superiors to help maintain her focus. Additionally, we observed reviewers using paper checklists and Microsoft OneNote checklists to keep track of specific tasks.

Improved support for information retrieval and resuming tasks would benefit most review team members. For example, showing a history of recent spreadsheet changes (Asuncion, 2011) could help reviewers to recall next steps from a previous work session. Introducing Microsoft SharePoint Online and Microsoft Teams could provide reviewers with tracked change support while also enabling real-time collaborative editing to better determine when team members have finished editing specific documents sections. Information retrieval services could index documents based on common search queries.
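
As a minimal sketch of what typo-tolerant lookup could look like, the Python snippet below uses fuzzy string matching to return the closest IND numbers even when a reviewer mistypes a digit. The function name and sample identifiers are hypothetical and do not correspond to any FDA system.

```python
# Minimal sketch: typo-tolerant IND number lookup using fuzzy string matching.
# The identifiers below are illustrative only.
from difflib import get_close_matches

def find_ind(query: str, known_ind_numbers: list[str], max_results: int = 3) -> list[str]:
    """Return the closest-matching IND numbers, even if the query contains a typo."""
    if query in known_ind_numbers:  # exact match wins
        return [query]
    return get_close_matches(query, known_ind_numbers, n=max_results, cutoff=0.8)

inds = ["123456", "123789", "654321"]
print(find_ind("123459", inds))  # -> ['123456'] despite the mistyped digit
```

A similar tolerance for near-matches could be layered onto existing search services without changing how reviewers enter queries.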

5.2. Individual review team members reported that data extraction, proofing, and checking for omissions were the most time-consuming and tedious tasks.

Information retrieval issues further hampered efforts to compare documents when checking for omissions. A CR from Group #3 needed to check whether a euthanized animal subject was mentioned in four separate sponsor documents. Her search for the word "euthanized" failed because the PDF viewer only matched exact keywords; the synonym "terminated" was not matched. Instead, she had to read several pages of text to find the animal's subject number. Basic copy-and-paste operations often failed between PDF and Word documents. Tables and other copied PDF content were transferred as raw text within Word. In three of our flow models, review team members had to stop what they were doing and recreate content from scratch before continuing. For example, an RPM copied a phone number from a PDF to paste into OneNote; the phone number appeared as symbols, forcing the RPM to type the number by hand. As a workaround, one NCR exported an entire PDF document to Excel and copied the specific tabular data that she needed. She told us that this approach only had a "50/50 chance" of working. To make matters worse, Microsoft Word often presented incorrect autocorrection suggestions after pasting. An NCR told us that "proofing" was her "most tedious task".
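
As a minimal sketch of a more reliable path for tabular data, the snippet below pulls tables directly from a PDF into a CSV file rather than relying on copy and paste. It assumes the third-party pdfplumber package, and the file names are hypothetical.

```python
# Minimal sketch: extract tables from a sponsor PDF into a CSV file.
# Requires the third-party pdfplumber package; file names are illustrative.
import csv
import pdfplumber

with pdfplumber.open("sponsor_toxicology_report.pdf") as pdf, \
        open("extracted_tables.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for page in pdf.pages:
        for table in page.extract_tables():  # each table is a list of rows
            for row in table:
                # Replace empty cells so the CSV stays rectangular.
                writer.writerow([cell if cell is not None else "" for cell in row])
```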

Information consolidation and referencing tasks were similarly time-consuming and difficult due to the nature and length of the documents involved. Not having an easy way to keep track of references increased the risk of misrepresenting the FDA's position and decisions when writing emails and reports. The entire review team needed to carefully review any IND sponsor content when copying and pasting to avoid simple typographical errors. For example, a noted flow model breakdown occurred when an RPM began a scheduling email by copying and pasting a sponsor's paragraph. She would have sent the wrong information had she not caught a mistake and replaced a "type A" meeting with a "type B" meeting.

Indexing related words between documents could streamline the search for omissions across documents. Introducing the ability to paste screenshots of tables and equations could preserve formatting while transferring content between documents. Similarly, text could be pasted with formatting metadata that includes a reference to the source document in order to keep track of non-edited and edited content.
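
A minimal sketch of such a synonym-aware omission check appears below, assuming the documents have already been converted to plain text; the synonym map and document names are illustrative only.

```python
# Minimal sketch: flag documents that never mention a term or any of its synonyms.
# The synonym map and document contents are illustrative only.
SYNONYMS = {"euthanized": {"euthanized", "terminated", "sacrificed"}}

def mentions(term: str, text: str) -> bool:
    """True if the term or a known synonym appears in the text."""
    lowered = text.lower()
    return any(word in lowered for word in SYNONYMS.get(term, {term}))

def check_omissions(term: str, documents: dict[str, str]) -> list[str]:
    """Return the names of documents that omit the term and all of its synonyms."""
    return [name for name, text in documents.items() if not mentions(term, text)]

docs = {
    "protocol.txt": "Animal 42 was terminated on day 7.",
    "summary.txt": "No adverse events were reported.",
}
print(check_omissions("euthanized", docs))  # -> ['summary.txt']
```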

5.3. Existing review teams excel at knowledge sharing.

Individual reviewers were comfortable asking for help in their work environment from both colleagues and superiors. In our semi-structured interviews, participants regarded colleagues and supervisors as the best information resources. All review team members indicated that they were comfortable asking for help both in person and via email, depending on the situation or the severity of the question or issue. The same was true during our sessions with individual reviewers and RPMs. In Figure 3, an NCR failed to find a document in DARRTS but succeeded after asking a team leader for help. RPMs shared resources such as email templates and acronym lists. Individual office mates shared productivity tips and Word templates.

Introducing a knowledge-sharing platform such as Microsoft Teams or Slack could help to further cultivate the FDA's knowledge-sharing culture. For example, review team members could ask questions, receive answers from colleagues with similar specializations, and provide informal mentoring. The FDA is currently acquiring and implementing Microsoft Teams, which could further improve these knowledge-sharing capabilities.

5.4. Existing software, while not perfect, supports a broad range of needs across roles.

To date, a top-down enterprise-level adoption of tools such as DARRTS has resulted in a "one-size-fits-some" scenario where reviewers often must develop elaborate workarounds to accomplish daily tasks. For example, a CR searched Outlook for emails to find past IND information instead of internally searching through DARRTS. An RPM used the signature feature in Outlook as templates for starting emails. Electronic and paper notepads were used to assist with recalling information that was not available within the software. For example, a CR used a paper notepad to keep track of tasks related to individual INDs that she needed to complete.

The most popular software among reviewers was DARRTS, GS Review, and Microsoft Word. In our semi-structured interviews, NCRs and CRs indicated that they most often used DARRTS and GS Review; however, the software was often slow and unstable. In one user research session, DARRTS froze for more than 30 seconds, and in another it crashed altogether. We consistently received requests for improved information retrieval for clinical summaries.

By contrast, the most popular software among RPMs was Outlook. RPMs used Outlook for emails and scheduling with shared calendars, SharePoint folders, and utilizing add-ins such as Cisco WebEx and Microsoft OneNote. In our affinity ranking survey, respondents indicated that information retrieval and conducting the individual review were the most important to them during the review process.

Introducing FDA-specific services and plugins could help meet these daily requirements while allowing staff to continue using existing software. For example, sharing search index results across DARRTS and Outlook would enable reviewers to retrieve the same search results on either platform. Email templates could be made available through a Microsoft Outlook add-in when starting emails. Introducing Customer Relationship Management (CRM) software could support role-specific needs such as accessing and annotating sponsor documents and scheduling meetings across IND applications. Multiple desktops could be used for saving and restoring workspace state. For example, reviewers could use Amazon WorkSpaces to manage all documents and browser windows associated with a given IND application.
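
The snippet below is a minimal sketch of the shared-index idea: a single inverted index that any tool could query so the same records are returned regardless of entry point. The record identifiers and text are hypothetical and unrelated to the actual DARRTS or Outlook data models.

```python
# Minimal sketch: a shared inverted index queried by multiple tools.
# Record identifiers and text below are hypothetical.
from collections import defaultdict

class SharedIndex:
    def __init__(self) -> None:
        self._index = defaultdict(set)  # token -> set of record ids

    def add(self, record_id: str, text: str) -> None:
        for token in text.lower().split():
            self._index[token].add(record_id)

    def search(self, query: str) -> set[str]:
        tokens = query.lower().split()
        if not tokens:
            return set()
        results = set(self._index[tokens[0]])
        for token in tokens[1:]:
            results &= self._index[token]
        return results

index = SharedIndex()
index.add("IND-123456", "sponsor meeting request type B toxicology")
index.add("EMAIL-789", "scheduling type B meeting for IND 123456")
print(index.search("type B"))  # both records are returned regardless of source system
```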

5.5. Future work

In the future, we would like to include additional review team members to identify a broader set of everyday needs at the FDA. Time and resource constraints limited our enrollment to twenty rotating DP team members. As a next step, we plan to apply Contextual Design (Beyer & Holtzblatt, 1998) to address the multiple-document search, transfer, compare, consolidate, and reference needs among review team members.

6. Conclusion

In this study, we conducted a mixed-methods study with review team members from the FDA CDER DP to investigate how work gets done during the FDA's IND review process. We conducted a Contextual Inquiry (CI), semi-structured interviews, and an affinity ranking survey. The results highlighted several important design challenges. Individual reviewers needed to search, transfer, compare, consolidate, and reference content between multiple documents while writing safety reports, yet existing software required extensive workarounds to complete these tasks. Existing knowledge-sharing behaviors are important to the IND review process yet are not formally supported by existing software.

Acknowledgements

Thank you to Javier Muniz, MD, Michael David, MD, and the FDA DP group for your support. This project was supported in part by an appointment to the Science Education Programs at the U.S. Food and Drug Administration, Center for Drug Evaluation and Research, administered by ORAU through the U.S. Department of Energy Oak Ridge Institute for Science and Education.


Appendix A: Affinity Diagram Example


References

  1. U.S. Food and Drug Administration. What We Do. 2018. [Online] Available at: https://www.fda.gov/about-fda/what-we-do
  2. U.S. Food and Drug Administration. FDA's Technology Modernization Action Plan. 2019. [Online] Available at: https://www.fda.gov/about-fda/reports/fdas-technology-modernization-action-plan [Accessed 2020]
  3. U.S. Food and Drug Administration. Investigational New Drug (IND) Application. 2020. [Online] Available at: https://www.fda.gov/drugs/types-applications/investigational-new-drug-ind-application#Introduction
  4. Asuncion H. U. In situ data provenance capture in spreadsheets. IEEE Seventh International Conference on eScience. 2011. Available at: https://www.uwb.edu/getattachment/css/about/faculty/tech-reports/UWB-CSS-11-01.pdf
  5. Berndt E. R., et al. Opportunities for improving the drug development process: results from a survey of industry and the FDA. Innovation Policy and the Economy. 2006;6:91–121.
  6. Beyer H., Holtzblatt K. Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann Publishers; 1998.
  7. Coble J., Maffitt J., Orland M., Kahn M. Contextual Inquiry: Discovering Physicians' True Needs. AMIA; 1995.
  8. Daae J., Boks C. A classification of user research methods for design for sustainable behaviour. Journal of Cleaner Production. 2015;106:680–689.
  9. Fitzpatrick R. Strategies for evaluating software usability. School of Computing. 1999;353.1.
  10. Guest G., et al. Comparing Focus Groups and Individual Interviews: Findings from a Randomized Study. International Journal of Social Research Methodology. 2017.
  11. Kuprenas J. A. Implementation and performance of a matrix organization structure. International Journal of Project Management. 2003.
  12. Notess M. Understanding and Representing Learning Activity to Support Design: A Contextual Design Example. Orlando: s.n.; 2005.
  13. Parenky A., et al. New FDA draft guidance on immunogenicity. 2014.
  14. Pritchard P., et al. More Than A Robot: Designing for the Unique Advantages of Sending Humans to Mars. Ethnographic Praxis in Industry Conference Proceedings. 2019.
  15. Schmuckler M. A. What Is Ecological Validity? Lawrence Erlbaum Associates; 2001. pp. 419–436.
  16. Smithson J. Using and analysing focus groups: limitations and possibilities. International Journal of Social Research Methodology. 2000.
  17. Wixon D., Holtzblatt K., Knox S. Contextual design: an emergent view of system design. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1990.
