Table 2. Themes raised in researcher ESR statements and in panelist responses (n = 35 proposals).
Theme | Researcher statement frequency (n = 35 proposals) | Panelist response frequency (n = 35 proposals) | Refers to issues that pertain to…
--- | --- | --- | ---
Representativeness | 18 | 6 | Any risks or concerns that arise from insufficient or unequal representation of data, participants, or intended user population (e.g., excluding international or low-income students in a study of student well-being) |
IRB purview | 14 | 6 | Any risks or concerns regarding the research that fall under IRB purview (e.g., participant consent, data security, etc.) |
Diverse design and deployment | 13 | 8 | Incorporating relevant stakeholders and diverse perspectives in the project design and deployment processes (e.g., consulting with parents who have been historically disadvantaged to develop fairer school choice mechanisms) |
Dual use | 10 | 8 | Any risks or concerns that arise due to the technology being co-opted for nefarious purposes or by motivated actors (e.g., an authoritarian government employing mass surveillance methods)
Harms to society | 10 | 5 | Potential harms to any population that could arise from the research (e.g., job loss due to automation)
Harms to subgroups | 7 | 11 | Potential harms to specific subgroup populations that could arise from the research (e.g., technical barriers to using an AI system that are prohibitive for poorer populations)
Privacy | 4 | 1 | Any risks or concerns related to general expectations of privacy or control over personally identifiable information (e.g., consequences of mass surveillance systems for individuals’ control over their information) |
Research transparency | 3 | 0 | Providing sufficient and accessible information so that others can understand and effectively employ the research, where appropriate (e.g., training modules for interpreting an AI model)
Accountability | 2 | 2 | Questions of assigning responsibility or holding actors accountable for potential harms that may arise (e.g., how to assign responsibility for a mistake when AI is involved) |
Other | 2 | 3 | Other issues not covered above (e.g., intellectual property concerns) |
Tool or user error | 2 | 4 | Any risks or concerns that arise from tool/model malfunction or user error (e.g., human misinterpretation of an AI model in decision-making) |
Collaborator | 1 | 1 | Any risks or concerns that specifically relate to a collaborator on the research project (e.g., whether a collaborator could credibly commit to a project on inclusivity when their platform was notorious for exclusive and harmful behavior) |
Methods and merit | 1 | 2 | Any risks or concerns reserved for methods and merit reviews of the grant proposal (e.g., whether model specifications are appropriate for the goals of the research) |
Publicness | 0 | 2 | Questions of using publicly available data for research when those who generated the data are unaware of the researchers’ intended use (e.g., use of Twitter data without obtaining express consent from affected Twitter users)
In their ESR statements, researchers most frequently raised issues of representativeness. In their feedback, panelists most frequently raised issues of harms to subgroups. Both researchers and panelists also commonly focused on diverse design and deployment, dual-use concerns, harms to society, and issues falling under IRB purview.