
TABLE 1.

Translator performance and lessons learned when applying Translator Reasoners to answer MCAT questions over three 4‐h moderated hackathons

Hackathon date | Success rate | Lessons learned

January 2019 | 0/5 questions (0%)
  • Missing/incomplete data sources
  • Errors with existing data sources
  • Inadequate specificity with existing data sources
  • Entity identifier mismatches
  • “One‐hop” graph queries insufficient

July 2019 | 3/4 questions (75%) a
  • Missing/incomplete relationships between entities
  • Limited or absent annotation for certain data sources
  • Lack of relative/contextual relationships
  • “Opposites” under‐represented or absent in data sources
  • “Synonymization” or equivalence of text terms challenging
  • Lack of differentiation or unclear differentiation between data types (e.g., disease vs. phenotype, protein vs. gene)
  • Multiple implementation strategies (e.g., direct match, process of elimination, and inference) improve success rate

September 2019 | 5/5 questions (100%) b
  • “Two‐hop” graph queries and other more complex queries more effective than “one‐hop” queries (see the query sketch after the table)
  • Query directionality and choice of predicate important
  • Missing or incomplete predicates
  • Terminology challenges with pluralities
  • Exact matches to correct answers uncommon
  • Generalization and inference required for terms that lack specificity
  • Careful review of supporting evidence improves success rate
  • Biomedical input facilitates developer identification of correct answer

Abbreviation: MCAT, Medical College Admission Test.

a The goal was to tackle five questions for this hackathon session, but only four questions were attempted due to time constraints.

b The correct answer to one of the five questions was confirmed during a subsequent November 2019 meeting with the moderator and the lead developer of one of the Translator Reasoners, who was unable to attend the September 2019 hackathon.
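
The recurring lesson about query structure can be made concrete with a small sketch. The snippet below contrasts a “one‐hop” query graph (a single edge between two nodes) with a “two‐hop” query graph (an intermediate node chaining two edges), loosely following the TRAPI‐style node/edge layout used by Translator Reasoners. The endpoint URL, CURIEs, and Biolink categories and predicates shown are illustrative assumptions, not the exact queries posed during the hackathons.

```python
# Minimal sketch contrasting a "one-hop" and a "two-hop" query graph, loosely
# following the TRAPI-style layout (nodes + edges) used by Translator Reasoners.
# The CURIEs, Biolink categories/predicates, and endpoint URL are illustrative
# assumptions, not the exact queries posed during the hackathons.
import json
import urllib.request

# One-hop: ChemicalEntity --treats--> Disease (a single edge between two nodes).
ONE_HOP = {
    "message": {
        "query_graph": {
            "nodes": {
                "n0": {"categories": ["biolink:ChemicalEntity"]},
                "n1": {"ids": ["MONDO:0005148"],  # type 2 diabetes mellitus (example)
                       "categories": ["biolink:Disease"]},
            },
            "edges": {
                # Directionality and predicate choice matter: n0 is the subject,
                # n1 the object, and the predicate constrains the relationship.
                "e0": {"subject": "n0", "object": "n1",
                       "predicates": ["biolink:treats"]},
            },
        }
    }
}

# Two-hop: ChemicalEntity --affects--> Gene --associated with--> Disease.
# The intermediate gene node lets a reasoner chain two relationships instead of
# requiring a single direct edge between the drug and the disease.
TWO_HOP = {
    "message": {
        "query_graph": {
            "nodes": {
                "n0": {"categories": ["biolink:ChemicalEntity"]},
                "n1": {"categories": ["biolink:Gene"]},
                "n2": {"ids": ["MONDO:0005148"],
                       "categories": ["biolink:Disease"]},
            },
            "edges": {
                "e0": {"subject": "n0", "object": "n1",
                       "predicates": ["biolink:affects"]},
                "e1": {"subject": "n1", "object": "n2",
                       "predicates": ["biolink:gene_associated_with_condition"]},
            },
        }
    }
}


def submit(query: dict, url: str = "https://example.org/reasoner/query") -> dict:
    """POST a query graph to a (hypothetical) Reasoner /query endpoint and return its JSON."""
    body = json.dumps(query).encode("utf-8")
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=60) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    # Print the two query graphs rather than calling a live service.
    print(json.dumps(ONE_HOP, indent=2))
    print(json.dumps(TWO_HOP, indent=2))
```

Both graphs pin the disease node to a specific identifier; the two‐hop version routes through an intermediate gene node, which is the kind of more complex, multi‐edge query that the September 2019 session found more effective than single‐edge lookups.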