Ann Fam Med. 2020 Jul;18(4):334–340. doi: 10.1370/afm.2550

Table 4.

Model Agreement and Specialty Match Using 2016 Data

| Specialty | Count | Models Predicting the Same Specialty, % | Specialty Match, %ᵃ | Specialty Mismatch, %ᵇ |
|---|---|---|---|---|
| Allergy/immunology | 1,625 | 97.1 | 89.6 | 7.5 |
| Anesthesiology | 16,110 | 97.9 | 94.3 | 3.6 |
| Cardiology | 11,170 | 96.9 | 90.4 | 6.5 |
| Dermatology | 5,498 | 98.8 | 96.7 | 2.1 |
| Emergency medicine | 18,663 | 98.3 | 87.0 | 11.3 |
| Endocrinology | 2,497 | 95.8 | 83.3 | 12.5 |
| Gastroenterology | 5,960 | 97.2 | 92.4 | 4.8 |
| Hematology-oncology | 5,572 | 94.9 | 84.9 | 10.0 |
| Infectious disease | 2,328 | 91.1 | 61.2 | 29.9 |
| Nephrology | 3,691 | 96.7 | 86.9 | 9.8 |
| Neurology | 6,217 | 94.5 | 83.1 | 11.4 |
| Neurosurgery | 2,008 | 80.6 | 48.3 | 32.3 |
| Obstetrics and gynecology | 11,505 | 96.7 | 90.6 | 6.1 |
| Ophthalmology | 8,755 | 99.1 | 97.9 | 1.2 |
| Orthopedic surgery | 11,095 | 94.6 | 86.1 | 8.5 |
| Otolaryngology | 4,262 | 96.8 | 89.5 | 7.3 |
| Pathology | 4,831 | 99.3 | 97.8 | 1.5 |
| Physical medicine and rehabilitation | 3,438 | 83.2 | 41.6 | 41.6 |
| Plastic surgery | 1,795 | 80.7 | 42.2 | 38.5 |
| Primary care | 101,498 | 98.3 | 92.6 | 5.7 |
| Psychiatry | 14,974 | 97.9 | 92.1 | 5.8 |
| Pulmonology | 5,395 | 96.1 | 83.2 | 12.9 |
| Radiation oncology | 1,903 | 95.9 | 91.0 | 4.9 |
| Radiology | 11,816 | 99.1 | 96.4 | 2.7 |
| Rheumatology | 2,030 | 97.6 | 91.7 | 5.9 |
| Surgery | 13,278 | 91.7 | 77.7 | 14.0 |
| Urology | 4,579 | 97.3 | 94.5 | 2.8 |
| Overall | 282,493 | 97.0ᶜ | 89.4ᶜ | 7.6ᶜ |

For this analysis, we applied the 2014, 2015, and 2016 combined random forests to the 2016 test data, for a total of 3 predictions based on prescribing and procedure data for a single year. Model agreement is defined as all 3 models predicting the same specialty.

ᵃ All 3 models predicted the self-reported specialty.

ᵇ All 3 models predicted a specialty that differed from the self-reported category.

ᶜ Mean across all specialties, weighted by the number in each specialty.
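For readers who want to reproduce the table's columns from model output, the sketch below shows one way to compute them. It is a minimal pandas example, not the paper's actual pipeline: the column names (self_reported, pred_2014, pred_2015, pred_2016) are hypothetical placeholders for the three models' predictions on the 2016 test data. Note that in every row of the table, Specialty Match plus Specialty Mismatch equals Model Agreement, so a mismatch here means all 3 models agreed on a specialty other than the self-reported one; the code reflects that.

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: one row per physician, with the self-reported
# specialty and the prediction from each year's combined random forest.
# These column names are illustrative, not from the paper.
df = pd.DataFrame({
    "self_reported": ["cardiology", "cardiology", "surgery"],
    "pred_2014":     ["cardiology", "cardiology", "surgery"],
    "pred_2015":     ["cardiology", "nephrology", "surgery"],
    "pred_2016":     ["cardiology", "cardiology", "surgery"],
})

preds = df[["pred_2014", "pred_2015", "pred_2016"]]

# Model agreement: all 3 models predict the same specialty.
agree = preds.nunique(axis=1) == 1

# Specialty match (footnote a): all 3 models predict the
# self-reported specialty.
match = preds.eq(df["self_reported"], axis=0).all(axis=1)

# Specialty mismatch (footnote b): all 3 models agree on a specialty
# that differs from the self-reported one, so match + mismatch = agreement.
mismatch = agree & ~match

# Per-specialty percentages, mirroring the table's columns.
by_spec = df.groupby("self_reported").agg(
    count=("self_reported", "size"),
    agreement_pct=("self_reported", lambda s: 100 * agree[s.index].mean()),
    match_pct=("self_reported", lambda s: 100 * match[s.index].mean()),
    mismatch_pct=("self_reported", lambda s: 100 * mismatch[s.index].mean()),
)

# Overall row (footnote c): mean across specialties weighted by specialty
# size, which equals the simple mean over all physicians.
overall = {
    col: np.average(by_spec[col], weights=by_spec["count"])
    for col in ["agreement_pct", "match_pct", "mismatch_pct"]
}
print(by_spec)
print(overall)
```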