Table 2.

| CC Member Needs | Dashboard Element | Quotes |
|---|---|---|
| 1. Explore Workplace-Based Assessment Data | | FG1: A big part is specific EPA data. So, from each of the numerical EPAs. And then the narrative comments that go along with those. I4: First I look at all the outliers, if any of the EPA ratings that are 3 or less. And then go through the actual feedback with those just to see if they’re actually correlated. If the feedback correlates with an EPA rating that they got. |
| 1.1 EPA Acquisition Metrics | Figure 1 | I2: So usually would start just by looking at kinda total number of EPAs observed. So the EPAs per week total and then the expired ones that they have. And then I would just break those down based on the numbers for the last EPA period just to have an overall idea of how the resident has done. |
| 1.1.1 Comparative EPA Metrics | Figure 4 | FG2: Just kinda overall within residents within specific stages would kinda compare total number of EPAs with different rotations done just to see kinda what the trends were for residents in different years. I2: I think we’re happy – or I’m at least happy – without the comparison data on just looking at the resident metric dashboard just because we wanna get out of that mindset of potentially comparing residents. ‘Cause it may put undue pressure on certain residents. |
| 1.1.2 Contextualized EPA Metrics | Figure 5 | FG1: I think currently where it’s most helpful is seeing how many EPAs residents are getting on specific rotations. Because if you have one resident that rotates through general surgery and they’re getting 15 EPAs and then another one that comes through and they’re only getting three to five, then that kinda helps kind of assess them from that perspective as well. I5: … then you could see that their numbers were also low, then it would be a flag to talk to the resident to see what’s going on… To take it another level, it’d be nice to see like a target for each rotation in terms of what the history was last year. |
| 1.1.3 Expired EPAs | Figures 1–3 | I3: Sometimes the attending just doesn’t fill it out. Like they can’t get any more. But to get an idea of how many are expiring in general and in particular for that resident. |
| 1.2 Quantitative EPA Data | Figures 1–3 | I2: The trend is the most important thing. So it’s looking at the overall number and then what they’ve done ‘cause you can clearly see the trend if they’re down in the 2s and 3s versus if they’re up in the 4s and 5s. |
| 1.2.1 Clinical Presentation and Patient Demographics | Figure 3 | I1: Or did they present to you sort of a representative sampling of procedures that they would be expected to do in the ED? I3: Just to understand where, like what they’ve been experiencing, where there may be gaps. Maybe they’re only getting like middle-aged people and they’re not getting the geriatric experience, for example. |
| 1.3 Narrative EPA Data | Figure 2 | I1: So it’s handy actually to have the narrative feedback where you can just sort of look with one click or one mouse over to see all of the things that have been said in that area. So that’s a big timesaver. |
| 1.4 Narrative Assessments | Figure 6 | I4: I just use (narrative assessments) to get an overall picture of how the resident’s doing. If they all sort of paint the same picture then it’s great and you get a better feel of where they’re at than just the feedback data on the EPA. They tend to be a little bit more in-depth. So it just gives you a better overall picture and a better – just gives you a better feel of where they’re at. |
| 2. Explore Other Assessment Data | | FG1: I suspect because so much of this information is hard to collate together, we probably haven’t even dreamed up what would even be the best. Because once we actually have some sort of usable interface to look at data, we can look at more of it and expect more of it. Whereas right now I think we’re just wrapping our heads around collating bits and pieces from so many sources […] But if it was all on one interface dashboard, we would look at it and go, “Awesome, this would be a great place to now add this bit and this bit and this bit.” |
| 2.1 Resident Self-Assessment | | I4: [Resident self-assessments] are very useful, mostly ‘cause it kinda summarizes a lot of the data that you get from the EPAs. So it gives you a little bit more of a background of what they were on... then I get a bit of a better idea of where they think they’re going in the next few months. And it really helps me with their goals especially. So their goals for their past rotation, their goals for their next rotation. And that way you can kinda follow-up with their EPAs and correlate their EPAs with their goals that they identified and make sure that they’re actually getting to where they wanna be. |
| 2.2 Competence Committee Decisions | Figure 7 | I1: So I try to look through the report or the minutes from the previous competency committee to just refresh my memory on what we were saying our priorities for the resident were. |
| 2.3 Exam Scores | Figures 8–9 | FG2: So we do now twice annual written exams and once annual mock oral exams. And so it would be nice to see like a running tally of their exam scores across the years to see where they’re trending and where they rank. |
| 2.4 List of Curricular Requirements | | CC2: What are they missing? Scholarly activities? Required activities for the year that have been ticked off – how many do they have checked off? Should be a constant reminder for them. A ‘tick sheet’ of their activities. I5: And then similarly… being able to tick off like I’ve done PALS, I’ve done my airway course, I’ve done my toxicology shifts. I’ve finished [the research course]. I’ve finished [the] Indigenous wellness [course]. I finished [the] ImageSim [modules]… so it’s very obvious when a resident hasn’t finished something. |
| 3. Understand the Data in Context | | FG2: I would look at EPA numbers just overall to get a sense of how many they’re doing. Then I would focus in on – I’d have a quick scan of what rotations they’d done recently to get a sense of whether or not that was a reasonable number of EPAs or not. Then I would move down into the specific stage of training that they were in and I would look at the EPAs they’d done in terms of scores as well as narrative comments. And I would filter it for the last three months to make sure I’m looking at the most recent data. […] And then I would take their narrative comments from previous – like their previous summary of how they were doing and what they wanted to work on to make sure that those things had been incorporated into this quarter of their work. |
| 3.1 Organization for Efficiency | | FG1: It’s, yeah, from our perspective I think it’s more [the dashboard’s] organization and having things like readily available as opposed to the [previous] system right now where it’s click click download click click. So it’s more the things together that can be easily accessed. |
| 3.1.1 Orienting Features | | I1: Maybe just like a one-page job aid to how to get the most out of the dashboard. Just so that if there’s anything like that, people could quickly just scan a one-page summary and say, “Oh! I see it can do that. I didn’t realize,” or something like that. I4: I think one thing that would help people especially the new people coming in is just to have the icons – if you could label the icons. Then the little dropdown menus that you have, just to get an idea of what they actually are. Some of the things that you don’t realize you can – initially I didn’t realize that I could actually click on these so I was always kinda fumbling through this. |
| 3.2 Rotation Schedule | Figure 5 | FG1: It’d be nice also if you could have a bar of like what rotation – clinical rotation – they were on. So you could be like, “Oh well they didn’t get many this month but they were on plastic surgery, and we know that they’re only gonna get a handful.” But then the next block they were on emerg and then they got only seven which is way below what we’d expect. |
| 3.3 Date Filter | Figure 2 | I4: The first thing I do is I try and narrow down the data just from the period that we’re looking at. So I just try and get the date filter right for just the block that we’re looking at. And eliminate all the other pieces of the data to make it a little bit cleaner to look through. |
| 3.4 Rater Context | | FG1: The quality of the data we have varies from faculty to faculty. Some are very good about filling out EPAs and getting a sense of what they mean. Other faculty don’t understand as much. There’s also quite a variability in the quality of the narrative comments. Some people are very descriptive and get to the heart of where the residents’ thought processes are. And other faculty write very non-descript vague statements about what was done. |
| 4. Ensure the Security of the Data | | FG1: It doesn’t matter to me where [the data is] stored as long as it’s secure. In terms of where it’s viewed, as long as we can – all committee members – can access it and look at changes to the screen in real time. I2: So I think it actually helps enhance security for what we’ve been doing for our resident reviews as opposed to like downloading of the Excel documents. I5: The stakes of this data being compromised are much higher because you could have a PGY5 resident applying for a job somewhere and someone diving into their stuff. If they saw something they didn’t like might say, “Oh we’re not gonna give them this job.” |

Legend: CC = competency committee meeting field notes from September 2018 (1) and March 2019 (2); FG = focus groups with the program director and competency committee chair from September 2018 (1) and March 2019 (2); I = interviews with competency committee members 1 through 5 in June 2019.