F1000Research. 2026 Jan 24;12:79. Originally published 2023 Jan 19. [Version 3] doi: 10.12688/f1000research.129219.3

Effects of visual art observation on technical skills in novice healthcare learners: A scoping review

Koji Matsumoto 1,a
PMCID: PMC12759281  PMID: 41488825

Version Changes

Revised. Amendments from Version 2

A paragraph has been added to the end of the Limitations section to address the reviewers’ comments. In addition, several typographical errors have also been corrected.

Abstract

Background

Health professional education has recently used visual art observation to promote various observation-related technical skills. This article maps the studies on such interventions, scrutinizes what they measured as observational skills, and discusses their effectiveness.

Methods

Following the PRISMA Extension for Scoping Reviews, a scoping review was conducted. Publications from 2001 onward were identified by searching four databases and by hand searching. The author screened each publication against pre-designed eligibility criteria: participants were novice healthcare learners enrolled in visual art observation training; the study aimed to evaluate the effect of the intervention on technical skills related to observation; and the skills were objectively measured. The author extracted relevant information from the included papers without additional inquiry to the study authors. The extracted information is presented in both tabular and descriptive formats.

Results

A total of 3,157 publications were identified, of which 18 articles were included. Few studies employed valid and reliable experimental designs. The comparatively valid evidence is that participants listed more elements or signs for artistic or medical images.

Conclusions

Sound evidence is lacking for all the technical skills the interventions intend to foster. Observation skills for artistic images have not been shown to transfer to technical skills, nor do the studies show that the interventions promoted accurate diagnoses or reduced misdiagnoses. Additionally, the evidence on verbalizing skills is not isolated from the impact of discussions, and its transfer to actual communication is unclear. For the remaining skills, including those directly examined for promoting accurate diagnosis or reducing misdiagnosis, there are not enough valid studies. Moreover, there may be promising alternatives to visual art observation for cultivating such technical skills, but no comparative studies have been conducted.

Keywords: visual arts observation, diagnostic accuracy, observation skills, verbalizing skills, perceptual learning, mental images, pattern recognition, intuition, scaffolding

Introduction

Observation is crucial for advancing healthcare in various health professions. 1 5 Though health professional education did not include cultivating observation skills until recently, 2 , 6 intervention studies on such training, i.e., studies that both implement training practices and corroborate their effects, have grown rapidly, sometimes in collaboration with art museums. 7 These intervention studies usually use visual arts, such as paintings, based on the assumption that observing visual arts resembles making observations in medical examination, or that observing visual arts has potential applications when making medical observations. 6 , 8 11

Advocates argue that such training can improve the following technical skills related to observation (in this article, "technical" means unique to a specific job role): tolerating being unsure for long enough to sustain a meaningful analysis; 12 grounding inferences in evidence; 12 assessing medical images and patient symptoms accurately to reduce misdiagnosis; 13 , 14 metacognitive awareness; 12 and capturing and clearly documenting what is observed 15 in teamwork. Concerning accurate diagnosis in patient interviews, the skills also include: gleaning nonverbal cues to support how they care for a patient, 15 such as reading facial expressions; 6 understanding or extracting the patient's background and contexts; and perspective-taking as cognitive empathy, such as knowing and postulating how another is thinking and feeling, 16 for patients. 15 Thus, in this article, observation skills are defined broadly.

Previous reviews on the use of visual arts in developing observation skills in medical and nursing education have been limited in their scope and analysis. For example, Gelgoot et al. 17 and Elhammoumi and Kellam 18 reviewed several studies on developing observation skills in medical or nursing education; however, they only listed the main findings without discussing them. Though Turton et al. 19 reviewed the intervention studies that used various art forms for training in palliative care, their conclusions cannot be validated because they did not report all the literature used. Ike and Howell's 20 review discussed only the validity of the psychometric scales or quantitative observation metrics used in intervention studies. Although Perry et al., 21 Mukunda et al., 22 and Alkhaifi et al. 23 concluded that such studies offer reasonable evidence in favor of using visual arts to improve clinical observation skills in medical education, they did not sufficiently scrutinize how the included studies defined and measured observational skills. Additionally, Mehta and Agius 24 focused on diagnostic skills in medical education. They argued that the evidence for the interventions' effect on diagnostic abilities is insufficient, as only one study had investigated it, and that we should examine whether the skills gained are transferable to clinical practice.

Thus, the present article maps the intervention studies that used visual art observation to develop observation skills in novice healthcare learners, namely, students, residents, trainees, or inexperienced workers in related majors or domains. It also scrutinizes what these studies measured as observational skills and discusses their effects on the technical skills that advocates claim they foster.

Therefore, this review will answer the following research questions (RQs): 1) What key features or trends characterize visual art observation training in intervention studies aimed at improving observation skills? 2) How did the studies measure the outcome related to observation skills, and what results were obtained? 3) Do the observational skills that the intervention studies intend to foster or measure lead to expert performance? 4) What are the recommended methods and critical issues in future research?

The remainder of this paper is structured as follows. The Methods section explains the process of this scoping review. The Results section gives an overview of the intervention studies and answers RQ1 and RQ2. The Discussion section discusses the review results regarding RQ3 and presents limitations and implications for future research as an answer to RQ4. Finally, the main conclusions are presented.

Methods

Search strategy and selection criteria

I designed a scoping review protocol following the PRISMA Extension for Scoping Reviews (PRISMA-ScR). 25

As a preliminary search to plan the search strategy, I collected 12 relevant intervention studies 14 , 26 36 from previous review studies 17 24 , 37 through a manual search by the end of 2023. These results form part of version 1 of the published paper. 38

Based on this preliminary search, studies were included if: (i) participants were novice healthcare learners such as undergraduate or post-graduate students, residents, and trainees in any health professional education; (ii) participants were enrolled in a visual art observation training program of any duration; (iii) the main objective of the study was to evaluate the effect of the intervention on technical skills related to observation; (iv) these skills were objectively measured, such as using psychometric scales, quantitative metrics, or grading participants' statements by predetermined criteria; and (v) the main text was written in English.

On the other hand, studies were excluded if: (i) the full text could not be obtained; (ii) the main text was written in a language other than English; (iii) they were review articles; (iv) they were intervention studies that used visual art but did not aim to develop observation skills; (v) the interventions included other activities besides pre- and posttests, such as drawing or creating visual arts and observing medical images; (vi) they were studies on "graphic medicine," 39 which used comics that include written dialogue; (vii) the outcome measures for the skills were only subjective, irrespective of the method used (e.g., participants' subjective evaluation of their skills, their free comments on the interventions, and scales of their impressions of the interventions); (viii) they extracted key concepts from the participants' comments, as in thematic analysis, 40 because such methods are not suitable for determining the learning effects for the entire population concerned, since the concepts are identified from the comments of only a few participants; or (ix) the participants were not novice healthcare learners, or it was not clear that they were.

To confirm the rigor of the selection process, the reference lists of the included studies and of relevant reviews identified through the search were hand-searched for other relevant papers. Moreover, the results were compared with those of the preliminary search, and any missing studies were added to the final search result.

Data collection

The search included publications from MEDLINE, ERIC, APA PsycInfo, and Open Dissertations in EBSCOhost published since 2001, the year the seminal paper by Dolev et al. 27 was published. The search was undertaken on January 16, 2025. The search terms ( Table 1) were chosen to retrieve as many as possible of the papers collected in the preliminary search and of those included in Mehta and Agius's 24 recent scoping review, while trial searches were repeated to keep the volume of results manageable. Along with the search terms, some subject terms were set as exclusion parameters, as shown in Table 1. After these databases were searched separately, an integrated search across them was conducted to exclude duplicate records automatically. Including CINAHL would have been desirable, but my institution does not subscribe to that database.

Table 1. Search terms entered into the databases.

TI (((art OR arts OR “visual thinking” OR “visual literacy” OR “visual art*” OR (observati* AND skill*)) AND (train* OR student* OR resident* OR educati*)) NOT (“art therapy” OR “art education” OR “arts education” OR “state of the art” OR “state-of-the-art” OR “the art and science”) OR ((art OR visual) AND (diagnostic OR perception) AND (train* OR student* OR resident* OR educati*)) OR (((art OR arts) AND observ* AND patient*) NOT (“art therapy” OR “art education” OR “arts education”))) NOT SU (“academic achievement” OR “language arts education” OR “art therapy” OR “middle school students” OR “high school students” OR “elementary education” OR “elementary secondary education” OR “ secondary education” OR “art teachers” OR “preservice teacher education” OR “elementary school student” OR “childhood education” OR “teacher education” OR “elementary school students” OR “special education” OR “science education” OR drawing OR “creative arts therapy” OR music OR “physical education” OR “career development” OR “middle school teachers” OR dance OR drama)

Data analysis

All the records retrieved from the searches were loaded into Excel. Duplicates that could not be removed automatically were removed manually. I then screened titles, abstracts, and full-text articles for eligibility, and extracted information relevant to the research questions from the papers of each included study without additional inquiry to the study authors. The extracted data are presented in tabular and descriptive formats, linked to the research questions.
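The manual deduplication step can be sketched as follows. This is a hypothetical illustration, not the author's actual workflow: it matches records on a normalized title key so that differences in capitalization or punctuation across databases do not hide duplicates.

```python
# Hypothetical sketch: deduplicating search records by a normalized title key.
# The field names ("title", "db") are illustrative, not from the review.
def normalize(title):
    # Lowercase and keep only alphanumeric characters, so punctuation and
    # capitalization differences do not prevent a match.
    return "".join(ch.lower() for ch in title if ch.isalnum())

def deduplicate(records):
    seen = set()
    unique = []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)  # keep the first occurrence of each title
    return unique

records = [
    {"title": "Visual art observation and diagnosis", "db": "MEDLINE"},
    {"title": "Visual Art Observation and Diagnosis.", "db": "ERIC"},
]
print(len(deduplicate(records)))  # prints 1
```

In practice, matching on title alone can merge distinct records with identical titles, so a real workflow would also compare authors, year, or DOI before discarding a record.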

Results

I identified 3,157 publications, of which 18 articles 14 , 26 31 , 33 36 , 41 47 were included (see Figure 1 and the Appendix Table (Underlying data)). The 18 articles included three studies in veterinary education; their training and experimental designs were comparable to those in human health professional education.

Figure 1. Flow diagram for the scoping review process.
I will describe the results in line with the RQs.

RQ1) What key features or trends characterize visual art observation training in intervention studies aimed at improving observation skills?

Goals

Though three of these intervention studies stated their goals simply, such as "enhancing observation skills," the majority targeted specific skills, such as perception, interpretation, description, emotional recognition, reflective thinking, tolerance for ambiguity, communication skills or interest in communication, empathic skills, and collaboration with team members. Additionally, Agarwal et al. 26 aimed to increase the time participants spent analyzing a clinical image, the length of their responses, and the degree of clinically relevant descriptive content. Ho et al. 45 aimed to enhance visual literacy, namely, the ability to discriminate and use visual cues to communicate with others. Greige et al. 44 aimed to introduce participants to the world of visual arts. Moreover, two studies (Gurwin et al. 31 and Wolf et al. 47 ) aimed to develop observation skills specific to their respective areas of expertise.

Participants

The intervention studies were conducted for undergraduate or post-graduate students, residents, and trainees in various disciplines, such as radiology, dermatology, nursing, neurology, ophthalmology, and veterinary medicine. Although they were implemented in multiple grade levels and programs, none were multicenter studies. Twelve were conducted in the United States, including four studies at Yale University. Other studies took place in the United Kingdom, Italy, India, and Canada. Sample sizes were relatively small, ranging from 7 to 101.

Sessions

Sessions in these interventions were held as required courses, elective courses, and extracurricular activities. The frequency and duration of sessions ranged from a single session lasting only a few hours to multiple sessions over two months. Eleven studies included activities in art museums.

Main activities/Protocol

In all the interventions, the participants described the images' details, and they often discussed their interpretations. Fifteen studies used discussion with or among the participants. The remaining three studies did not report this information, though they may simply have omitted it.

Some interventions utilized or applied established and well-known teaching programs. Six studies utilized Visual Thinking Strategies (VTS). VTS has basic tenets that must be obeyed: using the three core questions, paraphrasing with conditional language, linking, framing, resisting the impulse to insert facilitators’ opinions into the discussion, and a non-summative conclusion. 12 The core questions are: “What’s going on in this picture?” “What do you see that makes you say that?” and “What more can we find?” These questions could help participants to observe the entire painting carefully and make inferences based on the visual evidence. 48 Jasani and Saks 33 adapted VTS to the 4-step method, consisting of observation, interpretation, reflection, and communication. Gurwin et al. 31 utilized the Artful Thinking approach, which consists of Observing & Describing, Questioning & Investigating, Reasoning, Comparing & Connecting, Exploring Viewpoints, and Finding Complexity. 49 However, the authors did not seem to use Finding Complexity.

Moreover, ten studies instructed or clearly encouraged the participants to examine the whole image in detail. Thirteen interventions, including those using VTS, focused on or highlighted separating the description of visual evidence from its interpretation. Four taught the participants the concepts of visual elements, such as color, light, and texture.

On the other hand, the studies seldom reported providing a briefing or debriefing on the relationship between visual art observation and healthcare practice, though Ker et al. 12 recommend doing so. It is possible that they simply did not report this information.

Types and numbers of visual arts used

Among the interventions that provided this information, paintings (including reproduced images), sketches, photographs, sculptures, and ancient coins were often used. The choice tended to reflect the protocol, such as VTS, and the goals of the intervention. Because the number of sessions varied among the studies, the number of visual artworks used also varied, though nine studies did not report this number.

RQ2) How did the studies measure the outcome related to observation skills, and what results were obtained?

Experimental designs

Of the 18 included studies, four were randomized controlled studies with pre- and posttests, one was a randomized controlled study with posttests only, two were non-randomized controlled studies with both tests, one was a non-randomized controlled study with posttests only, and ten were uncontrolled studies with both tests.

Timing of the pre- and posttests

In the included studies, pretests were usually administered just before the sessions, and posttests at the end of or within one week after the sessions. However, eight studies did not provide this information. Fernandez et al. (2021) 41 and Fernandez et al. (2022) 42 had participants undergo cytology instruction or laboratory training between the immediate and delayed posttests.

Statistical analysis and tests

Many studies misused statistical tests in interpreting their results. Twelve applied t-tests to non-randomized samples, which violates the tests' prerequisite of random sampling. Agarwal et al. 26 reported conducting an analysis of variance (ANOVA), but judging from the test results they presented, they appear instead to have inappropriately run repeated t-tests for all pairs. Gurwin et al. 31 should likewise have used an ANOVA instead of t-tests for the pre- and posttest increments.
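Why repeated pairwise t-tests are inappropriate can be illustrated numerically: each test carries its own chance of a false positive, so the familywise Type I error rate grows with the number of comparisons, which is the problem an ANOVA avoids. The sketch below is illustrative only (it assumes independent comparisons, which gives an approximate upper bound) and does not reproduce any calculation from the reviewed studies.

```python
# Illustrative calculation: the familywise Type I error rate when m pairwise
# t-tests are each run at significance level alpha. Assumes independent
# comparisons, so this is an approximate upper bound.
def familywise_error(num_comparisons, alpha=0.05):
    # P(at least one false positive) = 1 - P(no false positive in any test)
    return 1 - (1 - alpha) ** num_comparisons

# Three groups imply 3 pairwise t-tests; four groups imply 6.
print(round(familywise_error(3), 3))  # 0.143
print(round(familywise_error(6), 3))  # 0.265
```

Even with only three groups, the chance of at least one spurious "significant" pairwise difference is roughly 14%, far above the nominal 5%.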

Outcome measures and their results

As measures of observation skills, all the intervention studies used visual images: eight used both clinical and artistic images, nine clinical only, and one artistic only. The number of images used in each test varied widely, from two to 15, among the studies that reported this information.

In the tests, the participants were usually asked to describe visual features in the images; what features to write about varied. For artistic images, participants in five studies were asked to write what they observed, in two to answer the VTS core questions, and in one to spot the differences. For clinical images, the participants were asked to write freely about their observations, to write down as many (abnormal) signs as possible and identify them, and to write their (clinical) interpretations. Griffin et al. 29 asked participants to identify key visual features and comment on visual elements such as texture, lines and contour, color (or shading), and contrast for both image types.

Of the 16 studies that conducted pre- and posttests, one used the same images in both tests, twelve used different images, and three did not report this information. If images differ between the tests, procedures should be included to equalize their difficulty. As such equalization, three of the 12 studies randomly assigned one of two test versions as the pretest and the other as the posttest. Ho et al. 45 reported that similar complexity between the tests was confirmed by inter- and intra-assessment analysis but did not provide the details or data. Of the remaining eight studies, some stated that the authors assessed the difficulties as similar but provided no evidence, while others did not offer sufficient information. Indeed, the results of Fernandez et al. (2022), 42 which differed from theoretical predictions, are suspected to reflect a lack of such equalization.
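The counterbalancing used by the three studies can be sketched as follows. This is a hypothetical illustration of the general technique (random assignment of two test versions), not the protocol of any specific included study; the function and parameter names are invented for the example.

```python
import random

# Hypothetical sketch of counterbalancing two test versions: each participant
# is randomly given version "A" or "B" as the pretest and the other version
# as the posttest, so any difficulty difference between the versions is
# balanced across the sample rather than confounded with time of testing.
def assign_versions(participant_ids, seed=42):
    rng = random.Random(seed)  # fixed seed only to make the plan reproducible
    plan = {}
    for pid in participant_ids:
        pre = rng.choice(["A", "B"])
        plan[pid] = {"pretest": pre, "posttest": "B" if pre == "A" else "A"}
    return plan

plan = assign_versions(range(6))
# Every participant sees both versions, once each.
assert all(p["pretest"] != p["posttest"] for p in plan.values())
```

Because roughly half the sample takes each version first, a systematic difficulty gap between the versions inflates one subgroup's gain and deflates the other's, leaving the group mean approximately unbiased.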

The scoring criteria by which the studies quantitatively or qualitatively evaluated the descriptions varied. Greige et al. 44 was excluded from the analysis of the criteria owing to serious suspicions of inconsistencies in the points allocated to each task between the tests, the use of a rubric mixing multiple perspectives, and unknown details of that rubric. Snigdha et al. 46 was also excluded because only the composite score is shown, without explaining how each task was scored or how the scores were combined.

The validity of the results for each criterion was then examined using the following principles:

  • Results obtained with inappropriate statistical tests are adopted, but the test outcomes themselves are ignored and only the measured values are considered.

  • The results with problems in testing procedures are not adopted, specifically when reasonable procedures to equalize difficulty cannot be confirmed, such as when different images were used between tests or when it is unknown whether the same or different images were used in each test.

  • Priority is given to results from the studies that had valid experimental designs.

  • The criteria in each study are categorized based on their qualitative similarity, though not in a strict manner, as the images used in the testing differ across the studies.

  • Because the included studies are not multicenter, only results from three or more studies are considered except for those not adopted in each category.

  • Only result trends are considered, i.e., positive or negative, as the criteria scales also vary between the studies.

Table 2 summarizes the scoring criteria and their results for artistic images. Roughly classified, the categories are Describing more elements, Using more words, Using more subject words, Using more technical vocabulary, Critical thinking, and Other thinking skills. In Describing more elements, positive results tended to outnumber negative ones. None of the other categories has enough studies to consider.

Table 2. The scoring criteria and their results for artistic images.

Category Criteria Results Studies Control experiments Both pre- and posttest Notes
Describing more elements Number of elements identified (marked with the rubric) Significantly higher for both immediate and delayed posttest than for pre-test Fernandez et al. (2021) 41 No Yes D,S
Number of objective observations (rated by experts) C: no significant change. Iv: significant increase in the delayed posttest compared with the immediate posttest Fernandez et al. (2022) 42 Yes Yes D
Number of accurate observations (marked with the expert rubric) For both groups, significantly improved from the pre-test to the immediate posttest and delayed posttest Fernandez et al. (2022) 42 Yes Yes D
Number of elements identified (in an answer to the VTS core questions; scoring details unknown) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Observation skill: identifying the figures/objects and the details in the background of the scene (in answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Frequency of descriptions of key visual features and visual elements (preset by an arts educator) Iv>C, but not significant Griffin et al. 29 NR No S
Frequency of single factual declaration Significantly increased Klugman et al. 34 No Yes S
Frequency of descriptions of names or identifying something (as an answer to the VTS core questions) Increased, but not significant Poirier et al. 36 No Yes S
Frequency of descriptions of a single factual declaration (in an answer to the VTS core questions) Increased, but not significant Poirier et al. 36 No Yes S
Using more words Number of words used for the description No significant difference between the groups/Iv increased significantly from pre-test to immediate and delayed posttest Fernandez et al. (2022) 42 Yes Yes D
Number of words used for the description Significantly increased Klugman et al. 34 No Yes S
Number of words used for the description (in an answer to the VTS core questions) Iv>C, but not significantly, in the increment Ferrara et al. 43 R Yes D
Using more subject words Number of subjective observations (rated by experts) No significant change over time Fernandez et al. (2022) 42 Yes Yes D
Using more technical vocabulary Number of specific (technical) vocabulary Both groups demonstrated significant increases on the delayed posttest compared with both the pre-test and the immediate posttest Fernandez et al. (2022) 42 Yes Yes D
Critical thinking Critical thinking, describing multiple hypothetical interpretations of the scene considering corresponding supporting reasonings (in answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Frequency of using terms related to critical thinking skill Iv>C significantly in the increments Gurwin et al. 31 R Yes D,S
Other thinking skills Linguistic expression, elaborating a narrative that deals with characters, dynamics, emotions, and relationships present in the scene in accordance with the type of image (in answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Problem-solving, considering the scene as a whole and the relationships among elements, and encompassing empirical details, personal experience, and prior knowledge (in answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D

Abbreviations: VTS: Visual Thinking Strategies; R: a randomized controlled study; NR: a non-randomized controlled study; S: inappropriate use of statistical tests or analysis, as mentioned in the text; D: no reasonable procedure to equalize difficulty could be confirmed when different images were used between tests, or it is unknown whether the same or different images were used in each test; Iv: the intervention group; C: the control group.

Table 3 similarly summarizes the scoring criteria and their results for clinical images. Roughly classified, the categories are Using more words, Using more subjective words, Using more technical vocabulary, Describing more signs, Describing more visual elements, Signs identified, Diagnostic comments, Observation of patients and their contexts, Speculative thinking, Looking longer, and Others, along with some unclassifiable criteria. In Describing more signs, positive results tended to outnumber negative ones. None of the other categories has enough studies to consider.

Table 3. The scoring criteria and their results for clinical images.

Category Criteria Results Studies Control experiments Both pre- and posttest Notes
Using more words Number of words used for the description Iv: significantly increased; C: increased, but not significantly Agarwal et al. 26 NR Yes D,S
Number of words used for the description No significant difference between groups or over time Fernandez et al. (2022) 42 Yes Yes D
Number of words used for the description Significantly increased Klugman et al. 34 No Yes S
Number of words used for the description Iv>C, significantly in 5 out of the 6 photos Pellico et al. 35 R (probably) No
Number of words used for the description Increased for radiographs, but not chart Wolf et al. 47 No Yes D
Number of words used for the description (in an answer to the VTS core questions) Iv>C, but not significantly, in the increment Ferrara et al. 43 R Yes D
Using more subjective words Number of subjective observations (rated by experts) No significant change over time in either group Fernandez et al. (2022) 42 Yes Yes D
Frequency of use of subjective terminology Decreased Jasani and Saks 33 No Yes D
Using more technical vocabulary Number of specific (technical) vocabulary Iv increased significantly, but not until the delayed posttest. C increased significantly from pretest to the immediate and delayed posttest Fernandez et al. (2022) 42 Yes Yes D
Describing more signs Frequency of clinical observations Iv: significantly increased; C: increased, but not significantly Agarwal et al. 26 NR Yes D,S
Frequency of describing visual diagnostic features (marked by a key) Iv’s posttest score significantly improved by 56%, and it was 10-12% higher than that of the other groups Dolev et al. (intervention between 1998 and 1999) 27 R Yes
Number of accurate observations (marked with the expert rubric) Significantly improved from the pretest to the immediate posttest for both groups. For the delayed posttest, C decreased significantly to pretest levels, while Iv did not change significantly from the immediate to the delayed posttest Fernandez et al. (2022) 42 Yes Yes D
Frequency of describing visual diagnostic features (marked by a key) Improved, but not significantly Garino (intervention in 2009) 28 No Yes S
Frequency of unique observations Increased, but not significantly Jasani and Saks 33 No Yes D,S
Frequency of single factual declaration Significantly increased Klugman et al. 34 No Yes S
Frequency of plausible objective clinical findings Iv>C, significantly in 5 out of the 6 photos Pellico et al. 35 R (probably) No
Describing more visual elements Number of objective observations (rated by experts) C: No significant change over time Iv: significant increase in the delayed posttest compared with both the pretest and the immediate posttest Fernandez et al. (2022) 42 Yes Yes D
Frequency of correct answers (key visual features and visual elements, pre-set by dermatologists) Iv>C, but not significantly Griffin et al. 29 NR No S
Signs identified Whether the abnormality was identified Significantly improved Goodman and Kelleher 14 No Yes D,S
Diagnostic comments Frequency of diagnostic comments Iv and C: decreased, but not significantly Agarwal et al. 26 NR Yes D,S
Observation of patients and their contexts Frequency of general patient observations Iv: significantly increased; C: increased, but not significantly Agarwal et al. 26 NR Yes D,S
Linguistic expression, elaborating a narrative that deals with characters, dynamics, emotions, and relationships present in the scene in accordance with the type of image (in answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Scope of interpretations involving the patient’s surroundings, the patient’s perspective, or emotional state Increased Jasani and Saks 33 No Yes D
Frequency of describing elements other than people Mixed, but C tended to be higher than Iv, and Iv<C significantly in 1 out of the 6 photos Pellico et al. 35 R (probably) No
Speculative thinking Critical thinking, describing multiple hypothetical interpretations of the scene considering corresponding supporting reasonings (in answer to the VTS core questions, marking with the rubric) Iv>C, but not significantly, in the increment Ferrara et al. 43 R Yes D
Frequency of the words related to speculative thinking Increased Jasani and Saks 33 No Yes D
Frequency of alternative diagnosis offered Iv>C, significantly in 5 out of the 6 photos Pellico et al. 35 R (probably) No
Looking longer Time spent Iv: increased, but not significantly; C: decreased, but not significantly Agarwal et al. 26 NR Yes D,S
Others Frequency of “self-deprecating” remarks Iv: increased, but not significantly; C: decreased, but not significantly Agarwal et al. 26 NR Yes D,S
Problem-solving, considering the scene as a whole and the relationships among elements, and encompassing empirical details, personal experience, and prior knowledge (in an answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Frequency of use of visual analogies Increased Jasani and Saks 33 No Yes D
Unclassifiable: Describing more elements, but unknown for signs or visual elements Number of elements identified (marked with the rubric) No significant difference between any of the tests (37.5%, 44.4%, and 36.8% for pretest, immediate posttest, and delayed posttest) Fernandez et al. (2021) 41 No Yes D,S
Number of elements identified (in answer to the VTS core questions, but scoring details unknown) Iv>C, but not significantly, in the increment Ferrara et al. 43 R Yes D
Observation skill: identifying the figures/objects and the details in the background of the scene (in answer to the VTS core questions, marking with the rubric) Iv>C significantly in the increment Ferrara et al. 43 R Yes D
Total number of observations made (scoring procedure unknown) Iv significantly increased. But posttest score was not significantly different between groups Ho et al. 45 NR Yes D,S
Unclassifiable Observations (marked by the key validated by three experts, but details unknown) Posttest scores for Session 1 were significantly higher than pretest. Posttest scores for Session 2 were higher, but not significantly, than those of Session 1. Grossman et al. 30 No Yes D,S
Interpretations (marked by the key validated by three experts, but details unknown) Posttest scores for Session 1 were significantly higher than pretest. Posttest scores for Session 2 were higher, but not significantly, than those of Session 1. Grossman et al. 30 No Yes D,S
The combined frequency of observations (marked by a key) with the general quality of the description (rated by experts and a medical student) Iv>C significantly Gurwin et al. 31 R Yes D,S
Descriptive ability (scoring procedure unknown) Iv significantly increased. But posttest scores were not significantly different between groups Ho et al. 45 NR Yes D,S
Observations (marking with the rubric *) Significantly increased in radiographs. Increased, but not significantly, in the chart. Wolf et al. 47 No Yes D,S
Clinical interpretation (marking with the rubric *) Significantly increased in radiographs. Increased, but not significantly, in the chart. Wolf et al. 47 No Yes D,S

Abbreviations and marks: Same as Table 2.

*

Due to a website error, the supplementary material describing the rubric could not be accessed.

In addition to using these images, Gurwin et al. 31 used the Reading the Mind in the Eyes test, a test of inferring a person’s mental state from images of their eyes. They reported no significant difference between the intervention group and the control group on this element, attributing the result to a ceiling effect. Klugman et al. 34 used Budner’s Tolerance of Ambiguity Scale and the Communications Skills Attitudes Scale and reported a significantly positive change in both scales, although these scales are subjective.

Hence, few studies had valid and reliable experiments, although such experimental designs must be quite difficult to execute in educational and training settings. The relatively valid evidence is that the participants’ descriptions of artistic or medical images revealed that they were able to list more elements or signs. This seemed to be related to the skill of capturing and clearly documenting what is observed (in teamwork). On the other hand, there are not enough valid studies for the other technical skills that the intervention studies intended to foster: tolerating being unsure for long enough to sustain a meaningful analysis; assessing medical images and patient symptoms accurately to reduce misdiagnosis; metacognitive awareness; gleaning nonverbal cues to support how they care for a patient; understanding or extraction of the patient’s background and contexts; and perspective-taking. Nor did any studies examine whether the participants could separate interpretation from description, although most included studies intended to teach this.

Discussion

In this section, we discuss the results of this review to answer RQ3: do the observational skills that the intervention studies intend to foster or measure lead to expert performance?

Listing more elements of artistic images

Generally, experts’ technical skills for observation do not transfer to images outside their expertise. Experimental studies showed that, in radiology, expertise transferred positively to the task of finding small low-contrast dots in phantom X-ray images 50 but not to the task of finding hidden targets in pictorial scenes. 51 The same can be said for visual recollection memory: expert cytologists and radiologists were no better at recognizing scenes or isolated objects than non-medical participants. 52

Thus, even if the interventions can facilitate listing more elements of artistic images, the transfer to clinical images needs to be investigated. No included studies did so, although some examined both artistic and medical images. Detecting differences in non-clinical images, such as those used by Snigdha et al., 46 may not be useful.

Longer periods of careful and systematic observations for accurate diagnosis

Listing more signs for medical images, as well as spending more time as Agarwal et al. 26 demonstrated, indicates observing carefully and thoroughly.

However, encouraging novice medical learners to look everywhere does not lead to an accurate diagnosis. In Kok et al.’s study of medical students, 53 neither a systematic viewing intervention (detecting radiograph abnormalities by viewing the images in a given order) nor a full-coverage viewing intervention (mentally dividing each image into nine imaginary segments (3×3) and inspecting each segment separately) outperformed a non-systematic viewing intervention, in which participants were urged to inspect whatever attracted their attention, even though systematic and full-coverage viewing took longer (though not significantly) than non-systematic viewing. Van Geel et al. 54 reported similar results.

In addition, observing for a longer period is unnecessary for an accurate diagnosis. Experts take less time than novices to diagnose accurately; indeed, they are more likely to misdiagnose when they take longer. 55 – 57

Moreover, observing and describing more visual features or signs in medical images or patient photographs do not lead to accurate diagnoses for novices. Novices, such as medical school students, generate hypotheses based on constant information input and cannot select appropriate hypotheses. 58 An experiment that presented electrocardiograms to non-medical laypersons revealed that participants usually misdiagnosed the problem because they failed to eliminate irrelevant features. 59 Nor is exhaustive feature detection necessary for experts: expert general practitioners can diagnose correctly without discovering all the disease features. 60 That is because experts can extract more helpful information about the given situation, immediately generate appropriate hypotheses, eliminate irrelevant hypotheses, and test the hypotheses. 61 – 63

In sum, for an accurate diagnosis it is important to find critical evidence, such as signs that distinguish a specific disease from other possibilities, when looking at medical images and interviewing patients. The effect of visual art observation on such discrimination needs to be investigated, but no included studies have done so.

For the same reasons, tolerance of ambiguity may lead to observing thoroughly but not to diagnostic accuracy. Indeed, Klugman et al. 34 reported a significant positive change in Budner’s Tolerance of Ambiguity Scale but did not demonstrate that tolerance promoted such observation. It is also necessary to investigate how tolerance works as a psychological factor in healthcare practice, because it may be not a trait but a state, i.e., susceptible to environmental changes, given the findings on misdiagnosis. The causes of misdiagnosis are complex, 64 including environmental factors, 65 and may not always be due to mere carelessness or lack of tolerance. Besides, the mental images that allow experts to diagnose quickly and accurately are compromises between fitting the data and minimizing the complexity of the model, 66 which shapes experts’ perception of certain classes of stimuli in the domain. 67 That many expert radiologists overlook a gorilla inserted into chest CTs 68 demonstrates that experts’ errors, such as premature closure, may be a byproduct of the development of technical ability. 67 , 69

Technical knowledge and skills based on observation skills

Cultivating mental images based on technical knowledge and skills is needed to find critical evidence to distinguish a specific disease. Many studies 55 , 70 – 72 and reviews 73 – 79 on medical image perception have shown that experts’ diagnosis is generally characterized by holistic and quick recognition. This is due to pattern recognition 60 , 80 , 81 or intuition composed of scripts, schemas, or mental models/images/representations (hereinafter collectively referred to as mental images). 61 , 82 Experts compare medical images or patients, at a glance, with mental images of normal and abnormal cases associated with domain-specific knowledge. 62 , 67 , 83 , 84 A study using fMRI 85 revealed that when radiologists looked at radiographs, the brain regions related to visual attention and to the encoding, storing, and retrieval of visual memory were activated.

Humans generally recognize things, not just medical images, by associating them with mental images formed through daily visual experiences. 66 For example, humans can recognize a scene’s gist and direct their gaze to areas of interest through their knowledge of the world. 86 – 88 This top-down control of gaze using mental images is common to humans. Likewise, experts’ intuition is cultivated through the accumulation of domain-specific knowledge and experience. Indeed, the number of mammograms observed is proportional to the accuracy of diagnosis. 89 On the other hand, concerning novices’ misdiagnoses, Brunyé et al.’s 90 review showed that search errors generally occur when novices overlook definitive features with subtle visual characteristics and repeatedly look at areas that are irrelevant to a diagnosis. Misdiagnoses also happen when novices (and experts) overlook signs of extremely low prevalence. Novices are also likely to make recognition errors due to insufficient mental images and knowledge. 70 , 84 This implies that novices’ errors are caused by a lack of top-down gaze control due to inadequate mental images.

Thus, it is critical to consider the participants’ technical knowledge and skills as a mediator variable in measuring observation skills, although no included studies did so.

Conscious and unconscious observation

The intervention studies emphasize conscious observation but do not seem to consider unconscious processes such as tacit knowledge 91 and intuition. For example, the theory of perceptual learning highlights the unconscious learning process in pattern recognition: such recognition is possible even if its rationale is not verbalized, although it is more effective when verbalized. 92 Experimental psychology has shown that simply viewing various paintings labeled with the artists’ names, presented in random order, can enable novices to differentiate between artists. 93 – 95 Similar results were obtained in experiments in which novices diagnosed psychopathological cases presented visually (through words) or aurally. 96 Additionally, novices can detect melanoma when benign and malignant cases are shown side by side, even without instruction such as the ABCD rule. 97 In contrast, instructing novices in such rules has little effect on diagnostic accuracy. 97 , 98

Learning modules based on theories of perceptual learning, such as Perceptual and Adaptive Learning Modules (PALM), have been developed to foster pattern recognition of medical images. 81 , 99 A complete discussion of them is beyond the scope of this article. All the relevant studies gathered here have reported excellent results for the diagnostic accuracy of medical school students and/or residents 100 – 107 and non-medical participants, 108 – 111 although such uniformly positive results could reflect publication bias. Guégan et al.’s review 112 also concluded that the modules are promising educational tools for improving diagnostic accuracy and rapidity in daily medical practice. Notably, the outcome measures used by these modules included fluency or rapidity of recognition, which is consistent with the research findings on medical experts’ performance mentioned above. Therefore, PALM may be an alternative to visual art observation and is worthy of comparative study.

Fostering verbalizing skills for communication with patients and teamwork

Listing more elements or signs may be linked to the skill of verbalizing what one observes, even if it does not help in making an accurate diagnosis. Such skills could presumably be useful for patient communication and teamwork. However, the intervention studies have not provided sufficient evidence that listing more elements or signs contributes to such communication. Also, the included studies sometimes taught the participants the concepts of visual elements by using visual arts, but they did not explain or demonstrate how the concepts could be used in healthcare practice.

Instead, it may not be visual art observation that contributed to the verbalizing skills, but the discussions that accompanied it. However, the intervention studies did not examine the impact of the discussions separately from that of visual art observation, nor did they describe what instructions and advice were given to the participants to enhance the quality of the group discussions, except for Jasani and Saks 33 and Klugman et al., 34 who instructed participants to think collaboratively and to share and combine each other’s ideas.

If discussions contribute to verbalizing skills, finding a rationale for visual art observation as a means of cultivating those skills may be difficult, because there could be methods directly related to healthcare practice that do not take a detour through visual art observation. Such methods include observing and discussing medical images using the three core questions of the VTS, as well as drama, theater-based improvisation, or role-playing in clinical situations. 113 Indeed, Choe et al. 114 demonstrated that teaching formal art analysis, i.e., analysis through the concepts of visual elements, of radiologic images, not of visual arts, improved not only descriptions but also diagnoses of mammographic images and chest radiographs.

Long-term effects such as scaffolding

The long-term effects of visual art observation are unknown because no included studies examined them. Likewise, it is worth investigating whether visual art observation serves as scaffolding for further learning in technical education and training; Fernandez et al. (2021) 41 and Fernandez et al. (2022) 42 conducted visual art observation training followed by technical skills lessons. If visual art observation does serve as scaffolding, learners who receive such interventions should acquire technical knowledge and skills faster or more deeply than those who do not. Investigating this requires longitudinal studies 17 because the effects of such scaffolding may be delayed.

Limitations

This article has several limitations. First, I excluded articles published before 2001 or written in a language other than English, as well as those that measured observation skills subjectively or included activities other than visual art observation. In addition, there might be relevant studies in the CINAHL database, which I could not search. These studies may have included information that could have contributed to this article.

Second, the selection of included studies may have been skewed because I alone judged inclusion and exclusion, and because I set many search criteria in the database search to keep the volume of data manageable. However, this skew may be compensated for to some degree by the manual search of the reference lists of the articles obtained through the search process.

Third, since all included intervention studies were published in academic journals, publication bias could lead to overestimating their effects.

Fourth, there may also be bias due to inadequate disclosure of information about intervention methods and outcome measures in the included studies.

Fifth, I excluded intervention studies that used visual arts to foster technical skills other than observation skills. However, those studies may have examined observation-related skills that the included studies rarely did. Other reviews are needed to discuss them.

Finally, in health professional education for both novices (pre-service) and experts (in-service), the arts can play many roles beyond enhancing observational skills. 115 – 122 However, discussing such roles is outside the scope of this article. The author does not intend to extrapolate the conclusions to technical skills beyond those related to observation, nor to the role of the arts in health professional education.

Implications for future research

Because the lack of sound evidence precludes offering suggestions regarding the practice of visual art observation, our discussion focuses on the implications for future research regarding RQ4.

We also fully agree with Mehta and Agius’s suggestions, 24 such as conducting a multicenter study, standardization in evaluating skill improvement across the studies, assessing long-term retention of skills, and determining whether the skills gained are transferable to clinical practice.

In addition to their suggestions, we would like to elaborate on some points. First, the participants’ technical knowledge and skills should be treated as mediator variables in measuring observation skills. This is all the more important for longer intervention sessions or for measuring long-term effects, because novice healthcare learners should concurrently be learning such knowledge and skills through their education and training.

Second, it is strongly recommended that the relationships between the variables used as outcome measures be examined. If the included studies had done so, they would have provided evidence for some of the issues addressed in the Discussion section. For example, they could have used their data to examine the correlation between descriptions of artistic and clinical images, as well as the correlations of word count, time spent looking at images, or tolerance of ambiguity with the quality of descriptions and interpretations or the accuracy of diagnosis.

Third, it is necessary to investigate the technical skills that were intended to be fostered but have rarely been studied, such as separating interpretation from description, tolerating being unsure for long enough to sustain a meaningful analysis, metacognitive awareness, perspective-taking, and gleaning nonverbal cues. It is also important to examine how visual elements can be integrated into healthcare practice.

Fourth, it is worth investigating how verbalizing skills fostered through visual art observation affect other technical skills or healthcare practices. For example, they may enhance self-reflection skills that require the use of language, and they may also help in communicating with patients and colleagues.

Fifth, it is worth examining scaffolding for further learning in technical education and training. For example, regarding accurate diagnosis, it might be worth investigating whether visual art observation serves as scaffolding for finding the critical evidence that distinguishes a specific disease.

Sixth, more careful consideration needs to be given to the procedures of the tests used to measure outcomes. If the images differ between tests, their difficulty needs to be equalized. Also, given that some intervention participants had negative increment scores in the tests with an artistic and a clinical image in Ferrara et al., 43 tests with only a few images appear to increase measurement error and skew the results. On the other hand, comparisons using the same images across tests or studies could also be of value. Thus, it might be best to use a sufficient number of images, including both repeated and new ones.

Seventh, to measure the effectiveness of visual art observation, it is recommended to use the same tests as those used to measure the effectiveness of learning modules on perceptual learning. Diagnostic accuracy and fluency as indicators of technical observation skills should also be examined, as asserted by Kellman et al. 100

Eighth, as Perry et al. 21 mentioned, the intervention methods must also be reported adequately. It is unlikely that merely looking at visual art develops technical skills. Instead, reporting the design of the training is required, such as the rationale for the selection of the paintings, the type and number of paintings viewed, de-/briefings on the relationship between visual art observation and healthcare practice, and the instructions and facilitation given to learners, including ways of reflection 123 and discussion. Concerning studies using VTS, it is not enough to explain the three core questions; it is necessary to describe how the precise tenets of VTS were carried out, since Ker et al. pointed out that some of the prior interventions deviated from those tenets. 12 It is also necessary to report the learners’ responses evoked by the artworks, prompts, and questions in order to identify whether the outcomes were intended or derived. In addition, the educational or training curricula and the learners’ proficiency in the technical knowledge and skills related to the aims and contents of the intervention should be stated in detail, because merely describing the learners’ grades and training levels is insufficient for international readers. Moreover, I found the following information missing from the studies used in this article: the number of samples missing from the participants; what the control group did; the content and format of the lectures that the control participants received; the maximum possible values of the scales or items, for assessing ceiling effects; details of the scoring criteria; how many schools/organizations the participants belonged to; how many images were used in the tests; and the timing of the tests. Thus, the items listed in the Appendix Table should be reported in future studies.

Ninth, research should directly compare promising interventions other than visual art observation, such as learning modules based on perceptual learning and discussion using formal art analysis 114 of clinical images. For diagnosing medical images, teaching a specific eye-movement search pattern that corresponds to the characteristics of the area shown in the radiographs 124 or demonstrating an expert model that visually searches and interprets symptoms (eye-movement modeling examples) 125 could also be effective. Performing arts such as drama could benefit teamwork and communication skill development. 113 Given the time constraints imposed by the many demands of health professional education, it is necessary to identify the most efficient method among those available.

Finally, as Osman et al. 126 suggested, further research exploring the learning process of visual art observation is required, as the extant literature focuses only on outcomes. Dominant research methods such as controlled studies and psychometric scales do not allow for such investigation 118 , 123 ; thus, further research should include alternative methods and outcome measures, such as qualitative methods. 21 , 127

Conclusions

This scoping review mapped intervention studies that used visual art observation to foster observation skills in novice healthcare learners. It scrutinized what those studies measured as observational skills and discussed their effects on technical skills. This review concludes that sound evidence is lacking for all the technical skills intended to be fostered, although some prior reviews concluded that such studies offer reasonable evidence. The intervention studies provided relatively valid evidence that the participants listed more elements or signs for artistic or medical images. However, observation skills for artistic images have not been demonstrated to transfer to technical skills. Nor does the evidence show that the interventions promoted accurate diagnosis or reduced misdiagnosis; such a claim would contradict the findings on novice and expert observational skills.

Additionally, the evidence on verbalizing skills is not isolated from the impact of discussions and is unclear regarding its transfer to actual communication. For the other technical skills, such as separating interpretation from description, tolerating being unsure for long enough to sustain a meaningful analysis, metacognitive awareness, perspective-taking, gleaning nonverbal cues, and understanding or extracting the patient’s background and contexts, there are not enough valid studies. The same is true for studies that directly examine whether the interventions promote accurate diagnosis or reduce misdiagnosis. Moreover, there may be promising alternatives to visual art observation for cultivating such technical skills, but no comparative studies have been conducted.

Acknowledgments

The author would like to thank Editage (www.editage.jp) for English language editing.

Funding Statement

This study was supported by JSPS KAKENHI Grant Number JP20H01685.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

[version 3; peer review: 2 approved]

Data availability

Underlying data

OSF: Effects of visual art observation on technical skills in novice healthcare learners: A scoping review, https://doi.org/10.17605/OSF.IO/TDGQZ. 128

This project contains the following underlying data:

  • Appendix Table.ods (the selected intervention studies' features, research design, and key results)

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

References

  • 1. Anderson N, Dietrich MR: Introduction: visual lessons and the life sciences. Anderson N, Dietrich MR, editors. The Educated Eye: Visual Culture and Pedagogy in the Life Sciences. Hanover (NH): Dartmouth College Press; 2012; pp. 1–13.
  • 2. Braverman IM: To see or not to see: how visual training can improve observational skills. Clin. Dermatol. 2011;29(3):343–346. 10.1016/j.clindermatol.2010.08.001
  • 3. Sandhu VK: Inspection—a fine art. JAMA Dermatol. 2018;154(5):630. 10.1001/jamadermatol.2018.0307
  • 4. Watson F, Rebair A: The art of noticing: essential to nursing practice. Br. J. Nurs. 2014;23(10):514–517. 10.12968/bjon.2014.23.10.514
  • 5. Boisaubin EV, Winkler MG: Seeing patients and life contexts: the visual arts in medical education. Am. J. Med. Sci. 2000;319(5):292–296.
  • 6. Bardes CL, Gillers D, Herman AE: Learning to look: developing clinical observational skills at an art museum. Med. Educ. 2001;35(12):1157–1161. 10.1046/j.1365-2923.2001.01088.x
  • 7. Pitman B: Art museum and medical school partnerships program descriptions. Richardson (TX): The Edith O’Donnell Institute of Art History, The University of Texas at Dallas; 2020 [cited 2022 Oct 29].
  • 8. Pellegrino ED: Visual awareness: the visual arts and the clinician’s craft. Berg G, editor. The Visual Arts and Medical Education. Carbondale (IL): Southern Illinois University Press; 1983; pp. ix–xi.
  • 9. Schaff PB, Isken S, Tager RM: From contemporary art to core clinical skills: observation, interpretation, and meaning-making in a complex environment. Acad. Med. 2011;86(10):1272–1276. 10.1097/ACM.0b013e31822c161d
  • 10. He B, Prasad S, Higashi RT, et al.: The art of observation: a qualitative analysis of medical students’ experiences. BMC Med. Educ. 2019;19(1):234. 10.1186/s12909-019-1671-2
  • 11. Hoshiko BR: Nursing diagnosis at the art museum. Nurs. Outlook. 1985;33(1):32–36.
  • 12. Ker J, Yenawine P, Chisolm MS: Twelve tips for facilitating Visual Thinking Strategies with medical learners. Adv. Med. Educ. Pract. 2024;15:1155–1161. 10.2147/AMEP.S468077
  • 13. Jacques A, Trinkley R, Stone L, et al.: Art of Analysis: a cooperative program between a museum and medicine. J. Learn. Through Arts. 2012;8(1).
  • 14. Goodman TR, Kelleher M: Improving novice radiology trainees’ perception using fine art. J. Am. Coll. Radiol. 2017;14(10):1337–1340. 10.1016/j.jacr.2017.06.033
  • 15. Bramstedt KA: The use of visual arts as a window to diagnosing medical pathologies. AMA J. Ethics. 2016;18(8):843–854. 10.1001/journalofethics.2016.18.8.imhl1-1608
  • 16. Batson CD: These things called empathy: eight related but distinct phenomena. Decety J, Ickes W, editors. The Social Neuroscience of Empathy. Cambridge (MA): MIT Press; 2009; pp. 3–15.
  • 17. Gelgoot E, Caufield-Noll C, Chisolm M: Using the visual arts to teach clinical excellence. 1st version. MedEdPublish. 2018;7:143. 10.15694/mep.2018.0000143.1
  • 18. Elhammoumi CV, Kellam B: Art images in holistic nursing education. Religions. 2017;8(6):1–8. 10.3390/rel8060103
  • 19. Turton BM, Williams S, Burton CR, et al.: Arts-based palliative care training, education and staff development: a scoping review. Palliat. Med. 2018;32(2):559–570. 10.1177/0269216317712189
  • 20. Ike JD, Howell J: Quantitative metrics and psychometric scales in the visual art and medical education literature: a narrative review. Med. Educ. Online. 2022;27(1):2010299. 10.1080/10872981.2021.2010299
  • 21. Perry M, Maffulli N, Willson S, et al.: The effectiveness of arts-based interventions in medical education: a literature review. Med. Educ. 2011;45(2):141–148. 10.1111/j.1365-2923.2010.03848.x
  • 22. Mukunda N, Moghbeli N, Rizzo A, et al.: Visual art instruction in medical education: a narrative review. Med. Educ. Online. 2019;24(1):1558657. 10.1080/10872981.2018.1558657
  • 23. Alkhaifi M, Clayton A, Kangasjarvi E, et al.: Visual art-based training in undergraduate medical education: a systematic review. Med. Teach. 2022;44(5):500–509. 10.1080/0142159X.2021.2004304
  • 24. Mehta A, Agius S: The use of art observation interventions to improve medical students’ diagnostic skills: a scoping review. Perspect. Med. Educ. 2023;12(1):169–178. 10.5334/pme.20
  • 25. Tricco AC, Lillie E, Zarin W, et al.: PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann. Intern. Med. 2018;169(7):467–473. 10.7326/M18-0850
  • 26. Agarwal GG, McNulty M, Santiago KM, et al.: Impact of Visual Thinking Strategies (VTS) on the analysis of clinical images: a pre-post study of VTS in first-year medical students. J. Med. Humanit. 2020;41(4):561–572. 10.1007/s10912-020-09652-4
  • 27. Dolev JC, Friedlaender LK, Braverman IM: Use of fine art to enhance visual diagnostic skills. JAMA. 2001;286(9):1020–1021.
  • 28. Garino A: Improving observation skill in physician assistant students. J. Physician Assist. Educ. 2008;19(1):47–52. 10.1097/01367895-200819010-00011
  • 29. Griffin LL, Chiang NYZ, Tomlin H, et al.: A visual literacy course for dermatology trainees. Br. J. Dermatol. 2017;177(1):310–311. 10.1111/bjd.15073
  • 30. Grossman S, Deupi J, Leitao K: Seeing the forest and the trees: increasing nurse practitioner students’ observational and mindfulness skills. Creat. Nurs. 2014;20(1):67–72. 10.1891/1078-4535.20.1.67
  • 31. Gurwin J, Revere KE, Niepold S, et al.: A randomized controlled study of art observation training to improve medical student ophthalmology skills. Ophthalmology. 2018;125(1):8–14. 10.1016/j.ophtha.2017.06.031
  • 32. Huang JT, Reynolds SD, DiGiovanni EB, et al.: Fine arts curriculum improves observational skills of dermatology trainees: a pilot study. Br. J. Dermatol. 2016;175(4):815–817. 10.1111/bjd.14616
  • 33. Jasani SK, Saks NS: Utilizing visual art to enhance the clinical observation skills of medical students. Med. Teach. 2013;35(7):e1327–e1331. 10.3109/0142159X.2013.770131
  • 34. Klugman CM, Peel J, Beckmann-Mendez D: Art Rounds: teaching interprofessional students visual thinking strategies at one school. Acad. Med. 2011;86(10):1266–1271. 10.1097/ACM.0b013e31822c1427
  • 35. Pellico LH, Friedlaender L, Fennie KP: Looking is not seeing: using art to improve observational skills. J. Nurs. Educ. 2009;48(11):648–653. 10.3928/01484834-20090828-02
  • 36. Poirier TI, Newman K, Ronald K: An exploratory study using Visual Thinking Strategies to improve undergraduate students’ observational skills. Am. J. Pharm. Educ. 2020;84(4):7600. 10.5688/ajpe7600
  • 37. Dalia Y, Milam EC, Rieder EA: Art in medical education: a review. J. Grad. Med. Educ. 2020;12(6):686–695. 10.4300/JGME-D-20-00093.1
  • 38. Matsumoto K: Appreciating visual arts may not foster medical diagnosis skills [version 1; peer review: 2 approved with reservations]. F1000Res. 2023;12:79. 10.12688/f1000research.129219.1
  • 39. Green MJ, Myers KR: Graphic medicine: use of comics in medical education and patient care. BMJ. 2010;340:c863. 10.1136/bmj.c863
  • 40. Anglin C, Halpin-Healy C, Rosenfeld P: Reflecting art in nursing practice: developing visual arts programs to transform and strengthen practice. J. Nurs. Adm. 2020;50(5):274–280. 10.1097/NNA.0000000000000883
  • 41. Fernandez NJ, Fischer M, Burgess H, et al. : Using fine arts-based training to develop observational skills in veterinary students learning cytology: a pilot study. J. Vet. Med. Educ. 2021;48(3):295–300. 10.3138/jvme.2019-0069 [DOI] [PubMed] [Google Scholar]
  • 42. Fernandez NJ, Fischer M, Dickinson RM, et al. : Comparison of fine arts- and pathology-based observational skills training for veterinary students learning cytology. J. Vet. Med. Educ. 2022;49(3):393–406. 10.3138/jvme-2020-0096 [DOI] [PubMed] [Google Scholar]
  • 43. Ferrara V, Shaholli D, Iovino A, et al. : Visual Thinking Strategies as a tool for reducing burnout and improving skills in healthcare workers: results of a randomized controlled study. JCM. 2022;11(24):7501. 10.3390/jcm11247501 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Greige T, Odo D, Mani C, et al. : Education research: the manet project: museum art in neurology education training. Neurol. Educ. 2024;3(4): e200170. 10.1212/NE9.0000000000200170 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Ho Tiu CPSP, Asfour L, Jakab M, et al. : An art-based visual literacy training course to enhance clinical skills in dermatology trainees. J. Eur. Acad. Dermatol. Venereol. 2019;33(9):e310–e312. 10.1111/jdv.15588 [DOI] [PubMed] [Google Scholar]
  • 46. Snigdha S, Pathengay A, Patel A, et al. : Experience of observation skill workshop intervention for ophthalmologists in fellowship training [version 2; peer review: 2 approved]. F1000Res. 2024;13:524. 10.12688/f1000research.148008.2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Wolf J, Tillander M, Peper K, et al. : "Visual Thinking Strategies" improves radiographic observational skills but not chart interpretation in third and fourth year veterinary students. Front. Vet. Sci. 2024;11:1480301. 10.3389/fvets.2024.1480301 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Yenawine P: Visual Thinking Strategies: Using Art to Deepen Learning across School Disciplines. Cambridge (MA): Harvard Education Press;2013. [Google Scholar]
  • 49. Project Zero: Artful Thinking Palette. Accessed December 29, 2024. http://pzartfulthinking.org/?page_id=2 [Google Scholar]
  • 50. Sowden PT, Davies IR, Roling P: Perceptual learning of the detection of features in X-ray images: a functional role for improvements in adults’ visual sensitivity? J. Exp. Psychol. Hum. Percept. Perform. 2000;26(1):379–390. 10.1037/0096-1523.26.1.379 [DOI] [PubMed] [Google Scholar]
  • 51. Nodine CF, Krupinski EA: Perceptual skill, radiology expertise, and visual test performance with NINA and WALDO. Acad. Radiol. 1998;5(9):603–612. 10.1016/s1076-6332(98)80295-x [DOI] [PubMed] [Google Scholar]
  • 52. Evans KK, Cohen MA, Tambouret R, et al. : Does visual expertise improve visual recognition memory? Atten. Percept. Psychophys. 2011;73(1):30–35. 10.3758/s13414-010-0022-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53. Kok EM, Jarodzka H, Bruin AB, et al. : Systematic viewing in radiology: seeing more, missing less? Adv. Health Sci. Educ. Theory Pract. 2016;21(1):189–205. 10.1007/s10459-015-9624-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54. Geel K, Kok EM, Dijkstra J, et al. : Teaching systematic viewing to final-year medical students improves systematicity but not coverage or detection of radiologic abnormalities. J. Am. Coll. Radiol. 2017;14(2):235–241. 10.1016/j.jacr.2016.10.001 [DOI] [PubMed] [Google Scholar]
  • 55. Kundel HL, Nodine CF, Conant EF, et al. : Holistic component of image perception in mammogram interpretation: gaze-tracking study. Radiology. 2007;242(2):396–402. 10.1148/radiol.2422051997 [DOI] [PubMed] [Google Scholar]
  • 56. Nodine CF, Mello-Thoms C, Kundel HL, et al. : Time course of perception and decision making during mammographic interpretation. AJR Am. J. Roentgenol. 2002;179(4):917–923. 10.2214/ajr.179.4.1790917 [DOI] [PubMed] [Google Scholar]
  • 57. Norman GR, Rosenthal D, Brooks LR, et al. : The development of expertise in dermatology. Arch. Dermatol. 1989;125(8):1063–1068. 10.1001/archderm.1989.01670200039005 [DOI] [PubMed] [Google Scholar]
  • 58. Joseph GM, Patel VL: Domain knowledge and hypothesis generation in diagnostic reasoning. Med. Decis. Mak. 1990;10(1):31–44. 10.1177/0272989X9001000107 [DOI] [PubMed] [Google Scholar]
  • 59. Norman GR, Brooks LR, Colle CL, et al. : The benefit of diagnostic hypotheses in clinical reasoning: experimental study of an instructional intervention for forward and backward reasoning. Cogn. Instr. 1999;17(4):433–448. 10.1207/S1532690XCI1704_3 [DOI] [Google Scholar]
  • 60. Groves M, O’Rourke P, Alexander H: The clinical reasoning characteristics of diagnostic experts. Med. Teach. 2003;25(3):308–313. 10.1080/0142159031000100427 [DOI] [PubMed] [Google Scholar]
  • 61. Ericsson KA: Acquisition and maintenance of medical expertise: a perspective from the expert-performance approach with deliberate practice. Acad. Med. 2015;90(11):1471–1486. 10.1097/ACM.0000000000000939 [DOI] [PubMed] [Google Scholar]
  • 62. Lesgold A, Rubinson H, Feltovich P, et al. : Expertise in a complex skill: diagnosing x-ray pictures. Chi MTH, Glaser R, Farr MJ, editors. The Nature of Expertise. Hillsdale (NJ): Lawrence Erlbaum Associates;1988; p.311–342. [Google Scholar]
  • 63. Pelaccia T, Tardif J, Triby E, et al. : An analysis of clinical reasoning through a recent and comprehensive approach: the dual-process theory. Med. Educ. Online. 2011;16:5890. 10.3402/meo.v16i0.5890 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64. Norman GR, Eva KW: Diagnostic error and clinical reasoning. Med. Educ. 2010;44(1):94–100. 10.1111/j.1365-2923.2009.03507.x [DOI] [PubMed] [Google Scholar]
  • 65. Cooper N: Human factors. Cooper N, Frain J, editors. ABC of Clinical Reasoning. Chichester (UK): John Wiley & Sons;2017; pp.27–32. [Google Scholar]
  • 66. Buhmann JM, Malik J, Perona P: Image recognition: visual grouping, recognition, and learning. Proc. Natl. Acad. Sci. U. S. A. 1999;96(25):14203–14204. 10.1073/pnas.96.25.14203 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67. Myles-Worsley M, Johnston WA, Simons MA: The influence of expertise on X-ray image processing. J. Exp. Psychol. Learn. Mem. Cogn. 1988;14(3):553–557. 10.1037//0278-7393.14.3.553 [DOI] [PubMed] [Google Scholar]
  • 68. Drew T, Võ ML, Wolfe JM: The invisible gorilla strikes again: sustained inattentional blindness in expert observers. Psychol. Sci. 2013;24(9):1848–1853. 10.1177/0956797613479386 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69. Norman G, Eva K, Brooks L, et al. : Expertise in medicine and surgery. Ericsson K, Charness N, Feltovich P, et al., editors. The Cambridge Handbook of Expertise and Expert Performance. Cambridge (UK): Cambridge University Press;2006; pp.339–354. [Google Scholar]
  • 70. Krupinski EA: Visual scanning patterns of radiologists searching mammograms. Acad. Radiol. 1996;3(2):137–144. 10.1016/s1076-6332(05)80381-2 [DOI] [PubMed] [Google Scholar]
  • 71. Kundel HL, Nodine CF, Krupinski EA, et al. : Using gaze-tracking data and mixture distribution analysis to support a holistic model for the detection of cancers on mammograms. Acad. Radiol. 2008;15(7):881–886. 10.1016/j.acra.2008.01.023 [DOI] [PubMed] [Google Scholar]
  • 72. O’Neill EC, Kong YX, Connell PP, et al. : Gaze behavior among experts and trainees during optic disc examination: does how we look affect what we see? Invest. Ophthalmol. Vis. Sci. 2011;52(7):3976–3983. 10.1167/iovs.10-6912 [DOI] [PubMed] [Google Scholar]
  • 73. Al-Moteri MO, Symmons M, Plummer V, et al. : Eye tracking to investigate cue processing in medical decision-making: a scoping review. Comput. Hum. Behav. 2017;66:52–66. 10.1016/j.chb.2016.09.022 [DOI] [Google Scholar]
  • 74. Evered A: What can cytologists learn from 25 years of investigations in visual search? Br. J. Biomed. Sci. 2005;62(4):182–192. 10.1080/09674845.2005.11732709 [DOI] [PubMed] [Google Scholar]
  • 75. Nodine C, Mello-Thoms C: Acquiring expertise in radiologic image interpretation. Samei E, Krupinski E, editors. The Handbook of Medical Image Perception and Techniques. Cambridge (UK): Cambridge University Press;2018; pp.167–187. [Google Scholar]
  • 76. Norman GR, Coblentz CL, Brooks LR, et al. : Expertise in visual diagnosis: a review of the literature. Acad. Med. 1992;67(10 Suppl):S78–S83. 10.1097/00001888-199210000-00045 [DOI] [PubMed] [Google Scholar]
  • 77. Wu CC, Wolfe JM: Eye movements in medical image perception: a selective review of past, present and future. Vision (Basel). 2019;3(2):32. 10.3390/vision3020032 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78. Gijp A, Ravesloot CJ, Jarodzka H, et al. : How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology. Adv. Health Sci. Educ. Theory Pract. 2017;22(3):765–787. 10.1007/s10459-016-9698-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79. Drew T, Evans K, Võ ML, et al. : Informatics in radiology: what can you see in a single glance and how might this guide visual search in medical images? Radiographics. 2013;33(1):263–274. 10.1148/rg.331125023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80. Gachon J, Beaulieu P, Sei JF, et al. : First prospective study of the recognition process of melanoma in dermatological practice. Arch. Dermatol. 2005;141(4):434–438. 10.1001/archderm.141.4.434 [DOI] [PubMed] [Google Scholar]
  • 81. Kellman PJ: Adaptive and perceptual learning technologies in medical education and training. Mil. Med. 2013;178(10 suppl):98–106. 10.7205/MILMED-D-13-00218 [DOI] [PubMed] [Google Scholar]
  • 82. Norman G: Research in clinical reasoning: past history and current trends. Med. Educ. 2005;39(4):418–427. 10.1111/j.1365-2929.2005.02127.x [DOI] [PubMed] [Google Scholar]
  • 83. Taylor PM: A review of research into the development of radiologic expertise: implications for computer-based training. Acad. Radiol. 2007;14(10):1252–1263. 10.1016/j.acra.2007.06.016 [DOI] [PubMed] [Google Scholar]
  • 84. Waite S, Farooq Z, Grigorian A, et al. : A review of perceptual expertise in radiology-how it develops, how we can test it, and why humans still matter in the era of artificial intelligence. Acad. Radiol. 2020;27(1):26–38. 10.1016/j.acra.2019.08.018 [DOI] [PubMed] [Google Scholar]
  • 85. Haller S, Radue EW: What is different about a radiologist’s brain? Radiology. 2005;236(3):983–989. 10.1148/radiol.2363041370 [DOI] [PubMed] [Google Scholar]
  • 86. Castelhano MS, Rayner K: Eye movements during reading, visual search, and scene perception: an overview. Rayner K, Shen D, Bai X, et al., editors. Cognitive and Cultural Influences on Eye Movements. Tianjin: Tianjin People’s Publishing House;2008; pp.3–33. [Google Scholar]
  • 87. Henderson JM, Weeks PA, Jr, Hollingworth A: The effects of semantic consistency on eye movements during complex scene viewing. J. Exp. Psychol. Hum. Percept. Perform. 1999;25(1):210–228. 10.1037/0096-1523.25.1.210 [DOI] [Google Scholar]
  • 88. Henderson JM: Human gaze control during real-world scene perception. Trends Cogn. Sci. 2003;7(11):498–504. 10.1016/j.tics.2003.09.006 [DOI] [PubMed] [Google Scholar]
  • 89. Nodine CF, Kundel HL, Mello-Thoms C, et al. : How experience and training influence mammography expertise. Acad. Radiol. 1999;6(10):575–585. 10.1016/s1076-6332(99)80252-9 [DOI] [PubMed] [Google Scholar]
  • 90. Brunyé TT, Drew T, Weaver DL, et al. : A review of eye tracking for understanding and improving diagnostic interpretation. Cogn. Res. Princ. Implic. 2019;4(1):7. 10.1186/s41235-019-0159-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91. Heiberg Engel PJ: Tacit knowledge and visual expertise in medical diagnostic reasoning: implications for medical education. Med. Teach. 2008;30(7):e184–e188. 10.1080/01421590802144260 [DOI] [PubMed] [Google Scholar]
  • 92. Gibson EJ: Principles of Perceptual Learning and Development. Englewood Cliffs (NJ): Prentice-Hall;1969. [Google Scholar]
  • 93. Kornell N, Bjork RA: Learning concepts and categories: is spacing the “enemy of induction”? Psychol. Sci. 2008;19(6):585–592. 10.1111/j.1467-9280.2008.02127.x [DOI] [PubMed] [Google Scholar]
  • 94. Kornell N, Castel AD, Eich TS, et al. : Spacing as the friend of both memory and induction in young and older adults. Psychol. Aging. 2010;25(2):498–503. 10.1037/a0017807 [DOI] [PubMed] [Google Scholar]
  • 95. Kang SHK, Pashler H: Learning painting styles: spacing is advantageous when it promotes discriminative contrast. Appl. Cogn. Psychol. 2012;26(1):97–103. 10.1002/acp.1801 [DOI] [Google Scholar]
  • 96. Zulkiply N, McLean J, Burt JS, et al. : Spacing and induction: application to exemplars presented as auditory and visual text. Learn. Instr. 2012;22(3):215–221. 10.1016/j.learninstruc.2011.11.002 [DOI] [Google Scholar]
  • 97. Girardi S, Gaudy C, Gouvernet J, et al. : Superiority of a cognitive education with photographs over ABCD criteria in the education of the general population to the early detection of melanoma: a randomized study. Int. J. Cancer. 2006;118(9):2276–2280. 10.1002/ijc.21351 [DOI] [PubMed] [Google Scholar]
  • 98. Speelman C, Martin K, Flower S, et al. : Skill acquisition in skin cancer detection. Percept. Mot. Skills. 2010;110(1):277–297. 10.2466/PMS.110.1.277-297 [DOI] [PubMed] [Google Scholar]
  • 99. Evered A: Perceptual and adaptive learning modules and their potential to transform cytology training programmes. Cytopathology. 2018;29(4):371–374. 10.1111/cyt.12578 [DOI] [PubMed] [Google Scholar]
  • 100. Kellman PJ, Jacoby V, Massey C, et al. : Perceptual learning, adaptive learning, and gamification: educational technologies for pattern recognition, problem solving, and knowledge retention in medical learning. Witchel HJ, Lee MW, editors. Technologies in Biomedical and Life Sciences Education. Cham (CH): Springer;2022; pp.135–166. 10.1007/978-3-030-95633-2_5 [DOI] [Google Scholar]
  • 101. Aldridge RB, Glodzik D, Ballerini L, et al. : Utility of non-rule-based visual matching as a strategy to allow novices to achieve skin lesion diagnosis. Acta Derm. Venereol. 2011;91(3):279–283. 10.2340/00015555-1049 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102. Krasne S, Hillman JD, Kellman PJ, et al. : Applying perceptual and adaptive learning techniques for teaching introductory histopathology. J. Pathol. Inform. 2013;4:34. 10.4103/2153-3539.123991 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103. Krasne S, Stevens CD, Kellman PJ, et al. : Mastering electrocardiogram interpretation skills through a perceptual and adaptive learning module. AEM Educ. Train. 2021;5(2):e10454. 10.1002/aet2.10454 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 104. Parker EU, Reder NP, Glasser D, et al. : NDER: A novel web application for teaching histology to medical students. Acad. Pathol. 2017;4:2374289517691061. 10.1177/2374289517691061 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105. Rimoin L, Altieri L, Craft N, et al. : Training pattern recognition of skin lesion morphology, configuration, and distribution. J. Am. Acad. Dermatol. 2015;72(3):489–495. 10.1016/j.jaad.2014.11.016 [DOI] [PubMed] [Google Scholar]
  • 106. Romito BT, Krasne S, Kellman PJ, et al. : The impact of a perceptual and adaptive learning module on transoesophageal echocardiography interpretation by anaesthesiology residents. Br. J. Anaesth. 2016;117(4):477–481. 10.1093/bja/aew295 [DOI] [PubMed] [Google Scholar]
  • 107. Slaught C, Madu P, Chang AY, et al. : Novel education modules addressing the underrepresentation of skin of color in dermatology training. J. Cutan. Med. Surg. 2022;26(1):17–24. 10.1177/12034754211035093 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 108. Evered A, Walker D, Watt AA, et al. : To what extent does nonanalytic reasoning contribute to visual learning in cytopathology? Cancer Cytopathol. 2013;121(6):329–338. 10.1002/cncy.21263 [DOI] [PubMed] [Google Scholar]
  • 109. Sha LZ, Toh YN, Remington RW, et al. : Perceptual learning in the identification of lung cancer in chest radiographs. Cogn. Res. Princ. Implic. 2020;5(1):4. 10.1186/s41235-020-0208-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110. Xu B, Rourke L, Robinson JK, et al. : Training melanoma detection in photographs using the perceptual expertise training approach. Appl. Cogn. Psychol. 2016;30(5):750–756. 10.1002/acp.3250 [DOI] [Google Scholar]
  • 111. Brown NH, Robertson KM, Bisset YC, et al. : Using a structured image database, how well can novices assign skin lesion images to the correct diagnostic grouping? J. Invest. Dermatol. 2009;129(10):2509–2512. 10.1038/jid.2009.75 [DOI] [PubMed] [Google Scholar]
  • 112. Guégan S, Steichen O, Soria A: Literature review of perceptual learning modules in medical education: What can we conclude regarding dermatology? Ann. Dermatol. Venereol. 2021;148(1):16–22. 10.1016/j.annder.2020.01.023 [DOI] [PubMed] [Google Scholar]
  • 113. Acai A, McQueen SA, McKinnon V, et al. : Using art for the development of teamwork and communication skills among health professionals: a literature review. Arts Health. 2017;9(1):60–72. 10.1080/17533015.2016.1182565 [DOI] [Google Scholar]
  • 114. Choe AI, Conaty S, Ha J, et al. : What's in the shadows? Formal analysis: art history method to improve interpretation skills for mammography and chest radiographs in resident education. Acad. Radiol. 2024;31(2):383–389. 10.1016/j.acra.2023.10.063 [DOI] [PubMed] [Google Scholar]
  • 115. Matsumoto K: The use of arts in health professions education: applicability as a new teaching method. Igaku Kyoiku/Med. Educ. Japan. 2023;54(3):235–243. [Article in Japanese]. 10.11307/mededjapan.54.3_235 [DOI] [Google Scholar]
  • 116. Dennhardt S, Apramian T, Lingard L, et al. : Rethinking research in the medical humanities: a scoping review and narrative synthesis of quantitative outcome studies. Med. Educ. 2016;50(3):285–299. 10.1111/medu.12812 [DOI] [PubMed] [Google Scholar]
  • 117. Ferrara V: Learning through art in medical education. Firstenberg MS, Stawicki SP, editors. Medical Education for the 21st Century. IntechOpen.2021. 10.5772/intechopen.101213 [DOI] [Google Scholar]
  • 118. Lake J, Jackson L, Hardman C: A fresh perspective on medical education: the lens of the arts. Med. Educ. 2015;49(8):759–772. 10.1111/medu.12768 [DOI] [PubMed] [Google Scholar]
  • 119. Rieger KL, Chernomas WM, McMillan DE, et al. : The arts as a catalyst for learning with undergraduate nursing students: findings from a constructivist grounded theory study. Arts Health. 2020;12(3):250–269. 10.1080/17533015.2019.1608569 [DOI] [PubMed] [Google Scholar]
  • 120. Rieger KL, Chernomas WM, McMillan DE, et al. : Effectiveness and experience of arts-based pedagogy among undergraduate nursing students: a mixed methods systematic review. JBI Database System. Rev. Implement. Rep. 2016;14(11):139–239. 10.11124/JBISRIR-2016-003188 [DOI] [PubMed] [Google Scholar]
  • 121. Rieger KL, Chernomas WM: Arts-based learning: analysis of the concept for nursing education. Int. J. Nurs. Educ. Scholarsh. 2013;10(1):53–62. 10.1515/ijnes-2012-0034 [DOI] [PubMed] [Google Scholar]
  • 122. Slavin R, Williams MR, Zimmermann C: Activating the Art Museum: Designing Experiences for the Health Professions. Lanham (MD): Rowman & Littlefield;2023. 10.5771/9781538158555 [DOI] [Google Scholar]
  • 123. Voeller M: The art of attending: arts-based observation training for health professions students at the University of South Florida. Costache I, Kunny C, editors. Academics, Artists, Museums: 21st Century Partnerships. Oxon and New York (NY): Routledge;2018; pp.99–110. [Google Scholar]
  • 124. Auffermann W, Mazurowski M: Perception and training. Samei E, Krupinski E, editors. The Handbook of Medical Image Perception and Techniques. Cambridge (UK): Cambridge University Press;2018; pp.470–482. [Google Scholar]
  • 125. Jarodzka H, Balslev T, Holmqvist K, et al. : Conveying clinical reasoning based on visual observation via eye-movement modelling examples. Instr. Sci. 2012;40(5):813–827. 10.1007/s11251-012-9218-5 [DOI] [Google Scholar]
  • 126. Osman M, Eacott B, Willson S: Arts-based interventions in healthcare education. Med. Humanit. 2018;44(1):28–33. 10.1136/medhum-2017-011233 [DOI] [PubMed] [Google Scholar]
  • 127. Belling C: Commentary: sharper instruments: on defending the humanities in undergraduate medical education. Acad. Med. 2010;85(6):938–940. 10.1097/ACM.0b013e3181dc1820 [DOI] [PubMed] [Google Scholar]
  • 128. Matsumoto K: Effects of visual art observation on technical skills in novice healthcare learners: A scoping review.Published March 11, 2025. Reference Source [Google Scholar]
F1000Res. 2026 Feb 3. doi: 10.5256/f1000research.195407.r452893

Reviewer response for version 3

Sven Dupré 1

The author addressed the issues raised in my review in an adequate way. I have no further issues.

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

arts, medical humanities, art history

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2026 Jan 30. doi: 10.5256/f1000research.195407.r452892

Reviewer response for version 3

Margaret S Chisolm 1

I appreciate the thoughtful response and revisions. No further comments

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

NA

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2026 Jan 2. doi: 10.5256/f1000research.175471.r435754

Reviewer response for version 2

Sven Dupré 1

This article convincingly shows that there is insufficient proof that visual art observation is effective for novice healthcare professionals learning technical skills. Yet, it is less convincing in its choice of premises for the design of the study and how they relate to the representation of the literature. There is a rising interest in the role of the arts in medical education. Yet, the field is still in its infancy and in an exploratory phase. The current article presents two demarcations, partly in response to comments of previous reviewers, which are of such a nature that it would be important to more clearly flag that they are not representative of the field or the literature in its entirety.

First, in response to previous reviewers, the author has broadened the scope from diagnostic accuracy to technical skills. Yet, learning technical skills is only a part of the beneficial effects of art observation on medical education as listed in the literature. It is not possible to extrapolate conclusions on technical skills to the role of the arts in medical education as a whole, and it is important to explicitly state the limitations of the present study in the introduction and conclusion.

Second, in response to previous reviewers, the focus is on novices, excluding experts or more experienced healthcare professionals. Yet, the literature only partly considers novices and the role of art observation in their acquisition of technical skills. In fact, it would be quite surprising if art observation played a substantial role in developing diagnostic and technical skills among novices. As such, the conclusions reached in this article are not really surprising.

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

arts, medical humanities, art history

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

F1000Res. 2026 Jan 17.
Koji MATSUMOTO 1

Dear Reviewer,

Thank you very much for your thorough review.

As stated in the Introduction, this article focuses on the role of visual art observation in developing observation-related technical skills. Any remaining concerns you may have are beyond the scope of the article. I did not intend to extrapolate the conclusions to technical skills other than observation-related ones, nor to the role of the arts in health professional education. I also believe that no statement misleads readers into thinking that I made any such claims. Furthermore, as I discussed elsewhere (See Ref 1), I believe that the arts can play many roles in health professional education.

However, as it may be beneficial for readers who are unfamiliar with this topic, I added a paragraph to the end of the Limitations section to address the points you raised:

Finally, in health professional education for both novices (pre-service) and experts (in-service), the arts can play many roles beyond enhancing observational skills. 115 – 122 However, discussing such roles is outside the scope of this article. The author does not intend to extrapolate the conclusions to technical skills beyond those related to observation, nor to the role of the arts in health professional education.

[Ref 1] Matsumoto K. The use of arts in health professions education: applicability as a new teaching method. Igaku Kyoiku / Medical Education (Japan). 2023;54(3):235-243. Article in Japanese.

F1000Res. 2025 Mar 26. doi: 10.5256/f1000research.175471.r371695

Reviewer response for version 2

Margaret S Chisolm 1

Approved with no further suggestions

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

NA

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2024 Jun 21. doi: 10.5256/f1000research.141887.r292047

Reviewer response for version 1

Margaret S Chisolm 1

Abstract

The abstract is lacking in conceptual clarity/relevance and presentation of replicable methods and clearly stated results. The aim of this article as stated is to review intervention studies that used "appreciation" of visual arts to foster observation skills AND discussed impact on diagnostic accuracy. This aim contains at least two concepts that are problematic for me: 1) the use of the word "appreciation," as most visual arts-based methods used in medical education are not concerned with art appreciation, and 2) enhancement of observation skills is just one of many targeted skills and attributes that may improve patient outcomes, via diagnostic accuracy or other mechanisms, all of which are more "downstream." Thus, I'm concerned with the basic premise of the manuscript as articulated. However, putting that aside for the moment and turning to the methods sentence of the abstract: this is not a systematic review, yet the abstract implies it is, as it includes the term "previous systematic reviews." Systematic reviews need to follow PRISMA guidelines, which this review does not. It has no clearly stated inclusion/exclusion criteria, database names, search terms, duplicate review system, no flow chart, no clear search start and end dates, no exact numbers of articles reviewed, etc. This lack prevents replication and limits assessment of the strength of evidence and the conclusions that can be drawn. Again, the aims of assessing the effects of "appreciating" art on "diagnostic" accuracy are questionable to begin with, and the lack of rigor in the review methods makes this manuscript's conclusions more problematic. Moving to the results section of the abstract, few results are presented, and it reads more like a discussion section, making it challenging to assess whether the conclusions are supported by evidence. The conclusions section of the abstract is not supported by the results as presented in the abstract.
There is no presentation of evidence for medical cases/knowledge impacting learning as compared to visual art "appreciation," and no evidence is presented in the results section of the abstract to support verbalization skills enhancement, etc. In conclusion, the abstract is lacking in coherence and clarity.

Introduction 

The manuscript would be enhanced by a clear articulation of the problem(s) that visual arts-based teaching is trying to solve and what the current non-visual arts-based methods are to address this problem. It would also be important to mention the international consensus that the arts and humanities are fundamental to medical education (as stated by WHO, NASEM and AAMC FRAHME reports) and the Moniz et al scoping review [Ref 1] and two papers on the Prism model describing the functions that the arts and humanities have served in medical education to date.

Although one of the primary goals of developing observation skills is to improve diagnostic accuracy, this is a very downstream outcome to which many other skills may contribute; other downstream outcomes are clinically relevant as well (e.g., being able to observe nonverbals and so enhance communication skills to build rapport and enhance adherence with treatment recommendations, as one example). Also, it appears that results are included in the Introduction prior to presenting the Methods and Results later; I would avoid including results in the Introduction and instead put this in the context of the literature more broadly. A research question needs to be articulated.

Methods

The previously mentioned review studies should be placed in the Results, not in the Introduction or Methods sections. The Methods section should include the search terms, the databases searched, the exact search date, the number of abstracts/titles identified in the search, the exact inclusion/exclusion criteria, etc., as presented in the PRISMA guidelines. The reader should know how many non-duplicate abstracts/titles were identified, how many of those were included in full-text review, etc.

Results

The word "around" should never be part of a systematic review, which this is not. More precision throughout is necessary. It is not clear why the term "visual arts appreciation" is used, as "appreciation" is not integral to teaching visual arts-based medical education.

The author misstates two of the three core questions of VTS. These are precisely researched and worded questions, so getting two of them incorrect here is very concerning. These questions have the primary aim of holding the group in inquiry, as well as the aim, as stated by the author, of having the participants ground their inferences in visual evidence.

The penultimate paragraph of the Results section was the most informative of the manuscript, describing the study designs and outcomes, which are suggestive of an impact on observation skills. More precision in the text would be helpful here (i.e., % of studies versus "most" or "some").

The last paragraph brings up diagnostic accuracy, which - again - the author declares as a primary aim of this work, but I'm not sure it is THE primary aim of any use of visual arts in medical education, and so it's not surprising that only one study measured this.

Discussion

Use of the word "target" in the first sentence to describe the work of art is an odd choice; perhaps just saying "artwork" would be clearer. The author raises some interesting ideas in this section. However, the idea that observing for a longer period of time is not necessary for an accurate diagnosis among experts obscures the important role, for beginners (presumably the target learners for most of these exercises), of taking time to avoid premature diagnostic closure: arriving at a diagnosis too quickly and not staying open to possibilities works against good patient outcomes. Plus, even among experts who can quickly diagnose most presentations accurately, instances exist where taking more time would have resulted in a more accurate diagnosis for an individual patient. Similarly, the ability to look should precede the ability to know what to discard as irrelevant, so a beginner's inability to overlook irrelevant features seems a bit irrelevant to teaching beginners to look. At least one more recent study does support visual arts making a statistically significant contribution to tolerance for ambiguity (Tackett et al., Med Educ Online).[Ref 2]

Again, the use of "appreciating" visual arts misses the mark of what visual arts-based teaching aims to do. Art appreciation is beside the point. 

The author raises good questions about the limitations of transfer and the need for more research to study the transfer of skills from the museum or classroom to the clinic. They make a good point about the role of visual arts in fostering communication skills (both listening and "verbalizing," for which perhaps "speaking" or "articulately" would be a less awkward word choice) and team-building/perspective taking.

Conclusions

Although the methods/results presentation makes it difficult to be sure that the conclusions reached are accurate, I would not be surprised by a lack of evidence on whether visual arts-based teaching aids diagnostic accuracy, for the reasons outlined above. This is an emerging field whose focus has been on exploring the role of the arts in developing many clinically relevant skills and attitudes, which - taken as a whole - may support patient outcomes (including diagnosis and treatment planning) downstream. The field is in an exploratory phase, and its aims are to explore an array of attributes that may have an effect on these and other "downstream" outcomes.

Limitations should include that this is not a systematic review in accordance with the PRISMA guidelines, and that relevant articles may have been missed by the search strategy, etc.

Clearly more rigor is needed in the field of visual arts-based medical education research. The author does well to point this out and give some ideas for future directions. However, the findings of the study and these recommendations for improvement should be placed in the context of the broader literature on these research gaps as outlined in the Howley et al. AAMC FRAHME report and the Moniz et al. scoping review, which also suggests the need for more research rigor. I would disagree with several conclusions. The conclusion that "adopting visual art appreciation is not encouraged" seems to me an overstatement given the limitations of this review and the exploratory nature of the field. Clearly, innovation is needed that goes beyond medical cases and knowledge transfer to teach students an array of otherwise difficult-to-teach clinically relevant attributes, which may improve patient outcomes, whether via diagnostic accuracy or other effects. I also see great value in increasing learners' awareness of what is an observation and what is an interpretation. The metacognitive skill of recognizing that an interpretation lacks visual evidence raises awareness of possibly biased thinking.

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

museum-based health professions education

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. : How Are the Arts and Humanities Used in Medical Education? Results of a Scoping Review. Acad Med. 2021;96(8):1213-1222. doi: 10.1097/ACM.0000000000004118
  • 2. : Transformative experiences at art museums to support flourishing in medicine. Med Educ Online. 2023;28(1):2202914. doi: 10.1080/10872981.2023.2202914
F1000Res. 2025 Feb 13.
Koji MATSUMOTO 1

Dear Reviewer,

Thank you very much for your thorough review. Your comments are really valuable and will contribute to improving the quality of the work.

I designed a scoping review protocol in this revised version following the PRISMA Extension for Scoping Reviews (PRISMA-ScR). Inspired by your comments, I also expanded my focus from diagnostic skills to the technical skills that the intervention studies claimed art observation can foster.

I would like to address each of the points you raised (some of the comments are reordered, and duplicate comments were consolidated):

Comment (C): The abstract is lacking in conceptual clarity/relevance and presentation of replicable methods and clearly stated results.

Response (R): I designed the study following the PRISMA-ScR and revised the entire Abstract.

C: Not clear why the term "visual arts appreciation" is used as "appreciation" is not integral to teaching visual arts-based medical education.

R: According to a suggestion by another reviewer, I modified it to “visual art observation.”

C: It would also be important to mention the international consensus that the arts and humanities are fundamental to medical education.

R: I did not discuss this in this paper but did elsewhere (See Ref 1). I believe I understand this background.

[Ref 1] Matsumoto K. The use of arts in health professions education: applicability as a new teaching method.  Igaku Kyoiku / Medical Education (Japan). 2023;54(3):235-243. Article in Japanese.

C: Enhancement of observation skills is just one of many targeted skills and attributes that may improve patient outcomes, via diagnostic accuracy or other mechanisms, all of which are more "downstream."

R: I agree; this paper examines the claim that the interventions also work “downstream,” as the advocates argued.

C: Although one of the primary goals of developing observations skills is to improve diagnostic accuracy, this is a very downstream outcome to which many other skills may contribute and also other downstream outcomes are clinically relevant as well.

R: In response, I added the “Technical knowledge and skills based on observation skills” subsection to argue that it is necessary to consider the technical knowledge and skills of the learners.

C: I'm not sure that improving diagnostic accuracy is THE primary aim of any use of visual arts in medical education, and so it's not surprising that only one study measured this.

R: The scope of the study extended to skills related to observation.

C: It appears that results are included in introduction prior to presenting methods and results later, and would avoid including results in the introduction and instead put this in context of literature more broadly.

C: The previously mentioned review studies should be placed in the results, not in the Introduction or Methods sections.

R: The reason for mentioning the prior reviews in the Introduction is to show the need for this review by presenting the attainments and unsolved issues of the previous reviews.

C: A research question needs to be articulated.

R: I added this to the Introduction.

C: The Methods should follow PRISMA guidelines.

R: I revised the Methods according to the PRISMA-ScR.

C: The word "around" should never be part of a systematic review, which this is not. More precision in the text would be helpful here (i.e., % of studies versus "most" or "some").

R: I followed this suggestion and revised this section accordingly.

C: The author misstates two of the three core questions of VTS.

R: I revised this, thank you for pointing it out.

C: The penultimate paragraph of the Results section was the most informative of the manuscript, describing the study design and outcomes, which are suggestive of impact on observation skills.

R: In this revised version, the description was expanded in more detail.

C: In Discussion, use of the word "target" in the first sentence to describe the work of art is an odd choice.

R: In this version, this sentence was entirely revised.

C: The idea that observing for a longer period of time is not necessary for an accurate diagnosis among experts obscures the important role of taking time for beginners to avoid premature diagnostic closure via arriving at a diagnosis too quickly and not staying open to possibilities is important to good patient outcomes. Instances exist where taking more time would have resulted in a more accurate diagnosis for an individual patient.

R: I revised the subsection “Longer periods of careful and systematic observations for accurate diagnosis” to state clearly that teaching novice medical students systematic viewing is not helpful. The previous studies cited in the subsection deny the idea. Additionally, please tell me the specific references you are referring to.

C: At least one more recent study does support visual arts to make a statistically significant contribution to tolerance for ambiguity.

R: As mentioned in the “Longer periods of careful and systematic observations for accurate diagnosis” subsection, the effectiveness of tolerance to clinical practice is debatable.

C: The ability to look should precede the ability to know what to discard as relevant, so a beginner's inability to overlook irrelevant features seems a bit irrelevant to teaching beginners to look.

R: I understood this suggestion to mean that teaching students how to look and then the knowledge or skills to identify the critical signs may be beneficial. If this is what you mean, I discussed this as scaffolding in the “Long-term effects such as scaffolding” subsection, as I did in version 1.

C: The author raises good questions about the limitations of transfer and the need for more research to study the transfer of skills from the museum or classroom to the clinic.

R: This was mentioned in the revised version as well.

C: They make a good point about the role of visual arts to foster communication skills (both listening and "verbalizing") and team-building/perspective taking.

R: As a result of reanalysis, I stated that there is insufficient evidence that art observation helps with these skills in the Results and “Fostering verbalizing skills…” subsection of the Discussion.

C: "verbalizing," which perhaps "speaking" or "articulately" would be a less awkward word choice.

R: “Verbalizing” means making words out of what one sees, as the text states. Additionally, given the use of description as an outcome measure in the studies, I think “speaking” and “articulately” differ slightly from what I wished to express.

C: I would not be surprised by a lack of evidence on whether visual arts-based teaching aids diagnostic accuracy. The field is in an exploratory phase and its aims are to explore an array of attributes that may have an effect on these and other "downstream" outcomes.

C: The conclusion that "adopting visual art appreciation is not encouraged" seems to me an overstatement given the limitations of this review and the exploratory nature of the field.

C: The findings of the study and these recommendations for improvement should be placed in the context of the broader literature on these research gaps as outlined in the Howley et al. AAMC FRAHME report and the Moniz et al. scoping review, which also suggests the need for more research rigor.

C: Clearly innovation is needed that goes beyond medical cases and knowledge transfer to teach students an array of otherwise difficult-to-teach clinically relevant attributes, which may improve patient outcomes, whether via diagnostic accuracy or other effects.

R: I agree with these opinions and revised the Discussion and Conclusion from this standpoint. I tried to suggest directions and challenges for future research for developing this field. On the other hand, I did not provide suggestions for practice, judging that there is not enough evidence to apply them to practice.

C: Limitations should include that this is not a systematic review in accordance with the PRISMA guidelines, and that relevant articles may have been missed by the search strategy etc

R: I designed the study according to the PRISMA-ScR.

F1000Res. 2024 Feb 16. doi: 10.5256/f1000research.141887.r239810

Reviewer response for version 1

Nicole J Fernandez 1

This paper is a welcome addition to the literature on the humanities in medical education and attempts the daunting task of determining if art observation has a positive impact on medical diagnosis in novice learners. The approach to this question is logical, although in some areas, the reader would benefit from more explanation and context. The conclusion that there is little evidence to support improved diagnosis as a result of art observation is well-supported, although the broad reasons for this could be further detailed, and the statement that another approach would be better is outside the scope of the article.

The article could be improved by considering these further comments to the author:

Title: Use of “appreciating” - consider re-wording. This implies a casual approach to looking at art, while slow looking is an iterative, rigorous process. Perhaps “art observation” instead of “appreciating visual arts”. Also “medical diagnostic skills” instead of “medical diagnosis skills”. Suggest including ‘who’ as well - medical students? Novice learners?

Possible revised title: Art observation may not improve medical diagnostic skills in novice learners

The writing style is inconsistent and unusual in some places. Usually in English scientific writing “I” or “the author” is not used- recommend rephrasing throughout. (For example: “The author collected information on…” becomes “Information was collected on…”.)

Abstract: The Background section of an abstract usually provides context for the study rather than describing what the study did.

Make sure that any changes to the body of the article are reflected in the abstract (eg changes to the conclusions).

There are opportunities throughout to provide more context and help the reader’s understanding of the subject- some of these are noted below. Being explicit is important- there may be assumptions that are not shared by all readers, especially regarding definitions and terms used.

-Define what you mean by intervention/empirical intervention study. This is important in helping the reader to know which studies you considered.

-Clarify what you mean by diagnosis- visual diagnosis only (eg radiology, gross pathology)? Clinical reasoning (which would be much broader)?

-Specifically outline your inclusion and exclusion criteria

-How did you decide about diagnostic evidence? Is improved diagnostic accuracy the only criterion? What about other elements of the diagnostic process or approach to complex cases in which the diagnosis is unclear?

-The term ‘expertization’ may not be familiar to readers and should be explicitly defined, including what is meant by ‘expertization studies’.

-In Methods, “features and tendencies” of the selected articles is mentioned- please explain what is meant by this.

The final paragraph of the introduction is not needed, as this is the standard format for a journal article.

Methods-

The first paragraph mentions “studies that used visual arts for cultivating observation skills” but was this the only goal of the included studies? This does not seem consistent with what is stated elsewhere in the article.

3rd paragraph, second sentence: “were excluded” needs to be moved outside the parentheses.

Suggest including a figure that shows how articles were selected, numbers of articles at each step of the process, that 12 articles were selected for full review. How many articles were found using the review articles vs by manual search? There are many examples of figures like this in the published literature.

Results-

Abbreviations need to be explained in the table (regardless of whether they are also explained in the text- the table should be able to stand alone).

Results could be strengthened by using numbers instead of terms like “most”

Discussion-

-Overall the discussion would benefit from a clearer focus on the original question- specifically diagnostic skills, not art observation studies in general.

-Why might improvement in diagnostic skills be difficult to demonstrate in a study? Provide context, e.g., the difficulty of assessing long-term effects. How could future researchers attempt this? (currently in Conclusions)

-Consider that one might not see an improvement right after the intervention in 2nd year, but could it be helpful once students have more diagnostic experience, e.g., in 4th year?

-Clearly and briefly list what reported benefits of art observation are- the information is there, but scattered. It could be condensed, as this is not the original focus of the paper.

-If included studies were only those designed to improve observation skills, is it reasonable to assess them for improvement in diagnostic skills? How strong can any conclusions be? Only 1 study assessed diagnostic accuracy, and this was in a limited domain.

The relevance of expertization studies is unclear, since the art observation interventions were done with novice learners. Novices are not experts; one can’t teach them the same way. Just because experts are more likely to misdiagnose if they take longer, it doesn’t necessarily follow that the same is true for novices. If your position is that novices should be taught like experts, this needs to be explicitly stated and supported from the literature.

Throughout the discussion it needs to be made clear when you are referring to studies in experts vs studies in novices. This is not clear for a reader who is not already familiar with this literature.

The Discussion covers much material that does not seem closely related to the original goal and could be condensed and refined to focus on diagnostic skills. It is important to emphasize that only 1 art observation study explicitly looked at diagnosis, and did this in a limited way. The conclusion appears to be that the impact of art observation on diagnostic skill has not been adequately investigated, rather than that it is ineffective. I don’t agree that it has been “shown that appreciating visual arts may not have a direct effect on making an accurate diagnosis”; the statement “there is no concrete evidence on whether appreciating visual art contributes toward an accurate diagnosis” is more accurate.

Conclusions should be a single paragraph stating your major findings- the section titled Conclusions is not really a conclusion. This material is more appropriate in the Discussion.

The claim that “it would be better to…” was not the focus of this article- a review of this literature would need to be included in order to support this. It could be suggested that if improving diagnostic accuracy is the single goal, there may be other methods that are more effective than art observation. But is this the only goal of art observation interventions? I don’t believe so. Observation is the first part of the diagnostic reasoning pathway, but not the only part, certainly not a panacea. It seems simplistic to expect a single intervention to have a great and immediate impact on diagnostic accuracy.

Limitations:

-No veterinary medicine literature included, although nursing literature was included. Is there a reason for this? There are several relevant articles in the vet med literature.

-Studies that included other methods were excluded- can’t comment on what the effects of these studies are

-In methods you mention the limitations of eg thematic analysis, but there is no mention of the limitations of using quantitative population statistics

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

Tolerance of ambiguity, teaching observational skills, case-based learning, clinical pathology

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

F1000Res. 2025 Feb 13.
Koji MATSUMOTO 1

Dear Reviewer,

Thank you very much for your thorough review. Your comments are really valuable and will contribute to improving the quality of the article.

I added the scoping review protocol in this revised version following the PRISMA Extension for Scoping Reviews (PRISMA-ScR). I expanded my focus from diagnostic skills to the technical skills that the intervention studies claimed art observation can foster. Therefore, I cannot sufficiently respond to your suggestions regarding diagnostic skills. However, inspired by the suggestions, I discussed the relationship between visual art observation and technical knowledge and skills acquired through education and training.

I would like to address each of the points you raised (some of the comments are reordered, and duplicate comments were consolidated):

Comment (C): In the title, use “art observation” instead of “appreciating visual arts”. Also “medical diagnostic skills” instead of “medical diagnosis skills”. Suggest including ‘who’ as well- medical students? Novice learners?

Response (R): I modified the title: “Effects of visual art observation on technical skills in novice healthcare learners: A scoping review.” I also revised the language to use “visual art observation” and “diagnostic skills” in the article. Moreover, I added the definition of novice medical learners in the 4th paragraph of the Introduction: “novice healthcare learners, namely, students, residents, trainees, or inexperienced workers in related majors or domains.”

C: Usually in English scientific writing “I” or “the author” is not used- recommend rephrasing throughout.

R: I appreciate this feedback. In some cases, I used the personal pronoun or the subject “the author” to avoid passive sentence construction. I tried to use the passive form as little as possible, according to the guidelines of some journals on medical education.

C: In the Abstract, The Background section of an abstract usually provides context for the study rather than describing what the study did.

R: I modified this, as you pointed out.

C: Make sure that any changes to the body of the article are reflected in the abstract.

R: I also modified the abstract.

C: Define what you mean by intervention/empirical intervention study.

R: I added the definition in the 1st paragraph of the Introduction: “studies which include both training practices and corroborating its effects.”

C: “Expertization” should be explicitly defined.

R: I deleted the word and removed it from the article.

C: In Methods, “features and tendencies” of the selected articles is mentioned- please explain what is meant by this.

R: I revised research question one to make it clearer and more concise: What key features or trends characterize visual art observation training in intervention studies aimed at improving observation skills?

C: In Methods, the first paragraph mentions “studies that used visual arts for cultivating observation skills” but was this the only goal of the included studies? This does not seem consistent with what is stated elsewhere in the article.

R: As stated in Methods, I included the studies that primarily aimed to “evaluate the effect of the intervention on technical skills related to observation,” and the goals that the included studies stated clearly were listed in Table 2.

C: The final paragraph of the introduction is not needed.

R: The paragraph was written at the journal editor’s suggestion.

C: In Methods, 3rd paragraph, second sentence: “were excluded” needs to be moved outside the parentheses.

R: These words were removed.

C: Specifically outline your inclusion and exclusion criteria. Include a figure that shows how articles were selected, numbers of articles at each step of the process, that 12 articles were selected for full review.

R: Following the PRISMA-ScR, I added information to the Methods and added Figure 1.

C: Abbreviations need to be explained in the table (regardless of whether they are also explained in the text- the table should be able to stand alone).

R: I modified Table 2 as per your recommendation.

C: Results could be strengthened by using numbers instead of terms like “most”

R: I modified this, as you pointed out.

C: Overall the discussion would benefit from a clearer focus on the original question- specifically diagnostic skills, not art observation studies in general.

R: As explained, the focus has changed.

C: Clarify what you mean by diagnosis- visual diagnosis only? Clinical reasoning (which would be much broader)? How did you decide about diagnostic evidence? Is improved diagnostic accuracy the only criterion? What about other elements of the diagnostic process or approach to complex cases in which the diagnosis is unclear?

R: Due to the change in the focus of this paper, I cannot directly answer these questions. On the other hand, as mentioned in the Introduction, the intervention studies claimed that the observational skills fostered by art observation could be applied to both medical imaging and clinical interviews. See also the next response.

C: If included studies were only those designed to improve observation skills, is it reasonable to assess them for improvement in diagnostic skills? How strong can any conclusions be? Only 1 study assessed diagnostic accuracy, and this was in a limited domain.

C: Observation is the first part of the diagnostic reasoning pathway, but not the only part. It seems simplistic to expect a single intervention to have a great and immediate impact on diagnostic accuracy.

R: I agree with you. However, the intervention studies claimed that art observation could facilitate diagnostic skills (directly or indirectly), as mentioned in the Introduction. This paper examined the claim. Moreover, in the “Technical knowledge and skills based on observation skills” subsection of the Discussion section, I argued that the relationship to technical knowledge and skills of learners should be considered to examine this clearly.

C: Why might improvement in diagnostic skills be difficult to demonstrate in a study? Provide context, difficulty of assessing long term effects. How could future researchers attempt this?

R: See the response immediately above. In addition, I added the subsection “Long-term effects such as scaffolding” to discuss this. See also the “Implications for future research” subsection.

C: Throughout the discussion it needs to be made clear when you are referring to studies in experts vs studies in novices. This is not clear for a reader who is not already familiar with this literature.

R: This seems to relate to the next comment. If not, I may have misunderstood your question. Please let me know if the subsequent response is not satisfactory, and I will do my best to address your question.

C: Novices are not experts, one can’t teach them the same way. If your position is that novices should be taught like experts, this needs to be explicitly stated and supported from the literature.

R: The revised version does not mention that novices should be taught like experts. Concerning this comment, I also revised the subsection “Longer periods of careful and systematic observations for accurate diagnosis” to state clearly that teaching novice medical students systematic viewing is not helpful.

C: Consider that one might not see an improvement right after the intervention in 2nd year, but could it be helpful once students have more diagnostic experience, eg in 4th year?

R: This is discussed in the subsection “Long-term effects such as scaffolding.”

C: Clearly and briefly list what the reported benefits of art observation are.

R: In the 2nd paragraph of the Introduction, I listed the technical skills that the intervention studies aimed at. I also added this as a result of this review in the subsection “Outcome measures and their results” in the Results section.

C: Conclusions should be a single paragraph stating your major findings.

R: I modified it as you pointed out.

C: The conclusion appears to be that the impact of art observation on diagnostic skill has not been adequately investigated, rather than that it is ineffective. The statement “there is no concrete evidence on whether appreciating visual art contributes toward an accurate diagnosis” is more accurate.

R: As a result of the reanalysis, such wording was used in the paper.

C: In Conclusions, the claim that “it would be better to use learning modules based on perceptual learning using cases to foster pattern recognition” was not the focus of this article.

R: This description was deleted.

C: No veterinary medicine literature was included. Is there a reason for this?

R: Three studies in veterinary education were included because their training and experimental designs were comparable to those used in human health professional education.

C: In Limitations, studies that included other methods were excluded, so one cannot comment on what the effects of those studies are. Also, there is no mention of the limitations of using quantitative population statistics.

R: I added these to the Limitations subsection.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Availability Statement

    Underlying data

    OSF: Effects of visual art observation on technical skills in novice healthcare learners: A scoping review, https://doi.org/10.17605/OSF.IO/TDGQZ. 128

    This project contains the following underlying data:

    • Appendix Table.ods (the selected intervention studies' features, research design, and key results)

    Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
