Abstract
Children with unilateral cleft lip and palate (UCLP) suffer from negative public perceptions. A better treatment strategy should be established to help them live an ordinary life with improved perceptions. To do that, it is important to understand the relationship between physical facial features and perceptual judgment. In this paper, we present FaceReview, a new visualization system that supports interactive exploration of a heterogeneous multidimensional dataset consisting of facial measurement data and subjective judgment data. To seamlessly link the two datasets, we designed FaceReview around well-established information visualization techniques such as brushing and linking, small multiples, and dynamic query. Our design decisions successfully support the exploratory tasks of our collaborators. We present a case study to show the efficacy of FaceReview.
Introduction
Children with unilateral cleft lip and palate (UCLP) are at increased risk for bullying and teasing in school; they are often judged as appearing angry, as having lower intelligence, and as having less athletic ability. Even their parents may view children with UCLP negatively and have a greater tendency to sexually or physically abuse them than children without UCLP1. Therefore, it is important to improve the facial appearance of children with UCLP so that they can live an ordinary life as they grow up in their communities.
As a fundamental step toward developing a better treatment strategy for children with UCLP, the medical community has been trying to build a standardized scaling system for classifying the severity of facial disfigurement associated with UCLP following the primary reconstructive surgery2,3,4. As a preliminary effort, a group of researchers at Harvard School of Dental Medicine obtained facial measurement data and subjective judgment data using the pictures of 42 children with UCLP and 4 normal children. In addition to measuring 38 facial features, they conducted a nationwide survey to collect subjective judgment data from 537 participants in five occupations: plastic surgeons, dentists, orthodontists, OMFS (Oral and Maxillofacial Surgeons), and teachers.
When analyzing this dataset, there are three important issues to consider: 1) the dataset consists of two different but linked datasets (i.e., facial measurement data and subjective judgment data); 2) each dataset has many dimensions; and 3) the two datasets are heterogeneous in format. Hence, descriptive statistics or simple statistical tests such as the chi-square test do not sufficiently support the flexible and interactive exploration needed to gain deeper insight into the dataset.
In this paper, we introduce FaceReview (Figure 1), a new interactive visualization system developed to help the researchers better understand the dataset and generate plausible models for a standardized scaling system for classifying the severity of facial disfigurement associated with UCLP. FaceReview provides four customized views to best show the different aspects of the two heterogeneous datasets (facial measurement data and subjective judgment data). The Patient Information View provides patients’ information (gender, side of cleft, frontal/profile pictures, etc.) as well as a facial glyph with the areas of disfigurement highlighted. The Patient Network View shows the patients’ similarity relationships derived from the facial measurement data and allows the researchers to dynamically group the patients based on their similarity values. The Measurement Information View uses parallel coordinates5 to show the 38-dimensional facial measurement data. The Judgment Information View, an enhanced bar chart overview of the subjective judgment data, enables the researchers to interactively select a specific set of patients on which to focus their attention. Furthermore, FaceReview seamlessly links the two heterogeneous datasets to show the connections and interactions between them by tightly coupling the four views.
Figure 1.
FaceReview consists of four customized views, showing patients’ information in the Patient Information View (A), relationships between patients in the Patient Network View (B), facial measurement data in the Measurement Information View (C), and subjective judgment data in the Judgment Information View (D).
In this paper, we describe the visualization design and interaction of FaceReview along with implementation details. We then present a case study to demonstrate the efficacy of FaceReview as an efficient UCLP data exploration and analysis tool. We also believe that FaceReview can be used as a good educational tool for medical practitioners.
UCLP Dataset and Tasks
Dataset
There are two linked heterogeneous datasets in the UCLP dataset: facial measurement data and subjective judgment data. They are linked by the patient ID field. A set of photographs (frontal and profile views) of 46 children (42 children with UCLP and 4 normal children) was obtained from the Children’s Hospital Boston. The 42 children with UCLP had already received the primary repair surgery, and the 4 normal children were added as a control group.
The researchers measured 38 facial features for the 46 children. The facial measurements include distances, angles, or ratios between various facial features, e.g., the angle from the bottom of the nose to the left nostril, the vertical distance of the left Cupid’s bow to the opening of the mouth, and the ratio of the nose width to the mouth width. We used these facial measurement data to cluster the children with UCLP. We also derived a set of categorical variables from these measurements (e.g., which nostril is higher, which corner of the mouth is higher, etc.) and allowed users to select patients or change the arrangement of the patients according to the categorical measurement variables.
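The derivation of a three-level categorical variable from a pair of left/right measurements might be sketched as follows. This is an illustrative sketch only; the function name, field values, and the equality tolerance are assumptions, not details from the paper.

```python
# Hypothetical sketch: deriving a three-level categorical variable
# ("Right", "Left", "Equal") from paired left/right measurements.
# The tolerance value is an assumption for illustration.

def side_category(left_value, right_value, tolerance=0.01):
    """Compare paired left/right measurements, e.g. nostril heights."""
    if abs(left_value - right_value) <= tolerance:
        return "Equal"
    return "Left" if left_value > right_value else "Right"

# e.g. l_al_ht_to_r_al_ht: which nostril is higher
print(side_category(left_value=2.3, right_value=2.3))  # Equal
print(side_category(left_value=1.0, right_value=2.0))  # Right
```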
Standardized frontal and profile photographs of the 46 children were also evaluated by 537 participants via an Internet-based survey. The participants came from five professions: plastic surgeons, dentists, orthodontists, OMFS, and teachers. The four medical and dental judge groups have been involved in the primary surgical treatment and subsequent surgical revisions of UCLP patients. The participants rated each patient image on a six-level scale (none, mild, mild-moderate, moderate, moderate-severe, and severe) according to the perceived facial disfigurement. Information such as gender, number of patients treated, years in practice/education, and the facial features influencing the severity classifications was also collected.
Tasks
Our collaborators at Harvard School of Dental Medicine wanted to use FaceReview to search for meaningful relationships between facial measurements and subjective judgments. In doing so, they expected to come up with plausible ideas leading to a standardized scaling system for classifying the severity of facial disfigurement associated with UCLP following the primary reconstructive surgery2,3,4. Their initial goal in collaborating with us was to answer the following questions:
Is there any difference in the subjective judgment patterns among different judge groups?
Is there any association between a subset of the facial measures and the grouping of UCLP children with different levels of facial disfigurement?
Do judges in different groups look at different parts of the face (or different facial criteria) when making their judgments?
To find answers to these questions, it is necessary to seamlessly integrate the two linked heterogeneous datasets (facial measurement data and subjective judgment data). However, the doctors had mostly been using descriptive statistics or simple statistical tests to extract meaningful patterns from the dataset. Since the tasks required to answer the above three questions are exploratory in nature, we thought they needed an interactive visualization system that could efficiently link the two heterogeneous datasets and enable them to explore the linked dataset more flexibly. We tried to understand these initial questions in a broader context and developed an interactive visualization system, FaceReview, to help them explore the dataset in as many directions as possible.
FaceReview Design
FaceReview consists of four customized views to visualize the different aspects of the two heterogeneous datasets (Figure 1). To enable users to gain insights from multiple visual displays of domain knowledge, FaceReview was designed to support the interactive coordination of multiple views by tightly coupling the four customized views upon the selection of patients in many ways. In this section, we describe the four views in detail and the coordination between them, along with the design decisions we made. FaceReview employs four novel interaction techniques: (1) a dynamic query control for filtering patients according to the judgment data (Figure 4), (2) a details-on-the-overview technique to show details with context in the background (Figure 3), (3) an improved force-directed layout with interactive relayout and a patient community exploration method using a link-based community identification algorithm (Figure 1B), and (4) interactive selection and animated relayout of patients according to categorical variables (Figure 1B and Figure 2).
Figure 4.
Detail bar chart and dynamic query for filtering patients. (a) The detail bar chart appears when users click on a bar of the judgment bar chart. Each vertical bar represents a patient. The blue bounding box is a novel dynamic query control for selecting the patients who satisfy the thresholds it defines. Users can change the threshold values by dragging the bottom (b) or top (c) edges of the blue box. The selected patients are highlighted in the coordinated views (e.g., the Patient Network View).
Figure 3.
Judgment Information View with Details on the Overview. (A) Judgment bar chart. (B) Facial feature bar chart showing facial feature data for each judge group in each row. (Note that facial feature bar charts are available only for the medical/dental judge groups.) (C) Facial feature bar chart showing facial feature data for each severity level of disfigurement in each column; detail information is shown as a line graph, and overall information is shown as a bar chart.
Figure 2.
Relationship between (a) right outer corner of the eye and (b) right corner of the mouth.
Patient Information View
The Patient Information View shows the frontal and profile images of the children, a facial glyph highlighting the areas of disfigurement, and the distributions of children across 12 categorical measurement variables. Except for gender and side of cleft, all categorical variables have three levels: Right, Left, and Equal (or Even). We list the categorical variables in Table 1.
Table 1.
Categorical measurement variables available in FaceReview.
Variable Name | Definition |
---|---|
gender | gender of the patients (Male or Female) |
side of cleft | side of cleft of the patients (Right or Left) |
asymmetry_ex_to_midline | which outer corner of the eye is farther from the facial midline |
asymmetry_en_to_midline | which inner corner of the eye is farther from the facial midline |
l_en_ht_to_r_en_ht | which inner corner of the eye is higher |
l_ex_ht_r_ex_ht | which outer corner of the eye is higher |
distance_ala_to_en | which nostril is farther from the inner corner of the eye |
asymmetry_ala_to_midline | which nostril is farther from the facial midline |
l_al_ht_to_r_al_ht | which nostril is higher |
asymmetry_ch_to_midline | which corner of the mouth is farther from the facial midline |
which_is_higher_ch | which corner of the mouth is higher |
which_christa_higher | which peak of the Cupid’s bow is higher |
For each categorical variable in Table 1, FaceReview employs a stacked bar chart at the right side of the Patient Information View (Figure 1A) to show how many patients are in each level. It supports bidirectional coordination between this view and the other three views upon the selection of patients. Users can click on a bar segment of a category in the stacked bar chart to select all patients belonging to the corresponding level of the category. The other three views then reflect this selection. Conversely, when users make a selection in another view, the stacked bar charts in this view show the information for the selected patients. In this way, users can interactively explore the dataset to find meaningful patterns. For example, when users click on the bar segment for the “Right” level of l_al_ht_to_r_al_ht, they can see whether the selected patients (whose right nostril is higher than the left) show any interesting patterns in the other views (e.g., whether judges’ overall ratings tend to be more severe for this group).
Users can drill down through the dataset by making subsequent selections. Once users select a bar segment of a stacked bar chart, all stacked bar charts are updated to show the information for the selected patients (Figure 1A). A subsequent selection of another bar segment shows the information that satisfies both conditions (i.e., a conjunction). Since a selected bar segment extends to the full range and its label is highlighted in red, users can easily tell the selection condition. Users can undo a selection by clicking again on the selected bar segment.
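The conjunctive drill-down described above can be sketched in a few lines. This is an illustrative sketch, not FaceReview's implementation; the patient records and condition format are assumptions, though the variable names come from Table 1.

```python
# Sketch of the conjunctive drill-down: each stacked-bar click adds one
# (variable, level) condition, and a patient stays selected only if it
# satisfies all of them. Patient records are modeled as dicts keyed by
# the categorical variable names in Table 1 (an assumed data layout).

def select_patients(patients, conditions):
    """Return patients matching every (variable, level) condition."""
    return [p for p in patients
            if all(p.get(var) == level for var, level in conditions)]

patients = [
    {"id": 1, "l_al_ht_to_r_al_ht": "Right", "gender": "Male"},
    {"id": 2, "l_al_ht_to_r_al_ht": "Right", "gender": "Female"},
    {"id": 3, "l_al_ht_to_r_al_ht": "Left",  "gender": "Male"},
]
conditions = [("l_al_ht_to_r_al_ht", "Right"), ("gender", "Male")]
print([p["id"] for p in select_patients(patients, conditions)])  # [1]
```

Undoing a selection then corresponds simply to removing its condition from the list and re-running the filter.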
Along with the stacked bar charts, we also provide a facial glyph display to enhance users’ perception of disfigured facial features. When a single patient is selected in any view, the facial glyph highlights the patient’s disfigured facial features with a red outline while the stacked bar charts show the detail values for the selected patient.
Patient Network View
Since one of the most important tasks in FaceReview is to examine the patient pictures, we decided to always show all patient images in the Patient Network View, while only the information for the selected patients is visible in the other views. FaceReview highlights the selected patients with a thick red border, and allows users to zoom and pan in this view.
Force-directed layout and link-based patient community identification
Users can arrange patients’ frontal images in various ways in the Patient Network View. FaceReview uses a force-directed layout algorithm to determine the initial layout and calculates the Euclidean distance between each pair of patients using all facial measurement variables. Users can adjust the similarity threshold using the “Similarity Threshold” slider control at the top left corner of this view (Figure 1B). A pair of patients is connected by a link only when their similarity is higher than the similarity threshold. Users can also adjust the link force using the “Link Force” slider control (Figure 1B) to draw patients closer together or spread them farther apart.
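The construction of this similarity network might be sketched as below. The paper does not give the exact distance-to-similarity mapping, so the `1 / (1 + dist)` conversion here is purely an assumption for illustration.

```python
# Sketch of building the patient similarity graph: Euclidean distance
# over the measurement variables, converted to a similarity in (0, 1],
# with a link kept only above the slider threshold. The conversion
# function is an assumption; the paper does not specify it.
import math

def similarity(a, b):
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)  # assumed mapping of distance to (0, 1]

def build_edges(vectors, threshold):
    """Link each patient pair whose similarity exceeds the threshold."""
    ids = list(vectors)
    return [(i, j) for k, i in enumerate(ids) for j in ids[k + 1:]
            if similarity(vectors[i], vectors[j]) > threshold]

# Toy 2-D measurement vectors (the real data has 38 dimensions):
vectors = {"p1": [0.5, 1.0], "p2": [0.5, 1.1], "p3": [3.0, 4.0]}
print(build_edges(vectors, threshold=0.8))  # [('p1', 'p2')]
```

Raising the threshold prunes weaker links, which is exactly the effect of dragging the “Similarity Threshold” slider.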
Once the layout is determined, we apply a link-based community identification algorithm6 to cluster similar patients together. The algorithm first computes the similarity between two links that share an adjacent node. Once the similarity values for all pairs of connected links are calculated, the algorithm extracts link communities by merging links whose inter-link similarity is higher than a specified threshold. The choice of the threshold value affects the community size: the higher the threshold, the smaller the communities. We set the inter-link similarity threshold so that three nodes connected by only two links can form a community. In this case, the number of nodes in the union of the neighbors is 3 and the number of common neighbors is 1, so we set the inter-link similarity threshold to 1/3.
Node communities can be derived from link communities: nodes connected to the links belonging to the same link community form a node community. FaceReview highlights each community by enclosing its members in a bounding region with a grey background (Figure 1B). Since the community identification algorithm is link-based, one patient can be a member of multiple communities. FaceReview enables users to select a patient or one of his/her communities. By selecting and highlighting all members of a community using a context menu for a patient node, users can check their similarities or differences in different arrangements of the Patient Network View or in the other coordinated views.
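The inter-link similarity underlying the 1/3 threshold can be made concrete with a small sketch. Following the cited link-community approach6, two links sharing a node are compared via the Jaccard index of the inclusive neighborhoods of their non-shared endpoints; the data layout below is an assumption for illustration.

```python
# Sketch of the inter-link similarity from the link-community
# algorithm6: for links (i, k) and (j, k) sharing node k, compare the
# inclusive neighborhoods of i and j with the Jaccard index. For a bare
# path i-k-j this yields 1/3, the threshold chosen so that three nodes
# joined by only two links still form a community.

def link_similarity(edges, i, j):
    """Similarity of two links whose non-shared endpoints are i and j."""
    def inclusive_neighbors(n):
        # The node itself plus every node it is linked to.
        return ({n}
                | {b for a, b in edges if a == n}
                | {a for a, b in edges if b == n})
    ni, nj = inclusive_neighbors(i), inclusive_neighbors(j)
    return len(ni & nj) / len(ni | nj)

# A bare path 1-2-3: links (1, 2) and (2, 3) share node 2.
print(link_similarity([(1, 2), (2, 3)], i=1, j=3))  # 0.333... (= 1/3)
```

With the threshold set to exactly 1/3, such a minimal path is just enough to merge its two links into one community.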
Animated relayout by categorical measurement variables
FaceReview supports interactive rearrangement of patients in the Patient Network View according to the categorical measurement variables in Table 1. When users select a categorical measurement variable in the “Sort by” combo box at the top right of this view, patients are repositioned according to the variable with smooth animation. For example, when users select asymmetry_ch_to_midline, patient nodes are rearranged into a bar chart-like visualization as shown in Figure 2b. They are stacked on bars according to their values for the selected category. In Figure 2b, it is evident that more patients have their right corner of the mouth farther from the midline than the left.
Users can select a group of patients (e.g., all patients in a bar) to see whether they show any interesting patterns in a different arrangement by another categorical measurement variable in this view or in other views. For example, we can easily notice that only four patients move out of the “Right” bar when we change the “Sort by” variable from l_ex_ht_r_ex_ht to asymmetry_ch_to_midline, as shown in Figure 2. This implies that a patient’s right corner of the mouth tends to be farther from the facial midline when the outer corner of the right eye is higher than the left.
Measurement Information View
FaceReview employs parallel coordinates5 to show the facial measurement data since they are an efficient way to visualize a multidimensional dataset in a compact screen space (Figure 1C). There are 38 vertical axes for the 38 facial measurement variables. When users mouse over an axis, a reference image with the corresponding facial feature highlighted in red is shown as a tooltip to inform users of what feature the vertical axis represents. FaceReview shows information for all patients when none is selected. Once a selection is made, only the measurement data for the selected patients are shown in this view. Users can visually examine the parallel coordinates to see whether any distinctive patterns emerge for the selected patients, such as the selected patients having similar values for a subset of facial features.
Interactive adjustment of weights of variables in similarity calculation
The 38 facial measurement variables are all quantitative: 2 variables are angle measurements (the angles from the bottom of the nose to the left/right nostril), 2 are distance measurements (the vertical distances of the left/right Cupid’s bow to the opening of the mouth), and 34 are ratio measurements.
As mentioned above, the force-directed layout algorithm adopted in the Patient Network View calculates the similarity between patients using the multidimensional facial measurement data shown in the Measurement Information View. By default, FaceReview uses all variables in the facial measurement data, but it is useful to examine how the relationship among patients changes when only a subset of variables is used in the similarity calculation.
We decided to use the parallel coordinates for interactively adjusting the weights of measurement variables in the similarity calculation so that users can directly control the variables where they are actually visualized. FaceReview turns the vertical axes of the parallel coordinates into slider controls for adjusting the weights of the variables. By default, all slider controls are set to the maximum so that all variables contribute equally to the similarity calculation.
If users click the “Show Link Control” check box in the top left corner of this view, a slider control appears on each vertical axis (Figure 1C). The position of the slider thumb indicates the weight by which the variable contributes to the similarity calculation. The “ALL” slider control at the top left corner allows users to control all sliders at once. Adjusting the sliders instantaneously affects the force-directed layout in the Patient Network View. For example, if users want only the two angle measurement variables to affect the layout of the patient network in the Patient Network View, they first drag the “ALL” slider to the bottom to set the weights of all variables to zero, and then drag the two sliders for the angle measurement variables to the top (Figure 1B&C). As they adjust the sliders, the similarity values between patients are updated in real time and the layout of the patient network reflects the change with smoothly animated transitions.
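One plausible way to fold the slider weights into the similarity calculation is a weighted Euclidean distance, sketched below. The paper does not give the formula, so both the weighting scheme and the distance-to-similarity conversion are assumptions.

```python
# Sketch of a slider-weighted similarity: each measurement variable
# contributes to the Euclidean distance in proportion to its slider
# weight; a weight of zero removes it entirely. The exact formula is an
# assumption, as the paper does not specify it.
import math

def weighted_similarity(a, b, weights):
    dist = math.sqrt(sum(w * (x - y) ** 2
                         for x, y, w in zip(a, b, weights)))
    return 1.0 / (1.0 + dist)

# Toy 3-variable vectors (the real data has 38 variables):
a, b = [1.0, 5.0, 2.0], [1.0, 9.0, 2.0]
# All sliders at zero except the first and third:
print(weighted_similarity(a, b, weights=[1.0, 0.0, 1.0]))  # 1.0
```

Dragging the “ALL” slider to the bottom corresponds to a weight vector of all zeros, after which raising individual sliders reintroduces just those variables.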
Judgment Information View
Trying to support the users’ tasks to find answers to the initial questions (section 2.2) in a broader context, we set the following design goals for the Judgment Information View:
provide users with a compact, clear overview of the subjective judgment data
encourage comparison between judgment patterns by different judge groups
show the detail (individual information) in relation to its context (average information)
support the interactive selection and filtering of patients based on judgment data
reveal the link between the judgment result and the visual features that influence the result
Small multiples for overview and comparison
We decided to use bar charts as the essential visual representation for displaying the judgment pattern of each judge group. A bar chart in each row (the judgment bar chart) shows the judgment distribution of the corresponding judge group (Figure 3A). In each row, from left to right, the severity of disfigurement increases from none to severe. We also color-code the bars according to the severity: green for none and red for severe. The height of each bar denotes the average percentage of judges who judged the selected patients as having the corresponding level of severity.
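The bar heights for one judge group's row might be computed as sketched below. The data layout (`ratings[patient]` as a list of per-judge severity levels) is an assumption for illustration; the six levels come from the survey described earlier.

```python
# Sketch of computing one judgment bar chart row: for each severity
# level, average over the selected patients the percentage of judges in
# the group who assigned that level. Data layout is assumed.

LEVELS = ["none", "mild", "mild-moderate", "moderate",
          "moderate-severe", "severe"]

def bar_heights(ratings, selected):
    """Average percentage per severity level over selected patients."""
    heights = []
    for level in LEVELS:
        pcts = [100.0 * r.count(level) / len(r)
                for p, r in ratings.items() if p in selected]
        heights.append(sum(pcts) / len(pcts))
    return heights

# Two patients, four (toy) judges from one group each:
ratings = {"p1": ["none", "none", "mild", "severe"],
           "p2": ["mild", "mild", "mild", "moderate"]}
print(bar_heights(ratings, selected={"p1", "p2"}))
# [25.0, 50.0, 0.0, 12.5, 0.0, 12.5]
```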
We adopt the concept of small multiples7 to achieve the first and second design goals. Since the same encoding and visual structures are repeated row by row, users are encouraged to perform comparative analysis across rows7. For example, it becomes apparent that the teachers are the most generous in terms of judging the severity of the UCLP children since their bar chart looks more skewed to the left than other judge groups (Figure 3A).
There is a small bar chart (the facial feature bar chart) at the right end of each row and the bottom end of each column. The facial feature bar chart in each row (Figure 3B) provides a visual summary of the facial features the corresponding judge group looked at to make their decisions. The facial feature bar chart in each column (Figure 3C) summarizes the facial features the judges actually looked at when judging the selected patients as having the corresponding severity level. It is evident from these facial feature bar charts that there is no noticeable difference between judge groups; the most influential facial features are nose shape and lip shape, followed by lip position and proportions.
Details on the Overview
It is important to compare the judgment information for an individual patient (i.e., detail) with the overall judgment information (i.e., overview) since such comparisons enable users to understand an individual data item in a broader related context. As Shneiderman’s visual information seeking mantra8 attests, detail is as important as overview. Instead of showing the detail and overview in separate views, we decided to show both in a single view using different visual representations: a line graph for the detail and a bar chart for the overview (Figure 3). We call this technique Details on the Overview; it enables users to easily perceive the difference between the individual pattern and the overall pattern. It is applied to both the judgment bar chart and the facial feature bar chart (Figure 3).
When users move the mouse over a patient in the Patient Network View or the Measurement Information View, a line graph appears over the judgment bar chart to show the judgment information for the patient. In this way, users can compare the judgment information for a specific patient (shown as a line graph) with the overall average judgment information of all the currently selected patients (shown as a bar chart).
Bar charts as dynamic query controls
If users click on a bar in the judgment bar chart, a detail bar chart (Figure 4a) appears within the bar to show the individual judgment information for all patients at the severity level the bar represents. Each vertical bar in the detail bar chart represents a single patient. The height of a detail bar represents the percentage of judges who judged him/her as having the corresponding severity level, and the vertical detail bars are sorted in descending order of their height. The middle horizontal line of the chart indicates the average percentage of judges who judged the patients as having the corresponding severity level. For example, when users click on the bar for plastic surgeons’ moderate-severe rating, the detail bar chart appears (Figure 4a). The patient on the left was judged by many more plastic surgeons as having the moderate-severe level of disfigurement than the patient on the right.
When users click on a bar to see the detail bar chart within it, a blue bounding box appears around the bar chart (Figure 4a). This interaction selects all patients that belong to the severity level the bar represents. The blue bounding box works as a novel dynamic query8 control for filtering patients based on the judgment data. Users can filter out patients by dragging the bottom (Figure 4b) and top (Figure 4c) edges of the bounding box to adjust the thresholds defined by the locations of the two edges. Only the patients whose values satisfy the thresholds are selected (i.e., a patient is selected when the tip of the corresponding bar is enclosed by the blue bounding box) and highlighted in all other views.
Users can drag up the bottom edge of the dynamic query control to keep only the patients who received more votes from the judges for a severity level. For example, on the detail bar chart for the moderate-severe level of the plastic surgeons group (Figure 4b), if users drag the bottom of the dynamic query control up to 25%, only the patients whom more than 25% of plastic surgeons judged as having moderate-severe disfigurement are selected. It is interesting to note that all the selected patients have the same value (i.e., “Right”) for the categorical measurement variable asymmetry_en_to_midline, as shown in Figure 4b. This means that they all have the right inner corner of the eye farther from the facial midline than the left inner corner. We found that this homogeneity holds for the other four judge groups as well. The top edge works in the same way: users can drag down the top edge of the dynamic query control to keep only the patients who received fewer votes from the judges for a severity level than a threshold.
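The filtering behind the blue box reduces to a simple range test, sketched below; the percentage values are made up for illustration.

```python
# Sketch of the dynamic-query filter: a patient stays selected only
# when the tip of its detail bar (the percentage of judges giving it
# that severity level) lies between the bottom and top edges of the
# blue bounding box.

def filter_by_thresholds(percentages, bottom, top):
    """Keep patients whose bar tip falls inside [bottom, top]."""
    return {p for p, pct in percentages.items() if bottom <= pct <= top}

# Hypothetical percentages of plastic surgeons rating each patient
# moderate-severe:
percentages = {"p1": 40.0, "p2": 28.0, "p3": 12.0}
# Dragging the bottom edge up to 25% keeps only p1 and p2:
print(sorted(filter_by_thresholds(percentages, bottom=25.0, top=100.0)))
# ['p1', 'p2']
```

Dragging the top edge down instead lowers `top`, which keeps only the patients with fewer votes than that threshold.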
Figure 5.
Change of subjective judgments with age for a female patient (top: younger, bottom: older). The red polyline on the Judgment Information View (overlaid for illustration) shows that the peak rating shifts to the left for the older image, indicating that she receives more favorable judgments as she gets older.
Implementation
FaceReview was implemented as a stand-alone application in Java. We built it on the Swing framework and the Prefuse toolkit10 following the Model-View-Controller architecture. In FaceReview, the seamless integration of the two heterogeneous datasets is made possible by the interactive coordination between the four views. Brushing and linking9 is the key interaction technique used in FaceReview to implement this coordination. Any selection event in one view causes instantaneous updates in the other views to highlight the selected patients.
Qualitative Evaluation – A Case Study
“The visualization tool allowed us to see spatial patterns and relationships across faces which were not possible before.” - One of our collaborators
We argue that FaceReview provided the medical community with the first opportunity to “see” the UCLP data and interactively explore it. In this section, we present a case study as a qualitative evaluation result to show that FaceReview serves as an effective UCLP data exploration and analysis tool.
Participants and Procedure
Our collaborators at Children’s Hospital Boston had used only statistical packages to analyze the UCLP data before using FaceReview. It is common practice in the medical community to use traditional statistical tests for testing a small number of carefully considered hypotheses. However, traditional statistical packages have limitations in supporting exploratory data analyses where rapid hypothesis formulation is important. We thought that interactive visualization tools could efficiently support hypothesis formulation so that users could rapidly formulate many more hypotheses by seeing and interacting with visual representations of their data.
In the winter of 2010, we started working with our collaborators to design and develop FaceReview to help them explore and understand the UCLP data. We interviewed them to understand their main goals and tasks in order to design and build a prototype. After the first prototype was built, we trained our collaborators through remote interaction. They used the prototype and reported their findings and feedback to us. We improved the prototype based on these reports and let them use it again. This iterative design and evaluation process continued for about a year until we built the current version of FaceReview.
Analyses and Results
With FaceReview, our collaborators could view the differences and similarities in the judgments of facial deformity in the Judgment Information View, changes in the asymmetries represented by the stacked bar charts for categorical measurement variables in the Patient Information View, and interesting patterns of symmetric and asymmetric development using interactive coordination between the four views. This had been nearly impossible with the parametric/nonparametric statistical tests in statistical packages.
Our collaborators all agreed that FaceReview facilitated the process of determining which faces clustered together based on different independent variables such as nasal/facial asymmetry. This had not been possible using the conventional statistical tests, as they reported at the American Cleft Palate Association meeting.
Our collaborators could instantly notice in the Judgment Information View that the teachers differed from medical/dental professionals, with teachers rating most of the faces as less severe than the medical/dental professionals (Figure 1D). It was also discernible that orthodontists tended to be the most critical overall in their ratings of the patient images (Figure 1D).
We have already reported some other interesting findings from our case study in section 3 while explaining the FaceReview interaction. To sum up, those findings are as follows:
The patient’s right corner of the mouth tends to be farther from the facial midline when the outer corner of the right eye is higher than the left (Figure 2).
The most influential features are nose shape and lip shape (Figure 3C).
Patients who are judged as having moderate-severe disfigurement tend to have the right inner corner of the eye farther from the facial midline than the left (Figure 4b).
One of our collaborators also made an additional interesting observation related to the age and gender of patients. There were several sets of images of the same patient over several years of growth and development. Most of the medical/dental judges rated the female faces much more favorably as they grew older (Figure 5). This pattern of decreasing severity was not found in the male patients. Our collaborators were able to easily see how the judge groups’ perceptions and the facial measurements changed as a patient grew older through coordinated updates across the multiple views in FaceReview. Given the tendency of the severity of facial disfigurement to change with age across almost all the judge groups, doctors might have to consider the interaction effect of age with facial disfigurement as an important parameter in planning surgical interventions for UCLP patients.
In summary, we observed that FaceReview made it possible for our collaborators to easily formulate interesting hypotheses from their findings throughout their visual exploration. We believe that the interactive integration of the two heterogeneous datasets through the four coordinated views in FaceReview played an important role: our collaborators could see an overview of the entire dataset, dynamically select groups of patients sharing a distinctive attribute defined by clinical information or categorical measurement variables, and compare the patterns of the selected patients’ information from different linked perspectives.
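The coordination described above follows the classic brushing-and-linking pattern: a shared selection model notifies every subscribed view when the brushed set of patients changes. The sketch below illustrates that pattern in minimal form; all class, view, and patient names are hypothetical and do not reflect FaceReview’s actual implementation.

```python
class SelectionModel:
    """Shared selection state; linked views subscribe and are notified on change."""
    def __init__(self):
        self._selected = set()
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)

    def select(self, patient_ids):
        # Brushing in any one view updates the shared selection...
        self._selected = set(patient_ids)
        # ...and every linked view is told to re-render its highlights.
        for listener in self._listeners:
            listener.on_selection_changed(self._selected)


class View:
    """Stand-in for one coordinated view (e.g., measurements or judgments)."""
    def __init__(self, name, model):
        self.name = name
        self.highlighted = set()
        model.subscribe(self)

    def on_selection_changed(self, selected):
        self.highlighted = set(selected)


model = SelectionModel()
views = [View(n, model) for n in
         ("Measurement", "Judgment", "Face", "Clinical")]

# Brushing two (hypothetical) patients in one view propagates everywhere.
model.select({"P03", "P17"})
assert all(v.highlighted == {"P03", "P17"} for v in views)
```

The design choice that matters is that selection lives in one model rather than in any single view, so each view stays a passive renderer of the shared state.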
Some of these hypotheses can be informally tested by rapidly comparing visual patterns in FaceReview, but we believe that FaceReview should complement, rather than replace, traditional statistical packages. To obtain more rigorous scientific results, it is necessary to perform confirmatory hypothesis testing with more samples (i.e., more patients and controls) in a traditional statistics package.
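As one illustration of such confirmatory testing, a hypothesis surfaced visually (e.g., that teachers rate faces less severe than orthodontists) could be checked with a two-sample test on the groups’ severity ratings. The sketch below uses a simple permutation test on the difference of group means; the ratings shown are hypothetical stand-ins, and in practice one would use a real statistics package rather than this hand-rolled version.

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Returns the fraction of random relabelings whose absolute mean
    difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical mean severity ratings per judge (lower = less severe).
teacher_ratings = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0]
orthodontist_ratings = [3.0, 3.3, 2.8, 3.1, 3.4, 2.9]

p = permutation_test(teacher_ratings, orthodontist_ratings)
# A small p would support the visually observed group difference.
```

With real data, sample sizes, and a standard package, the same logic applies: the visualization generates the hypothesis, and the formal test confirms or rejects it.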
Conclusion and Future Work
In this paper, we present FaceReview, an interactive visualization system designed to help medical experts effectively explore a heterogeneous dataset of children with unilateral cleft lip and palate (UCLP). We discuss the design and development process of FaceReview, which consists of four coordinated views revealing different aspects of the facial measurement data, the subjective judgment data, and their interaction. We deployed FaceReview to our collaborators, trained them remotely through email and web-based tutorial materials, and present the results of a case study to show the efficacy of FaceReview.
Ultimately, the results of this project and subsequent studies using additional stimulus images and judges will be used in the development of a standardized scaling system for classifying the severity of facial disfigurement associated with UCLP following the primary reconstructive surgery. FaceReview can also serve as part of an educational package (CDs, videos, readings, tests, etc.) describing the etiological and biopsychosocial issues related to UCLP for use in professional development programs such as the National Education Association’s National Bullying Awareness Campaign.
Even though FaceReview was developed for UCLP datasets, we expect that it could be applied to other face-related medical conditions such as Down syndrome, fetal alcohol syndrome, and infant autism, as long as both physical measurement and subjective judgment data are available. We plan to make FaceReview more scalable, as our collaborators are planning to increase the number of patients in the dataset. Another interesting future direction would be to add a classification function to FaceReview so that doctors can interactively classify a new UCLP patient into the existing UCLP patient groups in FaceReview; we could then anticipate the expected perceptual judgment and develop a personalized surgical plan for a better outcome.
Acknowledgments
This work was supported in part by the BK21 Project in 2011 and by the Basic Science Research Program through the NRF of Korea funded by the MEST (No. 2011-0005566 and No. 2011-0003631). The ICT at Seoul National University provided research facilities for this study.
References
1. Sullivan PM, Brookhouser PE, Scanlan JM. Patterns of physical and sexual abuse of communicatively handicapped children. Ann Otol Rhinol Laryngol. 1991;100(3):188–194.
2. Colville J. Towards the quantification of disfigurement. British Journal of Plastic Surgery. 2000;53(1):84.
3. Assuncao AG. The V.L.S. classification for secondary deformities in the unilateral cleft lip: clinical application. British Journal of Plastic Surgery. 1992;45(4):288–296.
4. Cussons PD, Musison MSC, Fernandez AEL, Pigott RW. A panel based assessment of early versus no nasal correction of the cleft lip nose. British Journal of Plastic Surgery. 1993;46(1):7–12.
5. Inselberg A, Dimsdale B. Parallel coordinates: a tool for visualizing multi-dimensional geometry. Proc. of the 1st Conference on Visualization ’90 (VIS ’90); 1990. pp. 361–378.
6. Ahn YY, Bagrow JP, Lehmann S. Link communities reveal multiscale complexity in networks. Nature. 2010;466:761–764.
7. Tufte ER. The Visual Display of Quantitative Information. 2nd ed. Cheshire, CT: Graphics Press; 2001.
8. Shneiderman B. Dynamic queries for visual information seeking. IEEE Software. 1994;11(6):70–77.
9. Becker RA, Cleveland WS. Brushing scatterplots. Technometrics. 1987;29:127–142.
10. Heer J, Card SK, Landay JA. Prefuse: a toolkit for interactive information visualization. Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05); 2005. pp. 421–430.