Table 3.
Nature of measures and procedures of studies included in the review (N=32).
| Author | Nature of measures and procedure |
| --- | --- |
| Attfield et al [6] | Semistructured interviews, eliciting accounts of health information-seeking episodes and how they relate to ongoing health care |
| Briet et al [19] | Questions submitted to a health website and the answers provided were categorized and analyzed descriptively |
| Cartright et al [20] | Logs were mined and categorized as either evidence-directed, hypothesis-directed with diagnostic intent, or hypothesis-directed with informational intent, according to defined algorithms |
| Chin [21] | Participants were randomized to complete either an ill-defined task (find possible causes for a list of symptoms) or a well-defined task (find a specific medical term) using a health website; measures included cognitive measures (working memory capacity, processing speed), health literacy, medical knowledge, and search performance on both tasks |
| Chin & Fu [22] | Participants were given a symptom vignette and asked to find possible causes. Participants were randomized to complete either a parts task (described symptoms based on body parts) or a systems task (described symptoms by functional systems). Tasks were completed either in the parts interface (categorized symptoms based on body parts) or systems interface (categorized symptoms based on functional body systems). Measures included Patients’ Medical Background Knowledge, Mental Interface Match Index, Broadness (no. of links), link decision time: time spent reading. |
| Cooper et al [23] | Focus group discussions covered which symptoms from a list would be of most concern and why, what could cause them, what participants' hypothetical response to them would be, and what their actual responses had been in the past |
| Cumming et al [24] | Participants viewed a storytelling video online and then completed a questionnaire evaluating the effect of the video on feeling informed, planned future help seeking, and related outcomes |
| De Choudhury et al [25] | In the survey, participants were asked about their experiences using Twitter and search engines to share and seek health information; in the log analysis, tweets and search logs were assigned to 4 categories: (1) symptoms of major diseases, (2) benign explanations (non-life-threatening illnesses), (3) serious illnesses, and (4) disabilities; logs were then analyzed descriptively |
| Fiksdal et al [26] | Moderators used a semistructured moderator guide to facilitate discussion in focus groups about: (1) participants’ perception and understanding of health care information, (2) the process of information collection on the Internet, (3) understanding and usage of information, and (4) implications of health care information for health and well-being |
| Fox & Duggan [1] | People were contacted via telephone for telephone interviews about online health information seeking |
| Hay et al [27] | Before their appointment, patients were interviewed about online health information (OHI) seeking and completed the Wong-Baker FACES Pain Scale; the consultation was audio-recorded to determine whether OHI was mentioned, and patients then completed a satisfaction scale regarding the consultation |
| Keselman et al [28] | Participants read a hypothetical scenario describing a relative who experienced symptoms typical of stable angina and discussed possible causes of the symptoms from the vignette in semistructured interviews; they then thought aloud while researching the symptoms on MedlinePlus |
| Lauckner & Hsieh [29] | The study took place online; participants were presented with a symptom vignette and then with a search engine result page manipulated to show serious conditions either at the top or bottom, and low or high frequency of serious conditions; participants then completed several scales: perceptions of severity and susceptibility using the Risk Behavior Diagnosis scale, history of viewing online health information, their health status, how often they experienced each of the 4 symptoms, and their demographic information, health literacy using the Newest Vital Sign (NVS) |
| Luger [30] | Participants were presented with 1 of 2 symptom vignettes and asked to diagnose them while thinking aloud, using either Google or WebMD. Measures included the think-aloud protocol; self-reported age, gender, ethnicity, education, and income; recent health history; the number of hours per week they used a home computer and the number of years they had owned one; and whether they had previous experience with the Internet tool to which they were assigned (Google or WebMD's Symptom Checker). |
| Medlock et al [31] | Participants completed an online questionnaire, which included questions about health information resources used; the Autonomy Preference Index was used to assess information needs and preferences for involvement in health decisions |
| Morgan et al [32] | A random sample of questions posted to the GARD website was analyzed thematically; collected data included inquiry origin (domestic), type of contact (email and Web-based form), gender, date received at the information center, the specific condition inquired about, primary language (English), and reason for inquiry |
| Mueller et al [33] | Participants first completed a survey about their symptoms and risk factors. They were then randomized to receive the intervention (personalized, theory-based health webpages) or a control condition. Subsequently, participants completed a questionnaire that assessed demographic details, self-reported intention to seek help (on a 1-7 scale), and behavioral attitudes and beliefs about help seeking. |
| Norr et al [34] | Participants first completed the Anxiety Sensitivity Index (ASI), the Intolerance of Uncertainty Scale (IUS), and the Short Health Anxiety Inventory (SHAI). Participants were randomized to view either symptom-related websites or general health and wellness control websites. Afterwards, they completed the ASI and SHAI again. |
| North et al [35] | For the Mayo Clinic website, click data were collected using Google Analytics; for the telephone triage service, all completed calls were counted and assigned to symptom categories based on the algorithm/guideline used during the call. |
| Perez et al [36] | Participants were randomized to 1 of 2 symptom scenarios and instructed to search the Internet while thinking aloud; participants' Internet searches and think-aloud vocalizations were digitally recorded using screen capture video-recording software |
| Powell et al [37] | Users of the NHS Direct website completed an online questionnaire survey. A subsample of survey respondents participated in in-depth, semistructured, qualitative interviews by telephone or instant messaging/email. |
| Powley et al [38] | Patients completed a brief survey on Internet use for symptom appraisal before attending the clinic; patients were then asked to complete the NHS and WebMD symptom checkers based on their symptoms, and their answers and the checker outcomes were recorded; demographic and disease-related data were obtained from clinic records. |
| Rice [39] | Respondents were contacted by telephone and interviewed about online health information seeking. |
| Teriaky et al [40] | Patients awaiting gastroenterology consultation were asked to complete a questionnaire consisting of 16 multiple-choice questions to understand patient use of Web resources for medical information. Abstracted information included patient demographics, level of education, reason for referral, preceding investigations, patient resources utilized, websites browsed, information obtained, reasons for seeking information on the Internet, patient self-diagnosis, and lifestyle changes instituted. |
| Thomson et al [41] | Semistructured interviews focused on patient sociodemographic and psychological factors, symptom recognition and appraisal, and communication with health care professionals (HCPs), friends, and family. |
| White & Horvitz [5] | Analysis of logs: Formulated a list of symptoms and associated benign and serious conditions. Recorded all queries to search engines and clicks on result pages, and identified those that included symptoms as search terms. Escalations: Observed increases in medical severity of search terms within a search session. Nonescalations: Search progresses to benign explanation of the symptom; survey: Microsoft employees were sent a survey with open and closed-ended questions regarding participants’ medical history and online search behavior |
| White & Horvitz [42] | Microsoft employees were sent a survey to elicit perceptions of online medical information, experiences in searching for this information, and the influence of the Web on health care concerns and interests. The survey contained “around 70” open and closed questions |
| White & Horvitz [43] | Cases were identified where queries for symptoms were followed by a query about a related serious condition. Cases where it led to a benign query or no change were termed nonescalations. Using logistic regression, a model was developed to predict escalation using website features of the previously visited page; website features: structural features, title and URL features, firs-person testimonials, page reliability/credibility, commercial intent |
| White & Horvitz [44] | Log analysis: logs containing symptoms as search terms were filtered, and it was determined whether subsequent searches showed health care utilization intent (HUI). Logistic regression was used to predict HUI based on search characteristics; log entries include a user identifier, a timestamp for each page view, and the URL of the page visited; HUI: queries that indicate searching for contact information for medical facilities |
| White & Horvitz [45] | Queries were labeled to identify medical and symptoms related queries, and escalations. Subsequently occurring searches were examined. Log entries included a unique user identifier, a timestamp for each page view. Search sessions on Google, Yahoo!, and Bing. Escalation queries were categorized as within-session and between session |
| White & Horvitz [46] | Log data relating to symptom queries were filtered. Subsequent behavior on the search engine result page was examined, including hovering, cursor movements, clicks, scrolling, as well as bounding boxes of areas of interest (AOIs) |
| Ybarra & Suman [47] | Respondents were contacted via telephone and completed a telephone survey about online health information seeking and help-seeking behavior (seeking help from a health professional or others) |