Author manuscript; available in PMC 2014 Dec 1.
Published in final edited form as: Qual Life Res. 2013 Mar 27;22(10). doi: 10.1007/s11136-013-0387-8

Using the EORTC QLQ-C30 in Clinical Practice for Patient Management: Identifying Scores Requiring a Clinician’s Attention

Claire F Snyder 1,2,3, Amanda L Blackford 3, Toru Okuyama 4,5, Tatsuo Akechi 5, Hiroko Yamashita 6, Tatsuya Toyama 7, Michael A Carducci 3, Albert W Wu 1,2
PMCID: PMC3843980  NIHMSID: NIHMS460798  PMID: 23532341

Abstract

Purpose

Patient-reported outcomes (PROs) are used increasingly for individual patient management. Identifying which PRO scores require a clinician’s attention is an ongoing challenge. Previous research used a needs assessment to identify EORTC-QLQ-C30 cut-off scores representing unmet needs. This analysis attempted to replicate the previous findings in a new and larger sample.

Methods

This analysis used data from 408 Japanese ambulatory breast cancer patients who completed the QLQ-C30 and Supportive Care Needs Survey-Short Form-34 (SCNS-SF34). Applying the methods used previously, SCNS-SF34 item/domain scores were dichotomized as no vs. some unmet need. We calculated area under the receiver operating characteristic curve (AUC) to evaluate QLQ-C30 scores’ ability to discriminate between patients with no vs. some unmet need based on SCNS-SF34 items/domains. For QLQ-C30 domains with AUC≥0.70, we calculated the sensitivity, specificity, and predictive value of various cut-offs for identifying unmet needs. We hypothesized that compared to our original analysis (1) the same six QLQ-C30 domains would have AUC≥0.70, (2) the same SCNS-SF34 items would be best discriminated by QLQ-C30 scores, and (3) the sensitivity and specificity of our original cut-off scores would be supported.

Results

The findings from our original analysis were supported. The same six domains with AUC≥0.70 in the original analysis had AUC≥0.70 in this new sample, and the same SCNS-SF34 items were best discriminated by QLQ-C30 scores. Cut-off scores were identified with sensitivity≥0.84 and specificity≥0.54.

Conclusion

Given these findings’ concordance with our previous analysis, these QLQ-C30 cut-offs could be implemented in clinical practice and their usefulness evaluated.

Keywords: EORTC QLQ-C30, patient-reported outcomes, clinical practice, cancer

INTRODUCTION

The use of patient-reported outcome (PRO) measures in clinical practice for individual patient management involves having a patient complete a questionnaire about his/her functioning and well-being and providing that patient’s scores to his/her clinician to inform care and management [1, 2]. The procedure is analogous to laboratory tests that inform the clinician about the patient’s health – the difference being that PROs are based on scores from patient-reported questionnaires rather than values from chemical or microscopic analyses. The use of PROs for individual patient management has been consistently shown to improve clinician-patient communication [3–6]. It has also been shown to improve detection of problems [6–9], affect management [5], and improve patient outcomes, such as symptom control, health-related quality-of-life, and functioning [3, 10, 11].

Although we have demonstrated that PROs can effectively identify the issues that are bothering patients the most [12], an ongoing challenge to the use of PROs in clinical practice is determining which scores require a clinician’s attention. That is, after patients complete the PRO questionnaire, their responses are scored and a score report is generated. However, for clinicians reviewing the scores, it is not intuitive which scores represent a problem that should motivate action. Various methods have been applied to assist with score interpretation, including providing the mean score for the general population for comparison [3] or highlighting scores using the lowest quartile from the general population as a cut-off [13]. However, these methods do not actually reflect whether a score represents an unmet need from the perspective of the patient, which would require a clinician’s attention.

To address this issue, in a previous study, we used the Supportive Care Needs Survey-Short Form (SCNS-SF34) to determine cut-off scores on the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire-Core 30 (QLQ-C30) that identify unmet needs [14]. We demonstrated that QLQ-C30 scores can discriminate between patients with and without unmet needs; however, the study was conducted in a limited sample (n=117) of breast, prostate, and lung cancer patients from a single institution. The present analysis was undertaken to attempt to replicate the findings using a new and larger sample.

PATIENTS AND METHODS

Research Design and Data Source

The objective of this study was to test the replicability of the QLQ-C30 cut-off scores from our previous study. To address this objective, we conducted a secondary analysis of data originally collected in the validation study of the Japanese version of the Supportive Care Needs Survey-Short Form (SCNS-SF34-J). The methods of this Japanese study have been reported previously [15]. Briefly, ambulatory breast cancer patients were recruited from the Oncology, Immunology and Surgery outpatient clinic of Nagoya City University Hospital. Inclusion criteria included diagnosis of breast cancer, age at least 20 years, awareness of cancer diagnosis, and Eastern Cooperative Oncology Group (ECOG) performance status of 0–3. Exclusion criteria were severe mental or cognitive disorders or inability to understand Japanese. Participants were selected at random using a list of visits and a random number table to limit the number of patients enrolled each day.

After providing written consent, subjects completed a paper survey that included the SCNS-SF34-J (validated in the parent study [15]) and the Japanese version of the EORTC-QLQ-C30 (described below). In addition to these PRO questionnaires, the survey included basic sociodemographic questions. Patients were instructed to return the completed survey to the clinic the following day, and follow-up by telephone was used to clarify inadequate answers. The attending physician provided ECOG performance status, and information on cancer stage and treatments was abstracted from the patients’ medical records.

The SCNS-SF34 was originally developed by investigators in Australia to identify unmet needs cancer patients have in five domains: physical and daily living, psychological, patient care and support, health system and information, and sexual [16, 17]. The 34-item questionnaire uses five response options (1=not applicable, 2=satisfied, 3=low unmet need, 4=moderate unmet need, and 5=high unmet need) and a recall period of the “last month”. To calculate domain scores, we averaged the scores of the items within the domain; thus, domain scores >2.0 reflected some level of unmet need.
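To make this scoring rule concrete, the following minimal sketch (in Python; the function names and item responses are ours, purely for illustration) averages a domain’s items and applies the >2.0 dichotomization:

```python
from statistics import mean

def scns_domain_score(item_scores):
    """Average the 1-5 responses of the items within an SCNS-SF34 domain."""
    return mean(item_scores)

def some_unmet_need(score):
    """Dichotomize an item or domain score: >2.0 indicates some unmet need."""
    return score > 2.0

# Hypothetical responses for a five-item domain: one item rated 3 (low unmet need)
items = [2, 2, 3, 2, 2]
domain = scns_domain_score(items)        # 2.2
print(domain, some_unmet_need(domain))   # 2.2 True
```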

The QLQ-C30 [18] is a cancer health-related quality-of-life questionnaire that has been widely used in clinical trials and in investigations using PROs for individual patient management [3, 6, 11, 19]. It includes five function domains (physical, emotional, social, role, cognitive), eight symptoms (fatigue, pain, nausea/vomiting, constipation, diarrhea, insomnia, dyspnea, and appetite loss), a global health/quality-of-life scale, and a financial impact item. Subjects respond on a four-point scale from “not at all” to “very much” for most items, and most items use a “past week” recall period. Raw scores are linearly converted to a 0–100 scale, with higher scores reflecting better function on the function scales and greater symptom burden on the symptom scales. The Japanese version of the QLQ-C30 has been validated previously [20].
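The scoring formulas themselves are not reproduced in this article; the sketch below follows the linear transformation convention of the EORTC scoring manual (an assumption on our part, not detailed here) and is intended only to illustrate the 0–100 conversion:

```python
def qlq_scale_score(item_responses, scale_type, item_range=3):
    """Linearly convert QLQ-C30 raw scores to a 0-100 scale.

    scale_type: 'function' for the function scales (higher = better function);
                any other value applies the direct transformation used for
                symptom scales and the global health/QOL scale.
    item_range: difference between the maximum and minimum response options
                (3 for the 4-point items, 6 for the 7-point global items).
    """
    raw = sum(item_responses) / len(item_responses)   # raw score = mean of items
    if scale_type == "function":
        return (1 - (raw - 1) / item_range) * 100
    return ((raw - 1) / item_range) * 100

# Hypothetical examples
print(qlq_scale_score([1, 1, 2, 1, 1], "function"))      # physical function ~93.3
print(qlq_scale_score([2, 2], "symptom"))                # pain = 33.3
print(qlq_scale_score([5, 6], "global", item_range=6))   # global health/QOL = 75.0
```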

The Japanese study was approved by the Institutional Review Board and Ethics Committee of Nagoya City University Graduate School of Medical Sciences [15]. A de-identified dataset was provided to the Johns Hopkins investigators for this analysis, which was exempted from review by the Johns Hopkins School of Medicine Institutional Review Board.

Analyses

The data were analyzed using the methods applied in the original study using the SCNS-SF34 to identify cut-off scores on the QLQ-C30 that represent unmet need [14]. First, we dichotomized the SCNS-SF34 item and domain scores into no unmet need (scores ≤2.0) vs some unmet need (scores>2.0). We then tested the ability of QLQ-C30 domain scores to discriminate between patients with and without an unmet need using the SCNS-SF34 domains and items we tested in our previous analysis (see Table 1 for a summary of the SCNS-SF34 items/domains tested for each QLQ-C30 domain). Variables for the discriminant analysis were selected to correspond as closely as possible to the content of the QLQ-C30 domains. In some cases, the content was quite similar (e.g., pain on the QLQ-C30 and pain on the SCNS-SF34). For a few QLQ-C30 domains, there was no SCNS-SF34 item or domain with similar content. In these cases we used a generic SCNS-SF34 item such as “feeling unwell a lot of the time.”

Table 1.

Hypothesized Relationship between QLQ-C30 and SCNS-SF34 Domains and Resulting Areas Under the Curve (AUC): Original and Replication Analysis

| QLQ-C30 Domain | SCNS-SF34 Domain/Item(s) | AUC, Original Analysis [14] | AUC, Replication Analysis |
| --- | --- | --- | --- |
| Hypothesized AUC ≥ 0.70 | | | |
| Physical Function | Physical & Daily Living Needs (overall score and individual items) | 0.69–0.81 | 0.69–0.74 |
| Role Function | Work around the home; Not being able to do the things you used to | 0.71–0.73 | 0.70–0.70 |
| Emotional Function | Psychological Needs (overall score and individual items) | 0.56–0.74 | 0.61–0.75 |
| Pain | Pain | 0.78 | 0.74 |
| Fatigue | Lack of energy/tiredness | 0.74 | 0.75 |
| Global Health/QOL | Feeling unwell a lot of the time | 0.73 | 0.76 |
| Hypothesized AUC < 0.70 | | | |
| Social Function | Not being able to do the things you used to | 0.64 | 0.68 |
| Sleep | Lack of energy/tiredness; Feeling unwell a lot of the time; Being given information…about aspects of managing your illness and side-effects at home | 0.41–0.51 | 0.39–0.55 |
| Cognitive Function | Feeling unwell a lot of the time; Being given information…about aspects of managing your illness and side-effects at home | 0.54–0.60 | 0.53–0.63 |
| Nausea/Vomiting | | 0.19–0.36 | 0.22–0.27 |
| Dyspnea | | 0.37–0.48 | 0.32–0.48 |
| Appetite Loss | | 0.47–0.49 | 0.32–0.49 |
| Constipation | | 0.31–0.37 | 0.32–0.40 |
| Diarrhea | | 0.34–0.34 | 0.18–0.21 |

The discriminative ability of each QLQ-C30 domain score was summarized using the area under the receiver operating characteristic (ROC) curve (AUC). The AUC summarizes the ability of QLQ-C30 scores to discriminate between patients with and without a reported unmet need; higher AUCs indicate better discriminative ability. For the domains with AUC≥0.70, we then calculated the sensitivity and specificity, as well as the positive and negative predictive values, associated with various QLQ-C30 cut-off scores. We used a threshold of AUC≥0.70 because Hosmer & Lemeshow suggest that values below 0.70 represent poor discrimination, values between 0.70 and 0.80 represent acceptable discrimination, and values above 0.80 represent excellent discrimination [21]; it was also the standard used in our previous analysis [14]. We hypothesized that, compared to our original analysis, (1) the same QLQ-C30 domains would have AUC≥0.70; (2) the same SCNS-SF34 items would be best discriminated by the QLQ-C30 and thus provide the highest AUC; and (3) the sensitivity and specificity of our original cut-off scores would be supported. Analyses were performed using the free statistical software R, version 2.15.1.
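The analyses themselves were carried out in R (version 2.15.1, as noted above). Purely as an illustration of the computations described in this paragraph, a Python sketch using scikit-learn’s roc_auc_score might look as follows; the variable and function names are our own and not from the source:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def domain_auc(unmet_need, qlq_score, higher_is_worse):
    """AUC for a QLQ-C30 domain score discriminating patients with vs. without
    an unmet need on the paired SCNS-SF34 item/domain.

    unmet_need: 0/1 array from the dichotomized SCNS-SF34 scores.
    qlq_score: 0-100 QLQ-C30 domain scores.
    higher_is_worse: True for symptom scales; False for function and global
    scales, where lower scores indicate problems (the sign flip keeps the AUC
    oriented the same way for all domains).
    """
    score = np.asarray(qlq_score, float)
    return roc_auc_score(unmet_need, score if higher_is_worse else -score)

def cutoff_characteristics(unmet_need, flagged):
    """Sensitivity, specificity, PPV, and NPV for one candidate cut-off."""
    unmet_need = np.asarray(unmet_need, bool)
    flagged = np.asarray(flagged, bool)
    tp = np.sum(flagged & unmet_need)
    fp = np.sum(flagged & ~unmet_need)
    fn = np.sum(~flagged & unmet_need)
    tn = np.sum(~flagged & ~unmet_need)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Example: evaluate fatigue > 30 as a candidate cut-off
# print(cutoff_characteristics(unmet_need, fatigue_scores > 30))
```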

RESULTS

The sample has been described previously [15]. Briefly, from a pool of 420 potential participants, 12 were excluded due to declining participation (n=7), cognitive deficits (n=2), advanced disease (n=1), and failure to respond after consenting (n=2). The study sample included 408 subjects with a mean age of 56 years; 100% were female, 76% married, and 45% employed full- or part-time. The ECOG performance status was 0 for 90% of the sample; the clinical stage was I or II for 71%; 93% had received surgery, 44% chemotherapy, and 39% radiation; and the median time from diagnosis was 701 days (range 11 to 17,915 days). Complete data were available for all 408 subjects, with the exception of one participant who was missing a single SCNS-SF34 item; that observation was excluded from analyses requiring that item.

Table 1 shows which SCNS-SF34 items/domains were used to evaluate the discriminative ability of each QLQ-C30 domain, as well as the resulting AUCs from both our original analysis [14] and this replication analysis. The AUCs were largely similar between studies. As hypothesized, the same six QLQ-C30 domains with AUCs≥0.70 in the original analysis had AUCs≥0.70 in the replication sample. Further, the SCNS-SF34 item that was best discriminated by the QLQ-C30 (i.e., had the highest AUC) in the original analysis also had the highest AUC in the replication sample. The following QLQ-C30 domain-SCNS-SF34 item pairings were used: physical function-work around the home (AUC=0.74), role function-work around the home (AUC=0.70), emotional function-feelings of sadness (AUC=0.75), pain-pain (AUC=0.74), fatigue-lack of energy/tiredness (AUC=0.75), and global health/QOL-feeling unwell a lot of the time (AUC=0.76).

Using these pairings, we evaluated the sensitivity, specificity, and predictive value of various cut-off scores on the QLQ-C30 (Table 2). Again, the results were largely similar between the original analysis and this replication sample. Examples of cut-off scores (sensitivity, specificity) from the replication sample are: physical function<90 (0.85, 0.65); role function<90 (0.85, 0.62); emotional function<90 (0.84, 0.60); global health/QOL<70 (0.86, 0.56); pain>10 (0.93, 0.54); and fatigue>30 (0.86, 0.62). Thus, each domain had at least one cut-off score with sensitivity≥0.84 and specificity≥0.54. This means that patients who reported unmet needs in a domain were identified correctly at least 84% of the time and that patients who reported no unmet needs in a domain were identified correctly at least 54% of the time using these cut-offs. In general, the negative predictive values (NPVs) associated with these cut-offs were higher than the positive predictive values (PPVs), with the NPVs ranging from 0.86 to 0.94 and PPVs ranging from 0.33 to 0.58. This means that if a patient was identified by the cut-off as not having an unmet need in a domain, 86–94% of the time they did not report an unmet need and that if a patient was identified by the cut-off as having an unmet need, 33–58% of the time they actually did report an unmet need. While we describe these cut-off scores for illustrative purposes, the specific cut-off scores used in a given application should be determined based on the relative importance of sensitivity and specificity.
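The pattern of NPVs exceeding PPVs follows from the standard relationship linking predictive values to sensitivity, specificity, and the prevalence of unmet need. The sketch below illustrates this with the replication fatigue cut-off; the 30% prevalence is hypothetical, since domain-specific prevalences are not reported here:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV implied by a cut-off's sensitivity and specificity at a
    given prevalence of unmet need (standard Bayes relationships)."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Fatigue > 30 in the replication sample (sensitivity 0.86, specificity 0.62);
# the 30% prevalence of unmet need is hypothetical, for illustration only.
print(predictive_values(0.86, 0.62, 0.30))   # approximately (0.49, 0.91)
```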

Table 2.

Sensitivity and Specificity of Example Cut-Off Scores: Original and Replication Analysis

| QLQ-C30 Domain | SCNS-SF34 Item | Cut-Off | Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Physical Function | Work around the home | 80 | Original [14] | 0.65 | 0.83 | 0.55 | 0.89 |
| | | 80 | Replication | 0.40 | 0.92 | 0.63 | 0.82 |
| | | 90 | Original [14] | 0.85 | 0.58 | 0.39 | 0.92 |
| | | 90 | Replication | 0.85 | 0.65 | 0.45 | 0.93 |
| Role Function | Work around the home | 80 | Original [14] | 0.69 | 0.79 | 0.50 | 0.89 |
| | | 80 | Replication | 0.69 | 0.79 | 0.52 | 0.88 |
| | | 90 | Original [14] | 0.85 | 0.69 | 0.46 | 0.94 |
| | | 90 | Replication | 0.85 | 0.62 | 0.43 | 0.93 |
| Emotional Function | Feelings of sadness | 90 | Original [14] | 0.89 | 0.53 | 0.48 | 0.91 |
| | | 90 | Replication | 0.84 | 0.60 | 0.58 | 0.86 |
| | | 100 | Original [14] | 0.94 | 0.35 | 0.41 | 0.93 |
| | | 100 | Replication | 0.92 | 0.42 | 0.51 | 0.89 |
| Global Health/QOL | Feeling unwell a lot of the time | 70 | Original [14] | 0.71 | 0.69 | 0.52 | 0.84 |
| | | 70 | Replication | 0.86 | 0.56 | 0.33 | 0.94 |
| | | 80 | Original [14] | 0.89 | 0.58 | 0.50 | 0.91 |
| | | 80 | Replication | 0.89 | 0.45 | 0.29 | 0.94 |
| Pain | Pain | 20 | Original [14] | 0.66 | 0.84 | 0.64 | 0.85 |
| | | 20 | Replication | 0.70 | 0.81 | 0.62 | 0.86 |
| | | 10 | Original [14] | 0.91 | 0.66 | 0.54 | 0.95 |
| | | 10 | Replication | 0.93 | 0.54 | 0.47 | 0.94 |
| Fatigue | Lack of energy/tiredness | 30 | Original [14] | 0.77 | 0.71 | 0.73 | 0.75 |
| | | 30 | Replication | 0.86 | 0.62 | 0.54 | 0.90 |
| | | 20 | Original [14] | 0.91 | 0.55 | 0.68 | 0.86 |
| | | 20 | Replication | 0.97 | 0.42 | 0.46 | 0.97 |

DISCUSSION

This analysis was undertaken to test the generalizability of the findings from our previous study, which evaluated the ability of different cut-off scores on the QLQ-C30 to identify patients with an unmet need in a given domain. Such cut-off scores facilitate the interpretation of PROs used clinically for individual patient management by helping clinicians determine which scores deserve further attention. Currently, there are few guides available to help clinicians determine which PRO scores represent a problem. For example, in PatientViewpoint, the PRO web tool used at Johns Hopkins [13, 22], we highlight in yellow QLQ-C30 domain scores in the lowest quartile based on published general population norms [23], as an indication to the clinician reviewing the report that the patient may be having a problem in this area. However, these distribution-based cut-off scores are not empirically linked to whether a score is likely to represent a problem from the patient’s perspective. For example, the results from this analysis suggest that domain scores <90 on role or emotional function likely represent a patient-reported unmet need, whereas at our institution we are currently using cut-off scores of <66.7 for these two domains, based on the population distribution of scores. This means that our current cut-offs are missing patients with unmet needs whose scores fall between 67 and 90. Based on the results of this analysis, we will explore changing the cut-offs to those presented here to highlight QLQ-C30 scores for the clinician’s attention.
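As a minimal sketch of the flagging logic discussed in this paragraph (the constant names and example scores are ours, for illustration only):

```python
# Cut-offs for a higher-is-better function domain (e.g., role or emotional function):
NEED_BASED_CUTOFF = 90       # from this analysis: scores < 90 suggest an unmet need
DISTRIBUTION_CUTOFF = 66.7   # lowest-quartile cut-off currently used in PatientViewpoint

def flag(score, cutoff):
    """Highlight a function-domain score for the clinician if it falls below the cut-off."""
    return score < cutoff

for score in (50, 75, 85, 95):   # hypothetical patient scores
    print(score, flag(score, DISTRIBUTION_CUTOFF), flag(score, NEED_BASED_CUTOFF))
# Scores of 75 and 85 are flagged only by the need-based cut-off: this is the
# gap between 67 and 90 described above.
```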

Our findings should be interpreted in the context of the study’s strengths and limitations. First, the approach of using the SCNS-SF34 to identify QLQ-C30 cut-off scores only works well for the six QLQ-C30 domains where there is content overlap between the SCNS-SF34 and the QLQ-C30. For the domains without a corresponding SCNS-SF34 item to use for comparison, we do not have indicators of appropriate cut-offs. Future research could address this issue by using items similar in format to the SCNS-SF34 but covering the content of the relevant QLQ-C30 domains for which no data are currently available. Also, the SCNS-SF34 uses a recall period of the “past month,” whereas the QLQ-C30 generally uses a recall period of the “past week”; ideally, the comparison between scores would be made with questionnaires that use the same recall period. Finally, the study design used in both the current sample and the original analysis was cross-sectional, so while absolute cut-off scores can be identified, important changes in scores are not addressed. Research from longitudinal studies using both the QLQ-C30 and SCNS-SF34 could explore change scores representing an unmet need.

Notably, this validation sample used QLQ-C30 and SCNS-SF34 data collected with the Japanese versions of the questionnaires. That we found such similarity between our original analysis and the current sample, despite differences in language and culture, suggests that these findings are robust. While the Japanese study provided a new sample, with almost four times as many patients, in which to test our original cut-offs, only breast cancer patients were enrolled, whereas our original analysis included three different cancer types (breast, prostate, lung). Also, the Japanese sample included women with a wide range of time since diagnosis (11 to 17,915 days); the symptom burden for women who had completed treatment years previously may be lower than for women in active treatment. Nevertheless, given the substantial concordance between this replication sample and our original sample, we believe there is adequate evidence to support implementing these cut-offs in PatientViewpoint and other applications of the QLQ-C30 in clinical practice.

The next important step will be to evaluate whether clinicians and patients find these cut-offs helpful. A key consideration is which cut-off to use. We presented several example cut-off scores for illustrative purposes here, but the cut-off scores appropriate for a specific application depend on the relative importance of sensitivity and specificity. That is, the more likely a cut-off score is to identify patients with unmet needs (true positives), the more likely it is also to flag patients without an unmet need (false positives). Thus, it is important to consider the implications of false positives versus false negatives.

In general, the use of PROs for individual patient management involves helping the clinician identify problems the patient may be experiencing and facilitating a focused discussion of PRO topics that might otherwise go unaddressed. This is essentially a screening function. We therefore expect follow-up of a “positive” score based on the cut-off to involve the clinician simply asking the patient about the issue and determining whether there is something that can and should be done to address any unmet needs. Given that this requires a minimal effort, it may be appropriate to favor high sensitivity over high specificity. However, it is also important to avoid alert fatigue, a phenomenon that leads to clinician inattention to potential problems and resistance to the tools in general. In addition, if the cut-off scores were to be applied by, for example, generating an automatic page to the clinician, then false positives would be much more problematic. Another issue is how to address PRO scores representing an unmet need. In previous research, we developed a range of suggestions for how to address issues identified by PRO questionnaires [24]. However, it is important to consider resource and reimbursement limitations for certain services (e.g., psychosocial services, home care), as well as their effectiveness, before implementing them as part of care pathways. Consideration of how these cut-off scores will be applied in practice will help determine the appropriate compromise between sensitivity and specificity.

In summary, this analysis was conducted to replicate our original analysis to determine whether specific cut-off scores effectively identify patients with unmet needs. For the QLQ-C30 domains with appropriate SCNS-SF34 content matches, our findings from the original analysis were largely supported. This suggests that these cut-off scores could be applied in practice, with an evaluation of their effectiveness from the clinician and patient perspectives. Specifically, it will be important to see how clinicians actually respond when presented with information from PROs using these (or other appropriate) cut-offs and whether the information helps increase clinicians’ awareness of unmet needs. Further research is also needed to identify cut-off scores for QLQ-C30 domains without SCNS-SF34 content matches, as well as to identify changes in scores that represent unmet need. In the meantime, the results for these six domains provide critical guidance to clinicians interpreting PRO reports on which scores require their attention.

ACKNOWLEDGEMENT

This analysis was supported by the American Cancer Society (MRSG-08-011-01-CPPB). The original data collection was supported in part by Grants-in-Aid for Cancer Research and the Third Term Comprehensive 10-Year Strategy for Cancer Control from the Ministry of Health, Labour and Welfare, Japan. Drs. Snyder and Carducci are members of the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins (P30CA006973). The funding sources had no role in study design, data collection, analysis, interpretation, writing, or decision to submit the manuscript for publication.

ABBREVIATIONS

AUC: area under the curve
ECOG: Eastern Cooperative Oncology Group
EORTC-QLQ-C30: European Organization for Research and Treatment of Cancer Quality of Life Questionnaire-Core 30
NPV: negative predictive value
PPV: positive predictive value
PRO: patient-reported outcome
ROC: receiver operating characteristic
SCNS-SF34: Supportive Care Needs Survey-Short Form-34

Footnotes

CONFLICT OF INTEREST STATEMENT

The authors report no conflict of interest.

REFERENCES

1. Snyder CF, Aaronson NK. Use of patient-reported outcomes in clinical practice. The Lancet. 2009;374:369–370. doi: 10.1016/S0140-6736(09)61400-8.
2. Greenhalgh J. The applications of PROs in clinical practice: What are they, do they work, and why? Quality of Life Research. 2009;18:115–123. doi: 10.1007/s11136-008-9430-6.
3. Velikova G, Booth L, Smith AB, et al. Measuring quality of life in routine oncology practice improves communication and patient well-being: A randomized controlled trial. Journal of Clinical Oncology. 2004;22:714–724. doi: 10.1200/JCO.2004.06.078.
4. Berry DL, Blumenstein BA, Halpenny B, et al. Enhancing patient-provider communication with the electronic self-report assessment for cancer: A randomized trial. Journal of Clinical Oncology. 2011;29:1029–1035. doi: 10.1200/JCO.2010.30.3909.
5. Santana MJ, Feeny D, Johnson JA, et al. Assessing the use of health-related quality of life measures in the routine clinical care of lung-transplant patients. Quality of Life Research. 2010;19:371–379. doi: 10.1007/s11136-010-9599-3.
6. Detmar SB, Muller MJ, Schornagel JH, Wever LDV, Aaronson NK. Health-related quality-of-life assessments and patient-physician communication: A randomized clinical trial. JAMA. 2002;288:3027–3034. doi: 10.1001/jama.288.23.3027.
7. Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: A literature review. Journal of Evaluation in Clinical Practice. 1999;5:401–416. doi: 10.1046/j.1365-2753.1999.00209.x.
8. Marshall S, Haywood K, Fitzpatrick R. Impact of patient-reported outcome measures on routine practice: A structured review. Journal of Evaluation in Clinical Practice. 2006;12:559–568. doi: 10.1111/j.1365-2753.2006.00650.x.
9. Haywood K, Marshall S, Fitzpatrick R. Patient participation in the consultation process: A structured review of intervention strategies. Patient Education and Counseling. 2006;63:12–23. doi: 10.1016/j.pec.2005.10.005.
10. Cleeland CS, Wang XS, Shi Q, et al. Automated symptom alerts reduce postoperative symptom severity after cancer surgery: A randomized controlled clinical trial. Journal of Clinical Oncology. 2011;29:994–1000. doi: 10.1200/JCO.2010.29.8315.
11. McLachlan S-A, Allenby A, Matthews J, et al. Randomized trial of coordinated psychosocial interventions based on patient self-assessments versus standard care to improve the psychosocial functioning of patients with cancer. Journal of Clinical Oncology. 2001;19:4117–4125. doi: 10.1200/JCO.2001.19.21.4117.
12. Snyder CF, Blackford AL, Aaronson NK, et al. Can patient-reported outcome measures identify cancer patients' most bothersome issues? Journal of Clinical Oncology. 2011;29:1216–1220. doi: 10.1200/JCO.2010.33.2080.
13. Snyder CF, Blackford AL, Wolff AC, et al. Feasibility and value of PatientViewpoint: A web system for patient-reported outcomes assessment in clinical practice. Psycho-Oncology. 2012. doi: 10.1002/pon.3087.
14. Snyder CF, Blackford AL, Brahmer JR, et al. Needs assessments can identify scores on HRQOL questionnaires that represent problems for patients: An illustration with the Supportive Care Needs Survey and the QLQ-C30. Quality of Life Research. 2010;19:837–845. doi: 10.1007/s11136-010-9636-2.
15. Okuyama T, Akechi T, Yamashita H, et al. Reliability and validity of the Japanese version of the Short-Form Supportive Care Needs Survey Questionnaire (SCNS-SF34-J). Psycho-Oncology. 2009;18:1003–1010. doi: 10.1002/pon.1482.
16. Bonevski B, Sanson-Fisher RW, Girgis A, et al. Evaluation of an instrument to assess the needs of patients with cancer. Cancer. 2000;88:217–225. doi: 10.1002/(sici)1097-0142(20000101)88:1<217::aid-cncr29>3.0.co;2-y.
17. Sanson-Fisher R, Girgis A, Boyes A, et al. The unmet supportive care needs of patients with cancer. Cancer. 2000;88:226–237. doi: 10.1002/(sici)1097-0142(20000101)88:1<226::aid-cncr30>3.3.co;2-g.
18. Aaronson NK, Ahmedzai S, Bergman B, et al. The European Organization for Research and Treatment of Cancer QLQ-C30: A quality-of-life instrument for use in international clinical trials in oncology. Journal of the National Cancer Institute. 1993;85:365–376. doi: 10.1093/jnci/85.5.365.
19. Velikova G, Brown JM, Smith AB, Selby PJ. Computer-based quality of life questionnaires may contribute to doctor-patient interactions in oncology. British Journal of Cancer. 2002;86:51–59. doi: 10.1038/sj.bjc.6600001.
20. Kobayashi K, Takeda F, Teramukai S, et al. A cross-validation of the European Organization for Research and Treatment of Cancer QLQ-C30 (EORTC QLQ-C30) for Japanese with lung cancer. European Journal of Cancer. 1998;34:810–815. doi: 10.1016/s0959-8049(97)00395-x.
21. Hosmer DW, Lemeshow S. Applied Logistic Regression. 2nd ed. New York: Wiley; 2000.
22. Snyder CF, Jensen R, Courtin SO, Wu AW; Website for Outpatient QOL Assessment Research Network. PatientViewpoint: A website for patient-reported outcomes assessment. Quality of Life Research. 2009;18:793–800. doi: 10.1007/s11136-009-9497-8.
23. Fayers PM, Weeden S, Curran D, on behalf of the EORTC Quality of Life Study Group. EORTC QLQ-C30 Reference Values. Brussels: EORTC; 1998. ISBN 2-930064-11-0.
24. Hughes EF, Wu AW, Carducci MA, Snyder CF. What can I do? Recommendations for responding to issues identified by patient-reported outcomes assessments used in clinical practice. Journal of Supportive Oncology. 2012;10:143–148. doi: 10.1016/j.suponc.2012.02.002.
