Deutsches Ärzteblatt International. 2008 Dec 12;105(50):871–877. doi: 10.3238/arztebl.2008.0871

Patient Satisfaction With Care in Office-Based Oncology Practices

Walter Baumann 1,*, Alexandra Nonnenmacher 2, Bernd Weiß 3, Stephan Schmitz 4
PMCID: PMC2689630  PMID: 19561808

Abstract

Introduction

The purpose of patient surveys is to measure the quality of health care from the patient’s point of view. They are recommended as a way to detect the strengths and weaknesses of patient care and to locate areas of potential improvement.

Methods

In the autumn of 2006, patients undergoing care in subspecialty oncology practices across Germany were given a questionnaire to be answered in writing. A total of 15 272 patients participated (response rate, 68.8%). The questionnaire addressed patient satisfaction with practice staff and organization as well as with the treating physicians themselves.

Results

The practices, their staffs, and the doctors were generally rated at least "good" in all aspects of patient care. Less highly rated aspects of practice organization, despite overall satisfaction, were waiting times and accessibility in emergencies. Appointment scheduling was considered good. Patients were particularly satisfied with the time their doctors devoted to them, but less satisfied with their advice about "alternative" treatments. The doctors involved them in the treatment process to differing extents and gave a variable degree of psychosocial care.

Conclusion

The questionnaires documented high overall satisfaction with oncology practices with little variation among the individual items. There were a few specific areas that accounted for the differences between well and poorly rated practices and physicians; in these areas, there is a potential for improvement.

Keywords: patient survey, ambulatory care, oncological care, quality management, medical practice organization


Patient satisfaction is an immediately measurable short-term variable of outcome quality of medical care.

Patient surveys are among the obligatory instruments that the Joint Federal Committee stipulates in its quality assurance guidelines for doctors’ practices, and they are a constituent part of relevant quality management systems.

Patient-related outcomes are important for quality monitoring in medical oncology, because clinical end points are available only to a limited degree in routine care.

Satisfaction surveys have become internationally established (1, 2). Often, high overall satisfaction values are reported for oncology services; problem areas are highlighted only when specific questions are asked (3, 4).

The doctor-patient relationship is of crucial importance for patients’ satisfaction (5, 6). Differences and potential for improvements are often identified in the quality of medical information and communication (7, 8).

In the context of joint quality assurance, the scientific institute of hematologists and oncologists in private practices (WINHO, Wissenschaftliches Institut der niedergelassenen Hämatologen und Onkologen) conducted a patient survey in office-based hematology and oncology practices in the autumn of 2006. The study had three objectives. Firstly, on the basis of a separate evaluation, each practice was given the opportunity to compare itself with all office-based oncology practices. Secondly, it was intended to uncover weaknesses in patient care and identify the potential for improvement. Thirdly, we wanted to identify which components differentiated practices and doctors that were rated as good and less good. This article deals with the second and third objectives.

Methods

Data were collected in all of Germany from the end of October to December 2006. All WINHO partner practices (187 at the time) were invited to participate; 145 (77.5%) agreed to this. For reasons of practicality and cost, an in-house survey was conducted. The participating practices received questionnaires (anonymized, for patients to complete themselves), explanatory leaflets for the staff, and a collection box. The questionnaires were handed to the patients by the practice staff. This guaranteed that even in larger practices, patients received a questionnaire that included the name of their own treating physician.

The intention was to only survey patients who had attended the practice at least once before the study and who were therefore in a position to give their own evaluation. The survey was limited to patients with an oncological or hematological diagnosis. The practice staff was instructed to hand the questionnaires to all patients to whom the listed criteria applied. The collection box was to be placed in a position such that the questionnaires could be returned unobserved. The questionnaires were usually completed in the waiting room.

The four-page questionnaire was developed by WINHO in collaboration with practice representatives, taking into account the particularities of oncological services. The concept was developed with a view to comparable instruments for measuring patient satisfaction (9). Except for a standard pre-test, no validation of the questions was undertaken. The main section of the questionnaire comprised 29 questions about the practice, the staff, the treating physician, and the patient’s ability to help themselves. In addition, data on the number of consultations over the preceding 6 months, access to the practice, sex, and age (in categories) were collected. No further demographic data were collected so that the questionnaire could easily be completed during each patient’s wait in the practice.

For the evaluations, for which the authors used the software package SPSS 15.0, responses from 15 272 patients were available that originated from 251 doctors in 145 practices. The investigators examined whether the 42 nonparticipating WINHO partner practices differed in any way from the 145 practices in the study sample (for example, with respect to practice size, location, federal state). No deviations were seen that might indicate that the sample was not representative for all WINHO partner practices.

The response rate at the patient level was 68.8%. The average response rate at the level of the practices was 68.4% (standard deviation 23.6%, minimum 12.1%). The response rate remained below 30% in only 7.6% of practices. It was not possible to calculate the proportion of surveyed patients in the total practice population as the totals were not known. The distribution of socioeconomic characteristics (table 1) was mostly consistent with that of the total number of patients in oncology practices (10).

Table 1. Patient characteristics (n=15 272).

Sex
Women 56.8%
Men 43.2%
Total No 13 878
Age
< 40 years 5.4%
41–50 years 11.8%
51–60 years 20.0%
61–70 years 34.1%
71–80 years 23.5%
> 81 years 5.3%
Total No 14 532
Practice accessed via
Hospital referral 29.5%
Primary care doctor referral 46.3%
Acquaintances/friends 9.1%
Health insurance company 0.25%
Own information 4.9%
Combination of various access pathways 8.4%
Total No 14 983
Frequency of visits (in preceding 6 months)
1–2 times 18.6%
3–5 times 20.8%
> 5 times 60.6%
Total No 14 339

Table 1 shows the distribution of patient characteristics. The high proportion of women (56.8%) surveyed is consistent with the proportion of female patients in the practices. The number of gynecological tumors treated and the higher number of female patients with benign hematological diagnoses were contributing factors. Among those surveyed, the proportion of older patients (>70 years), at 29%, is slightly smaller than their proportion in practices’ patient statistics (32%). Patient data relating to access to the practice show that three quarters of those surveyed had been referred to the practice by their family doctor/general practitioner/specialist or by a hospital; private information sources were mentioned only rarely.

Table 2 shows that 54.5% of all practices—notably more than in other medical specialties (11)—were run as group practices. Eight out of 10 practices were in the old federal states (former West Germany), but relative to the size of the populations, oncology and hematology practices from the old and new federal states (former German Democratic Republic) were equally well represented (12). Practices in large cities (>100 000 population) were also slightly over-represented, at 56%.

Table 2. Characteristics of physicians and practices (physicians: n = 251; practices: n = 145).

Physicians by sex
Female 21.2%
Male 78.8%
Practice by number of physicians
1 physician 45.5%
2 physicians 42.1%
3 or more physicians 12.4%
Practice by region
Old German states 80.9%
New German states 19.1%
Practice by size of municipality (number of inhabitants)
<20 000 7.1%
20 000 to <100 000 36.9%
100 000 to <500 000 31.2%
≥ 500 000 24.8%

Of the 29 items in the main section of the questionnaire, 24 are of interest for this article; they fall into two categories: satisfaction with the practice (10 items) and satisfaction with the treating physician (14 items). The wording of the items is shown in tables 3 and 4. To test the dimensionality of the items, the authors conducted exploratory principal component analyses. This method reduces the dimensionality of the data and, on the basis of the correlations between individual items, tests whether several items constitute a joint latent dimension or component (13). The items used in our study did not meet one of the criteria required for principal component analysis (interval scaling). We therefore tested and confirmed the results in a second step by using principal component analysis for categorical data.
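As an illustration only, the following sketch applies Kaiser’s criterion (eigenvalues of the item correlation matrix greater than 1) to simulated item responses. The DataFrame, column names, and simulated data are assumptions for the example and do not reproduce the study’s SPSS analysis.

```python
# Minimal sketch of the exploratory dimensionality check described above.
# Assumes item responses in a pandas DataFrame, coded 1 ("very good") to
# 4 ("poor"); all names and data here are illustrative, not the study's data.
import numpy as np
import pandas as pd

def kaiser_components(items: pd.DataFrame) -> int:
    """Count principal components with eigenvalue > 1 (Kaiser's criterion)."""
    corr = items.corr()                          # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr.values)
    return int((eigenvalues > 1.0).sum())

# Example with simulated responses for 10 hypothetical practice/staff items
rng = np.random.default_rng(0)
practice_items = pd.DataFrame(
    rng.integers(1, 5, size=(500, 10)),
    columns=[f"item_{i}" for i in range(1, 11)],
)
print("Components suggested by Kaiser's criterion:",
      kaiser_components(practice_items))
```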

Table 3. Patient ratings of practice furnishings and equipment, staff and organization.

Practice furnishings and equipment Very good Good Less good Poor n Missing data Difference upper/lower quartile*
How do you rate the overall appearance of the practice? 62.8% 36.5% 0.7% 0.0% 14 429 5.5% 0.35
How do you rate the hygiene and cleanliness of practice rooms and lavatories? 64.7% 34.1% 1.0% 0.2% 14 388 5.8% 0.36
How do you rate the furnishings and equipment of the waiting rooms? 34.6% 60.2% 4.9% 0.3% 14 043 8.2% 0.39
Staff and organization
How do you rate the friendliness and helpfulness of the staff? 80% 19.5% 0.5% 0.0% 14 513 5.0% 0.20
Does the practice staff take an interest in your problems and worries? 65% 34% 1.0% 0.0% 14 259 6.6% 0.23
Are you satisfied with appointment scheduling at your doctor’s practice? 59.3% 38.3% 2.2% 0.2% 14 437 5.5% 0.31
Do you find the waiting times acceptable? 32% 58% 9% 1.0% 14 290 6.4% 0.45
How do you rate telephone access of the practice in an emergency? 51.5% 42.4% 5.1% 1.0% 6301 58.7% 0.28
How would you rate the working atmosphere in the practice? 51.3% 48.2% 0.5% 0.1% 14 018 8.2% 0.29
What is your overall impression of the working processes and the organization of the practice? 53.8% 45.3% 0.9% 0.0% 14 460 5.3% 0.24

* Difference of the mean item values for practices below the 25th percentile and above the 75th percentile when grouped by total index values

Table 4. Differences in the rating of the doctor-patient relationship.

Doctor-patient relationship Very good Good Less good Poor n Missing data Difference upper/lower quartile*
How do you rate the doctor’s friendliness? 76.4% 22.2% 1.2% 0.2% 14 572 4.6% 0.33
Does your doctor take time to see you? 67.2% 29.9% 2.7% 0.2% 14 494 5.1% 0.37
In your impression, is your doctor well informed and up to date in his medical knowledge? 70.7% 28.15% 1.0% 0.1% 14 171 7.2% 0.27
How comprehensible do you find your doctor’s explanations? 52.9% 44.5% 2.3% 0.2% 14 502 5.0% 0.33
How has the doctor explained your diagnosis to you? 54.0% 43.3% 2.5% 0.2% 12 768 16.4% 0.34
How does your doctor deal with your questions and worries? 53.8% 43.2% 2.5% 0.4% 14 292 6.4% 0.38
Can you address everything that is important to you with your doctor? 60.5% 36.4% 2.6% 0.5% 14 340 6.1% 0.37
How thorough is your doctor in following up the health problems that you report? 60.5% 36.7% 2.5% 0.3% 14 134 7.5% 0.35
How do you rate the advice about your further treatment? 52.1% 44.8% 2.8% 0.3% 13 915 8.9% 0.33
How were your questions about alternatives to traditional medicine dealt with? 32.3% 55.4% 10.3% 2.0% 9594 37.2% 0.37
How were you educated and advised about the risks and side effects of your treatment? 48.4% 45.6% 5.5% 0.5% 13 681 10.4% 0.31
How well informed do you feel by your doctor? 55.2% 41.8% 2.7% 0.4% 14 382 5.8% 0.37
Do you feel included in decisions about your treatment? 45.4% 50.1% 3.9% 0.5% 13 632 10.7% 0.36
How do you rate the extent to which your relatives are included? 43.7% 49.2% 5.9% 1.1% 11 357 25.6% 0.32

* Difference of the mean item values for practices below the 25th percentile and above the 75th percentile when grouped by total index values

The results of the principal component analysis suggested, according to Kaiser’s criterion, dividing the 10 items measuring satisfaction with the practice into two scales: "practice furnishings and equipment" and "staff and organization" (e-table 1). The 14 items on satisfaction with the treating physician reflect a single, joint dimension (e-table 2). For the evaluations that follow, the investigators combined the items of each dimension with equal weight into composite indices, coding the responses as follows: "very good" = 1, "good" = 2, "less good" = 3, and "poor" = 4 (the same codes were used in the principal component analysis of the categorical variables). The scales’ internal consistency is high (Cronbach’s alpha = 0.73 [practice furnishings and equipment], alpha = 0.81 [staff and organization], and alpha = 0.95 [doctor-patient relationship]) (14).
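The following sketch illustrates how equally weighted composite indices and Cronbach’s alpha can be computed from the item codes described above. The use of the per-patient mean as the equal-weight combination, the DataFrame, and the column names are assumptions for the example; the study’s own calculations were carried out in SPSS.

```python
# Minimal sketch of composite index construction and internal consistency,
# under the assumptions stated in the lead-in (illustrative names and data).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of equally weighted items."""
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def composite_index(items: pd.DataFrame) -> pd.Series:
    """Equally weighted composite index: mean of the item codes per patient
    (an assumed aggregation; a sum would order patients identically)."""
    return items.mean(axis=1)

# Example with simulated responses to 14 hypothetical doctor-patient items
rng = np.random.default_rng(1)
doctor_items = pd.DataFrame(
    rng.integers(1, 5, size=(500, 14)),
    columns=[f"doc_item_{i}" for i in range(1, 15)],
)
print("Cronbach's alpha:", round(cronbach_alpha(doctor_items), 2))
print("Index for first patient:", composite_index(doctor_items).iloc[0])
```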

e-Table 1. Principal component analysis of items rating practice and staff, factor loadings after orthogonal rotation, communalities, explained variance.

Item formulation C1: practice furnishings and equipment C2: staff and organization h2
How do you rate the overall appearance of the practice? 0.76 0.28 0.66
How do you rate the hygiene and cleanliness of practice rooms and lavatories? 0.78 0.23 0.66
How do you rate the furnishings and equipment of the waiting rooms? 0.76 0.22 0.63
How do you rate the friendliness and helpfulness of the staff? 0.16 0.72 0.54
Does the practice staff take an interest in your problems and worries? 0.15 0.79 0.64
Are you satisfied with the appointment dates you are given? 0.32 0.64 0.51
Do you find the waiting times acceptable? 0.38 0.53 0.43
How do you rate telephone access of the practice in an emergency? 0.23 0.58 0.39
How would you rate the working atmosphere in the practice? 0.25 0.67 0.51
What is your overall impression of the working processes and the organization of the practice? 0.23 0.57 0.38
Explained variance in percent 31.1 22.3

Results of principal component analysis of 10 items measuring satisfaction with practice and staff. Coding of items: 1 "very good," 2 "good," 3 "less good," 4 "poor."

Shown are factor loadings, the percentage of the total variance explained by the component (C1, C2) after orthogonal rotation and the communalities (h2)

e-Table 2. Principal component analysis of items rating the doctor-patient relationship, factor loadings, communalities, explained variance.

Item formulation C1 h2
How do you rate the doctor’s friendliness? 0.71 0.51
Does your doctor take time to see you? 0.78 0.60
In your impression, is your doctor well informed and up to date in his medical knowledge? 0.73 0.53
How comprehensible do you find your doctor’s explanations? 0.77 0.59
How has the doctor explained your diagnosis to you? 0.78 0.61
How does your doctor deal with your questions and worries? 0.83 0.69
Can you address everything that is important to you with your doctor? 0.82 0.67
How thorough is your doctor in following up the health problems that you report? 0.80 0.65
How do you rate the advice about your further treatment? 0.81 0.65
How were your questions about alternatives to traditional medicine dealt with? 0.73 0.53
How were you educated and advised about the risks and side effects of your treatment? 0.76 0.58
How well informed do you feel by your doctor? 0.84 0.71
Do you feel included in decisions about your treatment? 0.82 0.68
How do you rate the extent to which your relatives are included? 0.71 0.51
Explained variance in percent 60.7

Results of principal component analysis of 14 items measuring satisfaction with the doctor-patient relationship. Coding of items: 1 "very good", 2 "good", 3 "less good", 4 "poor".

Shown are factor loadings, the percentage of the total variance explained by the component (C1) after orthogonal rotation and the communalities (h2)

To determine which individual aspects made for the most notable distinction between practices and doctors with overall good or poor ratings, we calculated the first and third quartiles for the three indices. For the practices (indices "practice furnishings and equipment" and "staff and organization") and the doctors (index "doctor-patient relationship"), we then calculated the differences between the mean item values of the first and the last quartile. The larger this difference, the larger the difference between practices or doctors rated as overall good or overall poor with regard to the respective individual aspect of patient satisfaction, that is, the more a particular item determines the overall rating.
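A minimal sketch of this quartile comparison is shown below; the data structure it assumes (per-practice mean item values plus a composite index column) is hypothetical.

```python
# Minimal sketch of the quartile comparison described above. Assumes a
# hypothetical DataFrame `practice_means` with one row per practice, the
# per-practice mean of each item, and a column "index" holding the composite
# index; all names are illustrative, not taken from the study's data.
import pandas as pd

def quartile_item_differences(practice_means: pd.DataFrame,
                              index_col: str = "index") -> pd.Series:
    """Difference of mean item values between practices below the 25th
    percentile and above the 75th percentile of the composite index."""
    q1 = practice_means[index_col].quantile(0.25)
    q3 = practice_means[index_col].quantile(0.75)
    best = practice_means[practice_means[index_col] < q1]   # lowest codes = best ratings
    worst = practice_means[practice_means[index_col] > q3]  # highest codes = worst ratings
    items = practice_means.columns.drop(index_col)
    # Larger differences mark the items that most strongly separate
    # well-rated from poorly rated practices or doctors.
    return (worst[items].mean() - best[items].mean()).sort_values(ascending=False)
```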

Results

Satisfaction with practice furnishings and equipment, staff and organization

Practice furnishings and equipment, and staff and organization, were mostly rated "very good" or "good" (table 3). With regard to practice furnishings and equipment, only the waiting rooms were less often rated "very good" and more often "less good." The differences between the first and last quartile were practically identical for all three items, showing that they contribute equally to the overall rating and do not differentiate between practices with good ratings and those with less good ratings.

With regard to staff and organization, a high degree of satisfaction was expressed with the staff’s friendliness and helpfulness. Many respondents were critical of waiting times in the practices. Telephone access in emergencies was also seen as a critical point; however, more than half of the participants did not comment on this item.

The criterion that determined most strongly whether practices were rated as overall good or less good was the waiting times. The greatest agreement was found between ratings of friendliness and helpfulness of the staff and their willingness to engage with patients’ problems and worries.

Satisfaction with the treating physician

Differences in ratings of the doctor-patient relationship were slight (table 4).

The way in which doctors answered questions relating to alternatives to traditional medicine was rated comparatively critically. However, 37.2% of participants did not comment on this item. Further items that received poorer ratings were education and advice about the risks and side effects of treatment, inclusion of the patient in treatment decisions, and inclusion of relatives.

The question whether the doctor was informed and his/her knowledge up to date contributed least to the variance of overall ratings; almost all patients gave their doctors positive ratings in this regard. The greatest differences were seen in the items measuring psychosocial care ("How does the doctor deal with your questions and worries?" "Does your doctor take time to see you?" "Can you address everything that is important to you with your doctor?" "How well informed do you feel by your doctor?") and how doctors dealt with questions about alternatives to traditional medicine.

Discussion

The results show a high degree of satisfaction of the surveyed patients with their doctors and practices. This confirms once more the high esteem in which office-based physicians are held by their patients (15, 16). Our data are not unusual. As has been shown repeatedly, high patient satisfaction ratings can go hand in hand with patients’ wishes remaining unfulfilled (4, 6). Data seem to indicate a higher degree of satisfaction among oncological patients than among patients with other diagnoses, and among outpatients than among inpatients; however, in-depth comparisons are lacking (4). The older age of cancer patients (10) is also a factor, because older patients are usually more satisfied than younger patients (5, 8). No other patient-related effects were observed, apart from slightly higher ratings in the eastern federal states.

A possible limitation of our study lies in the fact that an in-house method may favor effects of social desirability, which results in higher satisfaction ratings (16). The investigators did not conduct a postal survey, for reasons of cost and practicality as well as participation rates.

For the reasons listed in the methods section, the sample cannot with absolute certainty be regarded as representative of patients in oncology practices; it does, however, agree with several known parameters of the patient population in such practices. Two selection biases may be present:

We included only patients who had attended the practice at least once before; patients who changed doctors after a single visit were therefore not included in the study. However, such "voting with one’s feet" is probably less common in oncology than, for example, in general practice, owing to the patients’ illness and the limited number of alternative treatment institutions. This is supported by the low proportion of patients who were not referred directly to the oncologist by their family doctor/general practitioner/specialist. If patients had selected their doctors themselves (after dissatisfaction with the first doctor), a larger proportion of those surveyed could have been expected to have found their oncologist via friends/acquaintances or other independently gathered information.

The second possible selection bias relates to participation in the survey. In spite of the rules on patient inclusion, satisfied patients might have been more willing to participate. For this reason, we measured the association between each practice’s response rate and its average patient satisfaction. In the case of such self-selection, practices in which mainly the satisfied patients participated would have had a higher refusal rate and thus a lower response rate; the correlation between response rate and satisfaction would therefore be negative. No such association was found for any of the three scales (practice furnishings and equipment, staff and organization, doctor-patient relationship). This test is uninformative, however, if the same degree of self-selection is present in all practices. Whether lower item validity may be a cause of the high satisfaction ratings should be the subject of further studies.
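As a methodological illustration, a minimal sketch of such a check is shown below; the data structure and the choice of the Pearson coefficient are assumptions, as the article does not specify the correlation measure.

```python
# Minimal sketch of the self-selection check described above: correlate each
# practice's response rate with its mean satisfaction per scale. The DataFrame
# `practices` and its column names are illustrative assumptions, and the use of
# pandas' default Pearson coefficient is an assumption as well.
import pandas as pd

def selection_check(practices: pd.DataFrame, scale: str) -> float:
    """Correlation between response rate and mean satisfaction on one scale.
    A clearly negative value would hint at self-selection of satisfied patients."""
    return practices["response_rate"].corr(practices[scale])

# Hypothetical usage, assuming one column per scale mean:
# for scale in ["furnishings_index", "staff_index", "doctor_index"]:
#     print(scale, selection_check(practices, scale))
```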

Among the relative differences in the results that point to weaknesses are the ratings of the waiting rooms. The PASQOC study already indicated this, including a lack of materials to occupy patients and of information materials (17). The situation is similar in many practices, but patients criticize only this one aspect, without relevance for their overall satisfaction with their care. The situation is similar for telephone access to the practice in an emergency: overall, satisfaction was reported, but this item was evidently commented on mainly by patients who had had negative experiences in this regard, so the criticism carries weight in terms of the need for improvement. The appointments system received good ratings in the survey, whereas the poorer rating of waiting times indicates problems in some practices. Waiting times in spite of a scheduled appointment have repeatedly been mentioned as a problem that substantially affects the overall satisfaction of cancer patients (4, 5).

In the doctor-patient relationship, questions about alternative or complementary medicine have an important role. For some patients this topic is of no relevance at all, but many responses reflected patients’ wish for more medical information on the subject. The results are consistent with recent findings on the doctor as an information provider for cancer patients (18). The wide use of complementary and alternative therapies among cancer patients has been shown in many international studies, as has patients’ need to communicate with their doctors about this topic (19). Several findings exist on the challenge that this presents to traditionally trained doctors (20). Since many doctors in oncology practices received good ratings, their competence with regard to this topic should be studied in greater detail in order to improve doctors’ advice to patients.

Although modern cancer therapies are better tolerated by patients, they still have many side effects (4, 17) that cannot easily be anticipated. Dissatisfaction with education and advice about such risks highlights the common risk of misunderstandings between patient and oncologist in complex situations. Further studies are required into how doctors’ communication can be adapted to patients’ needs; this relates not only to quantitative aspects but also to an appropriate awareness of different communication levels and media (7).

Including the patient in decision-making about treatment influences the overall rating of individual doctors, and some doctors fare rather better than others in this respect. The problem of how to capture patients’ desire for participation in treatment decisions is well documented (21, 22). However, such participation is not the only basis for a trusting relationship (23), as emotional aspects are also important.

The findings imply that more attention should be paid to involving relatives. Cancer patients in office-based practices mostly live with relatives who accompany them to the doctor’s practice and assume important communication tasks (authors’ personal data).

Conclusions

The results of a survey of 15 272 patients in 145 oncology practices have created an important reference base for the observation and comparison of practices. Without wishing to diminish the finding of high overall patient satisfaction, we wish to remind readers that the survey items we used, which are common to many instruments, allow only a limited analysis of weaknesses.

Further developments should aim to produce practical routine instruments that discriminate between practices and thus generate information. In future, patient satisfaction should not be the only process indicator; rather, closer associations should be sought between perceived satisfaction, the measured quality of results, and the patient-based disease situation (5, 24).

In addition to a better theoretical understanding, more systematic intervention studies are required (25).

Acknowledgments

Translated from the original German by Dr Birte Twisselmann.

Footnotes

Conflict of interest statement

Dr Baumann is managing director of the scientific institute of hematologists and oncologists in private practice (WINHO, Wissenschaftliches Institut der niedergelassenen Hämatologen und Onkologen).

PD Dr Schmitz is the chair of the German professional association of hematologists and oncologists in private practice (BNHO, Berufsverband der niedergelassenen Hämatologen und Onkologen in Deutschland e. V.).

Dr Nonnenmacher and Dr Weiß are in receipt of honoraria for their work in WINHO.

References

1. Consumer Assessment of Healthcare Providers and Systems (CAHPS). CAHPS Clinician & Group Survey. https://www.cahps.ahrq.gov/content/products/CG/PROD_CG_CG40Products.asp.
2. Fitzpatrick R, Bowling A, Gibbons E, et al. A structured review of patient-reported measures in relation to selected chronic conditions, perceptions of quality of care and career impact. Patient-reported Health Instruments Group, National Center for Health Outcomes Development, University of Oxford. 2006 Nov. http://phi.uhce.ox.ac.uk/pdf/ChronicConditions.
3. Watson D, Mooney D, Peterson S. Patient experiences with ambulatory cancer care in British Columbia 2005/2006. http://www.health.gov.bc.ca/socsec/pdf/UBC_OncologyReport.pdf.
4. Kleeberg UR, Tews JT, Ruprecht T, Höing M, Kuhlmann A, Runge C. Patient satisfaction and quality of life in cancer outpatients: results of the PASQOC study. Supportive Care in Cancer. 2005;13:303–310. doi: 10.1007/s00520-004-0727-x.
5. Sandoval GA, Levinton C, Blackstien-Hirsch P, Brown AD. Selecting predictors of cancer patients’ overall perceptions of the quality of care received. Annals of Oncology. 2006;17:151–156. doi: 10.1093/annonc/mdj020.
6. Davidson R, Mills ME. Cancer patients’ satisfaction with communication, information and quality of care in a UK region. European Journal of Cancer Care. 2005;14:83–90. doi: 10.1111/j.1365-2354.2005.00530.x.
7. Bredart A, Bouleuc C, Dolbeault S. Doctor-patient communication and satisfaction with care in oncology. Current Opinion in Oncology. 2005;17:351–354. doi: 10.1097/01.cco.0000167734.26454.30.
8. Waldmann A, Pritzkuleit R, Raspe H, Katalinic A. Wie werden Krebspatienten über die Diagnose aufgeklärt und wie stellt sich die Zufriedenheit mit dem Aufklärungsgespräch dar? In: Hey M, Maschewsky-Scheider U, editors. Kursbuch Versorgungsforschung. Berlin: MWV; 2006. pp. 174–189.
9. EQUAM-Stiftung, Patientenfragebogen. www.equam.ch; Q-M-A-Homepage, ZAP-Fragebogen (Dirks). www.q-m-a.de; Kölner Patientenfragebogen (Pfaff): QEP®-Manual, Kernziel-Version, Köln 2005; Patientenfragebogen der Stiftung Gesundheit. http://www.stiftung-gesundheit.de/PDF/pzi/fragebogen.pdf; Patientenfragebogen (Deutsches Ärzteblatt). www.aerzteblatt.de/v4/archiv/bild.asp?id=10958.
10. Qualitätsbericht der onkologischen Schwerpunktpraxen. Köln: BNHO; 2007.
11. Kassenärztliche Bundesvereinigung. Zahlen, Daten, Fakten. www.kbv.de/themen/6303.html.
12. Statistisches Bundesamt. Datenreport 2006. Bonn: 2006.
13. Handl A. Multivariate Analysemethoden. Heidelberg: Springer; 2002.
14. Bühner M. Einführung in die Test- und Fragebogenkonstruktion. München: Pearson; 2006.
15. Kassenärztliche Bundesvereinigung. Versichertenbefragung der Kassenärztlichen Bundesvereinigung. www.kbv.de//8700.html.
16. Brinkmann A, Steffen P, Pfaff H. Patientenbefragungen als Bestandteil des Qualitätsmanagements in Arztpraxen: Entwicklung und Erprobung eines Instruments. Gesundheitswesen. 2007;69:585–592. doi: 10.1055/s-2007-990307.
17. Runge C, Kleeberg U, Tewe S, Bartsch HH, Weiss J, et al. Wird die gemeinsame Entscheidung in der Onkologie gelebt? Antworten aus Patientensicht. In: Gemeinsame Entscheidung in der Krebstherapie. Basel: Karger; 2004. pp. 66–78.
18. Oskay-Ozcelik G, Lehmacher W, Könsgen D, et al. Breast cancer patients’ expectations in respect of physician-patient relationship and treatment management: results of a survey of 617 patients. Ann Oncol. 2007;18:479–484. doi: 10.1093/annonc/mdl456.
19. Molassiotis A, Fernadez-Ortega P, Pud D, et al. Use of complementary and alternative medicine in cancer patients: a European survey. Annals of Oncology. 2005;16:655–663. doi: 10.1093/annonc/mdi110.
20. Evans M, Shaw A, Thomson EA, et al. Decisions to use complementary and alternative medicine (CAM) by male cancer patients: information-seeking roles and types of evidence used. BMC Complement Altern Med. 2007;7. doi: 10.1186/1472-6882-7-25.
21. Bruera E, Seeney C, Calder K, Palmer L, Benisch-Tolley S. Patient preferences versus physician perceptions of treatment decisions in cancer care. J Clin Oncol. 2001;19:2883–2885. doi: 10.1200/JCO.2001.19.11.2883.
22. Elkin EB, Kim SH, Casper ES, Kissane DW, Schrag D. Desire for information and involvement in treatment decisions: elderly cancer patients’ preferences and their physicians’ perceptions. J Clin Oncol. 2007;25:5275–5280. doi: 10.1200/JCO.2007.11.1922.
23. Ernst J, Claus S, Schwarz R, Kuhn U. Participation of cancer patients with solid and haematological tumors in medical decision making. Onkologie. 2008;31(suppl 1).
24. Bitar R, Bezjak A, Mah K, Loblaw DA, Gotowiec AP, Dewin GM. Does tumor status influence cancer patients’ satisfaction with the doctor patient interaction? Support Care Cancer. 2004;12:34–40. doi: 10.1007/s00520-003-0534-9.
25. Neumann M, Steffen P, Wirtz M, Ernstmann N, Ommen O, Pfaff H. Patientenzufriedenheit in der onkologischen Versorgung: Relevanz, Einflussfaktoren und Praxisbeispiele. Forum DKG. 2007:39–44.
