Journal of Mental Health. 2012 Jan 18;21(1):57–71. doi: 10.3109/09638237.2011.629240

VOICE: Developing a new measure of service users’ perceptions of inpatient care, using a participatory methodology

Jo Evans 1, Diana Rose 1,*, Clare Flach 1, Emese Csipke 1, Helen Glossop 3, Paul McCrone 1, Tom Craig 1, Til Wykes 2
PMCID: PMC4018995  EMSID: EMS58310  PMID: 22257131

Abstract

Background

Service users express dissatisfaction with inpatient care and their concerns revolve around staff interactions, involvement in treatment decisions, the availability of activities and safety. Traditionally, satisfaction with acute care has been assessed using measures designed by clinicians or academics.

Aims

To develop a patient-reported outcome measure of perceptions of acute care. An innovative participatory methodology was used to involve service users throughout the research process.

Method

A total of 397 participants were recruited for the study. Focus groups of service users were convened to discuss their experiences and views of acute care. Service user researchers constructed a measure from the qualitative data, which was validated by expert panels of service users and tested for its psychometric properties.

Results

Views on Inpatient Care (VOICE) is easy to understand and complete and therefore is suitable for use by service users while in hospital. The 19-item measure has good validity and internal and test–retest reliability. Service users who have been compulsorily admitted have significantly worse perceptions of the inpatient environment.

Conclusions

A participatory methodology has been used to generate a self-report questionnaire measuring service users’ perceptions of acute care. VOICE encompasses the issues that service users consider most important and has strong psychometric properties.

Keywords: service users’ perceptions, participatory methodology, service user involvement, acute care, inpatient services

Introduction

Dissatisfaction with adult acute inpatient care is not a new issue and is well documented both in Britain and internationally. Inpatient wards are often viewed by service users as untherapeutic and unsafe environments (Department of Health, 2002). Limited interaction between staff and service users is commonly reported and users express a need for good interpersonal relationships and support which is sensitive to individual needs (Edwards, 2008; Ford et al., 1998; Shattell et al., 2008). Poor levels of involvement and a lack of information associated with medication, care and treatment have also been identified (Walsh & Boyle, 2009). On many wards, there is little organised activity and service users experience intense boredom (MIND, 2004). Security is of particular concern: many service users feel they are not treated with respect or dignity, have significant safety concerns and report high levels of verbal and physical violence (MIND, 2004). Although there are objective measures of activities in the inpatient environment, as reviewed recently by Sharac et al. (2010), these are not adequate as a reflection of the quality of inpatient care.

Recently there has been a focus on patient-reported outcome measures (PROMS) as a measure of quality and appropriateness of services and therapies. Despite service user involvement being considered an essential element in improving mental health services (Department of Health, 1999), PROMS are rarely developed using an inclusive methodology and research suggests user dissatisfaction with many outcome measures currently in use (Crawford et al., 2011). Service users can often have different perspectives from professionals and can provide insight into how services and treatments feel (Rose, 2003). Redefining outcomes according to users’ priorities can help to make greater sense of clinical research and develop a more valid evidence base (Faulkner & Thomas, 2002; Trivedi & Wykes, 2002). Studies comparing the impact of traditional and user researchers on research show some differences in qualitative data analysis (Gillard et al., 2010) but none on quantitative research findings (Hamilton et al., 2011; Rose et al., 2011a, 2011b). Given this, we believe that research methodologies should aim to be as inclusive as possible.

What is needed in the literature on acute care is a psychometrically robust, brief, self-report measure reflecting service users’ experiences of care. This type of measure would allow clear measurement of inpatient care changes following specific interventions to improve the environment and therapy provided. Our study was designed to generate such a measure.

Method

Sampling and recruitment

Ethical approval (07/H0809/49) was given for the study to be carried out in four boroughs within an inner city London NHS trust.

For the measure development phase, purposive sampling was adopted to reflect local inpatient demographics and participants were recruited through a local mental health voluntary organisation and community mental health teams across the four boroughs. The only inclusion criterion was that participants had been inpatients in the previous 2 years, although this may have excluded long-term forensic inpatients. Members of the reference group were recruited from local user groups and national voluntary mental health organisations.

Participants for the feasibility study were recruited from acute wards and psychiatric intensive care units, and test–retest participants were engaged on acute and forensic wards. For the larger psychometric testing phase, participants were recruited from acute wards. The inclusion criteria were that the person could provide informed consent and had been present on the ward for at least 7 days during the 4-week data collection phase. Of eligible people on the wards, 45% agreed to take part. All potential participants gave written informed consent following an explanation of the study.

Demographic and clinical data for focus group participants were collected on a self-report basis. For the large-scale data collection, age, gender, ethnicity and employment status were collected by self-report, while diagnosis, legal and admission status were taken from NHS records.

Measure generation

The measure Views on Inpatient Care (VOICE) was developed iteratively using an innovative participatory methodology to maximise the opportunity for service user involvement (Rose et al., 2009, 2011a, 2011b). This followed several stages. Firstly, a topic guide was developed through a literature search, a reference group and a pilot study. Repeated focus groups of service users were convened to generate qualitative data (Morgan, 1993). One of the groups was specifically for participants who had been detained under the Mental Health Act (1983), as it was anticipated that they might have had different experiences. The data were thematically analysed by service user researchers, who then generated a draft measure; this was refined by expert panels of users and the reference group.

Feasibility and acceptability

VOICE was evaluated in accordance with standard criteria for outcome measures (Fitzpatrick et al., 1998; Harvey et al., 2005), which include feasibility, acceptability, reliability and validity.

Psychometric testing

The internal reliability of VOICE was assessed using Cronbach’s alpha (Cronbach, 1951), with data from a large sample of inpatients. Test–retest reliability was assessed with inpatients who completed VOICE on two occasions, 6–10 days apart. Agreement between the two administrations was measured with Lin’s concordance coefficient (Lin, 1989) for total scores, and with kappa and proportion of maximum kappa (Sim & Wright, 2005) for individual item responses.
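As an illustration of the test–retest statistic, Lin’s concordance coefficient can be computed from paired total scores at the two time points; this is a minimal sketch, and the data passed to it would be hypothetical paired totals rather than anything from the study:

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired scores
    (e.g. VOICE totals at test and at retest)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Unlike Pearson's r, the denominator penalises any shift in
    # location or scale between the two administrations.
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike a plain correlation, the coefficient only reaches 1 when the retest scores reproduce the test scores exactly, which is why it suits reproducibility questions.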

Criterion validity was assessed by comparing scores on VOICE with responses on the Service Satisfaction Scale: residential services evaluation (Greenfield et al., 2008). This is a derivative measure adapted from the Service Satisfaction Scale-30 (Greenfield & Attkisson, 1989), designed to evaluate residential services for people with serious mental illness. The original SSS-30 has been used in a variety of settings and demonstrates sound psychometric properties (Greenfield & Attkisson, 2004). It was anticipated that some elements of a perceptions measure would overlap with services satisfaction but that there would also be key differences.

We expected views to differ between service users from different populations and clinical settings, so we used one-way analyses of variance to assess whether service users’ perceptions differed by borough, gender, ethnicity, age, diagnosis, admission status and legal status. The majority of the analyses were exploratory; however, we had specific hypotheses for ethnicity and legal status, expecting poorer perceptions from participants who were compulsorily admitted and from those from minority ethnic communities.
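The one-way ANOVA used for these comparisons reduces to an F statistic; a minimal sketch, with the group scores being hypothetical rather than study data, is:

```python
def one_way_f(groups):
    """One-way ANOVA F statistic across k groups of scores
    (e.g. VOICE totals split by borough or by legal status)."""
    all_vals = [v for g in groups for v in g]
    n_total, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group variation, weighted by group size.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group variation around each group mean.
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

A large F (relative to the F distribution with k−1 and N−k degrees of freedom) indicates that mean perceptions differ across the groups.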

Results

Sample characteristics

As Table I shows, a total of 397 participants were recruited for the study: 37 for the measure generation phase and 360 for the feasibility study and psychometric testing. Schizophrenia was the most frequent diagnosis for both groups and approximately half of all participants were from black and minority ethnic communities. In the measure development phase, 43% of participants were men and the median age was 45 (range 20–66). In the psychometric phase, 60% of participants were men and the mean age was 40 (range 18–75).

Table I.

Demographic data.

                              Measure development    Feasibility and psychometric
                              phase (N = 37)         assessment phase (n = 360)
                              n        %             n        %
Ethnicity
 White                        18       48.6          168      47.0
 Black/minority ethnic        19       51.4          185      51.0
 Not disclosed                0        0.0           7        2.0
Legal status
 Formal                       20       54.1          222      62.0
 Informal                     12       32.4          106      29.0
 Not disclosed                5        13.5          32       9.0
Diagnosis
 Schizophrenia/psychosis      18       48.7          183      51.0
 Bipolar affective disorder   7        18.9          51       14.0
 Depression/anxiety           6        16.2          38       11.0
 Personality disorder         2        5.4           19       5.0
 Substance misuse             0        0.0           16       4.0
 Other                        4        10.8          46       13.0
 Not disclosed                0        0.0           7        2.0
Employment
 Employed                     0        0.0           62       17.3
 Unemployed                   0        0.0           248      68.9
 Student                      0        0.0           13       3.6
 Retired                      0        0.0           25       6.9
 Other                        0        0.0           5        1.4
 Not disclosed                37       100.0         7        1.9
Admission
 First admission              0        0.0           65       18.1
 Previous admissions          0        0.0           260      72.2
 Not disclosed                37       100.0         35       9.7

Measure generation

Thematic analysis of the full data set resulted in an initial bank of 34 items, which were formed into brief statements and grouped into domains. A six-point Likert scale was chosen, ranging from “strongly agree” to “strongly disagree”, and optional free-text sections were included to capture additional qualitative data. The items were unweighted and one question was reverse scored. The self-report measure was designed to provide a final total score, with a higher score indicating a more negative perception. The inter-rater reliability of the focus group data coding, carried out in NVivo 7, showed between 97% and 99% agreement. Item reduction based on relevance and the removal of duplicates produced 22 items. The expert panels considered the measure to be of appropriate length and breadth and, following some minor changes in wording, the reference group concluded that the measure was suitable for use by service users in hospital.
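The scoring rule just described (unweighted 1–6 Likert items summed, one item reverse scored, higher totals more negative) can be sketched as follows; treating item 10 as the reverse-scored item is purely illustrative, since the paper does not identify which question is reversed:

```python
def total_score(responses, reversed_items=frozenset({10}), scale_points=6):
    """Total score from 1-6 Likert responses (higher = more negative
    perception). Item 10 as the reversed item is an assumption for
    illustration -- the paper does not say which item is reverse scored."""
    total = 0
    for item_no, r in enumerate(responses, start=1):
        # A reversed item is flipped: on a 6-point scale, 1 -> 6 and 6 -> 1.
        total += (scale_points + 1 - r) if item_no in reversed_items else r
    return total
```

Reverse scoring in this way keeps every item contributing in the same direction, so the total remains interpretable as overall negativity of perception.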

Feasibility and acceptability

Feasibility testing took place in two waves (n  =  40 and n  =  106). In the first wave, 98% of participants found the measure both easy to understand and to complete, and in the second, 82% considered the measure an appropriate length. Two participants (2%) disliked completing the measure and six (6%) found some of the questions upsetting. VOICE took between 5 and 15 min to complete and was easy to administer. The measure was found to be suitable for completion by participants with a range of diagnoses and at the levels of acute illness found on inpatient units. The Flesch Reading Ease score was 78.8 (reading age 9–10), indicating the measure was easy to understand (Flesch, 1948). Following the feasibility study, one item was removed as a duplicate, leaving the measure with 21 items.
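The Flesch (1948) Reading Ease score cited above is a fixed formula over word, sentence and syllable counts; the counts in the test below are illustrative, not the actual counts for VOICE:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch (1948) Reading Ease from raw counts; higher = easier text.
    Scores in the high 70s, as reported for VOICE, fall in the 'fairly
    easy' band readable at around ages 9-10."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
```

Longer sentences and longer (more syllabic) words both pull the score down, which is why a questionnaire written in short plain statements scores well.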

Psychometric testing

Three hundred and sixty participants took part in testing the psychometric properties of the measure. Of these, 192 had full data for all items on the VOICE scale and 348 responded to over 80% of VOICE items. For participants responding to at least 80% of the items, a pro-rated total score was calculated; totals for those responding to fewer than 80% of the items were treated as missing.
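The pro-rating rule can be sketched as follows; this is a minimal sketch assuming unanswered items are recorded as None:

```python
def prorated_total(item_scores, n_items=19, min_prop=0.8):
    """Pro-rated total: scale the sum of answered items up to the full
    item count when at least 80% of items were answered; otherwise the
    total is treated as missing (None)."""
    answered = [s for s in item_scores if s is not None]
    if len(answered) / n_items < min_prop:
        return None  # fewer than 80% answered -> missing total
    # Scale the partial sum up as if all items had the mean answered score.
    return sum(answered) * n_items / len(answered)
```

Pro-rating assumes the unanswered items would, on average, have been scored like the answered ones; below the 80% threshold that assumption is judged too fragile and the total is discarded instead.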

Reliability

One hundred and ninety-two participants had complete data on the VOICE scale and were used in assessing the internal consistency. After removing items with poor reliability, this left a 19-item scale with high internal consistency (α  =  0.92). The test–retest reliability (n  =  40) was high (ρ  =  0.88, CI  =  0.81–0.95) and there was no difference in score between the two assessments.
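Cronbach’s alpha, as used for the internal consistency above, can be sketched in a few lines; the score matrix in the test is toy data, not study data:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = len(rows[0])  # number of items

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of per-item variances vs. variance of the total score.
    item_vars = sum(sample_var([row[j] for row in rows]) for j in range(k))
    total_var = sample_var([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_vars / total_var)
```

Alpha approaches 1 when items co-vary strongly, i.e. when the total-score variance is much larger than the sum of the item variances, which is the sense in which a value of 0.92 suggests the items tap one underlying construct.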

Validity

The measure has high face validity. The wide range of items was determined by service users during the focus groups and the measure reflected the domains which they considered most important. Participants in the feasibility study felt that the measure was comprehensive, indicating high content validity.

Pearson’s correlation coefficient showed a significant association between the total scores on VOICE and the SSS: residential measure (r  =  0.82, p  <  0.001), indicating high criterion validity.
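For completeness, the Pearson coefficient used for criterion validity can be computed from paired totals on the two measures; a minimal sketch with hypothetical pairs:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired totals
    (e.g. VOICE vs. SSS: residential for the same participants)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```

An r of 0.82 means the two instruments rank participants very similarly while still leaving room for the content differences discussed later.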

The ability of VOICE to discriminate between groups is shown in Table II. Bivariate analyses showed a significant difference only for legal status: participants who had been compulsorily admitted had significantly worse perceptions (t  =  −3.82, p  <  0.001). A multivariate analysis showed that legal status remained significant when adjusted for the other factors (p  =  0.001).

Table II.

Differences in mean VOICE scores by demographic and clinical group.

Number Mean score Standard deviation 95% confidence intervals Significance
Gender
 Male 199 55.5 19.2 52.8–58.1 0.146
 Female 147 52.5 17.8 49.6–55.4
Ethnicity
 White 162 55.6 19.1 52.6–58.5 0.218
 BME 180 53.1 18.1 50.4–55.7
Legal status
 Informal 102 48.9 16 45.7–52.0 <0.001
 Formal 215 57.4 19.6 54.7–60.0
Borough
 Borough 1 132 54.5 19.5 51.2–57.8 0.149
 Borough 2 100 52.7 18.5 49.1–56.4
 Borough 3 75 57.8 17.7 53.8–61.8
 Borough 4 40 50.1 16.9 44.9–55.4
Diagnosis
 Schizophrenia/psychosis 179 54.8 18.2 52.2–57.5 0.404
 Bipolar affective disorder 51 56.1 21.5 50.2–62.1
 Depression/anxiety 38 50.9 13.8 46.5–55.3
 Personality disorder 18 59.3 16.6 51.6–67.0
 Substance misuse 13 53.7 18.9 43.4–64.0
 Other 42 50.3 21.3 43.8–56.8
Age
  ≤ 20 15 61.0 23.7 49.0–73.1 0.287
 21–30 77 51.6 15.5 48.1–55.1
 31–40 93 53.5 16.7 50.1–56.9
 41–50 87 55.6 21.9 50.0–60.2
 51–60 45 57.2 19.1 51.6–62.8
 61+ 25 50.6 18.3 43.4–57.9
Admission
 First admission 62 50.7 17.6 46.3–55.2 0.110
 Previous admissions 255 55.0 18.9 52.6–57.3

The final measure is provided in the Appendix and at www.perceive.iop.kcl.ac.uk.

Discussion

Using a participatory methodology, we have developed a service-user generated, self-report measure of perceptions of acute care. VOICE (Appendix) encompasses the issues that service users consider most important, has strong psychometric properties and is suitable for use in research settings. The internal consistency is high, which suggests that the items are measuring the same underlying construct. The measure has high criterion validity and test–retest data show that it is stable over time. The full involvement of service users throughout the development of the measure has ensured that VOICE has good face and content validity and is accessible to the intended client group.

Can VOICE distinguish differences in views?

In this study, detained participants held more negative perceptions of inpatient services. This supports previous studies showing that service users who are admitted involuntarily are less satisfied with their care (Svensson & Hansson, 1994). More recently, lower levels of satisfaction have been linked with the accumulation of coercive events and perceived coercion (Iversen et al., 2007; Katsakou et al., 2010). This presents a more complex picture and one which is worth further analysis.

We anticipated, but did not find, differences on either VOICE or SSS: residential scores by ethnicity. Methodology, timing and setting can all influence research findings (Wensing et al., 1994). Previous quantitative studies have shown differences for legal status but not ethnicity (Bhugra et al., 2000; Greenwood et al., 1999), whereas qualitative research has revealed that black and minority ethnic users hold relatively poor perceptions of acute care (Secker & Harding, 2002; The Sainsbury Centre for Mental Health, 2002). Our study was set in areas of London with high levels of ethnic diversity (Kirkbride et al., 2007; Morgan et al., 2006). Staff demographics tended to mirror those of inpatients, and it may be that services were better tailored towards black and minority ethnic groups. Additionally, interviewing users while in hospital may well have inhibited openness and honesty, particularly on sensitive issues.

Is VOICE different from other measures?

Although the total scores were correlated, there were distinct differences in content between VOICE and the comparison satisfaction measure. We believe this is due to the use of a participatory methodology. In particular, safety and security issues were given more weight in VOICE and items on diversity were included which did not appear in the conventionally generated measure. Conversely, items regarding the physical environment and office procedures featured in the SSS: residential (Greenfield et al., 2008), but were not deemed as important by the users in our study and therefore not included in VOICE. Although the issue of discharge planning arose in our focus group data and as an item in the SSS: residential, we did not include it in the measure as the intention was to administer VOICE relatively soon after admission. We do recommend its inclusion, however, in future studies.

It is often assumed that the only construct to measure is satisfaction with acute care. However, there are difficulties in encapsulating complex sets of beliefs, expectations and evaluations in satisfaction measures. Caution should be taken when making inferences from the results of such measures as they may not accurately reflect the views of users (Williams, 1994). VOICE is unique in that it captures users’ perceptions and we anticipate this will depict the inpatient experience more accurately.

Strengths and limitations

It is impossible to accurately assess inpatient care without involving the people directly affected by that service. Developing an outcome measure valued by service users is essential in evaluating and developing inpatient services. The main strength of this piece of research is that it fully exploits a participatory methodology: service users were involved in a collaborative way throughout the whole research process. VOICE is the only robust measure of acute inpatient services designed in such a way. This has resulted in a measure which encompasses the issues that service users prioritise and is both acceptable and accessible to people with a range of diagnoses and severity of illness.

This study was not designed to test hypotheses about differences in perceptions between clinical and demographic groups and may not have been large enough to detect such differences. The completion rate was twice that of a similar satisfaction survey (Care Quality Commission, 2009) and higher than in many other studies reported in the literature, suggesting that VOICE is more representative of users’ views. We do not have data from non-responders, but we have little reason to believe that they differed from our sample. Our study was conducted in London boroughs with high levels of deprivation, ethnic diversity and psychiatric morbidity (Kirkbride et al., 2007; Morgan et al., 2006), and so may not be directly generalisable to other settings. Additionally, our sample included a high proportion of participants from black and minority ethnic communities. While this is a strength, it may be that different items would have been produced by other groups. We intend to develop versions of VOICE for use in other populations, including Mother and Baby units.

Conclusion

The study has demonstrated that a participatory methodology can generate items which are prioritised by users but not included in traditionally developed measures. VOICE is the first service-user generated, psychometrically robust measure of perceptions of acute care. It directly reflects the experiences and perceptions of service users in acute settings and as such, is a valuable addition to the PROMS library.

Acknowledgements

We acknowledge the financial support of the NIHR Biomedical Research Centre for Mental Health, South London and Maudsley NHS Foundation Trust/Institute of Psychiatry (King’s College London).

Appendix

[The VOICE questionnaire is reproduced as six images: JMH-21-57-g001.jpg to JMH-21-57-g006.jpg]

Footnotes

Declaration of interest This article presents independent research commissioned by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research scheme (RP-PG-0606-1050). The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

References

1. Bhugra D, La Grenade J, Dazzan P. Psychiatric inpatients’ satisfaction with services: A pilot study. International Journal of Psychiatry in Clinical Practice. 2000;4:327–332. doi: 10.1080/13651500050517902.
2. Care Quality Commission. Mental Health Acute Inpatient Service Users Survey 2009: South London and Maudsley NHS Foundation Trust. London: NatCen; 2009.
3. Crawford M, Robotham D, Thana L, Patterson S, Weaver T, Barber R, et al. Selecting outcome measures in mental health: The views of service users. Journal of Mental Health. 2011;20:336–346. doi: 10.3109/09638237.2011.577114.
4. Cronbach L. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
5. Department of Health. National Service Framework for Mental Health. London: HMSO; 1999.
6. Department of Health. Mental Health Policy Implementation Guide: Adult Acute Inpatient Care Provision. London: HMSO; 2002.
7. Edwards K. Service users and mental health nursing. Journal of Psychiatric and Mental Health Nursing. 2008;7:555–565. doi: 10.1046/j.1365-2850.2000.00353.x.
8. Faulkner A, Thomas P. User-led research and evidence based medicine. British Journal of Psychiatry. 2002;180:1–3. doi: 10.1192/bjp.180.1.1.
9. Fitzpatrick R, Davey C, Buxton M, Jones D. Evaluating patient based outcome measures for use in clinical trials. Health Technology Assessment. 1998;2:1–86.
10. Flesch R. A new readability yardstick. Journal of Applied Psychology. 1948;32:221–233. doi: 10.1037/h0057532.
11. Ford R, Durcan G, Warner L, Hardy P, Muijen M. One day survey by the Mental Health Act Commission of acute adult psychiatric inpatient wards in England and Wales. British Medical Journal. 1998;317:1279–1283. doi: 10.1136/bmj.317.7168.1279.
12. Gillard S, Borschmann R, Turner K, Goodrich-Purnell N, Lovell K, Chambers M. What difference does it make? Finding evidence of the impact of mental health service user researchers on research into the experiences of detained psychiatric patients. Health Expectations. 2010;13:185–194. doi: 10.1111/j.1369-7625.2010.00596.x.
13. Greenfield T, Attkisson C. Steps toward a multifactorial satisfaction scale for primary care and mental health services. Evaluation and Program Planning. 1989;12:271–278.
14. Greenfield T, Attkisson C. The UCSF client satisfaction scales: II. The service satisfaction scale-30. In: Maruish M, editor. Psychological Testing: Treatment Planning and Outcome Assessment. London: Lawrence Erlbaum Associates; 2004. pp. 813–837.
15. Greenfield T, Stoneking B, Humphreys K, Sundby E, Bond J. A randomized trial of a mental health consumer-managed alternative to civil commitment for acute psychiatric crisis. American Journal of Community Psychology. 2008;42:135–144. doi: 10.1007/s10464-008-9180-1.
16. Greenwood N, Key A, Burns T, Bristow M, Sedgwick P. Satisfaction with in-patient psychiatric services. Relationship to patient and treatment factors. The British Journal of Psychiatry. 1999;174:159–163. doi: 10.1192/bjp.174.2.159.
17. Hamilton S, Pinfold V, Rose D, Henderson C, Lewis-Holmes E, Flach C, et al. The effect of disclosure of mental illness by interviewers on reports of discrimination experienced by service users: A randomized study. International Review of Psychiatry. 2011;23:47–54. doi: 10.3109/09540261.2010.545367.
18. Harvey K, Langman A, Winfield H, Catty J, Clement S, White S, et al. Measuring Outcomes for Carers for People with Mental Health Problems. London: NCCSDO; 2005.
19. Iversen K, Høyer G, Sexton H. Coercion and patient satisfaction on psychiatric acute wards. International Journal of Law and Psychiatry. 2007;30:504–511. doi: 10.1016/j.ijlp.2007.09.001.
20. Katsakou C, Bowers L, Amos T, Morriss R, Rose D, Wykes T, et al. Coercion and treatment satisfaction among involuntary patients. Psychiatric Services. 2010;61:286–292. doi: 10.1176/ps.2010.61.3.286.
21. Kirkbride J, Morgan C, Fearon P, Dazzan P, Murray R, Jones P. Neighbourhood level effects on psychoses: Re-examining the role of context. Psychological Medicine. 2007;37:1413–1425. doi: 10.1017/S0033291707000499.
22. Lin L. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45:255–268.
23. MIND. Ward Watch: Mind’s Campaign to Improve Hospital Conditions for Mental Health Patients. London: MIND; 2004.
24. Morgan D. Successful Focus Group Interviews: Advancing the State of the Art. London: SAGE Publications; 1993.
25. Morgan C, Dazzan P, Morgan K, Jones P, Harrison G, Leff J, et al. First episode psychosis and ethnicity: Initial findings from the AESOP study. World Psychiatry. 2006;5:40–46.
26. Rose D. Collaborative research between users and professionals: Peaks and pitfalls. The Psychiatrist. 2003;27:404–406.
27. Rose D, Evans J, Sweeney A, Wykes T. A model for developing outcome measures from the perspectives of mental health service users. International Review of Psychiatry. 2011a;23:41–46. doi: 10.3109/09540261.2010.545990.
28. Rose D, Leese M, Oliver D, Sidhu R, Bennewith O, Priebe S, et al. A comparison of participant information elicited by service user and non-service user researchers. Psychiatric Services. 2011b;62:210–213. doi: 10.1176/ps.62.2.pss6202_0210.
29. Rose D, Sweeney A, Leese M, Clement S, Burns T, Catty J, et al. Developing a user-generated measure of continuity of care: Brief report. Acta Psychiatrica Scandinavica. 2009;119:320–324. doi: 10.1111/j.1600-0447.2008.01296.x.
30. Secker J, Harding C. African and African Caribbean users’ perceptions of inpatient services. Journal of Psychiatric and Mental Health Nursing. 2002;9:161–167. doi: 10.1046/j.1365-2850.2002.00455.x.
31. Sharac J, McCrone P, Sabes-Figuera R, Csipke E, Wood A, Wykes T. Nurse and patient activities and interaction on psychiatric inpatients wards: A literature review. International Journal of Nursing Studies. 2010;47:909–917. doi: 10.1016/j.ijnurstu.2010.03.012.
32. Shattell M, Andes M, Thomas S. How patients and nurses experience the acute care psychiatric environment. Nursing Inquiry. 2008;15:242–250. doi: 10.1111/j.1440-1800.2008.00397.x.
33. Sim J, Wright C. The kappa statistic in reliability studies: Use, interpretation and sample size requirements. Physical Therapy. 2005;85:257–268.
34. Svensson B, Hansson L. Patient satisfaction with inpatient psychiatric care. Acta Psychiatrica Scandinavica. 1994;90:379–384. doi: 10.1111/j.1600-0447.1994.tb01610.x.
35. The Sainsbury Centre for Mental Health. Breaking the Circles of Fear. A Review of the Relationship Between Mental Health Services and African and Caribbean Communities. London: The Sainsbury Centre for Mental Health; 2002.
36. Trivedi P, Wykes T. From passive subjects to equal partners: Qualitative review of user involvement in research. British Journal of Psychiatry. 2002;181:468–472. doi: 10.1192/bjp.181.6.468.
37. Walsh J, Boyle J. Improving acute psychiatric hospital services according to inpatient experiences. A user-led piece of research as a means to empowerment. Issues in Mental Health Nursing. 2009;30:31–38. doi: 10.1080/01612840802500733.
38. Wensing M, Grol R, Smits A. Quality judgements by patients on general practice care: A literature analysis. Social Science and Medicine. 1994;38:45–53. doi: 10.1016/0277-9536(94)90298-4.
39. Williams B. Patient satisfaction: A valid concept? Social Science and Medicine. 1994;38:509–516. doi: 10.1016/0277-9536(94)90247-x.

Articles from Journal of Mental Health (Abingdon, England) are provided here courtesy of Informa Healthcare
