Substance Abuse Treatment, Prevention, and Policy. 2016 Jul 19;11:25. doi: 10.1186/s13011-016-0070-5

“It is not just about the alcohol”: service users’ views about individualised and standardised clinical assessment in a therapeutic community for alcohol dependence

Paula Cristina Gomes Alves 1,2, Célia Maria Dias Sales 2, Mark Ashworth 3
PMCID: PMC4949765  PMID: 27430578

Abstract

Background

The involvement of service users in health care provision in general, and specifically in substance use disorder treatment, is of growing importance. This paper explores the views of patients in a therapeutic community for alcohol dependence about clinical assessment, including general aspects about the evaluation process, and the specific characteristics of four measures: two individualised and two standardised.

Methods

A focus group was conducted and data were analysed using a framework synthesis approach.

Results

Service users welcomed the experience of clinical assessment, particularly when conducted by therapists. The duration of the evaluation process was seen as satisfactory and most of its contents were regarded as relevant for their population. Regarding the evaluation measures, patients diverged in their preferences for delivery formats (self-report vs. interview). Service users enjoyed the freedom given by individualised measures to discuss topics of their own choosing. However, they felt that some of the standardised questions were difficult to answer, inadequate (e.g. quantification of health status on a 0–20 scale) or sensitive (e.g. suicide-related issues), particularly for pre-treatment assessments.

Conclusions

Patients perceived clinical assessment as helpful for their therapeutic journey, including the opportunity to reflect on their problems, whether related or unrelated to alcohol use. Our study suggests that patients prefer to have evaluation protocols administered by therapists, and that measures should ideally be flexible in their formats to accommodate patient preferences and needs during the evaluation.

Keywords: User involvement, Clinical assessment, Personalised assessment, Evaluation measures, Patient views, Individualised measures, Qualitative research

Background

Most mental health literature is based on a professional perspective, generated by researchers or practitioners [1]. However, service users have expertise by experience, which is why their involvement is increasingly acknowledged as a crucial part of the health care agenda [2–5].

One area where service user involvement is paramount is the selection of measures to evaluate the patient’s clinical condition [6–8]. Evaluation measures are helpful for clinical work at different points in time during treatment. At treatment intake, they allow the assessment of patients’ distress, and if administered at pre-post treatment, they provide data for outcome assessment purposes. Several authors have suggested that, to maximize their clinical utility, these measures should be relevant, acceptable and valuable for both professionals and service users [6, 9]. The reality, though, is that many commonly used measures do not reflect the service users’ perspective [10–12]. Consequently, we have little information on whether the existing evaluation tools are meaningful, personally relevant, and expressed in terms which make sense to users [6, 8, 13].

User involvement in health care is even more challenging among socially excluded and stigmatised groups, since their views tend to be discredited, undermined and regarded as unworthy [14–17]. This often applies to patients in substance use disorder treatment services, who seldom participate actively in shared decision-making activities [14, 18, 19].

Just as with patients in general, patients in substance use disorder treatment services have firsthand knowledge about their clinical condition and are in a privileged position to inform providers about which outcomes of interest best reflect their reality [14, 20]. According to the European Monitoring Centre for Drugs and Drug Addiction, there are currently over 50 tools to measure treatment outcomes in this population. The vast majority of these are standardised and do not take the patients’ perspective into account. A recent study gathered 76 variables commonly used by professionals to evaluate recovery from substance use disorder and asked service users for their views about those criteria [20]. Patients reported that some variables were unrealistic and hard to achieve (e.g. to be completely anxiety-free). This study also highlighted patients’ frustration that most existing variables did not capture individual idiosyncrasies and personal preferences, with patients stating that service providers “had no idea of their experiences” (p. 31).

There has been a recent call for the use of individualised data in the evaluation of substance use disorder treatment [18, 21, 22]. Such data can be collected with individualised measures, which are tailor-made lists of items (problems or goals) generated in patients’ own words [23]. As with pre-set standardised measures, these individualised items are rated for intensity on quantitative scales (e.g. Likert scales). This allows an evaluation of patients’ level of distress based on their unique problems.
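
To make the mechanics concrete, the sketch below illustrates, in Python, how a patient-generated (individualised) item list might be scored alongside a fixed standardised item set. The example item wordings are borrowed from the measure examples quoted later in Table 1; the numeric ratings, the 0–5 Likert range and the mean-score summary are illustrative assumptions of ours, not the actual scoring rules of TOP, CORE-OM, PSYCHLOPS or PQ.

```python
# Illustrative sketch only: scoring a patient-generated (individualised) item list
# next to a fixed standardised item set. Ratings, the 0-5 Likert range and the
# mean-score summary are assumptions for illustration, not the scoring rules of
# any of the measures discussed in this paper.

from statistics import mean

# Individualised measure: items are written in the patient's own words at intake.
individualised_items = {
    "I miss my family": 4,                            # 0 = no distress, 5 = maximum distress
    "People in my neighborhood disrespect me": 3,
}

# Standardised measure: items are pre-set by researchers/professionals.
standardised_items = {
    "I have felt tense, nervous or anxious": 2,
    "Committed assault or violence": 0,
}

def distress_score(ratings: dict) -> float:
    """Summarise a set of Likert ratings as a mean distress score."""
    return mean(ratings.values())

print(f"Individualised distress score: {distress_score(individualised_items):.2f}")
print(f"Standardised distress score:  {distress_score(standardised_items):.2f}")
```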

Our study seeks to address three main concerns in this field. First, there is a growing number of studies exploring what users of mental health services think about clinical assessment, including views about the measures and the process by which they are administered [24]. With the exception of the study by Neale and colleagues [20], however, little is known about what patients in substance use disorder treatment think about clinical assessment. Second, a pioneering study published by Duong et al. [25] compared patients’ perspectives on standardised and individualised measures in school mental health. To the best of our knowledge, there are no reports on the use of individualised measures in the field of substance use disorder treatment, nor do we know how this population perceives such measures. Third, the literature suggests that the majority of measures and patient-focused materials in substance use disorder treatment require literacy skills above the average level of literacy in this population [26, 27]. However, those who are most likely to have low reading/writing skills (e.g. people with low socio-economic status, limited education, from marginalised populations, or living in rural settings) are seldom asked to contribute their views on clinical assessment.

We were interested in understanding what patients in substance use disorder treatment services with low literacy skills think about clinical assessment in general, and about standardised and individualised measures in particular. More specifically, our aims for this study were two-fold: to explore patients’ overall perspectives about their experience with the evaluation process; and to investigate patients’ views about what is helpful and hindering about each of the four measures in the evaluation protocol. Ultimately, our goal was to understand what makes patients engage, feel (de)motivated or (un)comfortable whilst using evaluation measures as part of their treatment.

Methods

A single focus group with 10 service users was conducted in an inpatient therapeutic community for women with alcohol dependence, based in a rural area of northern Portugal. This service targets women with severe alcohol dependence problems, who are referred to the facility by local drug and alcohol outpatient units, child protection and social security services, and general practitioners. The treatment programme in this facility lasts for approximately 8 months.

Service users had a mean age of 45 years (SD = 7). Six had completed primary school,¹ whilst the remaining four were illiterate. The majority were unemployed (six participants) and nearly all (eight participants) had a previous history of substance use treatment episodes. The group took place in the community and was moderated by the first author (PA), assisted by the community’s therapist. Ethical approval was granted by the community’s clinical director. As explained earlier, we opted for a sample with these characteristics (i.e. severe addiction problems, disadvantaged socio-economic status, low literacy skills) since it is likely to represent patients with greater difficulties understanding evaluation measures.

The evaluation protocol used in the therapeutic community consisted of four measures. Two were standardised, namely the Treatment Outcomes Profile (TOP; [28]) and the Clinical Outcomes in Routine Evaluation – Outcome Measure (CORE-OM; [29]); and two were individualised, namely the Psychological Outcome Profiles (PSYCHLOPS; [30]) and the Personal Questionnaire (PQ; [31]) (see Table 1 for more information). These measures were chosen because they are widely used internationally.

Table 1.

Summary of measures used in the research protocol

• TOP (standardised). Items generated by researchers/professionals. Domains covered: drug and alcohol use, injecting risk behaviours, offending and criminal involvement, health and social functioning. 21 items over 1 A4 page; yes/no and sliding-scale items; administered by interview. Example item (Q3): “Committed assault or violence”.
• CORE-OM (standardised). Items generated by researchers/professionals. Domains covered: subjective well-being, symptoms, functioning, risk. 34 items over 2 A4 pages; Likert-scale items; self-report. Example item (Q2): “I have felt tense, nervous or anxious”.
• PSYCHLOPS (individualised). Items generated by patients. Domains covered: any of the patient’s own choosing. 4 items over 1 A4 page; Likert-scale items; self-report. Example response (Q1): “People in my neighborhood disrespect me”.
• PQ (individualised). Items generated by patients. Domains covered: any of the patient’s own choosing. Unlimited number of items over 2 A4 pages; Likert-scale items; administered by interview. Example response: “I miss my family”.

Note: “Q”, followed by a number (e.g. “Q1”), refers to the number of the item in the questionnaire.
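
As a rough illustration of the protocol’s overall burden, the sketch below encodes the Table 1 entries as a small data structure and totals the number of A4 pages; the field names and the record layout are our own and are used here only for this example.

```python
# Minimal sketch: the four measures from Table 1 as records, used only to total
# the paper burden of the protocol. Field names are our own; item counts, page
# counts and delivery formats are taken from Table 1.

from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    kind: str          # "standardised" or "individualised"
    n_items: str       # kept as text because the PQ has no fixed item count
    a4_pages: int
    delivery: str      # "interview" or "self-report"

protocol = [
    Measure("TOP", "standardised", "21", 1, "interview"),
    Measure("CORE-OM", "standardised", "34", 2, "self-report"),
    Measure("PSYCHLOPS", "individualised", "4", 1, "self-report"),
    Measure("PQ", "individualised", "unlimited", 2, "interview"),
]

total_pages = sum(m.a4_pages for m in protocol)
print(f"Total protocol length: {total_pages} A4 pages")  # 6, as noted in the Discussion
```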

The focus group was conducted in December 2013 and lasted 2.5–3 h. Eight participants had completed the measures at treatment intake only (between October and November 2013). The remaining two completed the measures twice, i.e. at treatment intake (June 2013) and 7 months later (December 2013).

The group discussion was guided by a semi-structured interview focusing on patients’ views about: 1) the evaluation process, i.e. overall satisfaction, duration, administration and adequacy of the contents of the evaluation protocol; and 2) the helpful and hindering characteristics of each measure in the evaluation protocol, i.e. questionnaire length, delivery format and topics covered by the items.

The session was audio-recorded and transcribed verbatim. The transcripts were analysed following a framework synthesis approach [32], based on a priori categories that reflected the information we aimed to extract (i.e. general aspects of the evaluation process, and helpful and hindering aspects of each measure). Data extraction and synthesis were carried out by one of us (PA) and later discussed with two senior academics (CS and a senior lecturer in Philosophy with expertise in health ethics).
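
By way of illustration of this analytic step, the sketch below shows one highly simplified way of filing transcript excerpts under a priori categories. The category labels mirror those stated in the text; the example excerpt is a quote reported in the Results, while the helper function and the manual category assignment merely stand in for the analysts’ qualitative judgement and do not automate any coding.

```python
# Simplified sketch of a framework-synthesis style extraction grid: a priori
# categories (taken from the text above) mapped to coded transcript excerpts.
# The manual category assignment stands in for the analysts' judgement.

from collections import defaultdict

A_PRIORI_CATEGORIES = [
    "general aspects of the evaluation process",
    "helpful aspects: TOP", "hindering aspects: TOP",
    "helpful aspects: CORE-OM", "hindering aspects: CORE-OM",
    "helpful aspects: PSYCHLOPS", "hindering aspects: PSYCHLOPS",
    "helpful aspects: PQ", "hindering aspects: PQ",
]

framework = defaultdict(list)

def code_excerpt(category: str, participant: str, excerpt: str) -> None:
    """File a transcript excerpt under an a priori category (analyst's decision)."""
    if category not in A_PRIORI_CATEGORIES:
        raise ValueError(f"Unknown category: {category}")
    framework[category].append(f"{participant}: {excerpt}")

# Example, using a quote reported in the Results:
code_excerpt("general aspects of the evaluation process", "P7",
             "The bigger it is, the more we discover things that we did not know about ourselves")

for category in A_PRIORI_CATEGORIES:
    print(f"{category}: {len(framework[category])} excerpt(s)")
```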

Results

General views about the evaluation process

Most service users reported the evaluation process as a positive experience, because it helped them to reflect on their clinical situation. The overall duration of the evaluation protocol was considered adequate (“The bigger it is, the more we discover things that we did not know about ourselves”, P7²). Patients found it helpful to have their own therapist administering the measures, since “these things are very intimate… if it wasn’t our therapist, we wouldn’t have cared” (P4).

Among those who completed the questionnaires twice, patients felt that certain topics had been difficult to address at treatment intake (“The questions are not wrong, but we’re not used to being honest with ourselves, I was still sort of numb”, P9). When answering later in treatment, however, another patient reported that the questionnaires made her aware of how much she had changed since starting the therapeutic community programme (“It made me think about how different I am. When I arrived I was at the bottom and now I am a new woman”, P7). Patients also considered that evaluations performed after treatment intake should have focused on other aspects besides their personal problems, particularly their progress in treatment and the changes they perceived (“We were given the chance to talk about the problems that we still had, but we could also talk about how we were recovering (…) and I have come such a long way”, P7).

Helpful and hindering aspects of the evaluation measures

Nearly all measures in the protocol were deemed adequate in length, except for the CORE-OM, which was considered “too big” (P4). There was some variability regarding the preferred delivery format: some patients found the self-report structure more appealing, as “it was easier to tick boxes… we don’t have to think so hard about our problems” (P9) and “we can be more honest by using a pen” (P3), while others reported that “if we are forced to talk, it is better because we end up saying something” (P7). Regarding the topics covered by the items, particularly in the standardised measures, there were certain questions that patients found inappropriate and hard to answer. Table 2 summarises the helpful and hindering aspects of each measure as identified by patients.

Table 2.

Helpful and hindering aspects of TOP, CORE-OM, PSYCHLOPS and PQ as reported by patients

• TOP
Helpful: raises awareness about the quantity of drugs/alcohol used; promotes emotional/breakthrough experiences. (“It is a way of getting yourself together, we have no idea about how much alcohol we used to drink and the money we spent”, P3)
Hindering: the 0–20 scale questions used to rate psychological/physical health and quality of life were difficult to understand and felt meaningless. (“When I was asked about this I answered by chance. It meant nothing to me. Later we are able to answer in another way”, P5)
• CORE-OM
Helpful: user-friendly; contents relevant to this population; enhances self-awareness. (“This instrument is related to what we are”, P7)
Hindering: large number of items; contains questions about sensitive topics (e.g. suicide); items not generated by patients. (“The questions were made by other people and the words didn’t come from inside of us”, P7)
• PSYCHLOPS
Helpful: easy to understand; helps patients reflect on personal difficulties; provides freedom of expression to talk about any topic, related or not to substance use; makes patients feel like “normal” people. (“It is not just about the alcohol, we feel bad about many other things in life. My sister doesn’t drink alcohol but could answer this too, because everyone has problems”, P8)
Hindering: requires personal exposure; the self-completion format may lead to misleading or incomplete answers. (“We want to hide our real problems for fears of being judged (…) if the words are already written by someone else, it is easier to just say yes or no”, P7)
• PQ
Helpful: opportunity for self-reflection; the oral format encourages patients to talk about personal problems. (“When a person encourages us to talk, we become more comfortable and open. I talked about my drinking problem...”, P7)
Hindering: patients reported none. (“It is fine as it is”, P1)

Discussion

The purpose of this study was to explore the thoughts of a sample of patients in substance use disorder treatment about the process of clinical assessment. It also aimed to hear those patients’ voices about the characteristics of four evaluation measures, all of which were completed at treatment intake and, by some patients, again later in treatment. Among these were two individualised measures, in addition to two traditional and widely used standardised measures.

Our first goal was to investigate patients’ general views about the evaluation process, and the findings were encouraging. We learned that patients not only welcomed clinical assessment, but also perceived it as a valuable task for their therapeutic journey. Patients were satisfied with the duration of the evaluation protocol (which comprised six A4 pages) and there was even openness to the inclusion of further items. Previous studies [6, 7] have shown that patients tend to be concerned that several measures are too brief and therefore “too simplistic”. In contrast, studies of services and therapists report that evaluation measures can become a burden for patients and potentially interfere with the time assigned for consultations and treatment [33].

There was a general preference for having therapists administer the evaluation protocol, making it a meaningful part of the therapeutic process and potentially leading to greater commitment to the task. As such, we believe that clinical assessment could be formally included as part of treatment, as has already been proposed by authors such as Valderas [34]. The major advantage of this is that using evaluation measures would not require extra human and time resources from the service, making it a potentially more feasible task in real clinical settings. As a downside, one must bear in mind that when therapists administer the protocol directly, patients’ answers are liable to bias, particularly in oral interviews. In such cases, patients may feel the need to provide desirable answers and underreport undesirable behaviours to satisfy their therapist, as reported by Bowling [35]. However, unless patients are under court-ordered treatment, they tend to be willing and motivated to disclose personal and clinically relevant information to their therapists. Hence, we believe that if the interviewer is also the therapist, the risk of socially desirable answers is likely to decrease. Considering that most research about social desirability in mental health has been conducted with non-clinical samples [35], further studies are needed to ascertain the pros and cons of having therapists as interviewers in clinical assessment, which, as we have seen, patients seem to prefer.

Our second main goal was to learn what was helpful and hindering about the measures in the evaluation protocol, from the patient perspective. There was a tension regarding service users’ preferences about the delivery format of measures, with some favouring the simplicity of ticking boxes, and others keener to talk about their problems. This suggests that a one-size-fits-all approach to evaluation is not enough and that flexibility is desirable, so that patients’ preferences can be taken into account. Such flexibility has already been suggested by Gordon and colleagues [24]. As such, we need to further explore to what extent the psychometric properties of an instrument remain unaltered across multiple administration formats, i.e. whether measures can be administered flexibly while still providing reliable information for treatment evaluation.

However, when it came to eliciting personalised information, most patients in our group preferred the dialogic, oral format of the PQ, rather than describing their problems in writing, as required by the PSYCHLOPS questionnaire. This is consistent with the study by Ashworth and colleagues [36], where therapists felt that PSYCHLOPS was challenging because patients not only had to identify problems on their own, but also had to use their own words to write those problems down.

In our study, both standardised and individualised measures were seen as relevant for clinical assessment, despite having certain disadvantages. The TOP and CORE-OM were perceived as useful and relevant for this population, suggesting a good level of acceptability among patients. Nevertheless, not all contents covered by these two standardised measures were regarded as meaningful or appropriate (e.g. rating psychological health on a 20-point scale). Also, service users expressed some reservations about the disclosure of sensitive personal information in certain TOP and CORE-OM items, as shown in other studies [37]. One likely consequence of patients feeling uncomfortable or dissatisfied with the evaluation questionnaires is misleading and/or missing responses. Thus, further research is needed to ascertain which topics are likely to trigger negative reactions to the evaluation process.

As expected, patients appreciated the freedom given by both individualised measures, PSYCHLOPS and PQ, to express any type of personal concern, regardless of topic. This is in line with Duong and colleagues [25], who found that recipients of mental health care consider individualised measures to be less confining than their standardised counterparts. Hence, our findings indicate that accommodating a great diversity of topics is important to patients, since misusing substances can lead to, or be the consequence of, problems that drug-focused instruments might not address. Future research should compare the topics elicited by standardised and individualised measures, so that we understand whether the former tend to overlook aspects of relevance to patients that the latter are able to capture.

Finally, it is also worth emphasizing that patients who responded to the evaluation measures both at treatment intake and later in treatment wanted the opportunity to focus on other aspects besides outcomes. This could be addressed by including items about the treatment process, giving patients the opportunity to share their thoughts about the care they are receiving. Such feedback about treatment could be used by clinicians to adjust the intervention to match the patient’s needs, as well as to strengthen the therapeutic alliance [38].

This study is not without limitations. The small, female-only sample means that the findings are less generalisable and conclusions should be interpreted with caution. In addition, the presence of the patients’ therapist in the group may have led participants to overstate their positive views about the evaluation process and the measures included in our study.

Conclusions

This study suggests that service users can actively contribute to improving the process of clinical assessment, guiding researchers and professionals towards developing evaluation measures that are more meaningful and relevant for patients with alcohol dependence. Individualised outcome measures have the potential to broaden the range of viewpoints captured from patients compared with the more narrowly focused standardised instruments.

Abbreviations

CORE-OM, Clinical Outcomes in Routine Evaluation – Outcome Measure; PQ, Personal Questionnaire; PSYCHLOPS, Psychological Outcome Profiles; TOP, Treatment Outcomes Profile

Acknowledgements

The authors of this paper would like to thank Dr. Ana Sofia Cruz, the clinical director and therapist of the therapeutic community in which this study was carried out. We would also like to thank Professor Teresa Santos, from the University of Évora, for her help in reflecting about the data analysis.

Funding

This study was funded by two fellowships, one awarded to PA by the Portuguese Foundation for Science and Technology (FCT SFRH/BD/87308/2012) and the second to CS by the Center for Psychology at the University of Porto, Portuguese Foundation for Science and Technology (FCT UID/PSI/00050/2013) and EU FEDER and COMPETE programs (POCI-01-0145-FEDER-007294).

Availability of data and materials

All materials, i.e., the evaluation instruments used in this study are freely available for widespread use and can be downloaded at no cost from the internet. However, contacting the authors of each instrument is highly recommended, not only for networking purposes, but also to inform the authors about how, where and under which conditions their questionnaires are being used. Regarding the data, the authors of this manuscript have decided not to make the focus group’s transcript available to ensure that all participants remain de-identified.

Authors’ contributions

All authors have made substantial contributions to this manuscript. In particular, PA, CS and MA have all contributed to the conception and design of the study. All authors have been actively involved in drafting the manuscript and revising its intellectual content. PA was responsible for the acquisition and analysis of data, supervised by CS and MA. All authors have given final approval of the manuscript’s version to be published and confirm the accuracy and integrity of all the work being presented.

Competing interests

PA and CS, the first and second author, respectively, declare that they have no competing interests in this manuscript. MA, the third author, chaired the mental health research group which developed PSYCHLOPS but has no financial interest in its use.

Ethics approval and consent to participate

Ethics approval for this study was granted by the clinical director of the therapeutic community where the study was held, Dr. Ana Sofia Cruz. Even though the participants of this study were residing in the community, their participation was completely voluntary and informed consent forms were completed by each participant. Participants were also assured that, in the case of non-participation, their treatment or daily activities at the community would not be affected in any way.

Footnotes

1

In Portugal, the term “primary school” refers to 4 years of education, from the age of 6 to 10, and is also known as the 1st cycle of basic education.

2

“P”, followed by a number (e.g. “P3”), is an anonymous designator for each focus group participant.

Contributor Information

Paula Cristina Gomes Alves, Email: paulagomesalves@hotmail.com.

Célia Maria Dias Sales, Email: celiasales@soutodacasa.org.

Mark Ashworth, Email: mark.ashworth@kcl.ac.uk.

References

  • 1.Rose D, Thornicroft G, Slade M. Who decides what evidence is? Developing a multiple perspectives paradigm in mental health. Acta Psychiat Scand. 2006;429:109–14. doi: 10.1111/j.1600-0447.2005.00727.x. [DOI] [PubMed] [Google Scholar]
  • 2.INVOLVE. Strategy 2012-2015. In: National Institute for Health Research. 2015. http://www.invo.org.uk. Accessed 20 Nov 2015.
  • 3.Rose D. Patient and public involvement in health research: Ethical imperative and/or radical challenge? J Health Psychol. 2014;19:149–58. doi: 10.1177/1359105313500249. [DOI] [PubMed] [Google Scholar]
  • 4.Tait L, Lester H. Encouraging user involvement in mental health services. Adv Psych Treat. 2005;11:168–75. doi: 10.1192/apt.11.3.168. [DOI] [Google Scholar]
  • 5.Wu A, Snyder C, Clancy C, Steinwachs D. Adding the patient perspective to comparative effectiveness research. Health Aff. 2010;29:1863–71. doi: 10.1377/hlthaff.2010.0660. [DOI] [PubMed] [Google Scholar]
  • 6.Blount C, Evans C, Birch S, Warren F, Norton K. The properties of self-report research measures: beyond psychometrics. Psychother Couns. 2002;75:151–64. doi: 10.1348/147608302169616. [DOI] [PubMed] [Google Scholar]
  • 7.Crawford M, Robotham D, Thana L, Patterson S, Weaver T, Barber R. Selecting outcome measures in mental health: the views of service users. J Ment Health. 2011;20:336–46. doi: 10.3109/09638237.2011.577114. [DOI] [PubMed] [Google Scholar]
  • 8.Ennis L, Wykes T. Impact of patient involvement in mental health research: longitudinal study. Brit J Psychiat. 2013;203:381–6. doi: 10.1192/bjp.bp.112.119818. [DOI] [PubMed] [Google Scholar]
  • 9.Slade M, Thornicroft G, Glover G. The feasibility of routine outcome measures in mental health. Soc Psych Psych Epid. 1999;34:243–9. doi: 10.1007/s001270050139. [DOI] [PubMed] [Google Scholar]
  • 10.Gilbody S, House A, Shelton T. Outcome measurement in psychiatry: a critical review of outcome measurement in psychiatry research and practice. UK: University of York; 2013. [Google Scholar]
  • 11.Perry A, Gilbody S. User-defined outcomes in mental health: A qualitative study and consensus development exercise. J Ment Health. 2009;18:415–23. doi: 10.3109/09638230902968175. [DOI] [Google Scholar]
  • 12.Rose D, Evans J, Sweeney A, Wykes T. A model for developing outcome measures from the perspectives of mental health service users. Int Rev Psychiatr. 2011;23:41–6. doi: 10.3109/09540261.2010.545990. [DOI] [PubMed] [Google Scholar]
  • 13.Rose D. Service user views and service user research in the journal of mental health. J Ment Health. 2011;20:423–8. doi: 10.3109/09638237.2011.613959. [DOI] [PubMed] [Google Scholar]
  • 14.Bryant J, Saxton M, Madden A, Bath N, Robinson S. Consumer participation in the planning and delivery of drug treatment services: the current arrangements. Drug Alcohol Rev. 2008;27:130–7. doi: 10.1080/09595230701829397. [DOI] [PubMed] [Google Scholar]
  • 15.Hayter M. Involving service users in the development and evaluation of health care and services – good practice and the need for a research agenda. Contemp Nurse. 2011;40:103–5. doi: 10.1080/10376178.2011.11002577. [DOI] [PubMed] [Google Scholar]
  • 16.Livingston J, Milne T, Fang M, Amari E. The effectiveness of interventions for reducing stigma related to substance use disorders: a systematic review. Addiction. 2012;107:39–50. doi: 10.1111/j.1360-0443.2011.03601.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Ti L, Tzemis D, Buxton J. Engaging people who use drugs in policy and program development: a review of the literature. Subst Abuse Treat Pr. 2012;7:47–56. doi: 10.1186/1747-597X-7-47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Alves P, Sales C, Ashworth M. Personalising the evaluation of substance misuse treatment: a new approach to outcome measurement. Int J Drug Policy. 2015;26:333–5. doi: 10.1016/j.drugpo.2014.11.014. [DOI] [PubMed] [Google Scholar]
  • 19.Orford J. Asking the right questions in the right way: the need for a shift in research on psychological. Addiction. 2008;103:875–85. doi: 10.1111/j.1360-0443.2007.02092.x. [DOI] [PubMed] [Google Scholar]
  • 20.Neale J, Tompkins C, Wheeler C, Finch E, Marsden J, Mitcheson L, Wykes T, Strang J. “You’re all going to hate the word ‘recovery’ by the end of this”: Service users’ views of measuring addiction recovery. Drug-Educ Prev Polic. 2015;22:26–34. doi: 10.3109/09687637.2014.947564. [DOI] [Google Scholar]
  • 21.Neale J, Strang J. Blending qualitative and quantitative research methods to optimize patient reported outcome measures (PROMs) Addiction. 2015;10:1215–6. doi: 10.1111/add.12896. [DOI] [PubMed] [Google Scholar]
  • 22.Trujols J, Iraurgi I, Batlle F, Durán-Sindreu S, Pérez de Los Cobos J. Towards a genuinely user-centred evaluation of harm reduction and drug treatment programmes: a further proposal. Int J Drug Policy. 2015;26:1285–7. doi: 10.1016/j.drugpo.2015.08.012. [DOI] [PubMed] [Google Scholar]
  • 23.Sales C, Alves P. Patient centred assessment in psychotherapy: a review of individualised tools. Clin Psychol-Sci Pr; In press.
  • 24.Gordon S, Ellis P, Siegert R, Walkey F. Development of a self-assessed consumer recovery outcome measure: my voice, my life. Adm Policy Ment Health. 2013;40:199–210. doi: 10.1007/s10488-012-0417-9. [DOI] [PubMed] [Google Scholar]
  • 25.Duong M, Lyon A, Ludwig K, Wasse J, McCauley E. Student perceptions of the acceptability and utility of standardized and idiographic assessment in school mental health. Int J Ment Health Promot. 2016;18:49–63. doi: 10.1080/14623730.2015.1079429. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Greenfield S, Sugarman D, Nargiso J, Weiss R. Readability of patient handout materials in a nationwide sample of alcohol and drug abuse treatment programs. Am J Addict. 2005;14:339–45. doi: 10.1080/10550490591003666. [DOI] [PubMed] [Google Scholar]
  • 27.McHugh R, Sugarman D, Kaufman J, Park S, Weiss R, Greenfield S. Readability of self-report alcohol misuse measures. J Stud Alcohol Drugs. 2014;75:328–34. doi: 10.15288/jsad.2014.75.328. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Marsden J, Farrell M, Bradbury C, Dale-Perera A, Eastwood B, Roxburgh M, Taylor S. Development of the treatment outcomes profile. Addiction. 2008;103:1450–60. doi: 10.1111/j.1360-0443.2008.02284.x. [DOI] [PubMed] [Google Scholar]
  • 29.Evans C, Connell J, Barkham M, Mellor-Clark J, Audin K. Towards a standardised brief outcome measure: psychometric properties and utility of the CORE-OM. Brit J Psychiat. 2002;180:51–60. doi: 10.1192/bjp.180.1.51. [DOI] [PubMed] [Google Scholar]
  • 30.Ashworth M, Shepherd M, Christey J, Matthews V, Wright K, Parmentier H, Robinson S, Godfrey E. A client-generated psychometric instrument: the development of “PSYCHLOPS”. Couns Psychother Res. 2004;4:27–31. doi: 10.1080/14733140412331383913. [DOI] [Google Scholar]
  • 31.Elliott R, Wagner J, Sales C, Rodgers B, Alves P, Café M. Psychometrics of the personal questionnaire: a client-generated outcome measure. Psychol Assessment. 2015;28:263–78. doi: 10.1037/pas0000174. [DOI] [PubMed] [Google Scholar]
  • 32.Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13:37–52. doi: 10.1186/1471-2288-13-37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Sales C, Goncalves S, Fragoeiro A, Noronha S, Elliott R. Psychotherapists’ openness to routine naturalistic idiographic research. Ment Health Learn Dis Res Pract. 2007;4:145–61. doi: 10.5920/mhldrp.2007.42145. [DOI] [Google Scholar]
  • 34.Valderas J, Kotzeva A, Espallargues M, Guyatt G, Ferrans C, Halyard M, Revicki A, Symonds T, Parada A, Alonso J. The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res. 2008;17:179–93. doi: 10.1007/s11136-007-9295-0. [DOI] [PubMed] [Google Scholar]
  • 35.Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health. 2005;27:281–91. doi: 10.1093/pubmed/fdi031. [DOI] [PubMed] [Google Scholar]
  • 36.Ashworth M, Robinson S, Godfrey E, Parmentier H, Shepherd M, Christey J, Wright K, Matthews V. The experiences of therapists using a new client-centred psychometric instrument, PSYCHLOPS (Psychological Outcome Profiles) Couns Psychother Res. 2005;5:27–42. doi: 10.1080/14733140512331343886. [DOI] [Google Scholar]
  • 37.Stone C, Elliott R. Clients’ experience of research within a clinic setting. Couns Psych Rev. 2011;26:71–86. [Google Scholar]
  • 38.Flückiger C, Del Re A, Wampold B, Znoj H, Caspar F, Jörg U. Valuing clients’ perspective and the effects on the therapeutic alliance: a randomized controlled study of an adjunctive instruction. J Couns Psychol. 2012;59:18–26. doi: 10.1037/a0023648. [DOI] [PubMed] [Google Scholar]
