Journal of Caring Sciences. 2015 Jun 1;4(2):165–178. doi: 10.15171/jcs.2015.017

Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication

Vahid Zamanzadeh 1, Akram Ghahramanian 1,*, Maryam Rassouli 2, Abbas Abbaszadeh 2, Hamid Alavi-Majd 3, Ali-Reza Nikanfar 4
PMCID: PMC4484991  PMID: 26161370

Abstract

Introduction: The importance of content validity in instrument psychometrics and its relationship with reliability have made it an essential step in instrument development. This article gives an overview of the content validity process and explains its complexity through an example.

Methods: We carried out a methodological study to examine the content validity of a patient-centered communication instrument through a two-step process (development and judgment). The first step comprised domain determination, sampling (item generation) and instrument formation; in the second step, the content validity ratio, content validity index and modified kappa statistic were computed. Suggestions of the expert panel and item impact scores were used to examine the face validity of the instrument.

Results: From a pool of 188 items, the content validity process identified seven dimensions: trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items) and spirituality strengthening (14 items). The content validity study revealed that the instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, it can be defended given the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93.

Conclusion: This article illustrates acceptable quantitative indices of content validity for a new instrument and outlines them through the design and psychometric evaluation of an instrument measuring patient-centered communication.

Keywords: Cancer, Communication, Content validity, Data collection

Introduction

In most studies, researchers study complex constructs for which valid and reliable instruments are needed.1 Validity, defined as the ability of an instrument to measure the properties of the construct under study,2 is a vital factor in selecting or applying an instrument. It is commonly assessed in three forms: content, construct, and criterion-related validity.3 Since content validity is a prerequisite for the other types of validity, it should receive the highest priority during instrument development. Validity is not a property of an instrument, but of the scores achieved by an instrument used for a specific purpose with a specific group of respondents. Therefore, validity evidence should be obtained in each study for which an instrument is used.4

Content validity, also known as definition validity and logical validity,5 can be defined as the ability of the selected items to reflect the variables of the construct in the measure. This type of validity addresses the degree to which the items of an instrument sufficiently represent the content domain; it answers the question of the extent to which the sample of items in an instrument is a comprehensive sample of the content.1,6-8 It provides the preliminary evidence for the construct validity of an instrument.9 In addition, it can provide information on the representativeness and clarity of items and help improve an instrument through recommendations from an expert panel.6,10 If an instrument lacks content validity, it is impossible to establish its reliability.11 On the other hand, although more resources must be spent on a content validity study initially, it decreases the resources needed for future revisions of the instrument during the psychometric process.1

Despite the fact that in instrument development content validity is a critical step12 and a trigger mechanism linking abstract concepts to visible and measurable indices,7 it is often studied superficially and transiently. This problem might be due to the fact that the methods used to assess content validity are not treated in depth in the medical research literature12 and sufficient details on the content validity process have rarely been provided in a single resource.13 It is possible that students do not realize the complexities of this critical process.12 Meanwhile, a number of experts have questioned the historical legitimacy of content validity as a real type of validity.14-16 These challenges to the value and merit of content validity have arisen from the lack of distinction between content validity and face validity, unstandardized mechanisms for determining content validity, and its previously unquantified nature.3 This article aims to discuss the content validity process and to demonstrate its quantification with an example instrument, designed to measure patient-centered communication between patients with cancer and nurses, key members of the health care team in oncology wards of Iran.

Nurse-patient communication

Nurses cannot deliver health services that improve patients’ outcomes, such as physical care, emotional support and information exchange, without establishing a relationship with their patients.17 During recent decades, patient-centered communication has been defined as communication in which patients’ viewpoints are actively sought by the treatment team18 and as a relationship with patients based on trust, respect, and reciprocity, with mutually negotiated goals and expectations, which can be an important support and buffer for cancer patients experiencing distress.19

Communication serves to build and maintain this relationship, to transmit information, to provide support, and to make treatment decisions. Although patient-centered communication between providers and cancer patients can significantly affect clinical outcomes20 and, as an important element, improves patient satisfaction, treatment compliance, and health outcomes,21,22 recent evidence demonstrates that communication in cancer care is often suboptimal, particularly with regard to the emotional experience of the patient.23

Despite its wide acceptance, there is little consensus on the meaning and operationalization of the concept of patient-centered communication,19,24 and the lack of standard instruments to assess and promote patient-centeredness in patient-healthcare communication is a serious limitation. Part of this issue stems from the broad nature of the patient-centeredness construct, which has led to different and often dissimilar instruments shaped by researchers’ conceptualizations and psychometric choices.25 Few instruments capture a comprehensive definition of this concept in cancer care within a single tool.26 A review of the literature in Iran shows that this concept has never been examined in a research study: although cancer is a national research priority,27 no quantitative or qualitative study has been carried out and no instrument has yet been developed.

It is obvious that evaluating the abilities of nurses in oncology wards to establish patient-centered communication, and its consequences, requires a reliable instrument grounded in the context and culture of the target group.26 When a new instrument is designed, measuring and reporting its content validity is of fundamental importance.8 Therefore, this study was conducted to design and examine the content validity of an instrument measuring patient-centered communication in oncology wards in northwest Iran.

Materials and methods

This methodological study is part of a larger study carried out with an exploratory mixed-method (qualitative-quantitative) design to develop and psychometrically evaluate an instrument measuring patient-centered communication in oncology wards in northwest Iran. In the qualitative phase, data were collected with a qualitative content analysis approach through semi-structured in-depth interviews with 10 patients with cancer, three family members and seven oncology nurses in the Ali-Nasab and Shahid Ayatollah Qazi Tabatabai Hospitals of Tabriz. In the quantitative phase, during a two-step process (development and judgment), the qualitative and quantitative viewpoints of 15 experts were collected.3

Ethical considerations, including approval by the ethics committee of Tabriz University of Medical Sciences, permission of the administrators of the Ali-Nasab and Shahid Ayatollah Qazi Tabatabai Hospitals, anonymity, informed consent, the right to withdraw from the study, and recording permission, were respected.

Stage 1: Instrument Design

Instrument design is performed through a three-step process: determining the content domain, sampling from the content (item generation) and instrument construction.11,14 The first step is determining the content domain of the construct that the instrument is intended to measure. The content domain is the content area related to the variables being measured.28 It can be identified through a literature review on the topic being measured, interviews with respondents, and focus groups. Through a precise definition of the attributes and characteristics of the desired construct, a clear image of its boundaries, dimensions, and components is obtained. Qualitative research methods can also be applied to determine the variables and concepts of the pertinent construct.29 Qualitative data collected in interviews with respondents familiar with the concept help enrich and develop what has been identified about it, and are considered an invaluable resource for generating instrument items.30 To determine the content domain, a literature review can be used for emotional instruments and a table of specifications for cognitive instruments.3 In practice, a table of specifications examines the alignment of a set of items (placed in rows) with the concepts forming the construct under study (placed in columns) by collecting quantitative and qualitative evidence from experts and analyzing the data.5 Ridenour and Newman also introduced a mixed-method (deductive-inductive) approach for conceptualization at the step of content domain determination and item generation.31 In any case, generating items requires the preliminary task of determining the content domain of a construct.32 In addition, a useful approach consists of returning to the research questions and ensuring that the instrument items reflect and are relevant to them.33

Instrument construction is the third step of instrument design, in which the items are refined and organized in a suitable format and sequence so that the finalized items are collected in a usable form.3

Stage 2: Judgment

This step entails confirmation by a specific number of experts that the instrument items, and the instrument as a whole, have content validity. For this purpose, an expert panel is appointed. Determining the number of experts has always been somewhat arbitrary. At least five people are recommended in order to have sufficient control over chance agreement. The maximum number of judges has not been established; however, it is unlikely that more than 10 people are needed. In any case, as the number of experts increases, the probability of chance agreement decreases. After forming the expert panel, we can collect and analyze their quantitative and qualitative viewpoints on the relevancy (representativeness), clarity and comprehensiveness of the items, relative to the construct operationally defined by those items, to ensure the content validity of the instrument.3,7,8

Quantification of Content Validity

The content validity of an instrument can be determined using the viewpoints of a panel of experts consisting of content experts and lay experts. Lay experts are potential research subjects, and content experts are professionals who have research experience or work in the field.34 Using subjects from the target group as experts ensures that the population for whom the instrument is being developed is represented.1

In the qualitative content validity method, the recommendations of content experts and the target group are adopted on observing grammar, using appropriate and correct words, applying the correct and proper order of words in items, and appropriate scoring.35 In the quantitative content validity method, confidence that the most important and correct content has been selected for an instrument is quantified by the content validity ratio (CVR). The experts are requested to specify whether or not an item is necessary for operationalizing a construct in a set of items. To this end, they score each item from 1 to 3 on a three-point scale: “not necessary”, “useful but not essential”, “essential”. The content validity ratio varies between -1 and 1; a higher score indicates greater agreement among panel members on the necessity of an item. The formula of the content validity ratio is CVR = (Ne − N/2) / (N/2), where Ne is the number of panelists indicating “essential” and N is the total number of panelists. The critical value of the content validity ratio is determined from the Lawshe table. For example, in our study, with 15 panelists, an item is accepted at an acceptable level of significance if its CVR is greater than 0.49.36
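
As an illustration, here is a minimal Python sketch of the CVR computation just described; the panel ratings are hypothetical, and the 0.49 cut-off is the Lawshe-table value for a 15-member panel cited in the text.

```python
# Content validity ratio (Lawshe, 1975): CVR = (Ne - N/2) / (N/2), where
# Ne = number of panelists rating the item "essential" (3 on the 1-3 scale)
# and N = total number of panelists.
def content_validity_ratio(ratings, essential=3):
    """Compute the CVR of one item from a list of 1-3 necessity ratings."""
    n = len(ratings)
    n_essential = sum(1 for r in ratings if r == essential)
    return (n_essential - n / 2) / (n / 2)

# Hypothetical ratings from a 15-member panel for a single item:
# 12 of 15 experts rate it "essential".
item_ratings = [3, 3, 3, 2, 3, 3, 1, 3, 3, 3, 2, 3, 3, 3, 3]
cvr = content_validity_ratio(item_ratings)
# For N = 15 the Lawshe table gives a critical value of 0.49,
# so the item is retained only if CVR > 0.49.
print(f"CVR = {cvr:.2f}, retained: {cvr > 0.49}")  # CVR = 0.60, retained: True
```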

In reports of instrument development, the most widely reported approach to content validity is the content validity index.3,34,37 Panel members are asked to rate the instrument items in terms of clarity and relevancy to the construct under study, as per the theoretical definitions of the construct itself and its dimensions, on a 4-point ordinal scale (1 [not relevant], 2 [somewhat relevant], 3 [quite relevant], 4 [highly relevant]).34 A table like the one shown below (Table 1) was added to the cover letter to guide the experts in the scoring method.

Table 1. The table added to the cover letter to guide experts in the scoring method.

Relevancy                               Clarity
1 [not relevant]                        1 [not clear]
2 [item needs some revision]            2 [item needs some revision]
3 [relevant but needs minor revision]   3 [clear but needs minor revision]
4 [very relevant]                       4 [very clear]

To obtain the content validity index for the relevancy and clarity of each item (I-CVI), the number of experts judging the item as relevant or clear (rating 3 or 4) was divided by the total number of content experts. For relevancy, the content validity index can be calculated both at the item level (I-CVI) and at the scale level (S-CVI). At the item level, the I-CVI is computed as the number of experts giving a rating of 3 or 4 to the relevancy of an item, divided by the total number of experts.

The I-CVI expresses the proportion of agreement on the relevancy of each item, which lies between zero and one,3,38 and the S-CVI is defined as “the proportion of total items judged content valid”3 or “the proportion of items on an instrument that achieved a rating of 3 or 4 by the content experts”.28

Instrument developers almost never report which method they used to compute the scale-level index of an instrument (S-CVI).6 There are two methods for calculating it: one requires universal agreement among the experts (S-CVI/UA), while a less conservative method averages the item-level CVIs (S-CVI/Ave). To calculate them, the scale is first dichotomized by combining values 3 and 4 together and values 1 and 2 together, forming two response categories, “relevant” and “not relevant”, for each item.3,34 Then, in the universal agreement approach, the number of items considered relevant by all the judges (i.e., items with an I-CVI equal to 1) is divided by the total number of items; in the average approach, the sum of the I-CVIs is divided by the total number of items.10 Table 2 provides data for a better understanding of the calculation of the I-CVI and the S-CVI by both methods (a computational sketch follows Table 2). The data were extracted from our panel’s judgments on the relevancy of the items of the trust building dimension, a subscale of the construct of patient-centered communication. As the values obtained from the two methods may differ, instrument makers should state which method they used.6 Davis proposes that researchers consider 80 percent agreement or higher among judges for new instruments.34 Judgment on each item is made as follows: if the I-CVI is higher than 79 percent, the item is appropriate; if it is between 70 and 79 percent, it needs revision; if it is less than 70 percent, it is eliminated.39

Table 2. Calculation of the I-CVI and the S-CVI by the two approaches (S-CVI/UA and S-CVI/Ave) for the items of the trust building dimension.

Item   Relevant (rating 3 or 4)   Not relevant (rating 1 or 2)   I-CVI*   Interpretation
1      14                         0                              1        Appropriate
2      12                         2                              0.857    Appropriate
3      13                         1                              0.928    Appropriate
4      12                         2                              0.857    Appropriate
5      11                         3                              0.785    Need for revision
6      14                         0                              1        Appropriate
7      12                         2                              0.857    Appropriate
8      8                          6                              0.571    Eliminated
9      14                         0                              1        Appropriate

Number of experts = 14; number of items = 9. Number of items considered relevant by all panelists = 3; S-CVI/UA** = 3/9 = 0.333; S-CVI/Ave*** (average of the I-CVIs) = 0.872. NOTE: *I-CVI: item-level content validity index; **S-CVI/UA: scale-level content validity index, universal agreement approach; ***S-CVI/Ave: scale-level content validity index, average approach. Interpretation of I-CVIs: higher than 79 percent, the item is appropriate; between 70 and 79 percent, it needs revision; less than 70 percent, it is eliminated.
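
The following Python sketch reproduces the Table 2 figures: it computes each I-CVI from the “relevant” counts, applies the interpretation thresholds given above, and derives the S-CVI by both approaches. (The computed average is 0.873; the small difference from Table 2’s 0.872 is rounding.)

```python
# Reproducing Table 2: I-CVI for each of the nine trust-building items and
# the scale-level index by both approaches. Counts are the number of the
# 14 experts who rated each item 3 or 4 on relevancy.
relevant_counts = [14, 12, 13, 12, 11, 14, 12, 8, 14]
n_experts = 14

i_cvis = [c / n_experts for c in relevant_counts]

def interpret(i_cvi):
    # Thresholds from the text: >79% appropriate, 70-79% revise, <70% eliminate.
    if i_cvi > 0.79:
        return "Appropriate"
    if i_cvi >= 0.70:
        return "Need for revision"
    return "Eliminated"

for item, i_cvi in enumerate(i_cvis, start=1):
    print(f"Item {item}: I-CVI = {i_cvi:.3f} -> {interpret(i_cvi)}")

# S-CVI/UA: proportion of items rated relevant by ALL experts (I-CVI == 1).
s_cvi_ua = sum(1 for v in i_cvis if v == 1.0) / len(i_cvis)
# S-CVI/Ave: mean of the item-level CVIs.
s_cvi_ave = sum(i_cvis) / len(i_cvis)
print(f"S-CVI/UA = {s_cvi_ua:.3f}, S-CVI/Ave = {s_cvi_ave:.3f}")  # 0.333, 0.873
```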

Although the content validity index is extensively used by researchers to estimate content validity, it does not account for the possibility of inflated values due to chance agreement. Therefore, Wynd et al. propose using both the content validity index and a multi-rater kappa statistic in content validity studies because, unlike the CVI, kappa adjusts for chance agreement. Chance agreement is a concern whenever agreement indices among assessors are studied, especially when four-point scores are collapsed into two classes, relevant and not relevant.7 In other words, the kappa statistic is a consensus index of inter-rater agreement that adjusts for chance agreement10 and is an important supplement to the CVI because kappa provides information about the degree of agreement beyond chance.7 Nevertheless, the content validity index is mostly used by researchers because it is simple to calculate, easy to understand, and provides information about each item, which can be used to modify or delete instrument items.6,10

To calculate the modified kappa statistic, the probability of chance agreement is first calculated for each item by the following formula:

PC = [N! / (A! (N − A)!)] × 0.5^N

In this formula, N = the number of experts on the panel and A = the number of panelists who agree that the item is relevant.

After calculating the I-CVI for all instrument items, kappa is computed by entering the probability of chance agreement (PC) and the content validity index of each item (I-CVI) into the following formula:

K = (I-CVI − PC) / (1 − PC)

The evaluation criteria for kappa are as follows: values above 0.74 are considered excellent, values between 0.60 and 0.74 good, and values between 0.40 and 0.59 fair.40

Polit states that after controlling items by calculating adjusted kappa, each item with an I-CVI equal to or higher than 0.78 would be considered excellent. Researchers should note that as the number of experts on the panel increases, the probability of chance agreement diminishes and the values of the I-CVI and kappa converge.10
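
A short Python sketch of the chance-agreement correction described above, assuming hypothetical counts (12 of 14 experts rating an item relevant); `math.comb` supplies the binomial coefficient N! / (A! (N − A)!).

```python
from math import comb  # binomial coefficient, Python 3.8+

def modified_kappa(i_cvi, n_experts, n_agree):
    """Kappa adjusted for chance agreement.

    pc = C(N, A) * 0.5**N is the probability that exactly A of N experts
    agree by chance; K = (I-CVI - pc) / (1 - pc).
    """
    pc = comb(n_experts, n_agree) * 0.5 ** n_experts
    return (i_cvi - pc) / (1 - pc)

def interpret_kappa(k):
    # Criteria cited in the text (Cicchetti and Sparrow, 1981).
    if k > 0.74:
        return "Excellent"
    if k >= 0.60:
        return "Good"
    if k >= 0.40:
        return "Fair"
    return "Below 0.40 (outside the cited criteria)"

# Hypothetical item: 12 of 14 experts rate it relevant (rating 3 or 4).
n, a = 14, 12
i_cvi = a / n
k = modified_kappa(i_cvi, n, a)
print(f"I-CVI = {i_cvi:.3f}, K = {k:.3f} ({interpret_kappa(k)})")
```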

Requesting the panel members to evaluate the instrument in terms of comprehensiveness is the last step in measuring content validity. The panel members are asked to judge whether the instrument items and each of its dimensions are a complete and comprehensive sample of the content with respect to the theoretical definitions of the concept and its dimensions, and whether any item needs to be eliminated or added. Based on the members’ judgments, the proportion of agreement on comprehensiveness is calculated for each dimension and for the entire instrument by dividing the number of experts who judged the comprehensiveness favorable by the total number of experts.3,37

Determining face validity of an instrument

Face validity answers the question of whether an instrument appears valid to subjects, patients and/or other participants; that is, whether the designed instrument is apparently related to the construct under study, and whether participants agree with the items and their wording as a means of realizing the research objectives. Face validity relates to the appearance and apparent attractiveness of an instrument, which may affect its acceptability to respondents.11 In principle, face validity is not considered validity in the measurement sense; it does not concern what is measured, but rather the appearance of the instrument.9 To determine the face validity of an instrument, researchers use respondents’ and experts’ viewpoints. In the qualitative method, face-to-face interviews are carried out with some members of the target groups, covering the difficulty level of the items, the suitability and relationship between the items and the main objective of the instrument, ambiguity and misinterpretation of items, and incomprehensibility of the meaning of words.41

Although content experts play a vital role in content validity, review of the instrument by a sample of subjects drawn from the target population is another important component of content validation. These individuals are asked to review the instrument items because of their familiarity with the construct through direct personal experience.37 They are also asked to identify the items they consider most important and to grade their importance on a 5-point Likert scale: very important (5), important (4), relatively important (3), slightly important (2), and unimportant (1). In the quantitative method, to calculate the item impact score, the proportion of patients who scored an item’s importance 4 or 5 (frequency) and the mean importance score of the item (importance) are first calculated; the item impact score is then obtained from the following formula: Item Impact Score = Frequency × Importance.

If the impact score of an item is equal to or greater than 1.5 (which corresponds to a mean frequency of 50% and a mean importance of 3 on the 5-point Likert scale), the item is retained in the instrument; otherwise it is eliminated.42
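
A minimal Python sketch of the item impact score computation, with hypothetical importance ratings from ten lay experts.

```python
# Item impact score = frequency x importance, where frequency is the
# proportion of respondents rating the item 4 or 5 on the 5-point
# importance scale and importance is the mean of all ratings.
def item_impact_score(ratings):
    """Compute the impact score of one item from 1-5 importance ratings."""
    frequency = sum(1 for r in ratings if r >= 4) / len(ratings)
    importance = sum(ratings) / len(ratings)
    return frequency * importance

# Hypothetical importance ratings from ten lay experts for one item.
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3]
score = item_impact_score(ratings)
# Items with an impact score below 1.5 (50% frequency x mean importance
# of 3) are eliminated from the instrument.
print(f"Impact score = {score:.2f}, retained: {score >= 1.5}")  # 2.73, True
```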

Results

Results of stage 1: Designing the patient-centered communication measuring instrument

In the first stage of our research, which was performed through qualitative content analysis of semi-structured in-depth interviews with ten patients with cancer, three family members and seven oncology nurses, the results led to identifying the content domain within seven dimensions: trust building, intimacy or friendship, patient activation, problem solving, emotional support, informational support, and spirituality strengthening. Each of these content domains was defined theoretically by combining the qualitative study with a literature review. In the item generation step, 260 items were generated from these dimensions and combined with 32 items obtained from the literature and related instruments. The research group examined the items for overlap and duplication. Finally, 188 items remained for the operational definition of the construct of patient-centered communication, and the preliminary instrument was formed from this pool of 188 items within seven dimensions.

Results of stage 2: Judgment of the expert panel on the validity of the patient-centered communication measuring instrument

In the second stage, after selecting fifteen content experts, including instrument development experts (four people), cancer research experts (four people), nurse-patient communication experts (three people) and nurses experienced in cancer care (four people), an expert panel was created to make quantitative and qualitative judgments on the instrument items. The panel members were asked three times to judge the content validity ratio, the content validity index, and the comprehensiveness of the instrument; in each round, they were also asked to judge its face validity. In each round of correspondence, via e-mail or in person, a letter of request was presented that included the study objectives, an account of the instrument, the scoring method, and the required instructions for responding. Theoretical definitions of the construct under study, its dimensions, and the items of each dimension were also provided in that letter. If no reply to the reminder e-mail was received within a week, a telephone call was made or a meeting arranged.

In the first round of judgment, 108 of the 188 instrument items were eliminated. These items either had a content validity ratio lower than 0.49 (the critical value from the Lawshe table for our panel of 15 experts) or were merged into the remaining items through editing, based on the opinions of the content experts. Table 3 shows a sample of instrument items and the CVR calculation for them.

Table 3. Calculation of the CVR for a sample of instrument items in the first round of judgment.

Item   Ne*   CVR**    Interpretation
1 9 0.0667 Remained
2 5 -0.333 Eliminated
3 10 0.3333 Eliminated
4 15 0.8667 Remained
5 9 0.2 Eliminated
6 13 0.6 Remained
7 7 -0.2 Eliminated
8 7 -0.067 Eliminated
9 13 0.6 Remained

NOTE: *Ne: number of experts who evaluated the item as essential. **CVR (content validity ratio) = (Ne − N/2) / (N/2); with 15 persons on the expert panel (N = 15), items with a CVR greater than 0.49 remained in the instrument and the rest were eliminated.

The remaining items were modified according to the recommendations of the panel members in the first round of judgment and, to determine the content validity index and modify the instrument, the panel members were asked a second time to judge the relevancy and clarity of the instrument items by scoring them from 1 to 4 according to the Waltz and Bausell content validity index.38

In the second round, the proportion of agreement among the panel members on the relevancy and clarity of the 80 items remaining from the first round of judgment was calculated.

To obtain the content validity index for each item, the number of experts judging the item as relevant was divided by the number of content experts (N = 14). (As one of the 15 panel members had not scored some items, the analyses were based on 14 judges.) The same procedure was carried out for the clarity of the instrument items. Agreement among the judges for the entire instrument was calculated only for relevancy, using the average and universal agreement approaches.

In this round, 4 of the 80 instrument items had a CVI score lower than 0.70 and were eliminated. Eight items with a CVI between 0.70 and 0.79 were modified (modification of items was performed according to the recommendations of the panel members and research group forums). Two items were eliminated despite favorable CVI scores: one for ethical reasons (some content experts believed that the item “I often think of death but I don’t speak about it with my nurses” might cause moral harm to a patient), and for the other, “Nurses know how to communicate with me”, some experts believed that its elimination would not harm the definition of the trust building dimension. At the experts’ suggestion, an item (“Nurses try that I face no problem during care”) was added in this round. After modification, the instrument, containing 57 items, was sent to the panel members a third time for judgment on the relevancy, clarity and comprehensiveness of the items in each dimension and on the need to delete or add items. In this round, four items had a CVI lower than 0.70 and were eliminated.

The proportion of agreement among the experts on the comprehensiveness of each dimension of the construct under study was also calculated in this round. Table 4 shows the calculation of the I-CVI, S-CVI and modified kappa for the 53 items remaining at the end of the third round of judgment. We also used the panel members’ judgments on the clarity of the items, as well as their recommendations for modifying them.

Table 4. Content Validity Index, Modified Kappa and comprehensiveness of instrument dimensions and total instrument at the third round of judgment.

Item     N rating 3 or 4   I-CVI*   pc**         K***    Interpretation
         on relevancy

D1: Trust building (comprehensiveness: agree = 14, proportion of consensus = 1)
D1-1     14                1        6.103×10⁻⁵   1       Excellent
D1-2     12                0.857    0.022        0.85    Excellent
D1-3     12                0.857    0.022        0.85    Excellent
D1-4     12                0.857    6.103×10⁻⁵   0.85    Excellent
D1-5     12                0.857    0.022        0.85    Excellent
D1-6     14                1        6.103×10⁻⁵   1       Excellent
D1-7     12                0.857    6.103×10⁻⁵   0.85    Excellent
D1-8     14                1        6.103×10⁻⁵   1       Excellent

D2: Intimacy/friendship (comprehensiveness: agree = 13, proportion of consensus = 0.928)
D2-1     14                1        6.103×10⁻⁵   1       Excellent
D2-2     14                1        6.103×10⁻⁵   1       Excellent
D2-3     14                1        6.103×10⁻⁵   1       Excellent
D2-4     13                0.928    0.0008       0.928   Excellent
D2-5     14                1        6.103×10⁻⁵   1       Excellent
D2-6     14                1        6.103×10⁻⁵   1       Excellent
D2-7     12                0.857    0.022        0.85    Excellent

D3: Patient activation (comprehensiveness: agree = 14, proportion of consensus = 1)
D3-1     12                0.857    0.022        0.85    Excellent
D3-2     13                0.928    0            0.928   Excellent
D3-3     13                0.928    0            0.928   Excellent
D3-4     13                0.928    0            0.928   Excellent
D3-5     14                1        6.103×10⁻⁵   1       Excellent

D4: Problem solving (comprehensiveness: agree = 14, proportion of consensus = 1)
D4-1     14                1        6.103×10⁻⁵   1       Excellent
D4-2     14                1        6.103×10⁻⁵   1       Excellent
D4-3     12                0.857    0.022        0.85    Excellent
D4-4     12                0.857    0.022        0.85    Excellent
D4-5     13                0.928    0            0.928   Excellent
D4-6     14                1        6.103×10⁻⁵   1       Excellent
D4-7     14                1        6.103×10⁻⁵   1       Excellent

D5: Emotional support (comprehensiveness: agree = 14, proportion of consensus = 1)
D5-1     13                0.928    0            0.928   Excellent
D5-2     14                1        6.103×10⁻⁵   1       Excellent
D5-3     12                0.857    0.022        0.85    Excellent
D5-4     14                1        6.103×10⁻⁵   1       Excellent
D5-5     13                0.928    0            0.928   Excellent
D5-6     12                0.857    0.02         0.85    Excellent

D6: Informational support (comprehensiveness: agree = 13, proportion of consensus = 0.928)
D6-1     14                1        6.103×10⁻⁵   1       Excellent
D6-2     13                0.928    0            0.928   Excellent
D6-3     14                1        6.103×10⁻⁵   1       Excellent
D6-4     13                0.928    0            0.928   Excellent
D6-5     14                1        6.103×10⁻⁵   1       Excellent
D6-6     14                1        6.103×10⁻⁵   1       Excellent

D7: Spirituality strengthening (comprehensiveness: agree = 14, proportion of consensus = 1)
D7-1     12                0.857    0.022        0.85    Excellent
D7-2     12                0.857    0.022        0.85    Excellent
D7-3     12                0.857    0.022        0.85    Excellent
D7-4     14                1        6.103×10⁻⁵   1       Excellent
D7-5     14                1        6.103×10⁻⁵   1       Excellent
D7-6     12                0.857    0.022        0.85    Excellent
D7-7     12                0.857    0.022        0.85    Excellent
D7-8     13                0.928    0            0.928   Excellent
D7-9     14                1        6.103×10⁻⁵   1       Excellent
D7-10    13                0.928    0            0.928   Excellent
D7-11    14                1        6.103×10⁻⁵   1       Excellent
D7-12    13                0.928    0            0.928   Excellent
D7-13    13                0.928    0            0.928   Excellent
D7-14    13                0.928    0            0.928   Excellent

53 items: S-CVI/Ave = 0.939; S-CVI/UA = 0.434. Agreement on total comprehensiveness = 14; comprehensiveness of the entire instrument = 1.

NOTE: *I-CVI: item-level content validity index. **pc (probability of a chance occurrence) was computed as pc = [N! / (A! (N − A)!)] × 0.5^N, where N = number of experts and A = number of panelists who agree that the item is relevant; number of experts = 14. ***K (modified kappa) was computed as K = (I-CVI − pc) / (1 − pc). Interpretation criteria for kappa, using the guidelines described in Cicchetti and Sparrow (1981): fair = K of 0.40 to 0.59; good = K of 0.60 to 0.74; excellent = K > 0.74.

Face validity results of patient-centered communication measuring instrument

A sample of 10 patients with cancer who had a long history of hospitalization in oncology wards (lay experts) was asked to judge the importance, simplicity and understandability of the items in an interview with one of the members of the research team. Based on their opinions, objective examples were added to some items to make them more understandable. For instance, the item “Nurses try not to cause any problem for me” was changed to “During care (e.g., preparing an intravenous line), nurses try not to cause any problem for me”, and the item “Care decisions are made without paying attention to my needs” was changed to “Nurses didn’t ask my opinion about care (e.g., time of care or type of interventions)”. In addition, quantitative analysis was performed by calculating the impact score of each item. Nine items had an impact score of less than 1.5 and were eliminated from the final instrument before the preliminary test. Finally, at the end of the content and face validity process, our instrument comprised seven dimensions and 44 items, ready for the next steps and the remaining psychometric testing.

Discussion

The present paper demonstrates quantitative indices for the content validity of a new instrument and outlines them through the design and psychometric evaluation of a patient-centered communication measuring instrument. It should be noted that validation is a lengthy process, of which the study of content validity is the first step; subsequent analyses should include reliability evaluation (through internal consistency and test-retest), construct validity (through factor analysis) and criterion-related validity.37

Some limitations of content validity studies should be noted. Experts’ feedback is subjective; thus, the study is subject to any bias that may exist among the experts. Moreover, if the content domain is not well identified, this type of study does not necessarily reveal content that has been omitted from the instrument. However, the experts are asked to suggest other items for the instrument, which may help minimize this limitation.11

Conclusion

A content validity study is a systematic, subjective and two-stage process. In the first stage, the instrument is designed; in the second stage, judgment/quantification of the instrument items is performed and content experts examine the accordance between the theoretical and operational definitions. Such a process should lead the instrument development effort in order to underpin instrument reliability and to prepare an instrument with valid content for the preliminary test phase. Validation is a lengthy process, of which content validity is the first step; subsequent analyses should include reliability evaluation (through internal consistency and test-retest), construct validity by factor analysis, and criterion-related validity. Meanwhile, we showed that although content validity is a subjective process, it is possible to objectify it.

Understanding content validity is important for clinicians and researchers, because they need to determine whether the instruments they use are suitable for the construct, the population under study, and the socio-cultural background in which the study is carried out, or whether new or modified instruments are needed.

Training in content validity studies helps students, researchers, and clinical staff better understand, use and criticize research instruments with a more accurate approach.

In general, the content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the conservative (universal agreement) approach was low; however, it can be defended given the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93.

Acknowledgments

The researchers appreciate the patients, nurses, managers, and administrators of the Ali-Nasab and Shahid Ayatollah Qazi Tabatabaee hospitals. Approval to conduct this research, no. 5/74/474, was granted by the Hematology and Oncology Research Center affiliated with Tabriz University of Medical Sciences.

Ethical issues

None to be declared.

Conflict of interest

The authors declare no conflict of interest in this study.

References

  • 1.Rubio DM, Berg-Weger M, Tebb SS, Lee ES, Rauch S. Objectifying content validity: Conducting a content validity study in social work research. Social Work Research. 2003;27(2):94–104. doi: 10.1093/swr/27.2.94. [DOI] [Google Scholar]
  • 2.DeVon HA, Block ME, Moyle-Wright P, Ernst DM, Hayden SJ, Lazzara DJ. et al. A psychometric toolbox for testing validity and reliability. J Nurs Scholarsh. 2007;39(2):155–64. doi: 10.1111/j.1547-5069.2007.00161.x. [DOI] [PubMed] [Google Scholar]
  • 3.Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35(6):382–5. [PubMed] [Google Scholar]
  • 4.Waltz CF, Strickland O, Lenz ER. Measurement in nursing and health research. 4th ed. New York: Springer Publishing Company; 2010. [Google Scholar]
  • 5.Newman I, Lim J, Pineda F. Content validity using a mixed methods approach: Its application and development through the use of a table of specifications methodology. Journal of Mixed Methods Research. 2013;7(3):243–60. doi: 10.1177/1558689813476922. [DOI] [Google Scholar]
  • 6.Polit DF, Beck CT. The content validity index: are you sure you know what's being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–97. doi: 10.1002/nur.20147. [DOI] [PubMed] [Google Scholar]
  • 7.Wynd CA, Schmidt B, Schaefer MA. Two quantitative approaches for estimating content validity. West J Nurs Res. 2003;25(5):508–18. doi: 10.1177/0193945903252998. [DOI] [PubMed] [Google Scholar]
  • 8.Yaghmale F. Content validity and its estimation. Journal of Medical Education. 2003;3(1):25–7. [Google Scholar]
  • 9.Anastasi A. Psychological testing. 6th ed. New York: Macmillan; 1988. [Google Scholar]
  • 10.Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67. doi: 10.1002/nur.20199. [DOI] [PubMed] [Google Scholar]
  • 11.Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994. [Google Scholar]
  • 12.Beck CT. Content validity exercises for nursing students. J Nurs Educ. 1999;38(3):133–5. doi: 10.3928/0148-4834-19990301-08. [DOI] [PubMed] [Google Scholar]
  • 13.Rattray J, Jones MC. Essential elements of questionnaire design and development. J Clin Nurs. 2007;16(2):234–43. doi: 10.1111/j.1365-2702.2006.01573.x. [DOI] [PubMed] [Google Scholar]
  • 14. Carmines EG, Zeller RA. Reliability and validity assessment. Beverly Hills, CA: Sage publications; 1979.
  • 15.Cronbach LJ. Essentials of psychological testing. 3rd ed. New York: Harper and Row Publishers; 1970. [Google Scholar]
  • 16.Messick S. Evidence and ethics in the evaluation of tests. Educational Researcher. 1981;10(9):9–20. doi: 10.3102/0013189X010009009. [DOI] [Google Scholar]
  • 17.Fakhr-Movahedi A, Salsali M, Negharandeh R, Rahnavard Z. A qualitative content analysis of nurse-patient communication in Iranian nursing. Int Nurs Rev. 2011;58(2):171–80. doi: 10.1111/j.1466-7657.2010.00861.x. [DOI] [PubMed] [Google Scholar]
  • 18.Stewart MA. What is a successful doctor-patient interview? A study of interactions and outcomes. Soc Sci Med. 1984;19(2):167–75. doi: 10.1016/0277-9536(84)90284-3. [DOI] [PubMed] [Google Scholar]
  • 19.Mead N, Bower P. Patient-centredness: a conceptual framework and review of the empirical literature. Soc Sci Med. 2000;51(7):1087–110. doi: 10.1016/S0277-9536(00)00098-8. [DOI] [PubMed] [Google Scholar]
  • 20.Rodin G, Zimmermann C, Mayer C, Howell D, Katz M, Sussman J. et al. Clinician-patient communication: evidence-based recommendations to guide practice in cancer. Curr Oncol. 2009;16(6):42–9. doi: 10.3747/co.v16i6.432. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Nanchoff-Glatt M. Clinician-patient communication to enhance health outcomes. J Dent Hyg. 2009;83(4):179. [PubMed] [Google Scholar]
  • 22.Zandbelt LC, Smets EM, Oort FJ, Godfried MH, de Haes HC. Medical specialists' patient-centered communication and patient-reported outcomes. Med Care. 2007;45(4):330–9. doi: 10.1097/01.mlr.0000250482.07970.5f. [DOI] [PubMed] [Google Scholar]
  • 23.Pollak KI, Arnold RM, Jeffreys AS, Alexander SC, Olsen MK, Abernethy AP. et al. Oncologist communication about emotion during visits with patients with advanced cancer. J Clin Oncol. 2007;25(36):5748–52. doi: 10.1200/JCO.2007.12.4180. [DOI] [PubMed] [Google Scholar]
  • 24.Mead N, Bower P. Measuring patient-centredness: a comparison of three observation-based instruments. Patient Educ Couns. 2000;39(1):71–80. doi: 10.1016/S0738-3991(99)00092-0. [DOI] [PubMed] [Google Scholar]
  • 25.van Campen C, Sixma H, Friele RD, Kerssens JJ, Peters L. Quality of care and patient satisfaction: a review of measuring instruments. Med Care Res Rev. 1995;52(1):109–33. doi: 10.1177/107755879505200107. [DOI] [PubMed] [Google Scholar]
  • 26. Epstein RM, Street RL Jr. Patient-Centered Communication in cancer care: Promoting healing and reducing suffering. Bethesda, Maryland: National Cancer Institute, NIH Publication; 2007.
  • 27.Ghanbari A, Baghaei-Lakeh M. Nursing research priorities in the care of cancer patients from viewpoint of nurses. Bimonthly of Iranian Journal of Nursing. 2009;22(57):87–97. [Google Scholar]
  • 28.Beck CT, Gable RK. Ensuring content validity: an illustration of the process. J Nurs Meas. 2001;9(2):201–15. [PubMed] [Google Scholar]
  • 29.Wilson HS. Research in nursing. 2nd ed. California: Addison-Wesley; 1989. [Google Scholar]
  • 30.Tilden VP, Nelson CA, May BA. Use of qualitative methods to enhance content validity. Nurs Res. 1990;39(3):172–5. [PubMed] [Google Scholar]
  • 31.Benz CR, Newman I. Mixed methods research: Exploring the interactive continuum. 2nd ed. Carbondale, IL: Southern Illinois University Press; 2008. [Google Scholar]
  • 32.Priest J, McColl E, Thomas L, Bond S. Developing and refining a new measurement tool. Nursing Researcher. 1995;2(4):69–81. [Google Scholar]
  • 33. Bowling A. Research Methods in Health. Buckingham: Open University Press; 1997.
  • 34.Davis LL. Instrument review: Getting the most from a panel of experts. Applied Nursing Research. 1992;5(4):194–7. doi: 10.1016/S0897-1897(05)80008-4. [DOI] [Google Scholar]
  • 35.Safikhani S, Sundaram M, Bao Y, Mulani P, Revicki DA. Qualitative assessment of the content validity of the Dermatology Life Quality Index in patients with moderate to severe psoriasis. J Dermatolog Treat. 2013;24(1):50–9. doi: 10.3109/09546634.2011.631980. [DOI] [PubMed] [Google Scholar]
  • 36.Lawshe CH. A quantitative approach to content validity. Personnel psychology. 1975;28(4):563–75. doi: 10.1111/j.1744-6570.1975.tb01393.x. [DOI] [Google Scholar]
  • 37.Grant JS, Davis LL. Selection and use of content experts for instrument development. Res Nurs Health. 1997;20(3):269–74. doi: 10.1002/(sici)1098-240x. [DOI] [PubMed] [Google Scholar]
  • 38. Waltz C, Bausell BR. Nursing research: design statistics and computer analysis. Philadelphia: Davis FA; 1981.
  • 39.Abdollahpour E, Nejat S, Nourozian M, Majdzadeh R. The process of content validity in instrument development. Iranian Epidemiology. 2010;6(4):66–74. [Google Scholar]
  • 40.Cicchetti DV, Sparrow SA. Developing criteria for establishing interrater reliability of specific items: applications to assessment of adaptive behavior. Am J Ment Defic. 1981;86(2):127–37. [PubMed] [Google Scholar]
  • 41.Banna JC, Vera Becerra LE, Kaiser LL, Townsend MS. Using qualitative methods to improve questionnaires for Spanish speakers: assessing face validity of a food behavior checklist. J Am Diet Assoc. 2010;110(1):80–90. doi: 10.1016/j.jada.2009.10.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Lacasse Y, Godbout C, Sériès F. Health-related quality of life in obstructive sleep apnoea. Eur Respir J. 2002;19(3):499–503. doi: 10.1183/09031936.02.00216902. [DOI] [PubMed] [Google Scholar]
