PLOS One. 2022 Dec 21;17(12):e0278996. doi: 10.1371/journal.pone.0278996

Revised Hammersmith Scale for spinal muscular atrophy: Inter and intra-rater reliability and agreement

Danielle Ramsey 1,¤,*, Gita Ramdharry 2, Mariacristina Scoto 1, Francesco Muntoni 1,3, Amanda Wallace 2,4; on behalf of the SMA REACH UK network
Editor: Jae-Young Hong
PMCID: PMC9770369  PMID: 36542615

Abstract

The Revised Hammersmith Scale (RHS) for Spinal Muscular Atrophy (SMA) was designed as a psychometrically robust clinical outcome assessment of the physical abilities of patients with type 2 and 3 SMA. The reliability properties of the RHS have not yet been reported. A prospective RHS reliability study was undertaken in a UK cohort of experienced paediatric neuromuscular physiotherapists. Reliability testing was conducted via a virtual survey platform on two occasions two weeks apart. Through the virtual platform, participants scored videos of two RHS assessments, one of a child with SMA 2 and one of a child with SMA 3. Inter and intra-rater reliability was analysed using a type 3 Intraclass Correlation Coefficient (ICC). Intra-rater agreement was further analysed using Bland Altman (BA) Limits of Agreement (LOA) and plots. The acceptable inter and intra-rater variability was set as a change of ±2 points by the international team of expert physiotherapists who developed the RHS. For inter-rater agreement (n = 22 raters), the type 3 ICC was 0.989 (95% CI 0.944 to 1.00), and 97.7% of scores were within the acceptable limits of ±2 points. For intra-rater agreement (n = 21 raters), the type 3 ICC ranged from 0.922 to 1.0, with 97.6% of scores within the acceptable limits of ±2 points. The mean SMA 2 intra-rater difference was -0.10 (-0.6 to 0.4), with lower LOA -2.24 and upper LOA +2.04. The mean SMA 3 intra-rater difference was -0.05 (-0.6 to 0.5), with lower LOA -2.48 and upper LOA +2.38. Intra-rater scoring precision fell within the BA agreement limits of ±2 points. These results demonstrate that the RHS is highly reliable when used by experienced UK physiotherapists, with both inter and intra-rater variability of test scores confirmed to lie within ±2 points.

Introduction

Spinal Muscular Atrophy (SMA) is a neuromuscular condition characterised by biallelic mutations of the Survival Motor Neuron 1 (SMN1) gene [1]. The absence of SMN1 adversely affects the integrity of the anterior horn cell in the spinal cord leading to degeneration of alpha motor neurons and subsequent muscular atrophy, resulting in a varying clinical phenotype of SMA [2-4]. In the severest forms of SMA, type 0 and 1, patients will never achieve the ability to sit and survival ranges from the first few days or weeks of life to less than two years [4]. In types 2, 3 and 4 SMA survival into adulthood is expected and physical presentation differs, with sitting being the highest achieved physical ability for type 2 and walking the highest ability achieved in type 3 and 4 [4, 5].

The first targeted treatment for SMA, Nusinersen, was licensed by both the Food and Drug Administration (USA), in December 2016, and the European Medicines Agency, in June 2017 [6-8]. Several other potential therapeutics are also under investigation, or have received recent approval, such as risdiplam and onasemnogene abeparvovec [3, 6, 7, 9-11]. Functional scales are key clinical outcome measurement tools used to monitor SMA both in the clinical setting and to measure the efficacy of therapeutics being tested in clinical trials [1, 12-16]. The rapid progression of the field, promising early signs from therapeutics, and demands from regulatory authorities mean there is a greater need for scales which not only measure the disease-specific nature of this condition but also have the capacity to demonstrate improvement not seen before in the natural history of SMA.

The Revised Hammersmith Scale for SMA (RHS) was developed as a ‘next generation’ SMA-specific scale to meet the requirements of today’s climate: namely, to be psychometrically robust, grounded in clinical sensibility, and capable of capturing improvement in patients with type 2 and 3 SMA and the evolving treated phenotypes [17]. A large international pilot demonstrated that the RHS was able to capture a broad spectrum of ability across SMA types 2 and 3 and to distinguish between clinically different groups; although a small floor effect (n = 1) was noted, it has no ceiling effect [17]. The International Rare Diseases Research Consortium (IRDiRC) Task Force on Patient Centred Outcome Measures (PCOMs) recommends Rasch Measurement Theory (RMT) methodology for the design of sophisticated and psychometrically robust PCOMs, and states that a distinct benefit of the RMT approach is the ability to detect treatment benefit [18]. The IRDiRC recently highlighted the RHS as a good example of the use of RMT [18]. The RHS is cited in the National Institute for Health and Care Excellence (NICE) managed access agreement for Nusinersen as an endpoint for measuring treatment efficacy in patients with type 2 and 3 SMA [19, 20]. Furthermore, the RHS is also being used in clinical trials to measure treatment efficacy [21]. Inter-rater and intra-rater reliability have not yet been investigated, and therefore the reliability and agreement of this measurement tool are not documented. This study aimed to investigate and describe the inter and intra-rater reliability of the RHS when used by experienced neuromuscular physiotherapists in the UK for the assessment of patients with SMA type 2 and 3.

Methods

Reliability study design overview

A prospective reliability study was conducted, via a virtual platform, in a UK cohort of paediatric physiotherapists (raters) with experience in neuromuscular diseases. The raters viewed videos of an RHS assessment undertaken by the investigating physiotherapist on two patients with SMA: one with type 2 SMA and one with type 3 SMA. These videos were viewed and subsequently scored by the participating raters on two separate occasions via two secure, password-protected online surveys. This study is reported in keeping with the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) [22].

Revised Hammersmith Scale

The RHS is a clinician-rated, SMA-specific outcome measure containing 36 items which assess physical motor performance [17]. The scale assesses motor functional activities related to sitting, supine, rolling, prone, the ability to move and get up from the floor, balance, standing, run/walk, stairs, ascending and descending a step, and the ability to jump. Thirty-three items are graded on an ordinal 0, 1, 2 scale where 0 represents the least physical ability or function achieved, and 2 the highest. Three items are graded 0 and 1, where 0 represents an inability to complete the item and 1 represents achieving the item. Two timed tests are included within the scale, and WHO motor milestones can also be completed concurrently. The scale was developed using modern psychometric techniques and latent measurement theory via the Rasch Unidimensional Measurement Model (unrestricted and simple logistic model) [17, 18]. Rasch analysis identified the unidimensionality of the RHS as acceptable (t-test 7.3%, binomial test lower 95% confidence interval proportion 0.05) [17]. Reliability of the RHS was demonstrated to be good, with a high Person Separation Index (PSI) of 0.98 [17]. Dependency was seen between items tested on the right and left, and between rolling supine to prone and prone to supine; however, removing these items did not alter the PSI [17].
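The grading rules above imply the range of the RHS total score; the following minimal sketch assumes, as described here, that only the 33 ordinal items and 3 binary items contribute to the total (the timed tests and WHO milestones are recorded separately):

```python
# Total-score range implied by the RHS item grading described above.
# Assumption: timed tests and WHO milestones do not contribute to the total.
ORDINAL_ITEMS = 33   # each graded 0 / 1 / 2
BINARY_ITEMS = 3     # each graded 0 / 1

max_total = ORDINAL_ITEMS * 2 + BINARY_ITEMS * 1
min_total = 0

print(min_total, max_total)  # -> 0 69
```

A higher total therefore reflects greater physical ability, with 69 as the best achievable score under these grading rules.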

RHS training

Twenty-seven physiotherapists from 13 UK sites received training on the RHS at the North Star Network/SMA REACH UK meeting on 22nd April 2015. This is an annual meeting for North Star and SMA REACH UK centres/networks (https://www.northstardmd.com/ and http://www.smareachuk.org/) attended by neuromuscular physiotherapists who are involved in the care of patients with SMA and Duchenne Muscular Dystrophy. Additionally, physiotherapists at the SMA REACH UK sites (which in 2015 were London and Newcastle) who were unable to attend the meeting in April received direct training from the lead SMA REACH UK physiotherapists (DR, AM). Training consisted of provision of an RHS manual (version 1, 21.04.2015) and RHS testing proformas (version 17.03.2015); these documents correspond with the final published version of the RHS in 2017 [17]. Training included detailed and comprehensive discussion, with demonstration, of how to test and score each RHS item, and the physiotherapists had opportunities to ask questions throughout.

Raters

All UK physiotherapists trained in the use of the RHS were invited to participate in this study. As this scale had not been published at the time of conducting the reliability study the population of raters were the only ones in the UK who were trained in use of the RHS. As a result, the sample of raters invited to participate was representative of the whole population.

The inclusion criteria for the study were: all participants must have attended RHS training; have at least one SMA patient on their current clinical caseload; been a qualified physiotherapist for at least two years; in current employment as a physiotherapist; given their informed consent to participate; have completed and returned a non-disclosure agreement (required for viewing the testing videos via the virtual platform). Only participants who completed survey one (inter-rater testing) were invited to complete survey two (intra-rater testing).

The minimum number of participants for this study was set at 20. This was calculated using a sample size estimator for the Bland Altman Limits of Agreement, assuming the standard deviation of the repeated scores would be 2 and the desired precision would be 1.55 [23].
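A common approximation for this calculation treats "precision" as the half-width of the 95% confidence interval around each limit of agreement, with SE(LOA) ≈ √(3/n)·SD. The sketch below reconstructs the figure of 20 under that assumption; it is not necessarily the exact estimator of reference [23]:

```python
import math

def ba_loa_sample_size(sd: float, precision: float, z: float = 1.96) -> int:
    """Approximate n so that the 95% CI half-width of each Bland-Altman
    limit of agreement equals `precision`.
    Uses SE(LOA) ~= sqrt(3 * sd**2 / n), so half-width = z * sqrt(3/n) * sd.
    """
    n = 3.0 * (z * sd / precision) ** 2
    return math.ceil(n)

# SD of repeated scores = 2, target precision = 1.55 points
print(ba_loa_sample_size(2.0, 1.55))  # -> 20
```

With SD = 2 and precision = 1.55, the formula gives n = 3·(1.96·2/1.55)² ≈ 19.2, rounded up to 20, matching the minimum set for this study.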

Reliability testing protocol

Reliability testing was conducted by two identical online surveys, where participating raters scored video clips of two SMA patients being assessed by the investigating physiotherapist (DR).

Survey design

Reliability testing was conducted via two online surveys using the UCL Opinio 7 survey platform [24]. All questions in the surveys were mandatory and were designed with checks in place to ensure their completion. In each survey, raters viewed item-by-item video clips of the RHS assessment of two patients, one with SMA type 2 and one with SMA type 3, who were enrolled on the SMA REACH UK study and who gave their explicit informed consent to act as models for the reliability study. Each question on the survey corresponded to an item of the RHS and consisted of a video clip of that item being assessed, an online proforma of the RHS descriptors for that item [17], and a box for the rater to indicate their score. Raters were permitted to use the RHS manual (version 1.0, 21.04.2015) when scoring, to replicate good clinical practice when testing an item.

Inter-rater reliability was investigated with survey one (S1) and intra-rater reliability was assessed two weeks later in survey two (S2). S1 contained additional questions pertaining to professional experience and experience of using relevant neuromuscular outcome measures. Participants completed two RHS assessments (one SMA 2 and one SMA 3) per survey; the same two assessments were then viewed and scored again in S2, two weeks later. Two weeks were chosen to allow enough time between surveys for participants to remain blind to their previous scores while maintaining currency and engagement with the study. Each survey took approximately 45 minutes to 1 hour to complete, with the study occupying approximately 2 hours of each participant's time in total.

Reliability testing occurred no sooner than one month following RHS training. This allowed participants to familiarise themselves with the RHS in the clinical setting prior to undertaking the study.

Each survey had a start and stop date to control and restrict responses within a specified time frame and was open for 3 days allowing participants a degree of flexibility to choose a convenient time to complete the survey. During survey completion raters could not go back and change their scores, and after completion they no longer had access to the survey ensuring they remained blind to their previous results.

Patient videos

The RHS assessment videos used in this study were taken using a Go Pro Hero 3 White Edition (assessments were undertaken and videoed by DR). Videos were stored in accordance with NHS information governance guidelines to meet the standards of the NHS Information Governance toolkit [25]. The videos were edited using Windows Movie Maker (version 2012). Ethical approval via the SMA REACH UK project was granted for the collection and use of these videos in this study (London–Bromley REC reference 13/LO/1748) and informed consent was obtained for the participation in and recording of the assessment videos (this included parental/guardian informed consent together with minors giving their informed assent).

The reliability testing surveys were designed to minimise any risks of accidental or deliberate disclosure of assessment footage. The resultant protocol was approved by the SLMS UCL Information Services Division Information Governance Lead, Caldicott Guardian and Deputy Medical Director at Great Ormond Street Hospital. Each survey was password protected, and participants received a unique survey URL link, username and key. Each unique URL could only be used once.

Statistical analysis

The RHS is an ordinal scale which produces an overall total numeric score. The numeric total score was used to analyse reliability. The level of agreement of the RHS total scores between raters and intra-rater was also used to determine reliability.

The type 3 Intra-class Correlation Coefficient (ICC), a two-way mixed model for absolute agreement (single measures), was chosen for both inter and intra-rater analysis; this was due to the fixed population of raters studied, and absolute agreement was chosen to investigate systematic error [26]. The level of agreement between/within rater scoring, i.e. the degree to which scores differed, was also investigated [27]. The study design ensured consistency of patient assessment via a single videoed assessment of each patient, which the raters then scored twice, two weeks apart; patient change over time or performance in repeated assessments was therefore not a factor in this study. This study focussed solely on the precision and reliability of the raters' scoring of these assessment videos. To investigate the level of agreement (precision) of scale scoring, expert physiotherapists were consulted regarding setting the level of agreement for the Bland Altman analysis [17]. Evidenced values for the level of agreement were unavailable at the time of this study because the RHS was a new scale entering a pilot phase of testing, so properties such as scale variance had not yet been investigated. The experts agreed that the limits of agreement (precision) of the anticipated differences between raters/intra-rater for the total score of the test would lie within ±2 points overall, and ±2 was therefore set as the limits of agreement (precision) for this study. This level was based upon their expert experience of using the Hammersmith Functional Motor Scale (HFMS) and Hammersmith Functional Motor Scale Expanded (HFMSE), and their involvement in developing and testing the RHS. The levels of agreement set by the experts at the time of this study were not dissimilar to the scale variance reported in the literature for the HFMS, HFMSE, and Modified HFMS [28-31]. Descriptive statistics were used to interpret the level of agreement.
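A single-measures, absolute-agreement ICC of this kind can be sketched from a standard two-way ANOVA decomposition. The formula below is McGraw and Wong's ICC(A,1), whose point estimate under a two-way mixed model matches the two-way random absolute-agreement formula; the data are hypothetical, not the study's:

```python
import numpy as np

def icc_a1(scores: np.ndarray) -> float:
    """Single-measures ICC for absolute agreement from a two-way model
    (McGraw & Wong ICC(A,1)); rows = subjects/videos, columns = raters."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols            # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse))

# Hypothetical scores: 3 videos each rated by 2 raters
data = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(round(icc_a1(data), 3))  # -> 0.889
```

Note that rater 2 consistently scores one point higher than rater 1 in this toy example; an absolute-agreement ICC penalises that systematic offset, whereas a consistency ICC would not, which is why absolute agreement was the appropriate choice for investigating systematic error.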

The Bland Altman (BA) Limits of Agreement (LOA) analysis was conducted to investigate intra-rater reliability with two replicate observations and multiple raters. This test was chosen as it is more grounded in the data, easier to interpret, and clinically useful, since it analyses the magnitude of measurement in addition to the agreement [32-34]. Data were presented in the form of descriptive statistics: mean intra-rater difference and 95% CI, upper and lower LOA with 95% CI, and BA plots. The pre-set limits of agreement for this test remained at an acceptable difference of ±2 points. Rater demographics were analysed using descriptive statistics.
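The core Bland Altman computation is the mean difference ± 1.96 SD of the differences. The sketch below uses hypothetical S2 − S1 difference scores and the common large-sample approximation SE(LOA) ≈ √(3/n)·SD for the CI around each limit, which may differ from the exact method of references [32-34]:

```python
import math

def bland_altman(diffs):
    """Mean paired difference and 95% limits of agreement, with an
    approximate 95% CI half-width for each limit (SE ~= sqrt(3/n) * sd)."""
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    lower = mean_d - 1.96 * sd   # lower limit of agreement
    upper = mean_d + 1.96 * sd   # upper limit of agreement
    return {
        "mean_diff": mean_d,
        "loa": (lower, upper),
        "loa_ci_halfwidth": 1.96 * math.sqrt(3.0 / n) * sd,
    }

# Hypothetical S2 - S1 RHS total-score differences for eight raters
result = bland_altman([-1, 0, 0, 1, 0, -1, 1, 0])
print(result["mean_diff"], result["loa"])
```

In this study the resulting limits were then compared against the pre-set ±2-point margin rather than interpreted in isolation.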

Ethical approval

The study protocol was given a favourable ethical opinion by the UCL Research Ethics Committee (REC) on 11/05/2015 (REC reference 6639/001); subsequent amendments were approved by the committee on 08/06/2015 and 14/08/2015. Research & Development approval was granted from the Great Ormond Street Hospital joint research office to conduct this study with NHS staff. Local research and development approval from NSCN NHS sites was sought to include their staff in this study. All raters participating in the study gave their written informed consent to participate.

This study was affiliated to the longitudinal observational cohort study SMA REACH UK. SMA REACH UK was granted ethical approval (London–Bromley REC reference 13/LO/1748) to record assessment videos of participants who had given their informed consent for training purposes (for minors this included parents'/guardians' informed consent together with minors giving their informed assent). To use these videos in this reliability study, a substantial amendment for SMA REACH UK 13/LO/1748 was submitted to the Bromley NHS REC on 28/01/2015 and was granted a favourable opinion on 05/02/2015.

Results

When interpreting the results presented for both inter and intra-rater reliability it is important to note that the RHS is scored in whole numbers; it is not possible to achieve a decimalised score. To allow a more in-depth understanding and analysis of this study, decimalised scores are presented. However, for clinical meaningfulness and interpretation these values should be rounded up or down to a whole number, as this reflects how the RHS would be scored clinically.

Participants

Twenty-two physiotherapists gave their informed consent and were included as participants (raters) in this study. Twenty-one physiotherapists met the full inclusion criteria for participation; one physiotherapist contacted the investigator stating that they met all but one of the inclusion criteria, as they had not assessed an SMA patient in the last year. This participant had significant proven experience (14 years) in the neuromuscular field. A check sub-analysis found that this participant was not a significant outlier from a clinical or statistical perspective, and they were therefore included in this study.

A total of 22 participants completed S1 (inter-rater testing) and 21 participants completed S2 (intra-rater testing). The participants were all experienced physiotherapists with a minimum of 5 years post-qualification experience, median 15.25 years (IQR 10 to 26), and had at least one year of experience treating children with neuromuscular conditions, median 9.5 years (IQR 4 to 14). There was wide variability in the number of SMA patients seen by each participant in the last year, varying from 0 (n = 1) to 50; the distribution was positively skewed, with a median of 11 SMA assessments in the last year (IQR 6 to 20), reflecting the difference in patient distribution across specialist centres in the UK.

The functional scales reported to be used routinely by the participants to assess patients with SMA were the North Star Ambulatory Assessment (45.5%) and the Hammersmith Functional Motor Scale (40.9%); the Revised Hammersmith Scale (RHS) was used routinely by 22.7% of participants and the Hammersmith Functional Motor Scale Expanded by 18.2%. A very small number of participants, n = 2 (9.1%), stated they were not familiar with the Hammersmith Functional Motor Scale, Hammersmith Functional Motor Scale Expanded or North Star Ambulatory Assessment scales; however, all were aware of the RHS following the training.

The surveys (S1 and S2) were completed by 81.8% of participants at 9-10 weeks following initial training (June-July 2015) and by 18.2% (n = 4) of participants at 39 weeks following training. The four participants in the second wave of testing (39 weeks post-training) included three physiotherapists who were invited to participate in the original testing but were unavailable (n = 1) or unable to take part due to increased work pressures (n = 2). The final participant could not participate initially because their local trust research and development approval was received after the original study testing had already begun, and so they were invited to the second round of testing. A sub-analysis, using the Mann Whitney U test, was conducted to investigate whether this discrepancy in the timing of the investigation since training had any effect on inter and intra-rater scoring; no clinical or statistical differences in scores were observed (inter-rater scoring SMA 2 p = 1.00, SMA 3 p = 0.081; intra-rater scoring SMA 2 p = 0.763, SMA 3 p = 0.120).

Inter-rater reliability–survey 1 (n = 22)

The inter-rater reliability results are presented in Table 1 and Fig 1. The mean RHS total score for SMA 2 was 13.2 (95% CI 12.8, 13.7) and for SMA 3 was 41.5 (40.9, 42.0). The inter-rater reliability ICC (type 3) was 0.989 (0.944 to 1.00), demonstrating a very good level of agreement between raters according to the categories described by Altman [35]. With regard to the ±2 expert-defined limits of acceptable agreement, for both SMA 2 and SMA 3 the 95% confidence intervals for RHS scores sat within ±1 point of the mean. Furthermore, when looking at the entire set of values, 100% of SMA 2 scores sat within ±2 points, compared with 95.5% of values in the SMA 3 test, demonstrating a high level of agreement between raters. Inter-rater reliability of the RHS total scores in survey 2 returned an ICC (type 3) value of 0.997 (0.984, 1.0), again confirming high inter-rater reliability for this scale.

Table 1. Inter-rater reliability n = 22 raters.

            RHS Total Score                                            % RHS total scores within ± points of the mean
            Mean (95% CI)       SD    Range either side of mean        ±3     ±2     ±1     ±0
SMA 2       13.2 (12.8, 13.7)   1.07  -2 to +2                         100    100    86.4   27.2
SMA 3       41.5 (40.9, 42.0)   1.14  -3 to +2                         100    95.5   77.3   36.4
SMA 2 & 3   -                   -     -3 to +2                         100    97.7   84.1   29.5

SD: Standard Deviation; CI: Confidence Intervals.

Fig 1. SMA type 2 and 3 RHS inter-rater total scores with mean and ± 2 expert opinion of acceptable margin of error.

Fig 1

Fig 1 highlights rater 5 as an outlier in the SMA 3 assessment, with the greatest difference in score from the mean at -3; their SMA 2 assessment did, however, sit within ±2 of the mean.

Intra-rater reliability–survey 2 (n = 21)

The intra-rater results are presented in Table 2. Intra-rater analysis in the form of Bland Altman plots are presented in Figs 2 and 3.

Table 2. Intra-rater reliability n = 21 raters.

            RHS Total Score           Intra-rater difference                                                               % RHS difference scores within ± points of the mean
            Mean (95% CI)       SD    Mean (95% CI)       BA Lower LOA (95% CI)   BA Upper LOA (95% CI)   Range            ±3     ±2     ±1     ±0
SMA 2       13.3 (12.8, 13.8)   1.15  -0.10 (-0.6, 0.4)   -2.24 (-3.04, -1.44)    +2.04 (1.24, 2.84)      -2 to +3         100    95.2   85.7   52.4
SMA 3       41.6 (41.1, 42.0)   1.03  0.05 (-0.6, 0.5)    -2.48 (-3.4, -1.56)     +2.38 (1.46, 3.30)      -2 to +2         100    100    76.2   23.8
SMA 2 & 3   -                   -     -                   -                       -                       -2 to +3         100    97.6   81.0   38.1

BA: Bland Altman; LOA: Limits of agreement; CI: Confidence Intervals, SD: Standard Deviation.

Fig 2. SMA 2 Intra-rater Bland Altman plot.

Fig 2

Grey dotted line indicates 0 mean difference between tests, grey solid lines indicate the expert set limits of agreement. Dots–grey outline 1 rater, black outline 2 raters, grey filled dot 3 raters, black filled dot 4 raters.

Fig 3. SMA 3 Intra-rater Bland Altman plot.

Fig 3

Grey dotted line indicates 0 mean difference between tests, grey solid lines indicate the expert set limits of agreement. Dots–grey outline 1 rater, black outline 2 raters, grey filled dot 3 raters, black filled dot 4 raters.

The mean RHS total score for SMA 2 was 13.3 (12.8, 13.8) and for SMA 3 was 41.6 (41.1, 42.0); these raw scores are almost identical to the mean scores from the inter-rater testing in survey 1, with the 95% confidence interval within ±1 point of the mean for both SMA types. Intra-rater reliability for the 21 raters was found to be very high, with ICC (type 3) values for individual raters ranging from 0.922 to 1.0 (Table 3).

Table 3. Intra-rater type 3 ICC values.

Rater ICC (95% CI)
1 0.997 (0.560, 1.00)
2 0.998 (0.923, 1.00)
3 1.00
4 0.999 (0.975, 1.00)
5 0.922 (-0.092, 1.00)
6 0.999 (0.973, 1.00)
7 0.999 (0.548, 1.00)
8 0.999 (0.230, 1.00)
9 0.999 (0.977, 1.00)
10 0.977 (0.906, 1.00)
11 0.999 (0.548, 1.00)
12 0.997 (0.578, 1.00)
13 1.00
14 0.998 (0.912, 1.00)
16 0.999 (0.548, 1.00)
17 0.992 (0.209, 1.00)
18 0.999 (0.978, 1.00)
19 1.00
20 1.00
21 0.999 (0.975, 1.00)
22 0.999 (0.174, 1.00)

ICC: Intra-class Correlation Co-efficient; CI: Confidence Intervals.

Within-pair differences between S1 and S2 RHS total scores were calculated for each rater. The mean intra-rater difference was -0.10 (-0.6, 0.4) for SMA 2 and 0.05 (-0.6, 0.5) for SMA 3, indicating high confidence that there was no observable difference between testing scores for SMA 2 or SMA 3. The BA plots, Figs 2 and 3, show random scatter for both the SMA 2 and SMA 3 assessments, indicating no systematic bias in the results. The BA LOA for SMA 2 were -2.24 to +2.04, and for SMA 3 -2.48 to +2.38, see Table 2. When rounding to whole score values, as would be seen clinically, both LOAs sat within the ±2 set by the expert panel. The wide confidence intervals surrounding the upper and lower LOA may be indicative of a potential type 2 error due to the small sample size; nevertheless, 97.6% of actual values fell within ±2.

Discussion

This study has for the first time investigated the reliability properties of the Revised Hammersmith Scale. National testing was conducted using a free and secure online survey system and was deemed a success following feedback from participants, with a high response rate in both inter and intra-rater testing (85.7% and 94.4% respectively). The protocol employed for this study could easily be replicated for both national and international training, and the format of reliability testing via video item analysis is similar to that used to test clinical evaluator reliability in clinical trials [28, 36, 37]. Video analysis via a virtual platform is useful for establishing inter/intra-observer agreement and quality with regard to scoring the items, but does not represent how a physiotherapist would conduct an assessment in person. This is a limitation of this study, and any interpretation of these results should take it into account. The raters within this study were highly experienced neuromuscular physiotherapists and were all active participants within a specialist national network (SMA REACH UK) which involves regular training and updates to improve clinical practice; it can therefore be assumed their clinical skills in undertaking this test with a patient would be sufficient. Furthermore, it would not have been feasible, or indeed ethical, to ask over 20 physiotherapists to assess the same patient(s) in this study due to the issues of fatigue and burden for the patient. To overcome this limitation and ensure the quality of RHS testing technique in the future, North Star/SMA REACH UK network physiotherapists could be asked to video an assessment which would then be reviewed for quality assurance purposes by the SMA REACH UK team, replicating the approach used in clinical trials.

This study has demonstrated that national reliability testing within the SMA REACH UK neuromuscular network can be conducted virtually. This further supports the function of the SMA REACH UK network in ensuring the UK is clinical-trial ready. SMA REACH UK is co-ordinating the UK's implementation of the nusinersen managed access agreement (MAA). This study has demonstrated the inter- and intra-rater reliability of the RHS (an endpoint of treatment efficacy in the MAA) when used by physiotherapists within this network, and has also demonstrated that the network is effective in delivering training both in person and virtually. The raters in this study were extremely experienced physiotherapists, with a median of 9.5 years' experience in neuromuscular conditions; caution should therefore be applied in generalising results to less experienced physiotherapists.

This study has, for the first time, described reliability and agreement separately for patients with type 2 and 3 SMA. They are distinctly different phenotypes, and the results demonstrate high reliability and agreement for both types of SMA using this psychometrically robust scale.

Although recommended by the FDA as the statistical test of choice to assess reliability [38], the ICC value in the absence of clinical context can easily be misinterpreted. Kottner and Streiner [27] highlight reliability and agreement as distinctly separate concepts: the ICC is a ratio concerned with the variability of scores, whereas agreement is the degree to which measures differ/agree, with the latter being more straightforward to interpret and grounded in clinical sensibility. This is the first study to look at both the reliability and agreement properties of an SMA functional scale; Bland Altman analysis has not previously been employed to assess the agreement of outcome measures in SMA. This form of analysis provides greater understanding of the scale with regard to agreement in relation to test scoring (precision).

This study is transparent regarding the clinical meaningfulness and interpretation of inter and intra-rater reliability of the RHS, as it provides raters' raw scores (Tables 1 and 2, Fig 1) and the expert-set limits of agreement (precision) against which these are compared (Figs 2 and 3). This study has identified that the inter and intra-rater measurement error/precision of the RHS, when used by UK physiotherapists within the SMA REACH UK network, is conservatively ±2 points. Therefore, observed changes in RHS scores between evaluations that lie within ±2 points should be interpreted with caution, as they may not represent clinical change but rather reflect the reliability of the rater's scoring. In cases where a physiotherapist has measured a difference in ability of ±3 points, this is unlikely to be due to measurement error. It was not within the scope of this study to investigate the natural history of change within patients over time using the RHS; further investigations of longitudinal natural history in SMA 2 and 3 using the RHS are currently in progress.

The RHS has high inter and intra-rater reliability and agreement when being used by experienced neuromuscular physiotherapists from the North Star/SMA REACH UK network.

Conclusions

This is the first study to report on the inter and intra-rater reliability properties of the RHS. It has demonstrated that the RHS has high inter and intra-rater reliability from a statistical perspective and anchors this to the clinical interpretation of agreement (precision of between/within-rater scoring) as ±2 points for both inter and intra-rater reliability. The virtual approach of conducting the reliability testing nationally achieved a high response rate, was cost effective, and could easily be repeated in the future. Whilst the reliability of the RHS has been demonstrated in a UK cohort of experienced neuromuscular physiotherapists, further work is required to determine the minimal clinically important difference of the RHS, the test-retest reliability of the scale, and change over time regarding natural history and longitudinal trajectories.

Supporting information

S1 Dataset. Study minimal dataset.

This file contains the minimal dataset underpinning the results for this study. Demographic data is not included in this minimal dataset due to the risk of indirectly identifying participants. Please contact Dr Salma Samsuddin, SMA REACH UK & ISMAC UK Trial Manager via salma.samsuddin@gosh.nhs.uk for any queries regarding data access.

(XLSX)

Acknowledgments

Expert physiotherapists involved in RHS development: Anna Mayhew, Marion Main, Elena Mazzone, Jacqueline Montes

North Star Network/SMA REACH UK physiotherapists

SMA REACH UK network:

Dr Francesco Muntoni (Chief investigator), Great Ormond Street Hospital & UCL Great Ormond Street Institute of Child Health

Dr Anna Mayhew (Collaborator site), Dr Volker Straub, Dr Chiara Marini-Bettolo, Institute of Genetic Medicine, Newcastle University & The Newcastle upon Tyne Hospitals NHS Foundation Trust

Dr Deepak Parasuraman, Birmingham Heartlands Hospital, University Hospitals Birmingham NHS Foundation Trust

Dr Anirban Majumdar, Dr Kayal Vijayakumar, Bristol Royal Hospital for Children, University Hospitals Bristol NHS Foundation Trust

Dr Iain Horrocks, Royal Hospital for Children, NHS Greater Glasgow & Clyde

Dr Anne-Marie Childs, Leeds Teaching Hospitals NHS Trust

Dr Stefan Spinty, Alder Hey Children’s NHS Foundation Trust

Dr Elizabeth Wraige, Dr Vasantha Gowda, Evelina London Children’s Hospital, Guy’s & St Thomas’ NHS Foundation Trust

Dr Imelda Hughes, Royal Manchester Children’s Hospital, Manchester University NHS Foundation Trust

Dr Gabby Chow, Nottingham University Hospitals NHS Trust

Professor Tracey Willis, The Robert Jones and Agnes Hunt Orthopaedic Hospital NHS Foundation Trust

Dr Sithara Ramdas, Oxford Children’s Hospital, Oxford University Hospitals NHS Foundation Trust

Dr Christian deGoede, Royal Preston Hospital, Lancashire Teaching Hospitals NHS Foundation Trust

Dr Min Ong, Sheffield Children’s NHS Foundation Trust

Dr Marjorie Illingworth, Southampton General Hospital, University Hospital Southampton NHS Foundation Trust

Dr Nahim Hussain, Leicester Royal Infirmary, University Hospitals of Leicester NHS Trust

Dr Elma Stephens, Royal Aberdeen Children’s Hospital, NHS Grampian

Dr Deepa Krishnakumar, Addenbrooke’s Hospital, Cambridge University Hospitals NHS Foundation Trust

Data Availability

The minimal dataset underpinning the results of this study can be found in the Supporting Information S1 Dataset. This does not include detail regarding the demographics of the raters included in the study; this information, together with their affiliation with the SMA REACH UK sites and the small population studied, would mean that even anonymised data would contain several indirect identifiers that may risk breaking anonymity. Minimal information pertaining to the raters can be found within the participants section of the results. The minimal dataset can also be found within the results section in Tables 1, 2 and 3 and within Figs 1, 2 and 3. Data beyond the minimal dataset required for this study are not publicly accessible because they contain sensitive data pertaining to human research participants, and sharing them may compromise anonymity. However, external parties who meet the criteria for access to confidential data can request data in aggregate form from the SMA REACH UK steering committee. Please contact Dr Salma Samsuddin, SMA REACH UK & ISMAC UK Trial Manager, via salma.samsuddin@gosh.nhs.uk for any queries regarding data access.

Funding Statement

This study was supported, in the UK, by the SMA REACH UK project (http://www.smareachuk.org/). FM is the chief investigator of the SMA REACH UK project. Commercial funding for the SMA REACH UK project is provided by Biogen Inc (REC reference: 13/LO/1748, IRAS project ID: 122521), via UCL and GOSH. Historically, funding of the SMA REACH UK Project has also been provided by the SMA Trust and Muscular Dystrophy UK (07DN02; 37787; http://www.musculardystrophyuk.org/grants/clinical-trial-coordinators/), the MRC Translational Research Centre at UCL and Newcastle (MR/K501074/1), and the National Institute for Health Research Biomedical Research Centre (515048) at Great Ormond Street Hospital for Children NHS Foundation Trust and University College London (http://www.gosh.nhs.uk/research-and-innovation/nihr-great-ormond-street-brc/about-brc).
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Arnold ES, Fischbeck KH. Chapter 38—Spinal muscular atrophy. In: Geschwind DH, Paulson HL, Klein C, editors. Handbook of Clinical Neurology. Volume 148 (3rd Series) Neurogenetics, Part II: Elsevier; 2018. pp. 591–601.
2. Crawford TO. Spinal Muscular Atrophies. In: Jones HR, De Vivo DC, Darras BT, editors. Neuromuscular Disorders of Infancy, Childhood & Adolescence: Butterworth Heinemann; 2003. pp. 145–66.
3. Groen EJN, Talbot K, Gillingwater TH. Advances in therapy for spinal muscular atrophy: promises and challenges. Nat Rev Neurol. 2018;14(4):214–24. doi: 10.1038/nrneurol.2018.4
4. Mercuri E, Bertini E, Iannaccone ST. Childhood spinal muscular atrophy: controversies and challenges. Lancet Neurol. 2012;11(5):443–52. doi: 10.1016/S1474-4422(12)70061-3
5. Darras BT, Markowitz JA, Monani UR, De Vivo DC. Chapter 8—Spinal Muscular Atrophies. In: Neuromuscular Disorders of Infancy, Childhood, and Adolescence. 2nd ed. San Diego: Academic Press; 2015. pp. 117–45.
6. Hoy SM. Nusinersen: First Global Approval. Drugs. 2017;77(4):473–9. doi: 10.1007/s40265-017-0711-7
7. Christie-Brown V, Mitchell J, Talbot K. The SMA Trust: the role of a disease-focused research charity in developing treatments for SMA. Gene Ther. 2017;24(9):544–546. doi: 10.1038/gt.2017.47
8. European Medicines Agency. Assessment report: Spinraza. Committee for Medicinal Products for Human Use (CHMP); 2017 April 21. [Cited 2021 October 29]. Available from: https://www.ema.europa.eu/en/documents/assessment-report/spinraza-epar-public-assessment-report_en.pdf
9. Scoto M, Finkel RS, Mercuri E, Muntoni F. Therapeutic approaches for spinal muscular atrophy (SMA). Gene Ther. 2017;24(9):514–519. doi: 10.1038/gt.2017.45
10. Tizzano EF, Finkel RS. Spinal muscular atrophy: A changing phenotype beyond the clinical trials. Neuromuscul Disord. 2017;27(10):883–9. doi: 10.1016/j.nmd.2017.05.011
11. Mendell JR, Al-Zaidy SA, Lehman KJ, McColly M, Lowes LP, Alfano LN, et al. Five-Year Extension Results of the Phase 1 START Trial of Onasemnogene Abeparvovec in Spinal Muscular Atrophy. JAMA Neurol. 2021;78(7):834–41. doi: 10.1001/jamaneurol.2021.1272
12. Montes J, Gordon AM, Pandya S, De Vivo DC, Kaufmann P. Clinical outcome measures in spinal muscular atrophy. J Child Neurol. 2009;24(8):968–78. doi: 10.1177/0883073809332702
13. Cano SJ, Mayhew A, Glanzman AM, Krosschell KJ, Swoboda KJ, Main M, et al. Rasch analysis of clinical outcome measures in spinal muscular atrophy. Muscle Nerve. 2014;49(3):422–30. doi: 10.1002/mus.23937
14. Mazzone E, Montes J, Main M, Mayhew A, Ramsey D, Glanzman AM, et al. Old measures and new scores in spinal muscular atrophy patients. Muscle Nerve. 2015;52(3):435–7. doi: 10.1002/mus.24748
15. Finkel R, Bertini E, Muntoni F, Mercuri E. 209th ENMC International Workshop: Outcome Measures and Clinical Trial Readiness in Spinal Muscular Atrophy, 7–9 November 2014, Heemskerk, The Netherlands. Neuromuscul Disord. 2015;25(7):593–602. doi: 10.1016/j.nmd.2015.04.009
16. Chiriboga CA, Swoboda KJ, Darras BT, Iannaccone ST, Montes J, De Vivo DC, et al. Results from a phase 1 study of nusinersen (ISIS-SMN(Rx)) in children with spinal muscular atrophy. Neurology. 2016;86(10):890–97. doi: 10.1212/WNL.0000000000002445
17. Ramsey D, Scoto M, Mayhew A, Main M, Mazzone ES, Montes J, et al. Revised Hammersmith Scale for spinal muscular atrophy: A SMA specific clinical outcome assessment tool. PLoS One. 2017;12(2):e0172346. doi: 10.1371/journal.pone.0172346
18. Morel T, Cano SJ. Measuring what matters to rare disease patients: reflections on the work by the IRDiRC taskforce on patient-centered outcome measures. Orphanet J Rare Dis. 2017;12(1):171. doi: 10.1186/s13023-017-0718-x
19. National Institute for Health and Care Excellence (NICE). Managed Access Agreement: Nusinersen (Spinraza) for the treatment of 5q Spinal Muscular Atrophy. 2019 July. [Cited 2021 October 29]. Available from: https://www.nice.org.uk/guidance/ta588/resources/managed-access-agreement-july-2019-pdf-6842812573
20. National Institute for Health and Care Excellence (NICE). Nusinersen for treating spinal muscular atrophy. 2019 July 24. [Cited 2021 October 29]. Available from: https://www.nice.org.uk/guidance/ta588
21. Scholar Rock. Scholar Rock Announces Initiation of Patient Dosing in Phase 2 Trial of SRK-015 in Spinal Muscular Atrophy. GlobeNewswire. 2019 May 8. [Cited 2021 October 29]. Available from: https://www.globenewswire.com/news-release/2019/05/08/1819741/0/en/Scholar-Rock-Announces-Initiation-of-Patient-Dosing-in-Phase-2-Trial-of-SRK-015-in-Spinal-Muscular-Atrophy.html
22. Kottner J, Audigé L, Brorson S, Donner A, Gajewski BJ, Hróbjartsson A, et al. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol. 2011;64(1):96–106. doi: 10.1016/j.jclinepi.2010.03.002
23. Altman DG, Machin D, Bryant TN, Gardner MJ. Statistics with Confidence: Confidence Intervals and Statistical Guidelines. 2nd ed. BMJ Books; 2013.
24. UCL. Opinio. 2021. Available from: https://www.ucl.ac.uk/isd/services/learning-teaching/e-learning-services-for-staff/e-learning-core-tools/opinio
25. Department of Health. Information Governance Toolkit. 2015. Available from: https://web.archive.org/web/20150913073952/https:/www.igt.hscic.gov.uk/
26. Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J Chiropr Med. 2016;15(2):155–63. doi: 10.1016/j.jcm.2016.02.012
27. Kottner J, Streiner DL. The difference between reliability and agreement. J Clin Epidemiol. 2011;64(6):701–2. doi: 10.1016/j.jclinepi.2010.12.001
28. Mercuri E, Messina S, Battini R, Berardinelli A, Boffi P, Bono R, et al. Reliability of the Hammersmith functional motor scale for spinal muscular atrophy in a multicentric study. Neuromuscul Disord. 2006;16(2):93–98.
29. Kaufmann P, McDermott MP, Darras BT, Finkel R, Kang P, Oskoui M, et al. Observational study of spinal muscular atrophy type 2 and 3: functional outcomes over 1 year. Arch Neurol. 2011;68(6):779–86. doi: 10.1001/archneurol.2010.373
30. Kaufmann P, McDermott MP, Darras BT, Finkel RS, Sproule DM, Kang PB, et al. Prospective cohort study of spinal muscular atrophy types 2 and 3. Neurology. 2012;79(18):1889–1897. doi: 10.1212/WNL.0b013e318271f7e4
31. Krosschell KJ, Scott CB, Maczulski JA, Lewelt AJ, Reyna SP, Swoboda KJ. Reliability of the Modified Hammersmith Functional Motor Scale in young children with spinal muscular atrophy. Muscle Nerve. 2011;44(2):246–51. doi: 10.1002/mus.22040
32. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307–10.
33. Bland JM, Altman DG. Comparing methods of measurement: why plotting difference against standard method is misleading. Lancet. 1995;346(8982):1085–7. doi: 10.1016/s0140-6736(95)91748-9
34. Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8(2):135–60. doi: 10.1177/096228029900800204
35. Altman DG. Practical Statistics for Medical Research. London: Chapman & Hall; 1991.
36. Glanzman AM, Mazzone ES, Young SD, Gee R, Rose K, Mayhew A, et al. Evaluator Training and Reliability for SMA Global Nusinersen Trials. J Neuromuscul Dis. 2018;5(2):159–66. doi: 10.3233/JND-180301
37. Krosschell KJ, Maczulski JA, Crawford TO, Scott C, Swoboda KJ. A modified Hammersmith functional motor scale for use in multi-center research on spinal muscular atrophy. Neuromuscul Disord. 2006;16(7):417–26. doi: 10.1016/j.nmd.2006.03.015
38. U.S. Department of Health and Human Services Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), Center for Devices and Radiological Health (CDRH). Guidance for Industry. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. 2009 December. [Cited 2021 October 29]. Available from: https://www.fda.gov/media/77832/download

Decision Letter 0

Joseph Donlan

16 Jun 2022

PONE-D-21-36288
Revised Hammersmith Scale for Spinal Muscular Atrophy: inter and intra-rater reliability and agreement
PLOS ONE

Dear Dr. Ramsey,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please note that we have only been able to secure a single reviewer to assess your manuscript. We are issuing a decision on your manuscript at this point to prevent further delays in the evaluation of your manuscript. Please be aware that the editor who handles your revised manuscript might find it necessary to invite additional reviewers to assess this work once the revised manuscript is submitted. However, we will aim to proceed on the basis of this single review if possible.  Your manuscript has been assessed by an expert reviewer, whose comments are appended below. The reviewer has highlighted concerns about some aspects of the methodology and statistical analysis. Please ensure you respond to each point carefully in your response to reviewers document, and modify your manuscript accordingly.

Please submit your revised manuscript by Jul 30 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Joseph Donlan

Editorial Office

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. You indicated that you had ethical approval for your study. In your Methods section, please ensure you have also stated whether you obtained consent from parents or guardians of the minors included in the study or whether the research ethics committee or IRB specifically waived the need for their consent.

3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. 

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

4. Thank you for stating the following in the Financial Disclosure section: "This study was supported, in the UK, by the SMA REACH UK project (www.smareachuk.org). FM is the chief investigator of the SMA Reach project. Commercial funding for the SMA REACH UK project is provided by Biogen Inc (REC reference: 13/LO/1748, IRAS project ID: 122521), via UCL and GOSH. Historically, funding of the SMA REACH UK Project has also been provided by the SMA Trust and Muscular Dystrophy UK (07DN02; 37787 http://www.musculardystrophyuk.org/grants/clinical-trial-coordinators/), the MRC Translational Research Centre at UCL and Newcastle (MR/K501074/1), and the National Institute for Health Research Biomedical Research Centre (515048) at Great Ormond Street Hospital for Children NHS Foundation Trust and University College London  (http://www.gosh.nhs.uk/research-and-innovation/nihr-great-ormond-street-brc/about-brc). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

We note that you received funding from a commercial source: Biogen Inc

Please provide an amended Competing Interests Statement that explicitly states this commercial funder, along with any other relevant declarations relating to employment, consultancy, patents, products in development, marketed products, etc. 

Within this Competing Interests Statement, please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. 

Please include your amended Competing Interests Statement within your cover letter. We will change the online submission form on your behalf.

5. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

6. Please note that in order to use the direct billing option the corresponding author must be affiliated with the chosen institute. Please either amend your manuscript to change the affiliation or corresponding author, or email us at plosone@plos.org with a request to remove this option.

7. Please upload a new copy of Figures 1-4 as the detail is not clear. Please follow the link for more information: https://blogs.plos.org/plos/2019/06/looking-good-tips-for-creating-your-plos-figures-graphics/

8. We note that Figure 1 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. 

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. 

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

9. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have been rigorous in assessing the reliability of the HMS. On my part, I find few observations, but these observations may be fundamental. In particular, I think the study may be a good marker for further psychometric studies of the HMS.

1. The assessment of the reliability of the score first assumes that the unidimensionality of the items included in this score is satisfactory. Even in a small sample, the authors should report evidence of unidimensionality of the instrument, even with "simple" statistics, e.g., inter-item correlation, item-test correlation, etc.

2. In Table 1, they can also report the standard deviation or other univariate estimator of score variability.

3. "...were consulted regarding setting the level 190 of agreement for the Bland Altman analysis [17]. The experts agreed that the limits of agreement (precision) of the anticipated differences between raters/intra-rater for the total score of the test would lie within ±2 points overall and therefore ±2 was the set limits of agreement (precision) for this study..."

Here it is required to describe what was the rationale for arriving at this magnitude of anticipated differences.

4. It is advisable to summarily describe the content of the HMS items, perhaps in a section within the Method, e.g., Instrument.

5. I suggest including graphs of the distribution of scores, e.g., histograms.

6. The decision to use a two-week time interval may be a bit more applied.

7. In this first reliability study, internal consistency should also be reported, for example with coefficient alpha (with confidence intervals).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: César Merino-Soto

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Dec 21;17(12):e0278996. doi: 10.1371/journal.pone.0278996.r002

Author response to Decision Letter 0


25 Sep 2022

Dear Mr Donlan,

We thank you for the careful reading of our manuscript.

Please find attached our second revision of the manuscript PONE-D-21-36288 “Revised Hammersmith Scale for Spinal Muscular Atrophy: inter and intra-rater reliability and agreement" for further consideration for publication in PLOS ONE.

We have addressed the point in this letter & the manuscript related to journal requirements to:

‘include a non-author institutional contact for data access in the interest of maintaining long-term data accessibility. At this time, please provide a non-author point of contact that is able to receive queries regarding data access’

Author’s response:

As requested, we have attached a clean, updated version of the manuscript and a tracked-changes version, which we consider now meets your required specifications. In the manuscript we have added detail on page 24, lines 542-543, to contact “Dr Salma Samsuddin, SMA REACH UK & ISMAC UK Trial Manager via salma.samsuddin@gosh.ac.uk for any queries regarding data access.”

We have also further updated the data availability statement to include the same information.

Updated data availability statement:

“The minimal dataset underpinning the results of this study can be found in the supporting information S1 Dataset. It does not include detail regarding the demographics of the raters included in the study; this information, together with their affiliation with the SMA REACH UK sites and the small population studied, would mean that even anonymised data would carry several indirect identifiers that may risk breaking anonymity. Minimal information pertaining to the raters can be found within the participants section of the results. The minimal dataset can also be found within the results section in Tables 1, 2 and 3 and within Figures 1, 2 and 3.

Data beyond the minimal dataset required for this study are not publicly accessible because they contain sensitive data pertaining to human research participants, and sharing them may compromise anonymity. However, external parties who meet the criteria for access to confidential data can request data in aggregate form from the SMA REACH UK steering committee. Please contact Dr Salma Samsuddin, SMA REACH UK & ISMAC UK Trial Manager via salma.samsuddin@gosh.nhs.uk for any queries regarding data access.”

We hope this revised version will now be acceptable for publication in PLOS ONE. If you require any additional information, please do not hesitate to contact us.

Yours sincerely,

Danielle Ramsey

Attachment

Submitted filename: Response to reviewers.pdf

Decision Letter 1

Jae-Young Hong

29 Nov 2022

Revised Hammersmith Scale for Spinal Muscular Atrophy: inter and intra-rater reliability and agreement

PONE-D-21-36288R1

Dear Dr. Danielle Ramsey,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jae-Young Hong

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have effectively addressed each point made in the review. I am almost completely satisfied with the modifications and justifications made.

I state "almost" because I would still like to insist that the authors report some concurrent evidence from their own data on the item-test relationship and internal-consistency reliability. Although the authors have supported these properties based on a previous study (reference 17) with Rasch modeling results, this is not necessarily replicable in other contexts and samples, even more so in small samples. The authors infer validity from another study but do not corroborate it in the present sample (please see references to the problem below).

Overall, this observation does not create a substantial limit to the publication of the manuscript. Also, pointing out these references does not compel the authors to cite them.

References

https://www.reumatologiaclinica.org/en-metric-studies-compliance-questionnaire-on-articulo-S2173574321001660

https://www.elsevier.es/es-revista-revista-colombiana-reumatologia-374-estadisticas-S0121812320300566

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Cesar Merino-Soto

**********

Acceptance letter

Jae-Young Hong

12 Dec 2022

PONE-D-21-36288R1

Revised Hammersmith Scale for Spinal Muscular Atrophy: inter and intra-rater reliability and agreement

Dear Dr. Ramsey:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Jae-Young Hong

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Dataset. Study minimal dataset.

    This file contains the minimal dataset underpinning the results for this study. Demographic data is not included in this minimal dataset due to the risk of indirectly identifying participants. Please contact Dr Salma Samsuddin, SMA REACH UK & ISMAC UK Trial Manager via salma.samsuddin@gosh.nhs.uk for any queries regarding data access.

    (XLSX)

    Attachment

    Submitted filename: Response to reviewers.pdf

    Data Availability Statement

    The minimal dataset underpinning the results of this study can be found in the Supporting Information S1 Dataset. It does not include detail regarding the demographics of the raters included in the study; this information, together with their affiliation with the SMA REACH UK sites and the small population studied, would mean that even anonymised data would carry several indirect identifiers that may risk breaking anonymity. Minimal information pertaining to the raters can be found within the participants section of the results. The minimal dataset can also be found within the results section in Tables 1, 2 and 3 and within Figs 1, 2 and 3. Data beyond the minimal dataset required for this study are not publicly accessible because they contain sensitive data pertaining to human research participants, and sharing them may compromise anonymity. However, external parties who meet the criteria for access to confidential data can request data in aggregate form from the SMA REACH UK steering committee. Please contact Dr Salma Samsuddin, SMA REACH UK & ISMAC UK Trial Manager via salma.samsuddin@gosh.nhs.uk for any queries regarding data access.


    Articles from PLOS ONE are provided here courtesy of PLOS