Abstract
Introduction
Evoked compound action potentials (ECAPs) are neurophysiological biomarkers of neural activation during spinal cord stimulation (SCS). Clear distinction between ECAPs and nonphysiological signals is critical to the application of contemporary, ECAP-based closed-loop (CL) SCS therapies. Herein, we evaluated the performance and user acceptability of a novel programming software that automates generation of ECAP-based CL-SCS programs—the Assisted Programming Module (APM).
Methods
We report results from two prospective, multicenter, single-arm, feasibility studies: Freshwater (NCT04662905) and Rosella (NCT06057480). APM performance was compared with the previous generation programming software and other published methods. Performance was assessed by comparing signal-to-noise ratios, artifact rejection, and other objective parameters. User acceptability was assessed using questionnaires administered to SCS users.
Results
The APM successfully generated a CL program in 96% of initial programming sessions (n = 81/84; Freshwater, 31/34; Rosella, 50/50). In the Rosella study, median time to generate an automated CL program (n = 68) was 11.9 min [interquartile range (IQR) 9.9–14.0]. Median dose ratio was 1.31 (IQR 1.20–1.46) at end of trial (n = 24), 1.34 (IQR 1.13–1.51) at 1 month post implant (n = 16), and 1.32 (IQR 1.21–1.48) at 3 months post implant (n = 15). At least 90% of patients [trial, 90% (27/30); implant, 94% (17/18)] were satisfied with their programming experience, and ≥ 90% of patients [trial, 90% (26/29); implant, 94% (16/17)] felt in control of their therapy. The APM achieved a mean signal-to-noise ratio of 4.6 ± 1.2, a 35% improvement over the previous generation ECAP dose-controlled CL-SCS system. Detectable artifact leakage rates decreased by 75% when compared with other published methods without compromise to signal-to-noise performance.
Conclusions
Next-generation ECAP dose-controlled CL technology demonstrated strong feasibility, high patient satisfaction and therapy control, and superior ECAP signal fidelity compared with existing methods. By standardizing CL-SCS programming and enhancing signal fidelity, the APM may improve workflow efficiency and long-term therapy outcomes in chronic pain management.
Trial Registration
ClinicalTrials.gov identifiers: NCT04662905, NCT06057480.
Keywords: Spinal cord stimulation, ECAP, Closed-loop neuromodulation, Automated programming, Chronic pain
Key Summary Points
| Why carry out the study? |
| There is a critical unmet need for objective, efficient, and reproducible programming tools in spinal cord stimulation (SCS). |
| This study evaluated the feasibility, performance, and user acceptability of a novel Assisted Programming Module (APM) designed to automate evoked compound action potential (ECAP)-based closed-loop SCS programming. |
| What was learned from the study? |
| The APM achieved a 96% automation success rate, improved signal-to-noise ratio by 35%, reduced artifact leakage by 75%, and was associated with high patient satisfaction and perceived control over therapy. |
| These findings demonstrate that automated ECAP-based programming can standardize therapy delivery, enhance neural dosing precision, and reduce clinician burden, representing a paradigm shift in neuromodulation practice. |
Introduction
Chronic pain remains a persistent clinical challenge with nearly 70 million adults suffering from conditions refractory to conventional medical management [1]. Spinal cord stimulation (SCS) is a well-established therapeutic modality for managing chronic pain, offering an alternative to pharmacological and surgical interventions [2, 3].
Physiological closed-loop control (PCLC) SCS systems utilize evoked compound action potentials (ECAPs) that serve as objective, physiological biomarkers of spinal cord activation and provide a measurable target for achieving optimal neural recruitment to guide programming and optimize therapy [4, 5]. By continuously monitoring the spinal cord’s response to stimulation, ECAP dose-controlled closed-loop SCS (a PCLC system) dynamically adjusts stimulation parameters in real time with each stimulation pulse, providing optimized and consistent neural activation [6–8]. In the pivotal EVOKE trial, with follow-up through 3 years, ECAP dose-controlled closed-loop SCS resulted in superior pain relief as well as superior improvements in secondary outcomes when compared with open-loop, fixed-output SCS [6–8].
A major challenge in ECAP-enabled SCS is detecting these small-amplitude signals amid large artifacts [9–11]. ECAP signals in the spinal cord are extremely small—in the order of tens of microvolts—and can be contaminated by much larger stimulation artifacts. A stimulus pulse of several volts could induce an electrode artifact in the order of millivolts, whereas the true neural response (ECAP) is in the microvolt range. Distinguishing a clear ECAP waveform is dependent on accurate artifact rejection, which is critical to therapy delivery since closed-loop control is dependent on unambiguous ECAP sensing [9–11]. This imposes a rigorous requirement on the recording system, as it must capture the smaller neural signal while rejecting larger stimulus artifacts and background noise.
The Assisted Programming Module (APM) is a next-generation ECAP dose-controlled closed-loop SCS platform. The APM introduces novel, state-of-the-art filter technology, automated parameter testing, and a simplified user interface. The platform rapidly scans through multiple configurations on a given lead, records the ECAP evoked by each configuration, and ranks them by signal quality. This enables the system to identify the electrode configuration that yields the highest-fidelity ECAP signal with the largest amplitude. The APM automates analysis of the ECAP signal across the lead array, delivering optimized programming recommendations through a single-button interface designed for clinical efficiency.
In this study, we characterize the neural sensing filter performance of the APM platform and compare it to the previous generation ECAP dose-controlled closed-loop SCS technology [6–8] and other published methods [12]. We also report the feasibility of using the novel platform to program patients with SCS with chronic pain.
Methods
Study Population
Data from two prospective studies evaluating the APM were analyzed: Freshwater (NCT04662905) and Rosella (NCT06057480). Both studies included patients with chronic, intractable trunk and/or limb pain.
Each study was designed, implemented and reported in accordance with the ICH Guidelines for Good Clinical Practice, with applicable local regulations, and with the ethical principles laid down in the Declaration of Helsinki. Ethical approval for each study was granted by the Ethics Committee and/or Institutional Review Board (IRB) for Freshwater (NCT04662905) and Rosella (NCT06057480) studies. All patients provided written (signed) informed consent prior to participation in each study.
Next-Generation APM Platform (EVA™, Saluda Medical, Minnetonka, MN, USA)
The APM consists of a streamlined patient-centric user interface that guides the programmer through one of two workflows:
New Program Assistant (NPA): a four-step, fully automated workflow in which the system independently establishes an optimal stimulation configuration based on the patient’s ECAP profile and interaction with the software (Fig. 1a). The NPA evaluates between four and eight distinct stimulation configuration candidates, each consisting of automatically selected anode/cathode locations and stimulation pulse widths. Stimulation candidates are evaluated in sequence via a simple press-and-hold button interface used to ramp the stimulation level up to maximal intensity; upon button release, stimulation is immediately ramped down. For each candidate, the NPA objectively evaluates multiple sensing configurations concurrently to ascertain neural activation and spinal cord sensitivity, which is used to predict and configure personalized therapeutic settings (see Fig. 1b). Stimulation candidates are refined between ramps, and at most four candidates are short-listed for final evaluation by the patient. Short-listed candidates are interleaved, and any combination of candidates may be isolated and evaluated by the patient, significantly expanding the pool of available programs without increasing workflow complexity or duration. Patient preference then determines a subset of these stimulation candidates to deploy in the final program. Finally, the optimal closed-loop program is algorithmically determined and activated on the device.
ECAP Assistant (EA): in this mode, the APM facilitates the refinement of an existing stimulation program via a two-step workflow (steps 1 and 4 as per Fig. 1a), offering automated adjustments derived from automated ECAP recordings and their corresponding activation plots.
Fig. 1.
APM workflow and data collection. a Four-step APM New Program Assistant workflow: stimulation candidate testing, patient feedback refinement, coverage evaluation of shortlisted candidates, and closed-loop deployment. ECAP Assistant implements only testing and deployment steps. b Data from one stimulation candidate evaluation: stimulation current is increased to a level where the patient first reports discomfort, with ECAP recordings processed for six interleaved configurations, yielding six activation plots per stimulation candidate
The APM provides an automated workflow to assist the user when generating a new program (via the NPA) or to configure the neural measurement and closed-loop parameters of an existing program (via the EA). The dual-mode operation of APM makes it suitable for both initial programming sessions and subsequent adjustments as patient needs evolve over time. All APM programming parameters remain within on-label settings.
The APM performs parallel data processing on the interleaved sensing configurations on the fly. The APM introduces a new filter that is configured using the set of recorded signals from each data stream, resulting in a set of activation plots that are updated with each incoming signal in real time (Fig. 1b). For each stimulation candidate under test, the sensing configuration with the highest signal-to-noise ratio (SNR) (Fig. 2a) is identified as optimal; the sensing configuration for closed-loop therapy is then selected as the one with the highest SNR across all selected stimulation candidates.
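As a minimal sketch of this selection rule (assuming per-configuration SNR values have already been computed; all names and values below are illustrative, not the device's actual ranking logic):

```python
def best_sensing_configuration(snr_by_config):
    """Pick the sensing configuration with the highest SNR.

    snr_by_config: dict mapping (stim_candidate, sense_config) -> SNR.
    Illustrative selection rule only; the published description gives
    the criterion (maximum SNR) but not the implementation.
    """
    return max(snr_by_config, key=snr_by_config.get)

# Hypothetical example: candidate "B" sensed on contacts (7, 9) wins.
snrs = {("A", (6, 8)): 3.1, ("B", (7, 9)): 4.8, ("B", (6, 8)): 4.2}
best = best_sensing_configuration(snrs)  # -> ("B", (7, 9))
```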
Fig. 2.
Signal-to-noise ratio performance with the new APM filter. a SNR quantification method using feedback variable (ECAP amplitude) at 1.4× threshold normalized by feedback variable noise standard deviation. b SNR statistics for the Freshwater (FW) cohort pre and post APM testing. The asterisk (*) denotes a statistically significant two-sample t-test for the within-patient pre (FW) versus post (APM) programming comparison. c Comparative SNR analysis between the APM filter and the Chakravarthy et al. method [12] for ECAPs recorded with 6 (7) recording electrode gap and 8 (9) reference electrode gap. The asterisk (*) indicates a statistically significant one-sample t-test. d Graphic of (6, 8) and (7, 9) sensing electrode configurations with reference to the Chakravarthy et al. (7, 8) sensing configuration. Diamonds indicate population means for all box plots
APM Performance
Signal-to-Noise Ratio (SNR)
SNR is measured from activation plots collected for each patient as the ratio of the ECAP amplitude to the standard deviation of the noise present in the activation plot. To ensure fair comparison of SNR across different patients, ECAP amplitude is taken at 1.4× ECAP threshold (see Fig. 2a).
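This SNR computation can be sketched as follows. The linear suprathreshold fit and the subthreshold noise estimate are our assumptions about a plausible implementation, not the published algorithm:

```python
import numpy as np

def activation_plot_snr(currents, amplitudes, threshold, factor=1.4):
    """Illustrative SNR estimate from an activation plot (assumed method).

    SNR = ECAP amplitude evaluated at `factor` x ECAP threshold, divided
    by the standard deviation of the noise in the activation plot
    (estimated here from subthreshold samples, where no neural
    response is expected).
    """
    currents = np.asarray(currents, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)

    # Noise SD from subthreshold points of the activation plot.
    noise_sd = np.std(amplitudes[currents < threshold], ddof=1)

    # Linear fit to the suprathreshold growth region.
    supra = currents >= threshold
    slope, intercept = np.polyfit(currents[supra], amplitudes[supra], 1)

    # ECAP amplitude at 1.4x ECAP threshold, normalized by noise SD.
    ecap_at_target = slope * (factor * threshold) + intercept
    return ecap_at_target / noise_sd
```

Evaluating at a fixed multiple of threshold, rather than at an absolute current, is what makes the metric comparable across patients with different spinal cord sensitivities.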
SNRs for patients programmed manually in the Freshwater study were compared with those of the programs generated by the APM. This comparative analysis was not possible in the Rosella study, as initial programming was performed using only APM workflows.
To demonstrate state-of-the-art filtering performance, SNR is compared against the method published by Chakravarthy et al. [12]. In their paper, the recording electrode was placed seven contacts from the end of the lead where stimulation occurred, and the reference electrode was placed eight contacts from the end of the lead, adjacent to the recording electrode. For brevity, we refer to this hereafter as a (7, 8) recording configuration. Recordings from a (7, 8) configuration were not collected as part of this work. However, data was collected using similar [(7, 9) and (6, 8)] configurations which utilize the same recording and reference electrode locations, but with a gap of one contact separating them (Fig. 2d). Data are pooled from Freshwater and Rosella studies and limited to stimulation candidates where a tight tripole was used. Signals for the (7, 9) and (6, 8) recording configurations were reprocessed using the best performing algorithm from Chakravarthy et al., named the Artefact Model Method [12], which we will refer to as the Chakravarthy filter henceforth.
Pairwise comparisons of SNR are performed by taking the ratio of the measured SNRs under each condition.
Artifact Rejection
To accurately demonstrate state-of-the-art artifact rejection, we compare the filtering method used by the APM against the Chakravarthy filter [12]. We implemented a new test methodology for filter comparison that improves upon the test method of Chakravarthy et al. [12] by directly measuring the impact of artifact leakage as opposed to estimating it from an activation plot collected in a single posture. Patients from the Freshwater study were stimulated at both half (IT/2) and one-fourth (IT/4) of ECAP threshold while sitting and then standing (see Fig. 3a). Signals were recorded for at least 5 s for each of the four conditions (Fig. 3b): (IT/2, Sitting), (IT/2, Standing), (IT/4, Sitting), (IT/4, Standing). The use of subthreshold stimulation levels is intended to ensure that signals contain only artifact and no neural response, which was visually confirmed in the dataset prior to analysis.
Fig. 3.
Artifact rejection performance. a Experimental methodology, whereby each patient assumed a seated and a standing posture and experienced stimulation at two currents, IT/2 and IT/4. b Example epidural recordings for one patient under each of the four conditions and corresponding Vrms distributions that highlight the magnitude of artifact variability within a single patient. c Distributions of median filter outputs for the APM and Chakravarthy signal processing methods. d Rates of statistically significant artifact change at the output of the APM, Chakravarthy et al., and Vrms filters, measured as null-hypothesis acceptance rates for Kruskal–Wallis tests (α = 0.05) comparing feedback variable distributions across the four conditions for all n = 22 patients
The APM and Chakravarthy filters [12] are then applied to these artifact-only signals retrospectively, producing statistical samples for each of the four conditions (see example in Fig. 3b). A Kruskal–Wallis test (α = 0.05) is then applied to determine whether a statistically significant difference between the populations exists. The RMS voltage of the artifact signals is used as a reference filtering method for comparison.
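A sketch of this omnibus comparison, assuming each condition's filter outputs are available as sample arrays (variable names hypothetical):

```python
from scipy.stats import kruskal

def artifact_change_detected(filter_outputs_by_condition, alpha=0.05):
    """Kruskal-Wallis omnibus test across artifact-only conditions.

    filter_outputs_by_condition: list of per-condition sample arrays,
    e.g. the filter outputs recorded for (IT/2, sitting),
    (IT/2, standing), (IT/4, sitting), (IT/4, standing).

    Returns True when the distributions differ significantly, i.e.
    residual artifact leaked through the filter and varies with
    posture and/or current.
    """
    _, p_value = kruskal(*filter_outputs_by_condition)
    return p_value < alpha
```

An ideal filter passes only noise on these subthreshold recordings, so its outputs should be statistically indistinguishable across the four conditions and the null hypothesis should be retained.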
Outcome Assessments
Freshwater Study
The Freshwater study, a first-in-human (FIH), prospective, multicenter, single-arm study, evaluated the feasibility of programming using APM in patients implanted with the EVOKE® System (Saluda Medical, Minnetonka, MN). Patients were initially programmed manually by trained representatives using the Clarity™ Programming Application and later programmed using the NPA. The EA workflow was not tested. Outcomes assessed included automation success rates and evaluation of the patient experience during programming.
Rosella Study
The Rosella study was a prospective, multicenter, single-arm trial designed to evaluate both automated programming workflows (i.e., NPA and EA) in both temporary trial and permanent implant phases of the EVOKE System. Outcomes included patient questionnaires regarding programming session experience, time required to successfully generate a closed-loop program, and objective neural metrics during both trial and postimplant phases.
Objective Device Metrics
Objective metrics obtained from the EVOKE system log files consist of various neurophysiologic dose metrics [6, 13, 14]. If no device log file was available for the visit or the device log file was incomplete (data missing for greater than 70% of analysis time range), the patient was excluded as the purpose of the analysis was to investigate the objective neural dose metrics from at-home use of the therapy. Variables and their definitions have been previously published [6, 13–15]. A summary is presented below:
Dose ratio: the ratio of current (mA) at the median ECAP level to the ECAP threshold current, normalizing for anatomical variations and individual spinal cord sensitivity (see Fig. 5a).
Dose accuracy: the root mean square error (RMSE) of the feedback variable in excess of expected system noise.
Therapy utilization: the percentage of time that measured neural activation was supra-ECAP threshold while the program was active.
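Under the simplifying assumptions below, the three metrics can be sketched as follows. The exact definitions are given in [6, 13–15]; in particular, the quadrature noise subtraction, the detection floor, and the use of the median logged current are our assumptions for illustration:

```python
import numpy as np

def dose_metrics(currents_ma, feedback_uv, target_uv, threshold_ma,
                 noise_rmse_uv, detect_floor_uv=0.0):
    """Sketches of the three neural dose metrics (assumed forms).

    currents_ma   : logged stimulation currents (mA) during home use
    feedback_uv   : measured feedback variable (ECAP amplitude, uV)
    target_uv     : closed-loop feedback target (uV)
    threshold_ma  : patient's ECAP threshold current (mA)
    noise_rmse_uv : expected system noise floor (uV)
    """
    currents = np.asarray(currents_ma, dtype=float)
    feedback = np.asarray(feedback_uv, dtype=float)

    # Dose ratio: delivered current at the median ECAP level relative
    # to ECAP threshold (median logged current used as a proxy here).
    dose_ratio = np.median(currents) / threshold_ma

    # Therapy utilization: % of samples with supra-threshold activation
    # (a detectable ECAP, i.e. feedback above an assumed floor).
    utilization_pct = 100.0 * np.mean(feedback > detect_floor_uv)

    # Dose accuracy: RMSE of the feedback variable about its target,
    # in excess of expected system noise (quadrature subtraction assumed).
    rmse = np.sqrt(np.mean((feedback - target_uv) ** 2))
    dose_accuracy_uv = np.sqrt(max(rmse**2 - noise_rmse_uv**2, 0.0))

    return dose_ratio, utilization_pct, dose_accuracy_uv
```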
Fig. 5.
SCS device metrics. a Dose ratio determination through activation function evaluation and inversion. Blue and red points correspond to suprathreshold and subthreshold feedback variables respectively. b–d Patient outcomes at trial, 1-month, and 3-month intervals showing b dose ratios, c device utilization, and d dose accuracy. Maximum analgesic effect guidelines [14, 15] shown in gray on all plots
Statistical Analysis
Continuous variables are reported as mean ± standard deviation (SD) [or median (IQR) when non-normal]. Categorical variables are summarized as n (%). Sample sizes for each comparison are given in the text and on figure axes.
The distributions of SNR ratios were inspected visually with violin/box plots and judged approximately symmetric, with no extreme skew; therefore, a parametric one-sample two-sided t-test (null hypothesis H0: mean SNR ratio = 1; scipy.stats.ttest_1samp) was deemed appropriate (Fig. 2c). For the within-patient SNR comparison (Fig. 2b), a paired t-test (null hypothesis H0: equal pre- and post-APM means; scipy.stats.ttest_rel) was used.
For the artifact performance characterization (Fig. 3), unequal feedback variable variances across stimulation states were observed for some patients; hence, the non-parametric Kruskal–Wallis omnibus test was used (α = 0.05, scipy.stats.kruskal).
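The one-sample test on SNR ratios can be illustrated with made-up values (not study data); a ratio of 1 corresponds to no improvement:

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical within-patient SNR ratios (post/pre). Under H0 the
# mean ratio is 1, i.e. the new filter offers no SNR improvement.
snr_ratios = np.array([1.2, 1.4, 1.1, 1.5, 1.3, 1.6, 1.2, 1.4])

stat, p_value = ttest_1samp(snr_ratios, popmean=1.0)  # two-sided by default
significant_improvement = (p_value < 0.05) and (snr_ratios.mean() > 1.0)
```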
All analyses were performed with Python 3.12 (pandas 1.5, SciPy 1.15, NumPy 1.26). Graphs were created with seaborn/Matplotlib (seaborn 0.13, Matplotlib 3.8).
Results
Filter Performance
SNR was compared pairwise immediately before and after programming with the APM for Freshwater study patients where data was available.
The APM generated programs with a mean SNR of 4.77 ± 0.23 (Fig. 2b), improving upon the pre-APM SNR (denoted FW in Fig. 2b) of 4.03 ± 0.27 by 35% ± 0.14% in a within-patient comparison.
For the (6, 8) recording reference configuration, the APM filter improves upon Chakravarthy et al. by 66% ± 0.26%. A 14% ± 0.07% improvement was observed for the (7, 9) configuration but was not found to be statistically significant (Fig. 2d).
Artifact Rejection
To quantify artifact rejection, the median filter output was measured for each of the four sub-ECAP threshold current and posture conditions across all tested patients (n = 22) (see Fig. 3c). The APM methodology exhibited negligible leakage, with a minimum (maximum) population average of 0.03 µV (0.17 µV) and corresponding standard deviation of 0.66 µV (0.89 µV). Conversely, the Chakravarthy et al. method exhibited artifact leakage with a minimum (maximum) population average of 6.79 µV (7.03 µV) and corresponding standard deviation of 1.32 µV (1.14 µV).
Differences between filter outputs were tested for statistical significance (Kruskal–Wallis, α = 0.05) as a quantitative indication of sensation changes that patients may experience with the posture and current changes expected to accompany daily activity with a closed-loop device (Fig. 3d). For reference, the root mean square (RMS) voltage of the artifact traces was tested to identify the proportion of patients in whom morphological changes between the four posture and current conditions occurred. The null hypothesis was rejected in 100% of these cases, indicating that all (n = 22) patients exhibited some change in artifact morphology. Artifact variation was detected in 36.4% of patients using the Chakravarthy filter. A significant improvement was observed with the APM signal processing pipeline, which reduced the rate of statistically significant artifact variability to 9.1% (Fig. 3d). The two test failures are in line with the expected type-I (false positive) error rate for an α = 0.05 omnibus test applied to a sample size of n = 22. Closely examining the two failures, we found that the mean filter output remained consistent and near zero across the four posture and current conditions despite the statistical test failures, suggesting that true artifact rejection performance may be underestimated here.
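As a quick plausibility check (our calculation, not from the studies): with 22 independent tests each run at α = 0.05, the chance of at least two false rejections under the null hypothesis is roughly 30%, so observing two failures is unsurprising even for a filter with perfect artifact rejection.

```python
from scipy.stats import binom

# Probability of >= 2 false rejections among n = 22 independent tests,
# each with false-positive rate alpha = 0.05 under the null hypothesis.
n_tests, alpha = 22, 0.05
p_two_or_more = 1.0 - binom.cdf(1, n_tests, alpha)  # roughly 0.30
```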
Outcome Assessments
The Freshwater study evaluated 34 patients with previously implanted EVOKE® SmartSCS™ systems. Baseline demographics included 40.6% female representation and an average pain duration of 12.3 years (SD = 13.8).
All 34 patients had a permanently implanted system that was manually programmed prior to evaluating the feasibility of programming with the APM. This enabled the patients to compare their experiences with manual and automated programming.
The Rosella study included 30 patients during the trial phase and 20 patients post-permanent implant. Baseline demographics included 60% female representation, and mean pain duration of 7.4 years (SD = 11.2). There was a combined total of 49 de novo programming sessions during the trial (n = 30) and permanent implant (n = 20).
In the Freshwater study, the NPA successfully generated closed-loop programs in 31/34 (91%) of study patients. Using a combination of NPA and EA, at least one closed-loop program was successfully generated for 100% of Rosella study patients across 50 initial programming sessions (trial: n = 30; implant: n = 20).
In the Rosella study, median time to generate a de novo automated CL program using the NPA was 11.9 min (IQR 9.9–14.0 min) across 68 separate programming sessions including trial initial (n = 30) and permanent initial (n = 20). Upon successful creation of a closed loop program by the APM, patients completed a survey questionnaire. For Rosella study programming sessions, 27/30 (90.0%) trial patients and 17/18 (94.4%) patients who went on to receive a permanent implant reported being very satisfied or satisfied with the APM programming experience (Fig. 4a), and patients found programming comfortable with no discomfort for 22/30 (73.3%) trial APM uses and 16/18 (88.9%) post-permanent-implant programming sessions (Fig. 4b). A total of 26/30 (86.7%) Freshwater patients with prior programming experience found programming with the APM more comfortable or just as comfortable as manual programming (Fig. 4b).
Fig. 4.
Survey results. a Rosella study: patient satisfaction with programming. b Comfort with programming: the Freshwater study compares with prior experience; the Rosella study evaluates comfort as experienced. c Rosella study: ratings for control, ease of use, and process burden
Additionally, Rosella study patients were asked to rate aspects of their programming experience on a scale from 0 (strongly disagree) to 10 (strongly agree) during the trial and post-permanent-implant visits (n = 46) (Fig. 4c). Pooling trial and permanent survey results, patients reported feeling in-control of their therapy (score ≥ 6) for 42/46 (91.3%) initial programming sessions. Patients found the programming experience with the APM easy and time-efficient in 41/46 (89.1%) and 43/46 (93.5%) sessions, respectively (score ≥ 6).
Objective Device Metrics
For all stimulation candidates, the APM determines a comfortable dose derived from the observed neural activation during the workflow and the maximum stimulation intensity. This dose is also set to maintain supra-ECAP threshold stimulation and provide a comfortable intensity for programming. Post programming, patients could adjust the stimulation level freely and no guidance was provided in relation to dosing. For the Rosella study participants, dose ratio, dose accuracy, and therapy utilization were monitored across time.
The median dose ratio (see Fig. 5b) was consistent across time; 1.31 (IQR 1.20–1.46) at end of trial (n = 24), 1.34 (IQR 1.13–1.51) at 1 month post implant (n = 16), and 1.32 (IQR 1.21–1.48) at 3 months post implant (n = 15), aligning with dose requirements for maximum analgesia reported in prior studies [14, 15].
Therapy utilization was similarly consistent across time, with a median of 92% (IQR 80–99%) at end of trial (n = 24), 92% (IQR 77–98%) at 1 month post implant (n = 16), and 96% (IQR 81–99%) at 3 months post implant (n = 15) (Fig. 5c). Median dose accuracy was 8.2 µV (IQR 5.2–9.9 µV) at end of trial (n = 24), 7.1 µV (IQR 5.1–10.0 µV) at 1 month post implant (n = 16), and 6.2 µV (IQR 5.9–7.9 µV) at 3 months post implant (n = 15) (see Fig. 5d).
Discussion
Signal contamination is a fundamental challenge for PCLC SCS systems. Specifically, signal processing algorithms must perform robustly in the presence of dynamic stimulus artifacts while effectively isolating the ECAP signal above the noise. The number of patients in whom closed-loop control can be enabled and can effectively adjust neural activation depends on the signal-to-noise ratio (SNR). Artifact rejection determines the magnitude and dynamics of systematic errors on the measured ECAP amplitude that impact stimulation consistency and neural activation. To develop and compare such algorithms, it is therefore essential to have robust methods for assessing performance with respect to both SNR and artifact rejection. Chakravarthy et al. published the first examples of these methods; however, shortcomings in their characterization approach motivated the development of the new filter comparison methods in this work. Instead of comparing different filters with respect to SNR, Chakravarthy et al. opt for directly comparing the output signal amplitude, measured as the suprathreshold slope of the activation plot. Although ECAP estimation algorithms typically measure amplitude in units of microvolts, there may be differences in unit scaling that need to be accounted for during comparison, as would be the case when comparing a root mean square (RMS) voltage to a peak-to-peak voltage, for example. Any such scaling applies equally to the signal and to the noise, so SNR offers a unit-invariant metric. Comparing different filters by signal amplitude alone therefore misrepresents true performance with respect to the clinically relevant metric of SNR and favors the peak-to-peak measurement approach employed by the Chakravarthy filter. Further, Chakravarthy et al. [12] quantify artifact rejection by comparing the subthreshold filter output during the collection of an activation plot to the suprathreshold slope.
This approach does not assess the impact of posture change and is limited in sensitivity as it relies on estimating regression coefficients from inherently noisy activation plot data. In this work, we present an improved technique for assessing dynamic artifact suppression.
The APM sensing platform combines a new filter with recording electrode optimization, improving SNR by 35% in a within-patient comparison. We attribute the high success rate (96%) of the New Program Assistant workflow to this notable SNR improvement. Further, the SNR performance of the APM filter improves upon the Chakravarthy filter [12] in an evaluation of ECAPs recorded under comparable conditions. Operating closed-loop SCS at a target ECAP amplitude within the recording noise window increases the proportion of stimuli that are delivered below threshold, which has been shown to negatively correlate with patient outcomes [14]. As such, we recommend that feedback targets be set at levels greater than 2σ (i.e., SNR ≥ 2) to ensure that closed-loop therapies titrating the effective dose do not respond disproportionately to noise rather than to a true physiological signal. To assess artifact rejection performance, we developed a more robust and sensitive methodology by applying omnibus statistical tests to artifacts recorded under different posture and current conditions across n = 22 patients. Statistically significant changes in artifact RMS voltage were observed for all patients (see Fig. 3d), demonstrating pervasive morphological changes in artifact with both current and posture. The APM filter exhibits exceptional performance, improving substantially upon the Chakravarthy filter [12]: the percentage of patients likely to experience postural intensity changes decreased by 75%. One limitation of our methodology is that physiological noise changes across posture can cause test failures independent of artifact change, which likely caused one or both of the omnibus test failures observed.
Our improvements to the foundational sensing technology that underpins ECAP acquisition and robust closed-loop SCS programming enabled the development of the APM, a first-of-its-kind end-to-end automated programming workflow. The simple, patient-friendly user interface was designed to be operable by clinicians with minimal to no training, with patients having full view of the programming interface and being expected to engage with the process. This approach represents a significant departure from the complex and technical programming interfaces that reflect standard clinical practice today, where clinicians require significant training or depend on support from manufacturer engineers, and patients are often insulated from decision-making related to their therapy. To assess patient acceptance of this new paradigm, we collected surveys across both the Freshwater and Rosella clinical studies. Patient satisfaction with APM programming was notably high, and patients often preferred APM programming to manual programming when they had prior experience. Patients reported feeling in control of their therapy, indicating broad acceptance and a sense of empowerment. Closed-loop programming was achieved in under 15 min, reducing the attentional burden placed on patients during programming. Objective neural metrics (i.e., dose ratio and dose accuracy) from at-home therapy use further validated the efficacy of APM programming. Dose-ratio evaluations align with recent randomized controlled trials (RCTs) [6, 13] and real-world data on ECAP dose levels for optimal pain relief [14, 15]. These findings demonstrate that the APM calibrates therapy to deliver neural dosing that parallels values for sustained maximum analgesic benefit [14, 15].
The development of the APM as an automated programming platform represents a paradigm shift in the application of closed-loop neuromodulation for chronic pain management. Traditional programming methods for SCS are not only labor intensive but also highly subjective, relying on iterative adjustments based on patient feedback that can vary widely from session to session [16]. This iterative and subjective approach contributes to high reprogramming burden seen in SCS [17]. The APM not only addresses these issues, but it also empowers physicians with objective, reproducible data to guide therapy decisions. This data in turn can facilitate the development of consensus guidelines for SCS programming that are based on empirical evidence rather than expert opinion alone, while future updates to the APM provide an opportunity to deploy these guidelines to patients at scale. This approach aligns with the growing emphasis on data-driven, patient-centered care in healthcare systems.
The clinical implications of adopting an automated programming platform such as the APM extend well beyond mere technical improvements. In addition to streamlining workflow, the standardization offered by the APM has the potential to improve long-term outcomes in chronic pain management. Consistent and optimized dosing is likely to translate into more predictable analgesic benefits [14, 15]. This in turn has the potential to reduce the high rates of explantation observed under manual open-loop programming paradigms [18–24], where the leading cause of device removal is loss of efficacy (37%) [18–24]. ECAP dose-controlled closed-loop therapy is more durable [6–8] and cost-effective [25, 26] than open-loop SCS and conventional medical management, as shown by the EVOKE randomized clinical trial (RCT) [6–8, 25] and a network meta-analysis [26]. There were no explants due to loss of therapeutic efficacy in the closed-loop cohort over 3 years [6]. Additionally, participants in the closed-loop group required minimal device reprogramming [8]. The APM’s advanced filtering technology offers greater flexibility and scalability, potentially improving clinical outcomes and cost savings. As the first commercially viable automated programming platform for SCS, the APM sets a new benchmark for precision adaptive neuromodulation, poised to elevate the standard of care in chronic pain therapy.
Limitations
First, while the Freshwater study compared manual and automated programming, the reported data are limited to in-clinic findings. A prospective study is needed to evaluate differences in clinical outcomes and patient satisfaction between manual programming and APM-generated programs. Second, objective neural metrics were reported only in the Rosella study, owing to the lack of comparable data collection in the Freshwater study. Third, neither study collected baseline pain characteristics or therapy efficacy outcomes such as pain relief or functional improvement, which are critical to understanding the broader clinical impact of automated programming. A prospective study is underway (NCT06229470) to assess clinical outcomes associated with the APM.
Despite these limitations, the results provide compelling evidence supporting the feasibility and potential benefits of automation in SCS programming.
Although the APM filter achieves superior performance over the Chakravarthy filter [12], the authors note that the signal-processing algorithm is only one stage in the acquisition chain and cannot serve as a basis for comparing commercially available PCLC SCS systems, where the noise characteristics of the measurement electronics may differ.
Conclusions
The APM platform marks a transformative leap in neuromodulation, combining precise ECAP acquisition, scalability, and patient-centric design to address longstanding challenges in SCS programming. By automating ECAP-based closed-loop calibration, the APM eliminates variability, reduces programming time, and enhances therapy consistency. By integrating high-fidelity ECAP sensing, advanced artifact rejection, and objective programming within a streamlined workflow, the APM delivers consistent, personalized therapy with minimal clinician burden.
Acknowledgements
The authors gratefully acknowledge the study participants and participating clinics for their support and assistance in this work.
Medical Writing/Editorial Assistance
Editorial assistance in the preparation of this article was provided by Lalit Venkatesan (Saluda Medical), Weirong Ge (Saluda Medical), Martin Wong (Saluda Medical), Ian Gould (Saluda Medical), Angela Leitner (Saluda Medical), Erin Hanson (Saluda Medical), Dave Mugan (Saluda Medical), and Loren Buchanan (Saluda Medical). All editorial assistance was funded by Saluda Medical. The authors thank these individuals for their thoughtful contributions to this work.
Author Contributions
Daniel J. Parker designed the study, conducted statistical analyses, interpreted the data, and wrote the manuscript. Ajay B. Antony, Gregory L. Smith, Jonathan H. Goree, Marc A. Russo, Erika A. Petersen, Chau M. Vu, Paul Verrills, Christopher Gilmore, Leonardo Kapural, and Jason E. Pope conducted the study including data collection. Darayus Nanavati performed data processing and statistical analyses and interpreted the data. Dean M. Karantonis supervised and contributed to the study design, study execution and data interpretation. Gregory L. Smith, Erika A. Petersen, Paul Verrills, Darayus Nanavati and Dean M. Karantonis critically revised the manuscript. All authors critically reviewed the manuscript, provided final approval of the submitted version, and agreed to be held accountable for the accuracy and integrity of the finished publication.
Funding
This study was sponsored by Saluda Medical. The journal’s Rapid Service Fee was additionally funded by Saluda Medical.
Data Availability
The data that support the findings of this study are available from the corresponding author, DJP, upon reasonable request.
Declarations
Conflict of Interest
Daniel J. Parker, Darayus Nanavati, and Dean M. Karantonis report being employees of Saluda Medical. Ajay B. Antony serves as a consultant/speaker for Boston Scientific, Abbott, PainTEQ, Saluda Medical, Companion Spine, Stryker, and IZI Medical; he has received research support from Abbott, Boston Scientific, PainTEQ, Saluda, Vivex and Brixton. Marc A. Russo reports consultancies to Boston Scientific, Nevro, and Saluda Medical; research activities (paid to research institution) for Medtronic, Presidio Medical, and Saluda Medical; equity holdings in SPR Therapeutics; and options in Saluda Medical and Presidio Medical. Gregory L. Smith is a consultant for Saluda Medical and SPR Therapeutics. Jonathan H. Goree is a consultant for Saluda Medical, Abbott, and Stratus Medical and the recipient of research support paid to the institution by SPR Therapeutics and Mainstay Medical. Chau M. Vu is a consultant for Saluda Medical and PainTEQ. Erika A. Petersen has received research support from Mainstay, Medtronic, Neuros Medical, Nevro Corp, ReNeuron, SPR, and Saluda Medical outside the submitted work, as well as personal fees from Abbott Neuromodulation, Biotronik, Medtronic Neuromodulation, Nalu, Neuros Medical, Nevro, Presidio Medical, Saluda Medical, and Vertos outside the submitted work. She holds stock options from SynerFuse and neuro42. There are no other relationships that might lead to a conflict of interest in the current study. Paul Verrills is a consultant for Saluda Medical, Presidio Medical, and Vivex Biologics. Christopher Gilmore reports clinical trial funding from Saluda Medical during the conduct of the study; reports personal fees and other from SPR, and personal fees from Nevro, Nalu, Biotronik, and Boston Scientific outside the submitted work.
Leonardo Kapural is a consultant for Saluda Medical, Biotronik and Teladoc, reports receiving research grants from Nevro, Nalu, Neuros, Saluda Medical, and Presidio, and is an owner of the Chronic Pain Research Institute. Jason E. Pope reports research and consulting fees from Saluda Medical during the conduct of the study; consultancy for Abbott, Medtronic, Saluda Medical, Flowonix, SpineThera, Vertos, Vertiflex, SPR Therapeutics, Tersera, Aurora, Spark, Ethos, Biotronik, Mainstay, WISE, Boston Scientific, and Thermaquil outside the submitted work; has received grant and research support from: Abbott, Flowonix, Aurora, Painteq, Ethos, Muse, Boston Scientific, SPR Therapeutics, Mainstay, Vertos, AIS, and Thermaquil outside the submitted work; and is a minority shareholder of Vertos, Stimgenics, SPR Therapeutics, Saluda Medical, Painteq, Aurora, Spark, Celeri Health, Neural Integrative Solutions, Pacific Research Institute, Thermaquil, Abbott and Anesthetic Gas Reclamation.
Ethical Approval
Each study was designed, implemented and reported in accordance with the ICH Guidelines for Good Clinical Practice, with applicable local regulations, and with the ethical principles laid down in the Declaration of Helsinki. Ethical approval for each study was granted by the Ethics Committee and/or Institutional Review Board (IRB) for Freshwater (NCT04662905) and Rosella (NCT06057480) studies. All patients provided written (signed) informed consent prior to participation in each study.
Footnotes
Prior Presentation: This work has not been previously published. A subset of the results has been presented as poster presentations at the following conferences: ASPN on 17 July 2025, NANS on 30 January 2025, and INS on 11 May 2024.
The original online version of this article was revised to correct the sentence beginning ‘Signals for the (7, 9) and (6, 8) recording configurations…’.
Change history
4/28/2026
A Correction to this paper has been published: 10.1007/s40122-026-00838-7
4/29/2026
Correct the sentence beginning ‘Signals for the (7, 9) and (6, 8) recording configurations…’.
References
1. Ferreira ML, de Luca K, Haile LM, Steinmetz JD, Culbreth GT, Cross M, et al. Global, regional, and national burden of low back pain, 1990–2020, its attributable risk factors, and projections to 2050: a systematic analysis of the Global Burden of Disease Study 2021. Lancet Rheumatol. 2023;5(6):e316–29.
2. Caylor J, Reddy R, Yin S, Cui C, Huang M, Huang C, et al. Spinal cord stimulation in chronic pain: evidence and theory for mechanisms of action. Bioelectron Med. 2019;5(1):12.
3. Deer TR, Krames E, Mekhail N, Pope J, Leong M, Stanton-Hicks M, et al. The appropriate use of neurostimulation: new and evolving neurostimulation therapies and applicable treatment for chronic pain and selected disease states. Neuromodulation. 2014;17(6):599–615.
4. Center for Devices and Radiological Health. Technical considerations for medical devices with physiologic closed-loop control technology. FDA; 2023 [cited 2024 Dec 11]. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/technical-considerations-medical-devices-physiologic-closed-loop-control-technology.
5. Pope JE, Deer TR, Sayed D, Antony AB, Bhandal HS, Calodney AK, et al. The American Society of Pain and Neuroscience (ASPN) guidelines and consensus on the definition, current evidence, clinical use and future applications for physiologic closed-loop controlled neuromodulation in chronic pain: a NEURON group project. J Pain Res. 2025;18:531–51.
6. Mekhail NA, Levy RM, Deer TR, Kapural L, Li S, Amirdelfan K, et al. ECAP-controlled closed-loop versus open-loop SCS for the treatment of chronic pain: 36-month results of the EVOKE blinded randomized clinical trial. Reg Anesth Pain Med. 2024;49(5):346–54.
7. Mekhail N, Levy RM, Deer TR, Kapural L, Li S, Amirdelfan K, et al. Long-term safety and efficacy of closed-loop spinal cord stimulation to treat chronic back and leg pain (Evoke): a double-blind, randomised, controlled trial. Lancet Neurol. 2020;19(2):123–34.
8. Mekhail N, Levy RM, Deer TR, Kapural L, Li S, Amirdelfan K, et al. Durability of clinical and quality-of-life outcomes of closed-loop spinal cord stimulation for chronic back and leg pain: a secondary analysis of the evoke randomized clinical trial. JAMA Neurol. 2022;79(3):251–60.
9. Parker JL, Karantonis DM, Single PS, Obradovic M, Cousins MJ. Compound action potentials recorded in the human spinal cord during neurostimulation for pain relief. Pain. 2012;153(3):593–601.
10. Parker JL, Karantonis DM, Single PS, Obradovic M, Laird J, Gorman RB, et al. Electrically evoked compound action potentials recorded from the sheep spinal cord. Neuromodulation. 2013;16(4):295–303.
11. Single P, Scott J. Cause of pulse artefacts inherent to the electrodes of neuromodulation implants. IEEE Trans Neural Syst Rehabil Eng. 2018;26(10):2078–83.
12. Chakravarthy K, FitzGerald J, Will A, Trutnau K, Corey R, Dinsmoor D, et al. A clinical feasibility study of spinal evoked compound action potential estimation methods. Neuromodulation. 2022;25(1):75–84.
13. Mekhail NA, Levy RM, Deer TR, Kapural L, Li S, Amirdelfan K, et al. Neurophysiological outcomes that sustained clinically significant improvements over 3 years of physiologic ECAP-controlled closed-loop spinal cord stimulation for the treatment of chronic pain. Reg Anesth Pain Med. 2024;50:495–502.
14. Muller L, Pope J, Verrills P, Petersen E, Kallewaard JW, Gould I, et al. First evidence of a biomarker-based dose-response relationship in chronic pain using physiological closed-loop spinal cord stimulation. Reg Anesth Pain Med. 2024;50:345–51.
15. Levy RM, Mekhail NA, Kapural L, Gilmore CA, Petersen EA, Goree JH, et al. Maximal analgesic effect attained by the use of objective neurophysiological measurements with closed-loop spinal cord stimulation. Neuromodulation. 2024;27(8):1393–405.
16. Sheldon B, Staudt MD, Williams L, Harland TA, Pilitsis JG. Spinal cord stimulation programming: a crash course. Neurosurg Rev. 2021;44(2):709–20.
17. Amirdelfan K, Antony A, Levy R, Pope J, Falowski S, Naidu R, et al. ID: 221060 Health-related and economic impacts of clinic visit burdens for spinal cord stimulation patients and caregivers. Neuromodulation. 2023;26(4):S122–3.
18. Wahezi SE, Yener U, Naeimi T, Lewis JB, Yerra S, Sgobba P, et al. Spinal cord stimulation explanation and chronic pain: a systematic review and technology recommendations. J Pain Res. 2025;18(18):1327–40.
19. Pope JE, Deer TR, Falowski S, Provenzano D, Hanes M, Hayek SM, et al. Multicenter retrospective study of neurostimulation with exit of therapy by explant. Neuromodulation. 2017;20(6):543–52.
20. Van Buyten JP, Wille F, Smet I, Wensing C, Breel J, Karst E, et al. Therapy-related explants after spinal cord stimulation: results of an international retrospective chart review study. Neuromodulation. 2017;20(7):642–9.
21. Wang VC, Bounkousohn V, Fields K, Bernstein C, Paicius RM, Gilligan C. Explantation rates of high frequency spinal cord stimulation in two outpatient clinics. Neuromodulation. 2020;24:507–11.
22. Dupré DA, Tomycz N, Whiting D, Oh M. Spinal cord stimulator explantation: motives for removal of surgically placed paddle systems. Pain Pract. 2018;18(4):500–4.
23. Kirketeig T, Söreskog E, Jacobson T, Karlsten R, Zethraeus N, Borgström F. Real-world outcomes in spinal cord stimulation: predictors of reported effect and explantation using a comprehensive registry-based approach. Pain Rep. 2023;8(6):e1107.
24. Hayek SM, Veizi E, Hanes M. Treatment-limiting complications of percutaneous spinal cord stimulator implants: a review of eight years of experience from an Academic Center Database. Neuromodulation. 2015;18(7):603–9.
25. Duarte RV, Bentley A, Soliday N, Leitner A, Gulve A, Staats PS, et al. Cost-utility analysis of evoke closed-loop spinal cord stimulation for chronic back and leg pain. Clin J Pain. 2023;39(10):551–9.
26. Eldabe S, Nevitt S, Bentley A, Mekhail NA, Gilligan C, Billet B, et al. Network meta-analysis and economic evaluation of neurostimulation interventions for chronic nonsurgical refractory back pain. Clin J Pain. 2024;40(9):507–17.