Digital Health
. 2026 Mar 3;12:20552076261430105. doi: 10.1177/20552076261430105

Bridging policy and practice in smart clinical trials: Quantifying regulatory friction and technology adoption in Korea and the UK

Jae Eun Yang 1, Ah Rim Kim 2,
PMCID: PMC12957607  PMID: 41788667

Abstract

Objective

Smart clinical trials integrating artificial intelligence (AI), wearable and Internet of Things (IoT) sensors, and data-driven platforms have the potential to make clinical research more efficient, inclusive, and patient-centered. However, regulatory permissiveness does not always translate into actual adoption in the real world. This study aims to examine how national policy environments and operational factors interact to shape the uptake of digital trial technologies in Korea and the United Kingdom between 2015 and 2025.

Methods

We analyzed 1172 interventional trials registered on ClinicalTrials.gov using a multi-label classification pipeline to identify the use of AI, wearable/IoT technologies, clinical data integration, and digital platforms. Adoption patterns were linked to a policy friction index that quantified seven categories of regulatory barriers in each country. Cross-country comparisons were conducted to assess alignment between policy permissiveness and observed technology adoption.

Results

Despite relatively high policy openness, the United Kingdom demonstrated persistently low adoption of AI, wearable/IoT, and digital platform technologies, reflecting implementation barriers such as validation burden, governance requirements, and workflow integration challenges. In contrast, Korea exhibited strong uptake of clinical data integration technologies despite higher regulatory friction, driven by institutional data infrastructures and hospital-centric ecosystems. Overall, adoption patterns diverged systematically from policy expectations in both countries.

Conclusions

These findings suggest that digital transformation in clinical research requires more than permissive policy frameworks; it depends on effective alignment among regulation, infrastructure, and implementation science. By introducing a reproducible framework that links regulatory friction to observed technology adoption, this study provides actionable insights for accelerating safe, interoperable, and scalable smart clinical trial deployment within evolving digital health ecosystems.

Keywords: Smart clinical trials, digital health policy, regulatory science, implementation science, Korea–UK collaboration

Introduction

Smart clinical trials combine artificial intelligence (AI), wearables and Internet of Things (IoT) devices, data integration, and digital platforms. These approaches promise faster, more inclusive, and more efficient evidence generation, yet a persistent policy–practice gap limits routine uptake at scale. Despite post-pandemic enthusiasm and permissive guidance in many settings, diffusion remains uneven across technologies and health systems, suggesting that formal “allowance” alone is insufficient to drive implementation.1–3 This study addresses that gap by providing a compact, two-country baseline for 2015–2025 and by quantifying alignment between policy permissiveness and real-world use with a reproducible framework that links multi-label technology classification to a policy friction score and an implementation-gap readout. Focusing on Korea and the United Kingdom (UK), which have complementary capabilities and active digital-trial agendas, we ask how adoption differs by domain and over time, and to what extent observed patterns accord with, or diverge from, what policy would allow. These questions inform concrete priorities for managers and regulators aiming to accelerate safe, interoperable, and scalable smart-trial deployment.

In this study, digital health technologies are used as an umbrella term referring to a broad set of digital tools applied across healthcare settings. By contrast, smart clinical trials are defined more specifically as clinical trial models in which these digital health technologies are systematically integrated across the trial lifecycle, from design and recruitment to monitoring, data management, and analysis. Across contemporary smart-trial practice, four enabling domains recur with distinct evidentiary and operational profiles: clinical data integration (electronic health record (EHR)-integrated capture, EHR → electronic data capture (EDC) interoperability, common data models), AI (design optimization, automated monitoring/analysis), Wearables/IoT (continuous, free-living measurement), and digital platforms [electronic informed consent (eConsent), telehealth, remote monitoring/source data verification (SDV)]. Recent reviews show that decentralized/hybrid models expand reach and efficiency but hinge on verifiable data quality, endpoint validity, and governance rather than permissive policy alone.4–6 For digital endpoints, multi-stakeholder guidance highlights validation and qualification requirements before pivotal use.7,8 eConsent can improve access and process fidelity, yet introduces usability and privacy considerations that demand site-level standardization. 9 Remote/risk-based monitoring is increasingly adopted, but sustained benefit depends on workflow fit and robust standard operating procedures (SOPs). 10 Finally, EHR-integrated data capture and standard models [e.g., Clinical Data Interchange Standards Consortium (CDISC)/Observational Medical Outcomes Partnership (OMOP) with Findable, Accessible, Interoperable, and Reusable (FAIR)-principled EDC] are central to interoperability and reproducibility, but heterogeneous implementations and contractual/privacy constraints remain practical bottlenecks. 11 These domain-specific realities motivate a systematic, quantitative look at policy–practice alignment, which we provide in this study.

Concurrently with technological progress, regulators have clarified expectations for computerized systems, electronic data, and decentralized elements, yet implementation remains heterogeneous across jurisdictions. Recent comparative reviews of decentralized clinical trials (DCTs) detail how agency guidance has evolved post-pandemic while emphasizing that successful deployment still depends on validation, audit trails, data-integrity controls, and fit-for-purpose governance. 12 In parallel, the qualification and acceptance of digital health technology-derived endpoints now follow explicit evidentiary pathways that distinguish exploratory use from regulatory decision-making, with practical implications for sponsors’ trial design choices. 13 Statistical and operational considerations for DCTs further highlight the need to pre-specify data provenance, monitoring strategies, and analysis plans that accommodate remote and hybrid workflows. 6 Finally, cross-sectional analyses of global trial registries show uneven diffusion of DCT components despite broad policy interest, underscoring the importance of policy–practice alignment as a measurable construct rather than an assumed outcome of permissive guidance. 14

Despite rapid progress in digital and data-driven trial methods, the literature largely treats technology diffusion and regulatory evolution in parallel streams, leaving a measurable policy–practice alignment gap: permissive guidance does not reliably translate into routine use without validation, workflow fit, and governance that scale across sites. Reviews of AI and digital health emphasize the promise for efficiency and precision, but also underscore requirements for reproducibility, model transparency, and clinical integration that slow uptake.15–19 Post-pandemic transformation narratives similarly show telehealth and remote workflows expanding unevenly across systems, with operational readiness and data-protection obligations mediating diffusion.20,21 At the same time, verification frameworks for digital measures (e.g., V3, comprising verification, analytical validation, and clinical validation) and common data standards are necessary predicates for real-world scale.22,23 Finally, regulators and methodologists have highlighted the importance of real-world evidence and defensible provenance in decision-making, yet few studies jointly quantify policy permissiveness and domain adoption across countries with transparent, reproducible rules.24,25 This study addresses that need by linking a multi-label view of technology use to a legally grounded friction index and an implementation-gap readout in a two-country setting.

Against this background, the present study aims to quantify policy–practice alignment for smart clinical-trial technologies in Korea and the UK over 2015 to 2025, linking a multi-label view of technology use to a legally grounded measure of policy friction and an implementation-gap readout. We address three questions: (RQ1) How do adoption patterns of AI, Wearable/IoT, Clinical Data Integration, and Digital Platforms differ between Korea and the UK over 2015 to 2025? (RQ2) How do these country- and time-specific adoption patterns compare with policy friction index (PFI)-based expectations, and what do the resulting over- and under-adoption gaps reveal about policy–practice alignment in each setting? (RQ3) Which domains therefore constitute priority targets for policy and operational action? By answering these questions with transparent, reproducible rules and country-comparable denominators, we provide a decision-oriented baseline for managers, sponsors, and regulators seeking to accelerate responsible, interoperable, and scalable smart-trial deployment.

This study makes four contributions. First, it provides a transparent two-country baseline (2015–2025) for smart-trial adoption across AI, wearable/IoT, clinical data integration, and digital platforms, using multi-label classification with country-comparable denominators. Second, it introduces a reproducible policy–practice alignment framework that links a legally grounded PFI to an implementation-gap readout, allowing direct comparison between what policy permits and what trials actually use. Third, it delivers traceable evidence by documenting the coding rules, barrier mapping, and trial-level excerpts in the Supplementary material, enabling audit from code to text. Fourth, it distills actionable implications for Korea and the UK and outlines a bilateral roadmap to reduce high-friction under-adoption while scaling validated practices.

By quantifying where policy permissiveness translates into routine use, and where it does not, this work offers a decision-oriented lens for regulators. This lens helps regulators target guidance and oversight where friction is binding, enables sponsors and contract research organizations (CROs) to prioritize investments that convert verified capabilities into scalable workflows, and supports health-system leaders in aligning incentives, training, and interoperability with measurable gains. The two-country baseline enables benchmarking across programs and time, while the PFI–gap framework provides a reusable scoreboard that can be refreshed as guidance evolves or pilots mature. Together, these elements are intended to shorten the distance between allowance on paper and auditable, patient-safe implementation in smart clinical trials.

Methods

This study is a retrospective, cross-sectional observational analysis of registered clinical trials, designed to examine cross-country patterns of digital technology adoption and their alignment with national regulatory environments, and is reported in accordance with the STROBE guideline for observational studies.

Classification of smart-trial technologies

Using the ClinicalTrials.gov advanced search, we built a country-filtered cohort and then restricted to trials with Start Date 2015–2025, yielding n = 1172 records (UK 725, Korea 447; Supplementary S3–S4). Each trial was multi-labeled across four predefined domains: AI (trial design & analysis automation), Wearable/IoT, Clinical Data Integration, and Digital Platforms, with a two-stage pipeline that combined a rule-based dictionary and a sentence-level verifier. Domain dictionaries (Supplementary S5) covered synonyms and implementation terms and were applied to prioritized fields (Interventions → Outcome Measures → Brief Summary → other free text). We required exact or near-exact matches for pivotal constructs [e.g., eConsent, remote source data verification/source data review (SDV/SDR), telehealth/virtual/telephone, wearable/sensor/actigraphy, EHR–EDC/OMOP/CDISC/FHIR/electronic case report form (eCRF)] and excluded generic non-functional terms (e.g., “digital,” “online”) to reduce false positives. Under this rule-based approach, explicit domain keyword matches were identified in 25.4% of trials (298/1172), while the remaining 74.6% (874/1172) did not contain explicit matches (Supplementary S3). To ensure complete domain coverage for downstream analyses, trials without explicit matches were assigned to the clinical data integration domain via a predefined fallback rule, reflecting the assumption that routine clinical trials necessarily rely on core data capture, management, and integration infrastructures. Sensitivity analyses were conducted to assess the robustness of the classification assumptions. Excluding trials classified via the fallback rule and applying a narrower AI definition that removed generic terms (e.g., “algorithm,” “algorithmic,” “predictive analytics,” and “predictive modeling”) yielded directionally consistent country-level adoption patterns across all four domains (Supplementary S6).
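The dictionary-plus-fallback logic described above can be sketched in a few lines. The mini-dictionaries below are illustrative stand-ins, not the actual codebook (which is in Supplementary S5), and the fallback mirrors the predefined rule assigning unmatched trials to clinical data integration:

```python
import re

# Hypothetical mini-dictionaries standing in for the Supplementary S5 codebook.
DOMAIN_PATTERNS = {
    "AI": [r"\bmachine learning\b", r"\bartificial intelligence\b", r"\bdeep learning\b"],
    "Wearable/IoT": [r"\bwearable\b", r"\bactigraphy\b", r"\bsensor\b"],
    "Clinical Data Integration": [r"\bEHR\b", r"\bOMOP\b", r"\bCDISC\b", r"\bFHIR\b", r"\beCRF\b"],
    "Digital Platforms": [r"\beConsent\b", r"\btelehealth\b", r"\bremote (SDV|monitoring)\b"],
}

def label_trial(text: str) -> set[str]:
    """Multi-label a trial record; labels are non-exclusive. When no explicit
    keyword matches, fall back to Clinical Data Integration, reflecting the
    assumption that routine trials rely on core data infrastructure."""
    labels = {domain for domain, patterns in DOMAIN_PATTERNS.items()
              if any(re.search(p, text, re.IGNORECASE) for p in patterns)}
    if not labels:  # predefined fallback rule
        labels = {"Clinical Data Integration"}
    return labels
```

For example, a record mentioning telehealth and eConsent receives only the Digital Platforms label, while a record with no explicit keywords falls back to Clinical Data Integration.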

Adoption trend analysis

We analyzed year-by-year adoption of smart-trial technologies in the final cohort restricted to Start Date 2015–2025 (UK 725 trials; Korea 447 trials; Supplementary S3–S4). Each trial could carry multiple domain labels (AI, wearable/IoT, clinical data integration, and digital platforms), as defined in the label codebook (Supplementary S5). For each country–year, we summarized adoption by reporting, side-by-side, the number of labeled trials and the share (%) relative to all trials that started in the same country and year; because labels are non-exclusive, domain shares may sum to more than 100% within a country–year, and this is disclosed in figure/table captions. For readability in the main text, we highlight anchor years (2015, 2020, and 2025), while the full annual series for 2015–2025 and the corresponding line plots are provided in Supplementary S8. Country comparisons used the same denominator definition, and years with very small denominators were retained but flagged to avoid over-interpretation.
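As a minimal sketch, the country–year share computation reduces to counting denominators and non-exclusive label counts; the record layout (`country`, `year`, `labels` keys) is a hypothetical simplification of the actual cohort tables:

```python
from collections import Counter

def yearly_shares(trials: list[dict]) -> dict:
    """Per (country, year, domain): adoption share (%) relative to all trials
    started in that country and year. Labels are non-exclusive, so domain
    shares within one country-year may sum to more than 100%."""
    denom = Counter((t["country"], t["year"]) for t in trials)
    num = Counter((t["country"], t["year"], d) for t in trials for d in t["labels"])
    return {key: 100.0 * n / denom[(key[0], key[1])] for key, n in num.items()}
```

With two hypothetical 2020 Korean trials, one labeled {Data Integration, AI} and one labeled {Data Integration}, the shares would be 100% for Data Integration and 50% for AI in that country–year cell.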

Policy–regulatory mapping and policy friction index

We operationalized policy permissiveness using seven decentralized-trial barriers, including eConsent, telehealth, investigational medicinal product direct-to-patient (DTP) supply, wearable endpoints, remote monitoring (SDV/SDR), interoperability (EHR–EDC/e-systems), and cross-border data transfer, assessed separately for Korea and the UK from current guidance and statute. Acceptance was coded with four descriptive categories (permitted, conditional, restricted, not permitted), mapped to an ordinal 0–3 scale; the legal bases and amendment years are detailed in Supplementary S10–2. Barrier scores were then translated to the four technology domains (AI, wearable/IoT, clinical data integration, digital platforms) via a pre-specified mapping with weights (primary = 1.0; secondary = 0.5; special minor links = 0.25) that reflect whether a barrier directly governs platform operations (e.g., eConsent/remote monitoring) or data governance (e.g., interoperability/cross-border) (Supplementary S10–1). The PFI for each domain–country pair was computed as the weighted aggregation of its mapped barrier scores (results in S10–3), with bottleneck (max) and equal-weight variants reported for sensitivity in Supplementary S12 Table A. PFI serves as the input to the alignment analysis in Sections 2–4.
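The weighted aggregation can be sketched as follows. The Korean barrier scores are taken from Table 3, but the barrier-to-domain mapping shown here for the Digital Platforms domain is a hypothetical simplification; the full mapping and weights are specified in Supplementary S10–1:

```python
# Korea's barrier scores (0-3) as reported in Table 3.
KOREA_BARRIERS = {
    "eConsent": 3, "Telehealth": 2, "DTP": 3, "WearableEndpoints": 2,
    "RemoteMonitoring": 2, "Interoperability": 2, "CrossBorder": 2,
}

# Hypothetical mapping for one domain (primary = 1.0, minor link = 0.25);
# the actual per-domain mapping is defined in Supplementary S10-1.
DIGITAL_PLATFORM_WEIGHTS = {
    "eConsent": 1.0, "Telehealth": 1.0, "RemoteMonitoring": 1.0, "CrossBorder": 0.25,
}

def pfi(scores: dict, weights: dict) -> float:
    """Primary aggregation: weighted mean of the mapped barrier scores."""
    return sum(scores[b] * w for b, w in weights.items()) / sum(weights.values())

def pfi_bottleneck(scores: dict, weights: dict) -> float:
    """Bottleneck (max) variant used in the sensitivity analysis."""
    return max(scores[b] for b in weights)
```

Under these illustrative weights, Korea's Digital Platforms PFI would be (3 + 2 + 2 + 0.5) / 3.25 ≈ 2.31 by the weighted mean, or 3 under the bottleneck variant.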

Implementation gap analysis

To assess alignment between policy permissiveness and real-world uptake, we contrasted each domain–country pair's observed adoption (multi-label share defined in Section 2–2) with a simple expected-adoption envelope derived from its PFI (Section 2–3). The baseline expectation scaled inversely with friction (PFI 0 → high expected uptake, PFI 3 → low expected uptake), and the implementation gap was defined as the difference between observed and expected adoption (positive = over-adoption; negative = under-adoption). For transparency, the main text reports the country–domain gaps, while full gap tables are provided in Supplementary S11. Robustness was examined by (i) alternative PFI aggregations (weighted mean vs bottleneck and equal-weight) and (ii) alternative expectation slopes (±5 around the baseline), with conclusions unchanged (Supplementary S12). Figure 3 displays the core relationship between friction and adoption, with bubble size proportional to the number of trials.

Results

Overall distribution of smart-trial technologies

In the final cohort (UK 725 trials; Korea 447 trials), the cross-sectional snapshot in Table 1 shows that Clinical Data Integration overwhelmingly dominates technology use in both countries (Korea 386/447, 86.4%; UK 567/725, 78.2%; total 953/1172, 81.3%). By contrast, AI remains modest (Korea 43, 9.6%; UK 86, 11.9%; total 129, 11.0%), Wearable & IoT is scarce (Korea 11, 2.5%; UK 51, 7.0%; total 62, 5.3%), and Digital Platforms account for a small minority (Korea 26, 5.8%; UK 63, 8.7%; total 89, 7.6%). These contrasts are visualized in Figure 1, a two-color bar chart (Korea = red; UK = blue) with percentage labels above each bar. All values report both counts (n) and shares (%) using country-specific denominators (Korea 447; UK 725). Because trials can carry multiple domain labels, domain shares within a country can sum to more than 100%; this multi-label property is noted in the table footnote and figure caption. Overall, the snapshot indicates a broadly similar ordering across countries, Data Integration ≫ AI ≈ Digital Platforms > Wearable & IoT, with the UK exhibiting slightly higher relative uptake in the three non-integration domains.

Table 1.

Cross-sectional adoption of smart clinical-trial technologies by country.

Technology KOR_n KOR_% UK_n UK_% Total_n Total_%
AI 43 9.6 86 11.9 129 11.0
Wearable & IoT 11 2.5 51 7.0 62 5.3
Data Integration 386 86.4 567 78.2 953 81.3
Digital Platforms 26 5.8 63 8.7 89 7.6

Caption: Numbers are n (%) using country-specific denominators (Korea n = 447, UK n = 725) in the Start-Date 2015–2025 cohort; trials can carry multiple domain labels (AI, Wearable/IoT, Clinical Data Integration, Digital Platforms), so within-country percentages may sum to >100%. Source tables and labeling codebook are provided in Supplementary S3–S5 and the year-by-year aggregates in S8.

Figure 1.

Figure 1.

Adoption of smart clinical-trial technologies in Korea and the UK. Caption: Bars show adoption rate (%) by domain using country-specific denominators (Korea n = 447, UK n = 725). Colors denote countries (red = Korea; blue = UK), and percentage values are annotated above each bar. Because trials can carry multiple domain labels (AI, Wearable/IoT, Clinical Data Integration, Digital Platforms), within-country percentages may sum to >100%. Source and labeling details are provided in Supplementary S3–S5, with year-by-year series in S8.

Year-by-year adoption trends

Across 2015–2025, Clinical Data Integration remains the dominant technology in both countries (Figure 2), and the anchor-year table confirms a consistently high share: 2015 Korea 43/50 (86.0%) vs UK 55/75 (73.3%); 2020 Korea 37/44 (84.1%) vs UK 52/67 (77.6%); 2025 Korea 29/31 (93.5%) vs UK 46/55 (83.6%) (Table 2). Outside data integration, the UK shows modestly higher diversification: AI is persistently more common in the UK (2015: 20.0% vs 10.0%; 2025: 12.7% vs 3.2%), Wearable & IoT remains marginal but higher in the UK (2015: 2.7% vs 2.0%; 2025: 3.6% vs 3.2%), and Digital Platforms exhibit a transient rise in the UK during the early–mid 2020s before falling in 2025 (10.4% in 2020 to 1.8% in 2025), while Korea stays low throughout (6.8% in 2020; 0.0% in 2025). All percentages use country–year denominators and reflect multi-label classification, so within-country shares may exceed 100%; late-period fluctuations (e.g., 2024–2025) should be interpreted cautiously given small denominators.

Figure 2.

Figure 2.

Year-by-year adoption of smart clinical-trial technologies in Korea and the UK (2015–2025). Caption: Solid lines denote the UK, and dashed lines denote Korea; colors indicate domains (red = AI; orange = Wearable & IoT; green = Clinical Data Integration; blue = Digital Platforms). The y-axis reports adoption rate (%) computed with country–year denominators. Because trials can carry multiple domain labels, domain shares within a country–year may sum to >100%. Small-denominator years (late period) are shown but interpreted cautiously. Full year-by-year tables are provided in Supplementary S8.

Table 2.

Anchor-year adoption of smart clinical-trial technologies by country (2015, 2020, 2025).

Technology 2015 KOR n (%) 2015 UK n (%) 2020 KOR n (%) 2020 UK n (%) 2025 KOR n (%) 2025 UK n (%)
AI 5 (10.0%) 15 (20.0%) 3 (6.8%) 5 (7.5%) 1 (3.2%) 7 (12.7%)
Wearable & IoT 1 (2.0%) 2 (2.7%) 1 (2.3%) 6 (9.0%) 1 (3.2%) 2 (3.6%)
Data Integration 43 (86.0%) 55 (73.3%) 37 (84.1%) 52 (77.6%) 29 (93.5%) 46 (83.6%)
Digital Platforms 1 (2.0%) 4 (5.3%) 3 (6.8%) 7 (10.4%) 0 (0.0%) 1 (1.8%)

Caption: Values are n (%) using country–year denominators (i.e., the number of trials that started in the given country and year). Trials may carry multiple domain labels (AI, Wearable/IoT, Clinical Data Integration, Digital Platforms), so within-country percentages can sum to >100%. Full year-by-year series and plotting data are provided in Supplementary S8; cohort definition and labeling codebook are in Supplementary S3–S5.

Policy–regulatory mapping

Table 3 shows a clear cross-country contrast: the UK is generally permissive (scores 0–1), whereas Korea is typically restrictive (2–3). The largest gaps occur in eConsent (UK 0 vs Korea 3), investigational medicinal product (IMP) direct-to-patient (DTP) supply (UK 1 vs Korea 3), remote monitoring (SDV/SDR) (UK 0 vs Korea 2), and interoperability of electronic systems (EHR–EDC) (UK 0 vs Korea 2), indicating tighter constraints in Korea across platform operations, logistics, and data governance. The remaining barriers follow the same pattern: telehealth is permitted in the UK but restricted in Korea (0 vs 2), wearable endpoints are accepted in the UK but not explicitly recognized in Korea (0 vs 2), and cross-border data transfer is conditional in both countries (UK 1, Korea 2 with partial easing). Collectively, the barrier profile implies that UK trials can deploy decentralized methods with relatively low friction, while Korean trials face binding limitations, particularly for platform processes (eConsent/telehealth/remote SDV), IMP logistics (DTP), and data linkage/sharing (interoperability and cross-border transfer), setting the stage for the alignment analysis in Sections 3–4.

Table 3.

PFI by regulatory barrier and country (UK vs Korea).

Regulatory barrier UK score (0–3) Korea score (0–3) Legal/Regulatory basis (Year) Notes
eConsent 0 (Permitted) 3 (Not permitted) UK: HRA Decentralised Trial Methods Position Statement (2023).
KR: Medical Service Act, Art. 33 (Amended 2023).
UK fully permits electronic consent with identity verification and data integrity safeguards. In Korea, remote eConsent conflicts with legal restrictions that medical practice must occur within licensed institutions.
Telehealth 0 (Permitted) 2 (Restricted) UK: HRA DCT Position Statement (2023).
KR: Medical Service Act, Art. 34 (2023).
UK allows remote consultation and follow-up in decentralized trials. Korea only permits telemedicine for limited follow-up cases (e.g., within six months), restricting broader use in clinical trials.
IMP Direct-to-Patient (DTP) Supply 1 (Conditional) 3 (Not permitted) UK: MHRA COVID-19 Guidance on IMP Delivery (2020).
KR: Pharmaceutical Affairs Act, Art. 50 (2023).
UK permits home delivery of investigational products under risk assessment and SOP approval. In Korea, delivery outside pharmacies or hospitals, including clinical trial IMPs, is legally prohibited.
Wearable Endpoints 0 (Permitted) 2 (Restricted) UK: EMA Qualification Opinion on SV95C Digital Endpoint (2019); HRA/MHRA guidance (2023).
KR: MFDS GCP Guidance (no explicit wearable endpoint recognition, as of 2024).
The UK recognizes validated digital biomarkers from wearables. Korea lacks MFDS guidance, preventing wearables from being accepted as primary trial endpoints.
Remote Monitoring (SDV/SDR) 0 (Permitted) 2 (Restricted) UK: HRA remote monitoring guidance (2023).
KR: MFDS KGCP (2017).
UK allows remote source data verification with secure governance. Korea requires on-site monitoring, with no legal basis for remote EMR access.
Interoperability (EHR–EDC Integration) 0 (Permitted) 2 (Restricted) UK: EMA Guideline on Computerised Systems and Electronic Data in Clinical Trials (2023).
KR: Personal Information Protection Act (Amended 2023); Bioethics and Safety Act.
The UK framework supports CDISC/OMOP standards and integrated data platforms. Korea restricts data linkage under PIPA, with limited EMR–EDC standardization.
Cross-Border Data Transfer 1 (Conditional) 2 (Strict, partially eased) UK: UK GDPR and Data Adequacy Decision with Korea (2022).
KR: PIPA, Art. 39–12 (Amended 2023).
The UK permits data transfer under adequacy or safeguards. Korea requires explicit consent and oversight, with partial easing after the 2023 PIPA amendment, but remains highly restrictive.

Caption: PFI by regulatory barrier in the UK and Korea. Scores were assigned on a 0–3 scale (0 = fully permitted, 1 = conditional/limited, 2 = restricted, 3 = not permitted). Legal and regulatory bases are cited with the most recent amendment year. Notes summarize the practical interpretation for decentralized clinical trial (DCT) operations.

Footnote: UK regulatory references include HRA and MHRA guidance, EMA qualification opinions, and UK GDPR. Korean references include the Medical Service Act, Pharmaceutical Affairs Act, MFDS Good Clinical Practice, and the Personal Information Protection Act (PIPA). IMP delivery restrictions in Korea apply equally to investigational products used in clinical trials.

Alignment between policy friction and observed adoption

Table 4 juxtaposes PFI-based expected adoption with the observed share for each domain–country pair, revealing a consistent pattern of under-adoption in the UK, notably in AI, Wearable & IoT, and Digital Platforms, and a mixed profile in Korea, where Clinical Data Integration shows over-adoption despite elevated friction while the other domains remain under-adopted. Figure 3 visualizes the relationship by plotting observed adoption (%) against policy friction (0–3) with the expectation line (E = 100 − 30 × score) and bubble size proportional to domain-specific trial counts; points below (above) the line indicate under- (over-) adoption. PFI values and barrier rationales are not repeated in the main text; they are documented in Table 3 and Supplementary S10 (mapping, scoring, and domain PFI tables), ensuring reproducibility while keeping the Results concise.

Table 4.

Expected (PFI-based) versus observed adoption by domain and country, and the resulting implementation gap.

Country Domain Expected adoption E (%) Observed adoption O (%) Gap (O−E, %p)
UK AI 92.5 11.9 −80.6
UK Wearable & IoT 100.0 7.0 −93.0
UK Data Integration 90.1 78.2 −11.9
UK Digital Platforms 90.1 8.7 −81.4
Korea AI 40.0 9.6 −30.4
Korea Wearable & IoT 40.0 2.5 −37.5
Korea Data Integration 33.4 86.4 53.0
Korea Digital Platforms 26.8 5.8 −21.0

Caption: Expected (PFI-based) versus observed adoption by domain and country; Gap = observed − expected (positive = over-adoption; negative = under-adoption). Complete PFI scores and notes are provided in Supplementary S11.

Figure 3.

Figure 3.

Policy friction versus observed adoption by domain (Korea vs UK). Caption: Each point represents one domain–country pair; the x-axis is policy friction (0–3), the y-axis is observed adoption (%), and bubble size is proportional to the number of trials in that domain and country. The dashed line shows the PFI-based expectation (E = 100 − 30 × score); points below (above) the line indicate under- (over-) adoption. Colors/markers distinguish countries (Korea vs UK). Full barrier scores, mapping, and domain-level PFI tables are provided in Table 3 and Supplementary S10.

Discussion

Overview of key findings and policy–practice alignment patterns

This study provides an integrated, cross-country view of smart clinical-trial technologies by showing that Clinical Data Integration consistently dominates in both Korea and the UK, while the UK exhibits broadly low uptake despite low policy friction and Korea displays a high-friction yet high-uptake pattern specifically in data integration alongside under-adoption in the other domains. Leveraging a multi-label classification pipeline and a legally grounded policy–regulatory mapping, we quantified alignment using a PFI-based expectation and an implementation gap that directly contrasts what policy permissiveness would allow with what trials actually use (Tables 1–4; Figures 1–3; Supplementary S3–S5, S8, S10–S12). The results are robust to alternative scoring and expectation choices and traceable to trial-level evidence (Supplementary S12–S13). The remainder of the discussion situates these findings against prior work, examines plausible mechanisms behind low-friction/low-adoption and high-friction/high-adoption patterns, and develops policy and practice implications for accelerating responsible, interoperable, and scalable smart clinical-trial adoption in both settings.

Low policy friction and persistently low adoption in the UK

In the UK, our finding of low adoption despite low policy friction aligns with prior evidence that permissive policy alone does not guarantee diffusion of digital/AI trial methods; operational and evidentiary hurdles, including system validation, auditability, data-integrity controls, and robust reporting, can slow translation from allowance to routine use, particularly for AI, wearables, and digital platforms.26–31 In clinical-AI trials, adherence to CONSORT-AI/SPIRIT-AI increases transparency and quality but raises the bar for documentation, bias mitigation, and reproducibility, which can temper near-term uptake even in supportive environments.26,27 Broader digital-trial reviews similarly emphasize that workflow fit, verification of digital endpoints, and sustained quality management are preconditions for scale, not afterthoughts.28–30 Finally, data-protection and cross-site governance (e.g., data protection impact assessment (DPIA) / contractual safeguards under the General Data Protection Regulation (GDPR) regimes) add real procedural friction to decentralized, multi-site deployments, further explaining a low-friction/low-adoption pattern in practice. 31

Domain-divergent adoption under high policy friction in Korea

In Korea, the pattern diverges by domain: Clinical Data Integration shows high adoption despite high policy friction, consistent with the reality that intra-institutional EHR warehousing and EHR–EDC linkage can progress under local institutional review board (IRB) and hospital governance, even when decentralized elements are legally or operationally constrained; by contrast, AI, Wearables, and Digital Platforms remain under-adopted where scale requires validated digital endpoints, remote source verification, eConsent, and cross-site data sharing.32–36 Prior work on real-world data / real-world evidence (RWD/RWE) and EHR-based analytics explains why on-site data integration can advance rapidly (clinical data pipelines are already embedded in routine care), yet also underscores that moving from internal data marts to multi-site, decentralized workflows demands additional evidence and governance (data integrity, audit trails, endpoint validation, and cross-border safeguards) that slow diffusion.32–34 Similarly, studies on remote monitoring and federated learning highlight both the technical feasibility and the persistent requirements for privacy-preserving architectures and contractual protections, which help explain why decentralized components lag despite institutional progress in data integration.35–38

Common rate-limiting mechanisms shaping adoption across settings

Across settings, the divergence we observe can be parsimoniously explained by four rate-limiting mechanisms that act independently of formal permissiveness. As a note on scope, "AI" is used in this study as a pragmatic, inclusive category encompassing classical rule-based approaches, machine learning methods, and modern neural or deep learning techniques, reflecting how such methods are typically described in clinical trial registries rather than a strict taxonomic distinction.39–41 Because registry text rarely specifies algorithmic detail, our classification captures the presence of explicitly stated AI-related functionality rather than the technical depth or model architecture.41–43 To assess sensitivity to this definitional choice, we additionally applied a narrower AI definition that excluded generic terms and focused on more explicit references; the results showed consistent country-level patterns.41,43,44 This approach acknowledges the conceptual complexity of defining AI in clinical research while prioritizing transparency and robustness in a large-scale, registry-based analysis.42–45 Although this study does not make causal claims, the observed associations may reflect the combined influence of several underlying factors that plausibly shape technology adoption decisions in practice. First, operational and quality-management hurdles (computerized system validation, role-based access, audit trails, and sustained data-integrity controls) govern whether an allowed tool becomes a routinized workflow. Second, evidentiary and reporting requirements, including qualification of digital endpoints and adherence to rigorous trial-reporting and bias-mitigation standards for AI and app-based interventions, raise the bar for deployment and scale.
Third, data-governance and interoperability preconditions (privacy impact assessments, contractual safeguards for cross-site/cross-border exchange, and alignment to common data models and eCRF/e-systems) determine whether projects can move from intra-institutional use to multi-site, decentralized execution. Fourth, implementation economics and workflow fit (local staffing, training, IT uplift, and reimbursement/incentive structures) shape the near-term return on adoption. Importantly, the apparent dominance of Clinical Data Integration should be interpreted in light of the study's conservative fallback classification: when explicit descriptions of digital technologies were absent, data integration served as a baseline category reflecting the pervasive, infrastructural role of data capture and management systems in routine clinical research, rather than uniform technological superiority over AI- or wearable-based approaches. Taken together, these four axes reconcile the low-friction/low-adoption pattern (permissive on paper, but constrained by verification and governance overheads) with the high-friction/high-adoption pattern in data integration (rapid progress within institutional data pipelines, yet slower diffusion for decentralized, cross-site components), and they motivate the policy and operational remedies detailed in the next section.
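To make the conservative fallback classification concrete, the sketch below shows one minimal way such a rule-based, multi-label labeler could work. The term dictionary here is invented and abbreviated for illustration only (the study's actual label dictionary is in Supplementary S5), and the function names are hypothetical.

```python
import re

# Hypothetical, abbreviated term dictionary for illustration only;
# the study's full label dictionary is given in Supplementary S5.
DOMAIN_TERMS = {
    "AI": [r"\bartificial intelligence\b", r"\bmachine learning\b", r"\bdeep learning\b"],
    "Wearable/IoT": [r"\bwearable", r"\bactigraph", r"\bsensor\b"],
    "Digital Platform": [r"\bmobile app\b", r"\btelehealth\b", r"\beconsent\b"],
    "Clinical Data Integration": [r"\behr\b", r"\belectronic health record", r"\bedc\b"],
}

def label_trial(text: str) -> list[str]:
    """Assign one or more technology labels to a registry record.

    Conservative fallback: if no explicit technology terms are found,
    the trial is placed in the baseline Clinical Data Integration
    category, mirroring the study's fallback classification."""
    text = text.lower()
    labels = [domain for domain, patterns in DOMAIN_TERMS.items()
              if any(re.search(p, text) for p in patterns)]
    return labels or ["Clinical Data Integration"]

print(label_trial("A machine learning model using wearable sensor data"))
# ['AI', 'Wearable/IoT']
print(label_trial("Standard-of-care comparison with routine follow-up"))
# ['Clinical Data Integration']
```

Note that the fallback in the last line of `label_trial` is exactly what makes the Clinical Data Integration category a floor rather than a claim of technological dominance: records with no explicit technology language still receive the baseline label.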

Targeted policy and operational recommendations for Korea

In Korea, to narrow the negative gaps observed in AI, wearable/IoT, and digital platforms while sustaining the comparative momentum in Data Integration, we recommend a four-step, KPI-linked program: (i) introduce near-term administrative guidance and pilot protocols for eConsent and remote SDV/SDR (IRB pathways, role-based access, audit trails, data-integrity checks), leveraging the ministry's stepwise DCT roadmap, which already calls for element-wise pilots and remote-monitoring systems without immediate statutory change; track adoption (sites using eConsent; proportion of monitoring visits completed remotely) and review annually. (ii) Launch a risk-based DTP (home-delivery) pilot under SOPs for chain-of-custody, temperature control, identity/handover, and deviation reporting, initially limited to IRB-only studies permissible under current rules, to generate local safety/feasibility evidence for subsequent legal modernization; key performance indicators (KPIs) include pilot completion rate, deviation rate, and approval lead-times. (iii) Institutionalize interoperability via staged CDISC [Clinical Data Acquisition Standards Harmonization (CDASH) / Study Data Tabulation Model (SDTM) / Analysis Data Model (ADaM)] plus OMOP adoption with pre-submission validation and Define-XML, building on ongoing national standardization deliverables; measure progress by the number of institutions producing SDTM/ADaM datasets with zero critical validation errors and by the share of trials reusing standard CRFs/datasets. (iv) Operationalize cross-border data governance using model DPIA/contract clauses and a federated-analysis option where transfers remain difficult, expanding templates and training through the national program office; evaluate by DPIA turnaround time and the number of multi-site analyses completed per year.
Together, these steps reduce friction exactly where our gap analysis finds under-adoption, and convert Korea's intra-institutional data strengths into decentralized, multi-site capability under auditable, regulator-aligned standards.

Implementation-oriented acceleration strategies for the UK

In the UK, closing the low-friction/low-adoption gap calls for implementation-science–oriented acceleration rather than policy relaxation: (i) maintain core quality requirements (CONSORT-AI/SPIRIT-AI adherence, digital-endpoint qualification, and computerized-system validation) but lower transaction costs by providing standard SOP packs, validation templates, and pre-submission checks to sites; (ii) improve workflow and economics through modular training (for monitors, principal investigators (PIs), and data teams), sandbox environments, and targeted incentives/cost offsets (e.g., credit for remote-monitoring visits, bonuses for using standardized case report forms (CRFs), Define-XML, and OMOP–CDISC mappings); (iii) institutionalize diffusion via reference sites and a shared resource hub that curates reusable forms, mappings, and checklists, enabling peer-to-peer spread; and (iv) streamline multi-site governance with model DPIA/contract clauses and a clear federated-analysis option where data transfer is impractical. Progress should be tracked with KPIs (domain-specific adoption rates, validation/approval lead-times, proportion of monitoring performed remotely, reuse rate of standard resources, and pilot-to-routine conversion), reviewed quarterly or annually to sustain momentum.

A bilateral roadmap for scaling smart clinical trials

We propose a three-phase roadmap that directly targets the gaps revealed by our PFI–adoption analysis while leveraging complementary strengths. Short term (0–12 months): establish an agency-to-agency governance frame [e.g., Ministry of Health and Welfare (MoHW)–Department of Health and Social Care (DHSC), with Korea Health Industry Development Institute (KHIDI) / Korea National Enterprise for Clinical Trials (KoNECT)–National Institute for Health and Care Research (NIHR) as program offices] and a shared standards hub curating reusable assets (standard CRFs/Define-XML, CDISC–OMOP mappings, eConsent/remote-SDV checklists, DPIA/contract templates), and launch reference-site exchanges in which UK sites demonstrate mature eConsent/remote-monitoring practice while Korean sites conduct sandbox or IRB-only procedural tests within current rules to co-develop SOPs and pre-submission checklists. Medium term (1–2 years): run joint pilots in four priority tracks (remote SDV, risk-based DTP logistics, digital-endpoint qualification, and federated analyses for cross-site studies), using common templates, harmonized governance packages, and coordinated training for monitors, PIs, and data teams. Long term (2–5 years): progress to mutual operational recognition (convergent SOPs/validation evidence, streamlined ethics/data-governance reviews), joint funding calls that scale successful pilots, and a reference→network→scale diffusion mechanism across National Health Service (NHS) trusts and Korean tertiary hospitals. To ensure accountability, both countries should track the same KPIs (domain-specific adoption rates, validation/approval lead-times, proportion of monitoring performed remotely, reuse rate of standard resources, and pilot-to-routine conversion), reviewed on a quarterly/annual cycle.

Robustness, methodological contributions, and policy relevance

Our findings are robust to alternative analytical choices: the domain–country patterns and the sign and magnitude of the gaps remain unchanged when we vary the PFI aggregation (weighted mean vs bottleneck vs equal-weight) and the expectation slope (±5 around the baseline), as detailed in Supplementary S12 (Tables A–B). They are also traceable from code to text: label assignments and barrier judgments can be verified at the sentence level in trial records via the Case Notes compendium (Supplementary S13). Methodologically, the study offers three strengths: (i) a large, public ClinicalTrials.gov cohort enabling a transparent two-country comparison; (ii) a quantitative policy–practice alignment framework that couples multi-label technology classification with a PFI and an implementation-gap readout; and (iii) a barrier mapping anchored in current standards and law [e.g., European Medicines Agency (EMA) computerized-systems/endpoint guidance; Health Research Authority (HRA) / Medicines and Healthcare products Regulatory Agency (MHRA) positions; national statutes for medical services, pharmaceutical supply, and data protection], which supports reproducibility and policy relevance. The next section outlines limitations and avenues to further reinforce these conclusions.
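The aggregation variants underlying the robustness check can be expressed compactly. The sketch below is illustrative only: the barrier scores, weights, and the linear expectation model are invented for demonstration, whereas the study's actual category scores and slope settings appear in Table 3, S10, and S12.

```python
# Illustrative robustness sketch: barrier scores, weights, and the linear
# expectation model below are invented; the study's actual category scores
# and slope settings appear in Table 3, S10, and S12.
def pfi(scores, weights=None, mode="weighted_mean"):
    """Aggregate per-category friction scores into a single PFI.
    mode: 'weighted_mean' (baseline), 'equal_weight', or 'bottleneck'
    (the single most restrictive category dominates)."""
    if mode == "bottleneck":
        return max(scores)
    if mode == "equal_weight" or weights is None:
        return sum(scores) / len(scores)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def expected_adoption(pfi_value, slope=-5.0, intercept=60.0, delta=0.0):
    """Linear expectation of adoption (%) given friction; delta perturbs
    the slope (e.g., +/-5) for the sensitivity analysis."""
    return intercept + (slope + delta) * pfi_value

scores = [3, 2, 4, 1, 3, 2, 4]           # seven hypothetical barrier categories
print(pfi(scores, mode="equal_weight"))   # ~2.71
print(pfi(scores, mode="bottleneck"))     # 4
print(expected_adoption(pfi(scores, mode="equal_weight")))
```

Re-running the gap computation (observed minus expected adoption) across the three `mode` settings and `delta` values of ±5 is, in essence, the sensitivity grid reported in Supplementary S12.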

Limitations and directions for future research

This study has several limitations. First, it relies on a single public registry (ClinicalTrials.gov), which may be affected by delayed registration/updates, heterogeneous field completion, and English-language abstraction, potentially introducing under- or misclassification despite our cleaning steps; we support traceability by providing trial-level excerpts in Supplementary S13. This reliance may also under-represent domestically registered or academically led trials, particularly in Korea and the UK, where national registries are commonly used. However, applying a uniform data source and inclusion criteria across both countries preserves the internal consistency of cross-country comparisons and supports the validity of the observed relative patterns. Second, the labeling pipeline (rule-based dictionary plus sentence-level verification) can still yield boundary-case errors, and the multi-label design means adoption rates reflect the presence of a capability, not its intensity, dose, or duration within a trial; denominator/numerator interpretations are documented in Methods 2–2 and Supplementary S8. Third, the PFI represents a static, country-level snapshot that cannot capture intra-country heterogeneity across institutions, indications, or sponsors, and it embeds researcher choices (barrier mapping and weights); we report coding sources in Table 3/S10 and show robustness to alternative aggregations and expectation slopes in S12. Fourth, our analysis is associational: it quantifies alignment between policy permissiveness and uptake but does not establish causality, and unmeasured factors (financing, IT workforce, workflow incentives, disease mix) may confound diffusion. Fifth, small denominators in late years and in low-frequency domains (e.g., Wearable/IoT) introduce volatility that warrants cautious interpretation.
Finally, generalizability is bounded by the 2015–2025 window and the Korea/UK context; replication in other jurisdictions and periods, and development of a finer-grained PFI at the institution/indication level, are important directions for future work.

Future research and extension of the PFI–adoption framework

A priority is to move beyond a static, country-level snapshot by developing a finer-grained and dynamic PFI, disaggregated at the institution, indication, and sponsor levels and updated over time to reflect legal/guidance changes; designs such as difference-in-differences or interrupted time-series analysis can begin to separate policy effects from background trends. To strengthen external validity, the analysis should be replicated across registries [e.g., Clinical Trials Information System (CTIS) / International Standard Randomised Controlled Trial Number (ISRCTN) / University Hospital Medical Information Network (UMIN)] with harmonized cohort filters and denominator rules. Most importantly, future studies should link policy–adoption alignment to trial outcomes, including timelines (start-to-primary completion), costs, retention, data quality/integrity metrics, and regulatory approval lead-times, to quantify whether closing gaps translates into operational and scientific gains. These steps, coupled with transparent sharing of codebooks, mapping tables, and inference scripts, would turn the PFI–adoption framework into a practical monitoring tool for managers and regulators and a basis for prospective, multi-site pilots that test whether targeted policy levers improve both uptake and outcomes.
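The interrupted time-series design mentioned above reduces to a segmented regression. The sketch below uses simulated data (all values are invented for illustration) to show how an ordinary least-squares fit separates a pre-existing adoption trend from the level and slope changes at a policy break.

```python
import numpy as np

# Simulated example: annual adoption rates (%) with a guidance change after
# year 5. A segmented regression (interrupted time-series) separates the
# pre-existing trend from the level and slope changes at the policy break.
years = np.arange(10, dtype=float)
post = (years >= 5).astype(float)          # post-intervention indicator
noise = np.array([0.3, -0.2, 0.1, 0.0, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1])
adoption = 10 + 1.5 * years + 8 * post + noise  # true level jump of 8 points

# Design matrix: intercept, underlying trend, level change, slope change
X = np.column_stack([np.ones_like(years), years, post, post * (years - 5)])
beta, *_ = np.linalg.lstsq(X, adoption, rcond=None)
print(beta)  # estimates of [intercept, pre-trend, level jump, slope change]
```

With real registry data, the same design (with autocorrelation-robust standard errors) would estimate how much of a post-guidance adoption change exceeds the background trend, which is the comparison the proposed dynamic PFI would enable.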

Conclusion

This cross-country analysis shows that Clinical Data Integration dominates in both Korea and the UK, with contrasting policy–practice alignment patterns observed across domains in the two countries. Methodologically, we couple multi-label technology classification with a legally grounded PFI and an implementation-gap readout, yielding a transparent, reproducible lens on policy–practice alignment. Regarding RQ1, this study demonstrates that adoption patterns differ markedly between Korea and the UK across domains, with Clinical Data Integration most prevalent in both countries alongside persistently low uptake of AI, Wearable/IoT, and Digital Platforms under differing regulatory conditions. In response to RQ2, comparison with PFI-based expectations reveals systematic policy–practice misalignment, characterized by low friction yet low adoption in the UK and high friction yet selective uptake in Korea, particularly for data integration. Finally, addressing RQ3, the findings identify clear priority targets for action: eConsent, remote monitoring, DTP logistics, and interoperability in Korea, and implementation-focused acceleration measures in the UK to translate regulatory permissiveness into routine use. The policy message is practical and asymmetric. Korea should prioritize eConsent/remote-SDV guidance, risk-based DTP pilots, and staged interoperability and cross-border governance, converting strong intra-institutional data pipelines into decentralized, multi-site capability; the UK should focus on implementation-science acceleration (standard SOP packs, validation templates, pre-submission checks, modular training, incentives, and streamlined multi-site governance). Findings are robust to scoring and expectation choices and traceable to trial texts (Supplementary), though the registry source, static PFI, and associational design bound inference.
Future work should refine PFI to institution/indication levels, replicate across registries, and link alignment to trial outcomes (time, cost, retention, data integrity, and approval lead-times) to test whether narrowing policy–practice gaps delivers operational and scientific gains.

Highlights

  1. Policy permissiveness alone is insufficient to drive digital transformation in clinical trials; adoption depends on regulatory alignment, operational readiness, and implementation capacity.

  2. Quantifying regulatory friction through a PFI reveals contrasting adoption patterns: high friction yet selectively high uptake (notably data integration) in Korea, and low friction but low uptake in the United Kingdom.

  3. The study introduces a reproducible framework linking policy context and technology adoption, providing actionable guidance for accelerating safe, interoperable, and scalable smart clinical trial deployment.

Supplemental Material

sj-pdf-1-dhj-10.1177_20552076261430105 - Supplemental material for Bridging policy and practice in smart clinical trials: Quantifying regulatory friction and technology adoption in Korea and the UK

Supplemental material, sj-pdf-1-dhj-10.1177_20552076261430105 for Bridging policy and practice in smart clinical trials: Quantifying regulatory friction and technology adoption in Korea and the UK by Jae Eun Yang and Ah Rim Kim in DIGITAL HEALTH

Acknowledgments

The authors gratefully acknowledge the use of publicly available data from ClinicalTrials.gov and official regulatory documents from the UK and Korea that enabled this study. We also thank colleagues who contributed technical advice on data coding and policy mapping, and the research assistants who supported data management and supplementary material preparation.

Footnotes

Author contributions: JEY: conceptualization, methodology, data curation, formal analysis, visualization, writing – original draft, writing – review & editing. ARK: conceptualization, methodology, policy/regulatory mapping validation, supervision, resources, project administration, writing – review & editing.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Data availability statement: All data analyzed in this study were obtained from the public registry ClinicalTrials.gov via advanced search with country filters (Korea/UK) and a start-date window of 2015–2025. Trial-level identifiers (NCT numbers) are included throughout the paper and Supplementary material, enabling independent retrieval of the original records. The analytic cohort (n = 1172), cohort filters, variable schema, raw year-by-country counts, labeling codebook/rules, policy-mapping tables, PFI computations, implementation-gap summaries, sensitivity analyses, and case-note excerpts are provided in the Supplementary workbook (Sheets S1–S13): S1–S4 (cohort flow, schema, preprocessing logs, raw counts), S5 (label dictionary), S6 (sensitivity analysis), S8 (time-series adoption), S10 (barrier→domain mapping, country scores, domain-level PFI), S11 (PFI–gap), S12 (robustness), and S13 (trace-to-text case notes with field locations and URLs). No patient-level data were accessed; all materials necessary to reproduce the figures and tables are contained in the manuscript and Supplementary material.

Declaration of generative AI and AI-assisted technologies in the writing process: Portions of the text were refined using generative AI tools (ChatGPT, OpenAI) for language editing and formatting assistance only.

All conceptualization, data analysis, interpretation, and substantive writing were conducted by the authors.

The authors reviewed and approved all content generated, and they take full responsibility for the integrity and originality of the manuscript.

Supplemental material: Supplemental material for this article is available online.

References

  1. Dhar S. Decentralized clinical trials and opportunities with artificial intelligence. Proc Jpn Acad Innov 2024; 1: 10008.
  2. Alemayehu D, Hemmings R, Natarajan K, et al. Perspectives on virtual (remote) clinical trials as the "new normal" to accelerate drug development. Clin Pharmacol Ther 2021; 110: 59–72.
  3. Hu JR, Power JR, Zannad F, et al. Artificial intelligence and digital tools for design and execution of cardiovascular clinical trials. Eur Heart J 2024; 45: 5003–5018.
  4. Jean-Louis G, DeBaun M. The value of decentralized clinical trials: inclusion, innovation, and impact. Science 2024; 384: 40–45.
  5. Hanley DF Jr, Bernard GR, Wilkins CH, et al. Decentralized clinical trials in the trial innovation network: value, strategies, and lessons learned. J Clin Transl Sci 2023; 7: e170.
  6. Chen J, Li Y, Bretz F, et al. Decentralized clinical trials in the era of real-world evidence: statistical considerations for regulatory decision-making. Clin Transl Sci 2025; 18: e70117.
  7. Tackney MS, Pitt K, Smith N, et al. Digital endpoints in clinical trials: emerging themes from a multi-stakeholder knowledge-exchange event. Trials 2024; 25: 57.
  8. Rego S, Rajaraman S, Green E, et al. Methods for the clinical validation of digital endpoints: a systematic review. JMIR Res Protoc 2023; 12: e47119.
  9. Mazzochi AT, Morales JG, Navarro C, et al. Electronic informed consent in clinical trials: effects on enrollment, benefits, and challenges—a systematic review. Trials 2023; 24: 98.
  10. Adams A, Whitty P, Chamberlain-James L. Risk-based monitoring in clinical trials: 2021 update from the ACRO landscape survey. Ther Innov Regul Sci 2023; 57: 510–519.
  11. Dugas M, Neuhaus P, Naumann L, et al. Next-generation study databases require FAIR, EHR-integrated, and scalable EDC for medical documentation and decision support. NPJ Digit Med 2024; 7: 10.
  12. Park J, Huh W, Jung SY, et al. The landscape of decentralized clinical trials (DCTs): focusing on the FDA and EMA guidance. Transl Clin Pharmacol 2024; 32: 41–51.
  13. Bakker JP, Izmailova ES, Clément A, et al. Regulatory pathways for qualification and acceptance of digital health technology-derived clinical trial endpoints: considerations for sponsors. Clin Pharmacol Ther 2025; 117: 56–72.
  14. Sato T, Mizumoto S, Ota M, et al. Implementation status and consideration for the globalisation of decentralised clinical trials: a cross-sectional analysis of clinical trial databases. BMJ Open 2023; 13: e074334.
  15. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25: 44–56.
  16. Rajpurkar P, Chen E, Banerjee O, et al. AI in health and medicine. Nat Med 2022; 28: 31–38.
  17. Vamathevan J, Clark D, Czodrowski P, et al. Applications of machine learning in drug discovery and development. Nat Rev Drug Discov 2019; 18: 463–477.
  18. Esteva A, Robicquet A, Ramsundar B, et al. A guide to deep learning in healthcare. Nat Med 2019; 25: 24–29.
  19. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA 2018; 319: 1317–1318.
  20. Keesara S, Jonas A, Schulman K. COVID-19 and health care's digital revolution. N Engl J Med 2020; 382: 82.
  21. Wosik J, Fudim M, Cameron B, et al. Telehealth transformation: COVID-19 and the rise of virtual care. J Am Med Inform Assoc 2020; 27: 957–962.
  22. Goldsack JC, Coravos A, Bakker JP, et al. Verification, analytical validation, and clinical validation (V3): the foundation of determining fit-for-purpose for biometric monitoring technologies (BioMeTs). NPJ Digit Med 2020; 3: 55.
  23. Hripcsak G, Duke JD, Shah NH, et al. Observational health data sciences and informatics (OHDSI): opportunities for observational researchers. Stud Health Technol Inform 2015; 216: 574–578.
  24. Corrigan-Curay J, Sacks L, Woodcock J. Real-world evidence and real-world data for evaluating drug safety and effectiveness. JAMA 2018; 320: 867–868.
  25. Tomašev N, Glorot X, Rae JW, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature 2019; 572: 116–119.
  26. Ibrahim H, Liu X, Rivera SC, et al. Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines. Trials 2021; 22: 11.
  27. Cruz Rivera S, Liu X, Chan AW, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med 2020; 26: 1351–1363.
  28. Inan OT, Tenaerts P, Prindiville SA, et al. Digitizing clinical trials. NPJ Digit Med 2020; 3: 101.
  29. Mittermaier M, Venkatesh KP, Kvedar JC. Digital health technology in clinical trials. NPJ Digit Med 2023; 6: 88.
  30. Coravos A, Goldsack JC, Karlin DR, et al. Digital medicine: a primer on measurement. Digit Biomark 2019; 3: 31–71.
  31. McLennan S, Celi LA, Buyx A. COVID-19: putting the general data protection regulation to the test. JMIR Public Health Surveill 2020; 6: e19279.
  32. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence—what is it and what can it tell us? N Engl J Med 2016; 375: 2293–2297.
  33. Miotto R, Wang F, Wang S, et al. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform 2018; 19: 1236–1246.
  34. Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit Med 2018; 1: 18.
  35. Izmailova ES, Ellis R, Benko C. Remote monitoring in clinical trials during the COVID-19 pandemic. Clin Transl Sci 2020; 13: 838–841.
  36. Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. NPJ Digit Med 2020; 3: 119.
  37. Yang JE, Kim AR. Strategic collaboration framework for UK-Korea healthcare joint research: focusing on technology competitiveness, investment priorities, and policy alignment. Asia-Pac J Converg Res Interchange 2025; 11: 583–596.
  38. Yang JE, Kim AR. Prioritizing strategic fields for international collaborative research in healthcare: a focus on Korea-UK partnership. Open Public Health J 2025; 18: e18749445410222.
  39. Wang Y, et al. Correction: guidelines, consensus statements, and standards for the use of artificial intelligence in medicine: systematic review. J Med Internet Res 2023; 25: e55596.
  40. Cote MP, Lubowitz JH. Recommended requirements and essential elements for proper reporting of the use of artificial intelligence machine learning tools in biomedical research and scientific publications. Arthroscopy 2024; 40: 1033–1038.
  41. Barbati G, Pasqualetti P, Matranga D, et al. Study design and research protocol for diagnostic or prognostic studies in the age of artificial intelligence: a biostatistician's perspective. Epidemiol Biostat Public Health 2023; 18(2): e22227.
  42. Attafi OA, et al. DOME registry: implementing community-wide recommendations for reporting supervised machine learning in biology. GigaScience 2024; 13: giae094.
  43. Sun K, Roy A, Tobin JM. Artificial intelligence and machine learning: definition of terms and current concepts in critical care research. J Crit Care 2024; 82: 154792.
  44. Vij P. Pharma in the digital era: the role of artificial intelligence in drug development. Cancer and Archive 2024; 32: 138–146.
  45. Ryan DK, Maclean R, Balston A, et al. AI and machine learning for clinical pharmacology. Br J Clin Pharmacol 2023; 90: 629–639.



Articles from Digital Health are provided here courtesy of SAGE Publications
