Sensors (Basel, Switzerland). 2026 Jan 8;26(2):405. doi: 10.3390/s26020405

Fusing Prediction and Perception: Adaptive Kalman Filter-Driven Respiratory Gating for MR Surgical Navigation

Haoliang Li 1, Shuyi Wang 1,*, Jingyi Hu 1, Tao Zhang 1, Yueyang Zhong 1
Editor: Rosario Schiano Lo Moriello
PMCID: PMC12845694  PMID: 41600201

Abstract

Background: Respiratory-induced target displacement remains a major challenge for achieving accurate and safe augmented-reality-guided thoracoabdominal percutaneous puncture. Existing approaches often suffer from system latency, dependence on intraoperative imaging, or the absence of intelligent timing assistance; Methods: We developed a mixed-reality (MR) surgical navigation system that incorporates an adaptive Kalman-filter-based respiratory prediction module and visual gating cues. The system was evaluated using a dynamic respiratory motion simulation platform. The Kalman filter performs real-time state estimation and short-term prediction of optically tracked respiratory motion, enabling simultaneous compensation for MR model drift and forecasting of the end-inhalation window to trigger visual guidance; Results: Compared with the uncompensated condition, the proposed system reduced the dynamic registration error from (3.15 ± 1.23) mm to (2.11 ± 0.58) mm (p < 0.001). Moreover, the predicted guidance window occurred approximately 142 ms in advance with >92% accuracy, providing preparation time for needle insertion; Conclusions: The integrated MR system effectively suppresses respiratory-induced model drift and offers intelligent timing guidance for puncture execution.

Keywords: mixed reality, respiratory motion compensation, adaptive Kalman filter, respiratory gating

1. Introduction

Lung cancer remains the most commonly diagnosed cancer and the leading cause of cancer-related death worldwide. According to the Global Cancer Statistics 2022 (GLOBOCAN), there were approximately 2.5 million new cases and 1.8 million deaths, accounting for 12.4% of all cancer diagnoses and 18.7% of total cancer mortality [1].

Current percutaneous lung biopsy procedures primarily rely on two-dimensional image guidance, which suffers from limitations such as delayed intraoperative information updates and insufficient localization accuracy. This technique often requires repeated intraoperative CT scans to confirm the puncture path, not only prolonging surgery duration and increasing the risk of complications like bleeding and pneumothorax, but also leading to heightened radiation exposure for both medical staff and patients [2,3]. Furthermore, lung tissue displacement caused by respiratory motion presents another critical challenge to precise puncture. Studies indicate that during a single respiratory cycle, the displacement of the target lung region can reach up to 2 cm, significantly increasing positioning errors and surgical difficulty [4].

Driven by the evolving concept of precision medicine and rapid advances in Augmented Reality (AR) and Mixed Reality (MR) technologies, AR/MR surgical navigation systems are emerging as pivotal tools in modern minimally invasive surgery [5,6]. By precisely registering and overlaying three-dimensional virtual models reconstructed from preoperative medical images onto the patient’s actual anatomical structures within the surgical field, these systems provide surgeons with intuitive “X-ray vision” capabilities. This technology holds the potential to revolutionize surgical visualization and positioning accuracy [7,8,9].

Early research primarily focused on preoperative navigation, utilizing three-dimensional reconstruction of CT or MRI imaging data to provide surgeons with clear visualizations of pulmonary anatomical structures. For instance, Leonardo Frajhof et al. applied augmented reality and 3D printing technology to preoperative planning for pulmonary lesions. Printed models assisted surgeons in understanding patient anatomy and spatial relationships, thereby supporting surgical pathway planning [10]. However, this approach has significant limitations, being confined to preoperative static models and unable to support the dynamic changes in anatomical structures during surgery. To address the limitations of preoperative navigation, research has increasingly shifted toward intraoperative navigation. By overlaying three-dimensional models onto the surgical field, MR and AR technologies provide surgeons with dynamic guidance during actual procedures. For instance, Chengrun Li et al. successfully applied augmented reality technology in lung cancer surgery, delivering intuitive intraoperative guidance [11].

However, the periodic deformation and displacement of thoracic and abdominal organs caused by respiratory motion pose a significant challenge to the stability and reliability of MR navigation. During respiration, the MR overlay model derived from static registration continuously deviates from the actual organ positions. This error not only misleads physicians but may also directly lead to puncture failure or complications [12,13]. To address this challenge in current clinical practice, breath-hold puncture is commonly employed. While effective, this strategy heavily relies on the physician’s experience and the patient’s perfect cooperation [14].

In recent years, numerous researchers have focused on addressing respiratory motion through technological means. Some studies employ intraoperative CT [15] or MRI [16] for image registration and updating, but these approaches complicate the intraoperative workflow, and intraoperative CT additionally exposes both medical staff and patients to ionizing radiation.

Another category of research explores compensation methods based on models correlating external respiratory signals with internal organ motion [17]. Among these, Kalman filters and their variants are widely applied for respiratory motion tracking and prediction—particularly in radiation therapy—due to their ability to achieve optimal estimation in noisy environments [18].

This paper designs and implements a mixed reality surgical navigation system integrating Adaptive Kalman filter prediction with intelligent gate-controlled prompts. The core innovation lies in establishing an intelligent closed loop from physiological signal sensing to MR visualization feedback: the Adaptive Kalman filter predicts respiratory states hundreds of milliseconds ahead, proactively compensating for inherent MR display latency to stabilize virtual models while precisely anticipating the safe window at the end of inspiration. This enables proactive MR-based puncture alerts for surgeons. Furthermore, to objectively validate system performance, we independently developed a dynamic respiratory motion simulation platform, providing a high-fidelity, quantifiable experimental environment for evaluating algorithms and systems.

2. Materials and Methods

To address the challenges posed by respiratory motion during mixed reality-guided puncture procedures, we developed and validated an integrated experimental platform. The platform employs a workflow design encompassing three key components: data processing, motion prediction, and visual guidance. As shown in Figure 1, the core components of this system primarily include: (1) a head-mounted display device (Microsoft HoloLens 2, Microsoft Corp., Redmond, WA, USA) equipped with a mixed reality navigation application, (2) a high-precision optical tracking system (Qualisys AB, Gothenburg, Sweden), capable of sub-millimeter-level positioning, using Qualisys Track Manager (QTM, version 2022.3), and (3) a simulation platform for mimicking human respiratory movements (Respiratory Motion Simulation Platform, supporting multi-mode waveform simulation). The synergy between these components enables the real-time tracking, prediction, and compensation of respiratory motion.

Figure 1. The main system framework.

2.1. System Architecture Design

The mixed reality surgical navigation system developed in this study aims to address the challenges of dynamic registration and precise puncture guidance under respiratory motion interference. As shown in Figure 2, the system constitutes an integrated architecture encompassing perception, decision-making, and feedback, primarily comprising four core modules.

Figure 2. Overall System Architecture. Arrows indicate the data flow between system modules.

Respiratory Motion Simulation Platform: This module serves as the validation benchmark for this study. It consists of a transparent thoracic manikin, an internal silicone airbag, and a precision pneumatic control system based on dual air pumps and a three-way solenoid valve. It generates known and controllable dynamic thoracic displacement according to preset respiratory waveforms, simulating physiological respiratory motion.

Data Acquisition and Processing Module: This module serves as the system’s “sensory” component. A high-precision optical motion capture system continuously tracks passive optical markers affixed to the surface of a dynamic manikin in real time. The resulting respiratory displacement signals function as the spatiotemporal reference for the entire system and as input observations for the adaptive Kalman filter. The commercial Qualisys motion capture system (Qualisys AB, Gothenburg, Sweden) is employed, equipped with three infrared cameras operating at a sampling frequency of 100 Hz.

Prediction and Compensation Core Module: This module serves as the core of the system and is key to achieving dynamic accuracy enhancement. We designed and implemented an adaptive Kalman filter for real-time state estimation and short-term prediction of respiratory motion. By integrating the system’s dynamic model with observational data from the optical tracking system, this filter outputs optimal estimates of future respiratory states (including position and velocity) over the next several hundred milliseconds. These predictions serve dual purposes: (a) compensating for inherent delays in MR displays to stabilize virtual models; (b) precisely forecasting impending end-inspiratory plateaus to inform decision-making for puncture gating.

Mixed Reality Navigation and Visual Interaction Module: This module serves as the interface between the system and the surgeon. The Microsoft HoloLens 2 is responsible for presenting navigation information. Its core functions include: (a) loading and rendering a 3D lung model reconstructed from preoperative CT data; (b) receiving predictive data from the adaptive Kalman filter core to dynamically update the spatial pose of the virtual model, synchronizing it with compensated physiological motion; (c) triggering a clear, high-contrast visual cue within the surgeon’s actual surgical field when predicting the end-inspiratory window, thereby enabling intelligent guidance for optimal surgical timing.

2.2. Implementation of the Adaptive Kalman Filter Algorithm

To overcome system latency and achieve predictive navigation, we designed and implemented an adaptive Kalman-filter-based prediction framework for estimating respiratory motion states and compensating for the resulting dynamic errors. The complete workflow of this algorithm is illustrated in Figure 3.

Figure 3. Algorithm Workflow Diagram.

2.2.1. State Space Model

The respiratory motion is modeled using a three-dimensional state vector, as shown in Formula (1):

x_k = [X_r, V_r, C]^T, (1)

where X_r is the respiratory displacement, V_r is its velocity, and C is the slowly varying baseline offset. The state prediction from step k−1 to k is given by Formulas (2)–(4):

x_k = A_k x_{k−1}, (2)
A_k = [[1, Δt, 0], [−ω_{r,k}² Δt, 1, 0], [0, 0, 1]], (3)

Here,

ω_{r,k} = 2π f_{r,k}, (4)

denotes the instantaneous angular breathing frequency. This representation effectively encodes the oscillatory respiratory dynamics.
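As a concrete illustration, the state-transition model of Formulas (1)–(4) can be sketched in Python. This is a minimal sketch under the assumption that the ω² term carries the negative sign of the standard discrete harmonic-oscillator form, which is what makes the dynamics oscillatory; all numeric values are illustrative.

```python
import numpy as np

def transition_matrix(f_r, dt):
    """Oscillatory state-transition matrix A_k (Formula (3)) for the
    state x = [X_r, V_r, C]: displacement, velocity, baseline offset."""
    omega = 2.0 * np.pi * f_r               # instantaneous angular frequency, Formula (4)
    return np.array([
        [1.0,             dt,   0.0],       # X_r <- X_r + V_r * dt
        [-omega**2 * dt,  1.0,  0.0],       # V_r <- V_r - omega^2 * X_r * dt (oscillation)
        [0.0,             0.0,  1.0],       # baseline offset C varies only slowly
    ])

# One prediction step x_k = A_k x_{k-1} (Formula (2)):
# 10 mm displacement at end-inhalation, zero velocity, 1.5 mm baseline offset
x = np.array([10.0, 0.0, 1.5])
A = transition_matrix(f_r=0.2, dt=0.01)     # 0.2 Hz breathing, 100 Hz sampling
x_next = A @ x
```

Starting at a displacement peak with zero velocity, one step leaves the position essentially unchanged while the velocity turns negative, which is exactly the turning-point behavior the oscillatory model is meant to encode.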

The observation model describes the relationship between the state and the measured values, as shown in Formulas (5) and (6). In this system, the observation Zk is derived from the displacement measured by the optical tracking system, and its observation equation is:

Z_k = H x_k + ν_k, (5)
H = [1, 0, 1], (6)

where H denotes the observation matrix, and νk represents the observation noise, which follows a zero-mean Gaussian distribution νkN(0,R).

This observation model reflects the physical measurement mechanism of the optical tracking system: the respiratory displacement Xr,k  contributes directly to the measured signal, the baseline offset Ck appears as an additive bias, and the respiratory velocity Vr,k is not directly observable. Therefore, the coefficients in H are determined by the measurement physics rather than empirical tuning.
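For readers implementing this observation model, a single Kalman measurement update with H = [1, 0, 1] might look as follows. The numeric values and the diagonal prior covariance are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Observation model z_k = H x_k + v_k, H = [1 0 1] (Formulas (5)-(6)):
# the tracker reading contains displacement plus baseline offset; the
# velocity component is not directly observed.
H = np.array([[1.0, 0.0, 1.0]])

def kalman_update(x_pred, P_pred, z, R):
    """Standard Kalman measurement update step."""
    S = H @ P_pred @ H.T + R              # innovation covariance, shape (1, 1)
    K = P_pred @ H.T / S                  # Kalman gain, shape (3, 1)
    innovation = z - (H @ x_pred)         # measurement residual
    x_new = x_pred + (K * innovation).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x_pred = np.array([9.0, 0.0, 1.0])        # predicted [X_r, V_r, C]
P_pred = np.eye(3)                        # illustrative diagonal prior covariance
z = 10.5                                  # optical tracker reading (X_r + C + noise)
x_new, P_new = kalman_update(x_pred, P_pred, z, R=0.04)
```

With a diagonal prior covariance the velocity estimate is untouched by the update; in the running filter, the propagated covariance couples position and velocity, so the measurement indirectly corrects velocity as well.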

2.2.2. Adaptive Estimation

Respiratory frequency f_{r,k} is estimated from a 20 s buffer of the filtered signal using peak detection and smoothed using a 0.05 Hz IIR low-pass filter. The updated frequency directly modifies A_k, forming the closed-loop adaptation mechanism. The measurement noise covariance R_k is updated as shown in Formula (7).

R_k = σ_{noise,k}², (7)

where σ_{noise,k} is obtained from the high-frequency components of the raw measurement via numerical differentiation. The process noise covariance Q_k is given by Formula (8).

Q_k = diag(1, ω_{r,k}², α σ_resp²), (8)

with σresp2 computed from a long-term buffer of Xr and α=0.01.
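A minimal sketch of this adaptive noise estimation follows. The difference-based noise estimator and the fixed random seed are assumptions made for a reproducible, self-contained example; the peak-detection frequency estimator and exact buffer handling are simplified away.

```python
import numpy as np

def adapt_noise_covariances(raw, filtered_xr, f_r, alpha=0.01):
    """Sketch of the R_k and Q_k updates (Formulas (7)-(8)).

    raw         : recent raw tracker samples (1-D array)
    filtered_xr : long-term buffer of filtered displacement X_r
    f_r         : current respiratory frequency estimate (Hz)
    """
    # First differences suppress the slow respiratory component, leaving
    # mostly sensor noise; differencing white noise doubles its variance.
    sigma_noise = np.std(np.diff(raw)) / np.sqrt(2.0)
    R_k = sigma_noise**2                                  # Formula (7)

    omega = 2.0 * np.pi * f_r
    sigma_resp2 = np.var(filtered_xr)                     # long-term respiratory variance
    Q_k = np.diag([1.0, omega**2, alpha * sigma_resp2])   # Formula (8)
    return R_k, Q_k

rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.01)                            # 20 s buffer at 100 Hz
raw = 10.0 * np.sin(2.0 * np.pi * 0.2 * t) + rng.normal(0.0, 0.3, t.size)
R_k, Q_k = adapt_noise_covariances(raw, raw, f_r=0.2)
```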

To generate predictions over a longer time horizon, iterative propagation of the state model is performed, as shown in Formulas (9) and (10):

x̂_{k+h|k} = A_k^h x_k,  h = 1, 2, …, H, (9)

The predicted respiratory signal is therefore:

X̂_{r,k+h|k} = [1, 0, 0] A_k^h x_k, (10)

This model-based propagation leverages the time-varying frequency ω_{r,k}, the dynamic process noise Q_k, and the oscillatory nature of the respiratory system. The prediction horizon h determines the temporal lead of the generated prompt, with larger h yielding earlier but potentially less accurate phase estimates. Because A_k is continually adapted from real-time frequency estimates, the multi-step prediction remains well aligned with the current breathing phase and amplitude.
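The iterative propagation of Formulas (9) and (10) can be sketched as follows. A_k is held fixed over the short horizon, consistent with the slowly varying frequency assumption; all numbers are illustrative.

```python
import numpy as np

def predict_horizon(x_k, f_r, dt, n_steps):
    """Multi-step propagation x_{k+h|k} = A_k^h x_k (Formulas (9)-(10)),
    returning the predicted displacement [1, 0, 0] A_k^h x_k for h = 1..n_steps."""
    omega = 2.0 * np.pi * f_r
    A = np.array([[1.0, dt, 0.0],
                  [-omega**2 * dt, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    preds, x = [], np.asarray(x_k, dtype=float).copy()
    for _ in range(n_steps):
        x = A @ x                          # one further step ahead
        preds.append(x[0])                 # extract X_r via [1, 0, 0]
    return np.array(preds)

# Predict ~150 ms ahead at 100 Hz (15 steps) for a 0.2 Hz breath starting
# at a displacement peak (zero velocity):
trace = predict_horizon([10.0, 0.0, 1.5], f_r=0.2, dt=0.01, n_steps=15)
```

Starting at a peak, the predicted displacement decays slowly over the horizon, matching the expected plateau near end-inhalation.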

Liu et al. compared three methods: end-inspiratory breath-hold, random breath-hold, and end-expiratory breath-hold. Results showed 100% puncture success rates across all three groups, but the end-expiratory breath-hold group had the highest complication rate (58.2%) [19]. This indicates that while end-expiratory breath-hold is feasible, it demands high patient cooperation, and repeated breath-holds prolong procedure duration. Therefore, this system selects the end-inspiratory phase as the respiratory trigger.

The system identifies the end-inspiration moment in real time by analyzing the velocity and displacement signals output from the filter. The decision logic is as follows: when the velocity signal crosses zero from positive to negative and the predicted displacement is at the end of an upward trend, the system determines that the end-inspiration phase is imminent and triggers the MR visual alert in advance, providing the surgeon with sufficient reaction time. To enhance the robustness of respiratory signal monitoring and prevent false alarms caused by motion artifacts or transient physiological variations, a historical data-based adaptive validation module was incorporated into the core detection algorithm. This module dynamically constructs a personalized reference respiratory template through online learning of recent stable breathing patterns, thereby establishing an adaptive threshold range. Each real-time detected respiratory event must subsequently pass a dual-validation process: it is first assessed by the primary algorithm and then evaluated for its similarity to the learned template. An event is only confirmed as a valid physiological signal when both checks are consistent. This redundant design significantly improves the system’s ability to discriminate abnormal signals and enhances its clinical reliability.
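A minimal sketch of the primary gating condition is given below. The template-based validation stage is omitted, and the amplitude-fraction threshold is an assumed parameter, not a value from the paper.

```python
import numpy as np

def end_inspiration_trigger(v_pred, x_pred, x_recent_max, amp_frac=0.8):
    """Illustrative gating check: trigger when the predicted velocity crosses
    zero from positive to negative while the predicted displacement is near
    the top of its recent excursion (end of an upward trend)."""
    zero_cross = v_pred[-2] > 0.0 and v_pred[-1] <= 0.0
    near_peak = x_pred[-1] >= amp_frac * x_recent_max
    return bool(zero_cross and near_peak)

# Rising then turning breath: velocity goes + -> -, displacement near its peak
v = np.array([0.8, 0.3, -0.1])
x = np.array([8.5, 9.6, 9.9])
fire = end_inspiration_trigger(v, x, x_recent_max=10.0)
```

In the full system this check runs on the filter's multi-step predictions, which is what lets the prompt fire ahead of the physical end-inspiration plateau.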

As shown in Figure 4, the delayed measurement signal lags behind the actual respiratory signal, while the signal filtered through the Adaptive Kalman filter closely follows the true curve.

Figure 4. Schematic Diagram of Adaptive Kalman Filtering Task.

2.3. MR Navigation System

Current marker-based registration methods require physical target attachment to patients, introducing clinical limitations including compromised sterility, registration drift due to tissue deformation, and workflow interruptions caused by manual re-registration. Furthermore, for multiple consecutive surgeries, each procedure necessitates re-marking the QR codes and lesion locations and reloading them into the HoloLens for accurate matching, a time-consuming process.

To overcome these limitations, this study establishes a markerless registration framework using a calibration plate. Its key innovation lies in automating CT-to-HoloLens coordinate transformation through a two-stage process: first locating targets in CT space, then tracking the plate in the MR environment. By acquiring marker coordinates in both spaces, registration is achieved.

This method employs a specially designed calibration plate as a common spatial reference carrier. A calibration plate with five non-coplanar QR codes was used for registration. While three non-coplanar markers are theoretically sufficient for rigid pose estimation, additional QR codes were included to improve robustness and reduce sensitivity to detection noise through overdetermined least-squares optimization. The core design of this calibration plate involves the fixed arrangement of five high-contrast QR codes at distinct positions. The spatial layout of these QR codes is meticulously engineered to ensure that any three points are non-collinear and all five points are non-coplanar. This uniquely defines a stable calibration plate coordinate system in three-dimensional space, effectively preventing singularity issues during subsequent spatial transformation solutions.
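The non-coplanarity requirement can be verified numerically: the centered marker coordinates must span all three dimensions, i.e., have matrix rank 3. A small sketch follows; the rank tolerance is an assumption.

```python
import numpy as np

def is_well_conditioned(points, tol=1e-6):
    """Geometric sanity check for the calibration-plate layout: the QR-code
    centers must be non-coplanar, i.e., the centered point matrix has rank 3,
    so the rigid registration problem is not degenerate."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    return np.linalg.matrix_rank(centered, tol=tol) == 3

# A coplanar layout (all z = 0) would make the pose solution singular;
# lifting two points out of the plane fixes it.
coplanar = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 1, 0]]
noncoplanar = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.5], [0.5, 0.5, 1.0]]
```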

This study employs the Microsoft HoloLens 2 and Vuforia detection capabilities for visual localization. This selection is based on the following rationale. First, the research focuses on validating the algorithmic workflow within a controlled environment, where the accuracy requirement is that localization errors do not obscure the respiratory motion signal under study. Research confirms that under such conditions, HoloLens 2 achieves millimeter-level positioning accuracy using QR codes [20], meeting the study’s requirements. Second, the QR code solution offers simple integration and high stability, facilitating rapid prototyping and enabling focus on developing the core algorithms for respiration prediction.

In this system, the CT coordinate system is a reference frame defined within our laboratory environment. The specific implementation process is as follows, as shown in Figure 5:

Figure 5. Markerless registration framework. Arrows indicate the data flow between system modules.

First, on the PC server, the precise 3D coordinates of the five QR code markers on the calibration plate within the CT coordinate system are preloaded and configured as foundational reference data for subsequent spatial registration. Subsequently, the server program is launched, opening the predefined Socket communication port to enter a listening state, ready to receive connection requests and data from the HoloLens client.

The HoloLens 2 client launches its MR application, utilizing the Vuforia engine to perform real-time recognition of the five QR codes on the calibration plate within its field of view. This process acquires the six-degree-of-freedom pose for each QR code in the HoloLens world coordinate system, comprising three-dimensional position coordinates and rotation quaternions. The client program extracts the positional coordinate information of these five QR codes and asynchronously sends real-time coordinate data packets to the PC server via the established socket connection.

Upon receiving real-time coordinates, the PC server immediately invokes the core algorithm: using the predefined CT coordinates and the received HoloLens coordinates as inputs, it executes the Umeyama algorithm based on least-squares optimization. This precisely calculates the optimal rigid transformation matrix T (comprising rotation matrix R and translation vector t) required to map points from the CT coordinate system to the current HoloLens world coordinate system. Successful determination of T establishes a stable mapping relationship between the CT space and the current HoloLens space.
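The rigid (rotation-plus-translation) case of the Umeyama algorithm used here can be sketched as follows; the synthetic five-point example stands in for the calibration-plate coordinates, and all numeric values are illustrative.

```python
import numpy as np

def umeyama_rigid(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, following
    Umeyama (1991) without the scale factor. src, dst: (N, 3) point sets."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    Sigma = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, _, Vt = np.linalg.svd(Sigma)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0.0:
        S[2, 2] = -1.0                    # reflection guard: force det(R) = +1
    R = U @ S @ Vt
    t = mu_d - R @ mu_s
    return R, t

# Five non-coplanar "CT-space" marker points and their coordinates in a
# rotated and translated "HoloLens-space" frame (synthetic ground truth):
ct = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1], [0.5, 0.2, 0.8]], dtype=float)
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
holo = ct @ R_true.T + np.array([0.1, -0.2, 0.3])
R_est, t_est = umeyama_rigid(ct, holo)
```

Applying the recovered (R, t) to any further CT-defined point maps it into the HoloLens frame, which is how transformed model coordinates are streamed back to the client.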

Subsequently, when any model defined in the CT coordinate system (e.g., 3D organ models, tumor contours, or surgical pathways) is input on the PC server, the server applies transformation matrix T to convert its coordinates to the real-time HoloLens world coordinate system. The transformed coordinate data is then transmitted back to the HoloLens 2 client in real-time via Socket. Finally, HoloLens 2 precisely overlays and registers the received virtual model onto the corresponding real-world location visible to the user through the device, achieving high-precision visualization.

Additionally, after completing registration, users can directly input and select multiple target points on the reconstructed 3D image. By selecting any two points (e.g., point A as the body surface needle insertion point and point B as the deep lesion target), the system will automatically generate a visual puncture path connecting the two points. As shown in Figure 6, the red dot represents the deep lesion target, while the blue dot indicates the body surface needle insertion point. A magenta path is generated between these two points. Furthermore, considering that during actual needle insertion, the needle’s internal position cannot be observed and the accuracy of the insertion angle cannot be confirmed, the needle path extends beyond the body surface needle insertion point. This allows users to externally adjust the needle insertion angle.

Figure 6. MR Display System.

2.4. Breathing Simulation Platform

The pneumatic control system employs two miniature diaphragm pumps (air pump maximum flow rate ~3 L/min, vacuum pump vacuum level ~−60 kPa) and a three-way solenoid valve (response time < 10 ms). The airbag is positioned within a 3D-printed transparent thoracic body model, with an initial volume of approximately 200 mL, as shown in Figure 7.

Figure 7. Breathing Simulation Platform.

The core controller for this system utilizes the STM32 NUCLEO-F103RB development board, selected for its high performance, stability, and excellent expandability in embedded development. The STM32 NUCLEO-F103RB employs the STM32F103RBT6 microcontroller with an ARM Cortex-M3 core, operating at a maximum frequency of 72 MHz. It features 128 KB of Flash memory and 20 KB of SRAM, offering significant advantages in processing speed and storage capacity compared to traditional 8-bit microcontrollers. This provides ample performance support for executing complex control logic, processing sensor data, and managing TCP/UDP network communications (via an external ESP8266 module) in this project. Regarding hardware interfaces, the NUCLEO-F103RB offers 76 general-purpose I/O pins (some supporting PWM output and analog input), enabling flexible control of actuators like air pumps and solenoid valves while simultaneously acquiring signals from multiple sensors.

The pneumatic actuator unit serves as the core component for achieving respiratory simulation functionality, designed around the coordinated control of dual-pump drive and a three-way solenoid valve. After comprehensive evaluation, two miniature DC diaphragm air pumps operating at 5–6 V were selected, designated as the inspiratory and expiratory air sources, respectively. This pump type features low noise, low power consumption, and continuous operation, delivering more realistic airflow waveforms during alternating or simultaneous dual-pump operation. Paired with a normally closed three-way miniature solenoid valve for rapid switching between inhalation and exhalation air pathways, this valve offers short response times and a drive voltage matched to the system, ensuring dynamic simulation of the respiratory cycle (Appendix A).

As the lung-simulating reservoir, a highly elastic airbag is selected and positioned within a 3D-printed thoracic skeleton. The airbag is connected to the pneumatic system via silicone tubing, which provides both fixation and sealing through its inherent flexibility and excellent airtightness. This ensures no detachment or leakage occurs during repeated inflation and deflation cycles, thereby enabling stable simulation of the respiratory process.

The sensing and feedback unit of this system employs the FR03H series miniature flow sensor, integrated into the main gas pathway to enable real-time monitoring and closed-loop control of respiratory flow. Based on thermal mass flow measurement principles, the sensor features a 3 mm diameter measurement bore and achieves a maximum flow rate of 5 L/min at 20 °C and 101.325 kPa. Its measurement accuracy is ±2.5% within the 0.15–5 L/min range, fully covering the typical flow range of simulated human tidal breathing. The sensor supports both an I2C digital interface (100 kHz) and a linear analog voltage output (0.5–4.5 V). It operates within a DC 4.9–14 V supply voltage range with typical operating current not exceeding 30 mA. Interface options include a PH2.0-5P plug-in connector or 2.54 mm-5P pins, facilitating seamless integration with STM32 controllers.

Sensors are installed in series within the main air pathway to ensure all inhaled and exhaled gases pass through their sensing elements, thereby obtaining accurate instantaneous flow measurements. The STM32 controller acquires this flow data in real time via I2C or analog channels and calculates tidal volume parameters by integrating the flow based on a preset algorithm. These values serve as feedback inputs for the PID control algorithm, dynamically adjusting the PWM duty cycle to precisely regulate the air pump speed and solenoid valve activation timing. Through this closed-loop control strategy, the system achieves high-fidelity tracking of target respiratory waveforms, significantly enhancing the realism and reliability of simulating various respiratory states.
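The flow-integration and PID logic described above can be sketched in Python. The gains, limits, and simple rectangular integration are illustrative assumptions, not the STM32 firmware's actual values.

```python
def update_tidal_control(flow_lpm, dt, state, target_ml, kp=2.0, ki=0.5, kd=0.1):
    """One closed-loop control step: integrate the flow-sensor reading into
    tidal volume, then run a PID step that yields a PWM duty cycle (0-100 %)
    for the air pump. `state` carries volume and PID terms between calls."""
    # Flow sensor reports L/min; convert to mL per timestep and accumulate
    state['volume_ml'] += flow_lpm * 1000.0 / 60.0 * dt

    err = target_ml - state['volume_ml']
    state['integral'] += err * dt
    deriv = (err - state['prev_err']) / dt
    state['prev_err'] = err

    duty = kp * err + ki * state['integral'] + kd * deriv
    return max(0.0, min(100.0, duty))            # clamp to valid PWM range

state = {'volume_ml': 0.0, 'integral': 0.0, 'prev_err': 0.0}
# 3 L/min inflow sampled at 100 Hz, targeting a 500 mL tidal volume:
duty = update_tidal_control(flow_lpm=3.0, dt=0.01, state=state, target_ml=500.0)
```

Early in the inhalation the error is large, so the duty cycle saturates at its upper clamp; as the integrated volume approaches the target, the PID output tapers the pump drive.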

2.5. Verification Methodology

To validate the performance of the integrated system proposed in this study, we constructed the complete experimental platform shown in Figure 8 and designed a series of quantitative and qualitative experiments. All experiments were conducted in a controlled laboratory environment to ensure the comparability and reproducibility of results.

Figure 8. Experimental Platform.

We designed a systematic verification protocol aimed at quantitatively analyzing its dynamic registration accuracy, the precision and timeliness of respiratory gating prompts, and qualitatively assessing its clinical usability. To comprehensively evaluate system performance, we employed two typical respiratory waveforms for experimentation: (1) Standard sine wave: Serving as a benchmark, this simulates steady ideal respiration. Displacement amplitude was set to 10 mm with a 5 s period, corresponding to adult resting respiration. (2) Real-data-based respiratory wave: To assess the system’s robustness under non-ideal conditions, we employed real human respiratory waveforms sourced from public databases [21]. These waveforms encompassed Regular Breathing, Shallow Breathing, and Irregular Breathing. Following standardization, these waveforms matched the amplitude and period of the reference waveform.

When evaluating the impact of different breathing patterns on predictive model performance, this study analyzed respiratory signal data generated by a validated respiratory motion simulation platform. The platform’s preset waveforms were designed strictly within the clinically observed physiological range of human respiration to ensure physiological authenticity. Specifically, the waveform period references the typical respiratory rate range for normal adults at rest (10–20 breaths per minute, corresponding to a period of 3–6 s) [22]; baseline values for normal and shallow breathing amplitudes (10 mm and 3–5 mm) are established based on the AAPM TG-76 report [23], while the irregular waveform range is taken from the dataset itself.

The classification method builds upon the work of Jeong et al. [21], ultimately categorizing respiratory signals into three types: regular breathing, shallow breathing, and irregular breathing. The classification was based on three quantitative indices: the mean amplitude Ma, the amplitude irregularity Ia, and the phase irregularity Ip. The mean amplitude Ma was defined as the average of the absolute amplitude of the respiratory signal S(t) , as shown in Formula (11):

M_a = (1/N) Σ_{i=1}^{N} |S(t_i)|, (11)

which characterizes the overall breathing depth. The amplitude irregularity Ia was calculated following the method proposed by Jeong et al. [21] and defined as the average of the standard deviations of all detected peaks Pk and valleys Vl, as shown in Formula (12):

I_a = (σ({P_k}) + σ({V_l})) / 2, (12)

The phase irregularity Ip was used to quantify variability in the breathing rhythm and was computed as the average of the standard deviations of the respiratory cycle durations, namely the peak-to-peak intervals Tp,k and the valley-to-valley intervals Tv,l, as shown in Formula (13):

I_p = (σ({T_p,k}) + σ({T_v,l})) / 2, (13)

The latter two indices were used to evaluate fluctuations in breathing depth and rhythm, respectively. First, based on the distribution characteristics of the samples in the dataset, the median values of Ia and Ip were used as thresholds. Any respiratory signal for which either irregularity index exceeded the corresponding threshold was classified as irregular breathing. Subsequently, for signals with both irregularity indices below the thresholds, further discrimination was performed according to the mean amplitude Ma: signals whose Ma fell within the lowest 25th percentile of all samples were defined as shallow breathing, while the remaining signals were classified as regular breathing.
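The classification rules above can be sketched as follows. Peak/valley detection is a simple local-extremum scan, and for brevity the amplitude irregularity alone stands in for both irregularity indices; both are simplifying assumptions relative to the full method.

```python
import numpy as np

def classify_breathing(signals):
    """Three-way classification after Jeong et al.: irregular if the
    irregularity index exceeds the dataset median, else shallow if the
    mean amplitude M_a falls in the lowest 25th percentile, else regular."""
    def features(sig):
        s = np.asarray(sig, dtype=float)
        peaks = [s[i] for i in range(1, len(s) - 1) if s[i - 1] < s[i] > s[i + 1]]
        valleys = [s[i] for i in range(1, len(s) - 1) if s[i - 1] > s[i] < s[i + 1]]
        m_a = np.mean(np.abs(s))                           # Formula (11)
        i_a = (np.std(peaks) + np.std(valleys)) / 2.0      # Formula (12)
        return m_a, i_a

    feats = np.array([features(s) for s in signals])
    m_a, i_a = feats[:, 0], feats[:, 1]
    irregular = i_a > np.median(i_a)                       # dataset-relative threshold
    shallow = ~irregular & (m_a <= np.percentile(m_a, 25))
    return np.where(irregular, 'irregular',
                    np.where(shallow, 'shallow', 'regular')).tolist()
```

Note that both thresholds are relative to the sample distribution, so the labels are only meaningful for a dataset of reasonable size, as in the cited public database.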

2.5.1. Three-Dimensional Registration Alignment Error Assessment

We used a high-precision optical motion capture system, the Qualisys motion capture system (Qualisys AB, Sweden), as the acquisition device for three-dimensional spatial coordinates. The system utilizes infrared optical tracking technology, whereby multiple synchronized Oqus cameras emit infrared light and capture the reflections from passive retro-reflective markers attached to the target object. Based on the principles of triangulation, the system reconstructs the three-dimensional position and attitude of the marker clusters in real time. Spatial coordinate data were streamed and recorded via the Qualisys Track Manager (QTM) software, and the reflective markers were kept within the overlapping field of view of all cameras throughout the experiment.

According to manufacturer specifications and independent experimental evaluations, the Qualisys Oqus camera system achieves a static spatial accuracy better than 0.15 mm and a dynamic measurement error of approximately 0.26 mm when tracking objects in motion [24,25]. Given that the simulated respiratory motion in this study is characterized by low-frequency (typically below 0.5 Hz), smooth, and periodic displacement, the target velocities and accelerations remain well within the dynamic tracking capability of the optical system. Therefore, the measurement uncertainty introduced by the optical tracking system is negligible relative to the respiratory motion amplitude, and the system can be reliably regarded as a gold-standard reference for evaluating respiratory motion prediction and compensation performance.

First, the recognition image was set to 5 cm × 5 cm within the Unity3D virtual scene (Unity Technologies, San Francisco, CA, USA; version 2021.3.23), matching the dimensions of the image registration environment. Four small ball reference points, labeled 1–4, were placed on the model for error measurement. Once the 3D registration alignment was complete, the MicronTracker probe was positioned at the virtual red sphere reference point and at the edge of the real phantom. Each point was then tested at least 20 times, and the results were saved when the variance of the coordinates was within 0.2 mm. We define the Euclidean distance between the coordinates of the virtual red sphere reference point and the reference coordinates of the edge of the real phantom as the 3D registration alignment error, as shown in Formula (14). The experimental results are expressed as mean ± standard deviation.

Error = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²), (14)

where x1, y1, z1 denote the virtual red sphere reference point coordinates; x2, y2, z2 denote the real phantom edge reference coordinates.
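As a minimal sketch of this procedure (the coordinates below are hypothetical, not measured values), Formula (14) and the variance-gated mean ± SD reporting can be computed as:

```python
import numpy as np

def registration_error(p_virtual, p_real):
    """Euclidean distance between a virtual reference point and the
    corresponding real phantom edge point (Formula 14)."""
    p_virtual = np.asarray(p_virtual, dtype=float)
    p_real = np.asarray(p_real, dtype=float)
    return float(np.linalg.norm(p_virtual - p_real))

# Hypothetical repeated probe readings (mm) for one reference point.
readings = np.array([[10.1, 20.2, 5.1],
                     [10.0, 20.3, 5.0],
                     [10.2, 20.1, 5.2]])
reference = np.array([8.0, 19.0, 4.0])

# Accept the trial only if the per-axis variance is within 0.2 mm.
if np.all(np.var(readings, axis=0) <= 0.2):
    errors = [registration_error(r, reference) for r in readings]
    print(f"error = {np.mean(errors):.2f} ± {np.std(errors, ddof=1):.2f} mm")
```

The variance gate mirrors the acceptance criterion described above; the repeated-measurement loop would run at least 20 times in the actual protocol.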

2.5.2. Dynamic Quantified Registration Accuracy Evaluation

Dynamic registration accuracy is evaluated by comparing the deviation between the positions of virtual target points displayed by the augmented reality system and the actual physical target point positions. The specific process is as follows:

First, select a specific target point within the lung nodule region of the thoracic phantom and define a corresponding homologous point on its virtual 3D model. Initiate the dynamic respiration simulation platform to operate according to the preset waveform. Simultaneously record the true 3D coordinates Ptrue(t) of the target point provided by the optical motion capture system (serving as the “gold standard” for evaluation), along with the 3D coordinates Par(t) of the homologous point calculated and displayed by the MR system through its registration algorithm.

Calculate the target point error at each time point throughout the entire data acquisition period, as shown in Formula (15):

TPE(t) = ‖Ptrue(t) − Par(t)‖₂, (15)

Finally, we report the root mean square error (RMSE) and mean ± standard deviation of TPE(t) to comprehensively characterize the system’s dynamic tracking accuracy.
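A minimal sketch of Formula (15) and the reported summary statistics (the input arrays here are placeholders for the recorded Ptrue(t) and Par(t) trajectories):

```python
import numpy as np

def tpe_metrics(p_true, p_ar):
    """Per-sample target point error (Formula 15) with summary statistics.
    p_true, p_ar: (N, 3) arrays of gold-standard and MR-displayed coordinates.
    Returns the TPE series, its RMSE, and its mean and sample SD."""
    tpe = np.linalg.norm(np.asarray(p_true, float) - np.asarray(p_ar, float),
                         axis=1)
    rmse = float(np.sqrt(np.mean(tpe ** 2)))
    return tpe, rmse, float(tpe.mean()), float(tpe.std(ddof=1))
```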

To quantitatively evaluate the contribution of the Adaptive Kalman filter prediction and compensation module, we established two primary comparison modes: (1) Mode A (Baseline Mode): Adaptive Kalman filter disabled. The MR system performs model updates and instantaneous end-inspiratory detection solely using real-time displacement measurements provided by the optical system, which include inherent latency. (2) Mode B (Compensation Mode): Adaptive Kalman filter enabled. The MR system updates its model using displacement and velocity data that have undergone predictive compensation and triggers gating prompts based on the prediction results.
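The two modes differ only in whether the predictive step is active. As an illustrative sketch, not the paper's exact implementation, a constant-velocity Kalman filter on the 1-D respiratory displacement with a simple innovation-based adaptation of the measurement-noise estimate could look like:

```python
import numpy as np

class AdaptiveKF1D:
    """Illustrative constant-velocity Kalman filter for 1-D respiratory
    displacement, with an innovation-based adaptation of the measurement
    noise R. The adaptation law and tuning values here are assumptions for
    demonstration; the paper's exact formulation may differ."""

    def __init__(self, dt, q=1e-3, r=0.05, alpha=0.95):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # observe displacement only
        self.Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                               [dt**2 / 2, dt]])     # process noise
        self.R = r                                   # measurement noise (adapted)
        self.alpha = alpha                           # forgetting factor for R
        self.x = np.zeros(2)                         # [displacement, velocity]
        self.P = np.eye(2)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation and adaptive update of R.
        y = z - (self.H @ self.x)[0]
        self.R = self.alpha * self.R + (1 - self.alpha) * y * y
        S = (self.H @ self.P @ self.H.T)[0, 0] + self.R
        K = (self.P @ self.H.T / S).ravel()
        # Correct.
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, self.H.ravel())) @ self.P
        return self.x

    def predict_ahead(self, horizon):
        """Extrapolate displacement 'horizon' seconds ahead to offset latency."""
        return self.x[0] + self.x[1] * horizon
```

Feeding each optical sample to `step()` and calling `predict_ahead()` with the measured system latency yields a latency-compensated displacement of the kind Mode B uses for model updates and gating.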

2.5.3. Performance Evaluation of Respiratory Gating Prompts

This section evaluates the accuracy and timeliness of the system’s intelligent prompting functionality. Based on the optical motion capture data, the true time point Tgold of the end-inspiratory phase within each respiratory cycle was manually annotated. The time point Tprompt at which the MR system triggered a visual prompt was recorded for each cycle. The following metrics were then calculated:

  1. Prompt Accuracy: The percentage of total respiratory cycles in which the system correctly triggers a prompt (Tprompt falls within a clinically acceptable tolerance window centered on Tgold).

  2. Prompt Delay/Advance: Calculate ΔT=TpromptTgold. A negative ΔT value indicates the prompt was issued before the true end-inspiratory point, which is the desired effect of the prediction algorithm. We report the mean and standard deviation of ΔT, which reflects the effective prediction lead produced by the selected horizon.

All experiments were conducted under identical hardware configurations and respiratory waveforms, alternating between Mode A and Mode B tests to minimize the influence of random errors.
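The metrics above can be sketched as follows; the peak-detection settings and tolerance window here are illustrative assumptions, not the paper's exact values:

```python
import numpy as np
from scipy.signal import find_peaks

def gating_metrics(t, disp, t_prompt, tol=0.3, min_period=2.0):
    """Locate end-inspiration times Tgold as displacement peaks, match each
    prompt to its nearest peak, and report prompt accuracy plus the mean/SD
    of dT = Tprompt - Tgold (negative dT means the prompt led the true peak).
    'tol' is a hypothetical clinically acceptable tolerance window in seconds."""
    dt = t[1] - t[0]
    peaks, _ = find_peaks(disp, distance=int(min_period / dt))
    t_gold = t[peaks]
    deltas = np.array([tp - t_gold[np.argmin(np.abs(t_gold - tp))]
                       for tp in t_prompt])
    accuracy = 100.0 * np.mean(np.abs(deltas) <= tol)
    return accuracy, float(deltas.mean()), float(deltas.std(ddof=1))
```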

3. Results

This section systematically presents comparative experimental results and analyzes them in terms of dynamic registration accuracy and respiratory gating response performance to validate the effectiveness of this integrated system.

First, during system validation, the design complexity of QR codes was confirmed as a key variable affecting the stability of HoloLens pose recognition. As summarized in Table 1, increasing encoding density significantly reduced pose jitter, while version differentiation played a critical role in preventing ID-level coordinate misassignment. The first round of experiments employed simplified QR codes. Their sparse features resulted in excessively high initial similarity between adjacent markers, triggering ambiguity in SLAM feature estimation. When two spatially adjacent QR codes entered the field of view simultaneously, pose estimation produced abnormal angular deviations exceeding 15°. This stemmed fundamentally from spatial constraint degradation caused by insufficient feature points.

Table 1.

Quantitative comparison of registration accuracy under different QR code designs.

QR Configuration | Encoding Density | Translational Jitter (mm) | Angular Deviation (°) | ID Misassignment Rate (%)
Simplified QR | Low | 2.6 ± 0.9 | 15.8 ± 3.2 | 35
6 × 6 QR (same version) | Medium | 0.9 ± 0.4 | 4.3 ± 1.1 | 10
7 × 7 QR (V1–V5) | High | 0.29 ± 0.08 | 1.2 ± 0.5 | 0

Subsequent experiments upgraded to medium-complexity QR codes (6 × 6 same-version encoding). While increased information density mitigated initialization ambiguity, identical encoding versions rendered the ID recognition mechanism ineffective: during rapid motion or partial occlusion, the system erroneously assigned QR code 1’s spatial coordinates to marker 5 (coordinate misplacement rate as high as 37%), causing structural disruption in spatial mapping relationships.

The final solution employs fully differentiated QR codes (7 × 7 versions V1–V5 with independent encoding), achieving a threefold optimization by introducing unique semantic constraints into the encoding. High-density feature points enhance adjacent-marker differentiation, compressing pose estimation jitter to ±0.3 mm; version differences construct a discrete encoding space, reducing coordinate misalignment rates below 0.2%; even under extreme viewing angles of 70° or 30% local occlusion, recognition success rates remained consistently above 98%.

This evolutionary path reveals that QR code complexity must co-evolve with encoding uniqueness. While sufficient information density ensures pose estimation accuracy, version differentiation is crucial for achieving ID-level spatial topology locking. This approach holds universal design value for scenarios with extremely low tolerance for error, such as surgical navigation.

The 3D registration alignment error is shown in Figure 9, with an average alignment error of 2.15 ± 1.05 mm, a maximum value of 4.80 mm and a minimum value of 1.42 mm. The error of point 1 is 2.30 ± 0.22 mm, the error of point 2 is 2.42 ± 0.69 mm, the error of point 3 is 1.47 ± 0.35 mm, and the error of point 4 is 3.26 ± 1.04 mm. Point 1 and Point 3 are points closer to the CT coordinate origin, while Point 4 is a point farther from the CT coordinate origin.

Figure 9.

Figure 9

Three-Dimensional registration alignment error for Point 1 to Point 4. The orange line indicates the median value.

Analysis of dynamic target point errors clearly demonstrates the superiority of the Adaptive Kalman filter prediction compensation algorithm. Figure 10 visually illustrates the error variation between the MR virtual target point and the actual target point during a typical respiratory cycle under Mode A (no compensation) and Mode B (Kalman compensation). It can be observed that enabling compensation significantly reduces both the magnitude and fluctuation of the error.

Figure 10.

Figure 10

Dynamic Target Point Error Comparison Chart.

The quantitative analysis results are summarized in Table 2. When tested with a standard sine wave, Mode A exhibited an average TPE of 2.82 ± 0.91 mm, whereas Mode B significantly reduced this to 2.24 ± 0.39 mm (p < 0.001, paired t-test). In the real breathing waveform test (steady breathing), Mode B also performed excellently, reducing the average TPE from 3.15 ± 1.23 mm to 2.11 ± 0.58 mm (p < 0.001), demonstrating the algorithm’s robustness to non-ideal breathing patterns.
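The Mode A versus Mode B comparison above uses a paired t-test on per-sample TPE values. As a sketch of that statistical step (with randomly generated, hypothetical TPE samples standing in for the recorded data):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Hypothetical per-sample TPE values (mm) for the same cycles in both modes;
# Mode B is modeled as a paired improvement over Mode A.
tpe_mode_a = rng.normal(3.15, 1.23, size=200)
tpe_mode_b = tpe_mode_a - rng.normal(1.0, 0.3, size=200)

t_stat, p_value = ttest_rel(tpe_mode_a, tpe_mode_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```

Pairing is appropriate here because both modes were tested on the identical respiratory waveforms.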

Table 2.

System Performance Evaluation Results.

Test Conditions | Mode | Average TPE (mm) | Prompt Accuracy Rate (%) | Average Prompt Time (ms)
Standard sine wave | Mode A (No Compensation) | 2.82 ± 0.91 | 100 | 200 ± 31
Standard sine wave | Mode B (Kalman Compensation) | 2.24 ± 0.39 | 100 | −125 ± 45
True breathing wave | Mode A (No Compensation) | 3.15 ± 1.23 | 90 | 195 ± 38
True breathing wave | Mode B (Kalman Compensation) | 2.11 ± 0.58 | 92 | −142 ± 52

The performance comparison results of the respiratory gating cueing system are shown in Figure 11 and Table 2. The time-series diagram in Figure 11 clearly demonstrates that cues based on Adaptive Kalman filter prediction (Mode B) are consistently delivered ahead of the actual end-inspiration point, whereas cues based on instantaneous detection (Mode A) exhibit a significant delay.

Figure 11.

Figure 11

Respiratory Gating Timing Comparison Diagram. Cross symbols indicate the detected respiratory peaks. Colored dots mark the intersections between gating times and the respiratory waveform, with the yellow dot indicating Tgold of the representative cycle and the red and blue dots indicating the prompt times of Mode A and Mode B, respectively.

Based on the quantitative data in Table 2, during the standard sine wave test, Mode A achieved a prompt accuracy of 100%, but its average prompt delay was 200 ± 31 ms, while Mode B achieved an average prompt time of −125 ± 45 ms, indicating a 125-ms advance in puncture recommendations while maintaining high accuracy. This trend persists in the real respiratory wave test, where Mode B achieves a lead time of −142 ± 52 ms with 92% accuracy. These experiments demonstrate the superior predictive capability of the Adaptive Kalman filter algorithm. However, for clinical application, latency compensation must be further improved while maintaining a high level of accuracy.

Table 3 shows the performance comparison of the Adaptive Kalman filter under Mode B across three respiratory states. Based on the experimental results, the proposed algorithm demonstrated comparable accuracy in estimating respiratory signals during the Regular Breathing and Shallow Breathing states. However, its performance revealed limitations under irregular patterns. Specifically, the maximum absolute error increased to 4.62 mm during Shallow Breathing and further to 5.04 mm during Irregular Breathing. These elevated error margins indicate that the current model’s robustness is compromised when confronted with significant deviations from normal, periodic respiratory patterns, highlighting an important area for future refinement to enhance clinical reliability across diverse physiological conditions. The poorer results for irregular waveforms stem from high-frequency, large-amplitude random pulses and the non-stationarity of the movement patterns.

Table 3.

Performance of Mode B under Different Breathing States.

Breathing State | Average TPE (mm) | Maximum Error (mm) | Prompt Accuracy Rate (%) | Average Prompt Time (ms)
Regular Breathing | 2.11 ± 0.58 | 2.84 | 92 | −142 ± 52
Shallow Breathing | 2.23 ± 0.86 | 4.62 | 91 | −142 ± 63
Irregular Breathing | 3.36 ± 0.76 | 5.04 | 85 | −210 ± 83

4. Discussion

This study successfully integrates Adaptive Kalman filter-based respiratory motion prediction, mixed reality visualization, and intelligent gate-controlled prompts to construct and validate an innovative navigation system tailored for dynamic surgical environments. Experimental results consistently and robustly demonstrate that the system significantly enhances dynamic registration accuracy while intelligently and proactively guiding puncture timing.

Prior MR-assisted pulmonary intervention studies mainly focus on spatial visualization and lesion localization, often relying on breath-holding or static assumptions to mitigate respiratory motion. As summarized in recent reviews, respiratory-induced displacement remains a key limitation affecting MR accuracy, particularly for lower-lobe lesions. The present work differs by explicitly modeling respiratory dynamics and system latency, introducing a predictive, timing-aware prompting strategy that complements existing MR spatial navigation approaches.

The system’s core contribution lies in upgrading navigation from passive “real-time tracking” to active “proactive compensation” through Adaptive Kalman filtering. The approximately 50% improvement in dynamic registration accuracy is not only statistically significant but also clinically meaningful. It reduces errors from levels potentially affecting small nodule localization accuracy to a clinically acceptable range (generally considered less than 2–3 mm for most biopsy procedures) [26,27,28]. This precision gain directly translates to stable MR visualization, tight alignment between virtual models and actual organs, and greatly enhances physician confidence in navigation information.

More importantly, the respiratory gating alert system achieves a leap from informing the present to predicting the future. With an average advance warning of approximately 142 ms, it restores valuable preparation time to physicians, enabling them to calmly plan and execute puncture maneuvers. This is expected to reduce errors or multiple punctures caused by rushed operations, ultimately enhancing surgical safety and efficiency. Although the proposed predictive gating system successfully provides an advanced notification of the optimal puncture timing, preliminary in-house testing with team members has uncovered a critical challenge in translating technical performance into clinical practice: the variability in operator response time. Testing revealed significant variations in reaction time from receiving MR prompts to actually performing the puncture action among different individuals, and even among the same person in different situations.

This inherent human factor variability, coupled with any potential instability in system latency, could potentially lead to missed opportunities for puncture within the brief ideal window, thereby impacting the ultimate success of the procedure. This observation underscores that a truly robust dynamic navigation system must not only provide a theoretically optimal timing but must also integrate the operator into the entire control loop. Future work should focus on developing more adaptive interaction strategies. For instance, implementing a multi-stage alert [29] (e.g., Get Ready followed by Puncture Now) based on the remaining duration of the respiratory cycle and operator habits, or exploring semi-autonomous robotic assistance where the prompt signal translates into a constrained, active execution, could be promising directions [30,31]. Such advancements would minimize the dependency on human reaction latency and ensure the stable and reliable translation of system performance into clinical benefit.

Additionally, while the prediction module performs well under the steady breathing patterns observed in this study, its performance may deteriorate when handling coughing, hiccups, or highly irregular breathing patterns. Moving forward, we will explore more sophisticated models, such as predictive models incorporating deep learning [32], to enhance robustness. The current system also relies on an external optical tracking system for respiratory signals, which may encounter occlusion and spatial constraints during clinical deployment. Integrating more portable respiratory monitoring sensors, such as abdominal belts or surface-mounted inertial measurement units [33], will be a key direction for future clinical translation.

Furthermore, validation of this study is currently limited to physical phantoms. While phantom experiments provide valuable preliminary performance and feasibility data, factors such as the mechanical properties of biological tissue [34] and needle path deviation remain unaccounted for [35,36]. Consequently, animal studies will be an indispensable next step to evaluate the system’s overall efficacy in a more physiologically realistic environment. Finally, we will focus on optimizing the system’s overall latency and developing more sophisticated MR human–machine interfaces. These enhancements could include displaying confidence intervals for predicted targets or providing predictive information for multiple future respiratory cycles, thereby further improving the system’s practicality and reliability.

From a methodological perspective, the proposed system was verified using a respiratory motion simulator and physical phantoms to specifically evaluate its temporal prediction accuracy, dynamic registration stability, and prompt timing performance under controlled and repeatable conditions. This verification strategy was deliberately chosen to isolate respiratory motion and system latency effects from confounding variables such as operator skill, tissue deformation, and procedural variability.

By employing a simulator with known ground-truth respiratory phases and repeatable motion patterns, the validation focuses on the core technical contribution of this work—namely, predictive respiratory synchronization and proactive MR guidance—rather than downstream clinical outcomes. Such a staged validation approach is consistent with early-phase evaluation of navigation and guidance systems, where establishing algorithmic reliability and performance boundaries is a prerequisite for subsequent animal and clinical studies.

In summary, the experimental results presented in this study were obtained under controlled conditions using a respiratory motion simulator. While this method possesses core advantages such as repeatability, safety, and known benchmarks for verifiable algorithms, it represents only the first critical stage in the translation process. Subsequent logical steps include: (1) Conducting in vitro validation using human models incorporating tissue-mimetic materials to assess needle-tissue interaction effects; (2) In vivo animal studies to evaluate system performance in real-world settings; (3) Clinical pilot trials, which require further engineering optimization to meet clinical ergonomics requirements and obtain necessary ethical approvals. This study successfully establishes the technical feasibility and core performance benchmarks of the proposed MR navigation and gating system, laying a solid foundation for subsequent translational steps.

Acknowledgments

The authors are grateful to all those who participated in this study.

Appendix A. Air Tightness and Stability Testing

To ensure the reliability and repeatability of the respiratory simulation, the assembled pneumatic platform underwent rigorous testing for air-tightness and motion stability. Under confirmed airtight conditions, the platform was driven by a standard sinusoidal waveform (amplitude: 10 mm, period: 4 s). A high-precision optical motion capture system was employed to continuously monitor and record the 3D trajectory of a marker located at the center of the dynamic region of the airbag over more than 50 respiratory cycles. The cyclic repeatability of the platform was assessed by analyzing the position of this marker at the end-inspiration phase for each cycle. Results demonstrated that the standard deviation of the 3D positions of all end-inspiration points was less than 0.3 mm, with the maximum positional fluctuation tightly constrained within 1.0 mm. This confirms that the simulation platform is capable of generating highly stable and repeatable dynamic displacements, thereby providing a dependable foundation for the subsequent performance evaluation of the navigation system.
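One plausible way to compute these repeatability statistics (the paper's exact definition of the positional standard deviation may differ; this sketch uses the RMS radial deviation about the centroid) is:

```python
import numpy as np

def end_inspiration_repeatability(positions):
    """positions: (N_cycles, 3) end-inspiration marker coordinates (mm).
    Returns the RMS 3-D deviation about the centroid (interpreted here as the
    positional SD) and the maximum positional fluctuation."""
    p = np.asarray(positions, dtype=float)
    d = np.linalg.norm(p - p.mean(axis=0), axis=1)  # radial deviation per cycle
    return float(np.sqrt(np.mean(d ** 2))), float(d.max())
```

Applied to the >50 recorded cycles, the first return value corresponds to the <0.3 mm criterion and the second to the <1.0 mm maximum-fluctuation criterion reported above.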

Author Contributions

Conceptualization, H.L. and S.W.; methodology, H.L.; validation, H.L., S.W. and J.H.; formal analysis, H.L. and T.Z.; investigation, T.Z. and Y.Z.; resources, H.L., S.W. and J.H.; data curation, J.H. and T.Z.; writing—original draft preparation, H.L.; writing—review and editing, S.W. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

The data used in this study are publicly available from the GitHub repository: https://github.com/SangWoonJeong/Respiratory-prediction, associated with Jeong et al. [21], and were accessed on 15 July 2025. The version of the software corresponds to the publicly available repository state at the time of access.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding Statement

This research received no external funding.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.International Agency for Research on Cancer GLOBOCAN 2022: Lung Cancer Fact Sheet. [(accessed on 12 June 2025)]. Available online: https://gco.iarc.fr/today/data/factsheets/cancers/15-Lung-fact-sheet.pdf.
  • 2.Liang T., Du Y., Guo C., Wang Y., Shang J., Yang J., Niu G. Ultra-low-dose CT-guided lung biopsy in clinic: Radiation dose, accuracy, image quality, and complication rate. Acta Radiol. 2021;62:198–205. doi: 10.1177/0284185120917622. [DOI] [PubMed] [Google Scholar]
  • 3.Chen C., Xu L., Sun X., Liu X., Han Z., Li W. Safety and diagnostic accuracy of percutaneous CT-guided transthoracic biopsy of small lung nodules (≤20 mm) adjacent to the pericardium or great vessels. Diagn. Interv. Radiol. 2021;27:94–101. doi: 10.5152/dir.2020.20051. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Shimizu S., Shirato H., Kagei K., Nishioka T., Bo X., Dosaka-Akita H., Hashimoto S., Aoyama H., Tsuchiya K., Miyasaka K. Impact of respiratory movement on the computed tomographic images of small lung tumors in three-dimensional (3D) radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2000;46:1127–1133. doi: 10.1016/s0360-3016(99)00352-1. [DOI] [PubMed] [Google Scholar]
  • 5.Jud L., Fotouhi J., Andronic O., Aichmair A., Osgood G., Navab N., Farshad M. Applicability of augmented reality in orthopedic surgery—A systematic review. BMC Musculoskelet. Disord. 2020;21:103. doi: 10.1186/s12891-020-3110-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Brockmeyer P., Wiechens B., Schliephake H. The Role of Augmented Reality in the Advancement of Minimally Invasive Surgery Procedures: A Scoping Review. Bioengineering. 2023;10:501. doi: 10.3390/bioengineering10040501. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Qian L., Barthel A., Johnson A., Osgood G., Kazanzides P., Navab N., Fuerst B. Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display. Int. J. Comput. Assist. Radiol. Surg. 2017;12:901–910. doi: 10.1007/s11548-017-1564-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Chen X., Xu L., Wang Y., Wang H., Wang F., Zeng X., Wang Q., Egger J. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J. Biomed. Inform. 2015;55:124–131. doi: 10.1016/j.jbi.2015.04.003. [DOI] [PubMed] [Google Scholar]
  • 9.Verhey J.T., Haglin J.M., Verhey E.M., Hartigan D.E. Virtual, augmented, and mixed reality applications in orthopedic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020;16:e2067. doi: 10.1002/rcs.2067. [DOI] [PubMed] [Google Scholar]
  • 10.Frajhof L., Borges J., Hoffmann E., Lopes J., Haddad R. Virtual reality, mixed reality and augmented reality in surgical planning for video or robotically assisted thoracoscopic anatomic resections for treatment of lung cancer. J. Vis. Surg. 2018;4:143. doi: 10.21037/jovs.2018.06.02. [DOI] [Google Scholar]
  • 11.Li C., Zheng B., Yu Q., Yang B., Liang C., Liu Y. Augmented Reality and 3-Dimensional Printing Technologies for Guiding Complex Thoracoscopic Surgery. Ann. Thorac. Surg. 2021;112:1624–1631. doi: 10.1016/j.athoracsur.2020.10.037. [DOI] [PubMed] [Google Scholar]
  • 12.Han Z., Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput. Assist. Surg. 2024;29:2357164. doi: 10.1080/24699322.2024.2357164. [DOI] [PubMed] [Google Scholar]
  • 13.Yang S., Wang Y., Ai D., Geng H., Zhang D., Xiao D., Song H., Li M., Yang J. Augmented Reality Navigation System for Biliary Interventional Procedures with Dynamic Respiratory Motion Correction. IEEE Trans. Bio-Med. Eng. 2024;71:700–711. doi: 10.1109/TBME.2023.3316290. [DOI] [PubMed] [Google Scholar]
  • 14.Guoming W. Transthoracic needle biopsy of lung under simulator for the diagnosis of carcinoma of the lung. Cancer Res. Prev. Treat. 1995;22:339–340. [Google Scholar]
  • 15.Frisk H., Burström G., Persson O., El-Hajj V.G., Coronado L., Hager S., Edström E., Elmi-Terander A. Automatic image registration on intraoperative CBCT compared to Surface Matching registration on preoperative CT for spinal navigation: Accuracy and workflow. Int. J. Comput. Assist. Radiol. Surg. 2024;19:665–675. doi: 10.1007/s11548-024-03076-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Blackall J.M., Ahmad S., Miquel M.E., McClelland J.R., Landau D.B., Hawkes D.J. MRI-based measurements of respiratory motion variability and assessment of imaging strategies for radiotherapy planning. Phys. Med. Biol. 2006;51:4147–4169. doi: 10.1088/0031-9155/51/17/003. [DOI] [PubMed] [Google Scholar]
  • 17.Borgert J., Krüger S., Timinger H., Krücker J., Glossop N., Durrani A., Viswanathan A., Wood B.J. Respiratory motion compensation with tracked internal and external sensors during CT-guided procedures. Comput. Aided Surg. 2006;11:119–125. doi: 10.3109/10929080600740871. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Tatinati S., Veluvolu K.C., Hong S.-M., Nazarpour K. Real-time prediction of respiratory motion traces for radiotherapy with ensemble learning; Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Chicago, IL, USA. 26–30 August 2014; pp. 4204–4207. [DOI] [PubMed] [Google Scholar]
  • 19.Liu N., Zhao Z., Li Y., Liu J. A prospective randomized controlled study of the efficacy of different breath-holding methods on CT-guided percutaneous lung biopsy. J. Dalian Med. Univ. 2023;45:13–18. doi: 10.11724/jdmu.2023.01.03. [DOI] [Google Scholar]
  • 20.Florkowska A., Trojak M., Celniak W., Stanuch M., Daniol M., Skalski A. Metrological analysis of HoloLens 2 for visual marker-based surgical navigation; Proceedings of the 2023 IEEE International Conference on Imaging Systems and Techniques (IST); Copenhagen, Denmark. 17–19 October 2023; pp. 1–6. [DOI] [Google Scholar]
  • 21.Jeong S., Cheon W., Cho S., Han Y. Clinical applicability of deep learning-based respiratory signal prediction models for four-dimensional radiation therapy. PLoS ONE. 2022;17:e0275719. doi: 10.1371/journal.pone.0275719. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Ariffin F.F.S., Kamarudin L.M., Ghazali N., Nishizaki H., Zakaria A., bin Syed Zakaria S.M.M. Inhalation and exhalation detection for sleep and awake activities using non-contact ultra-wideband (UWB) radar signal. J. Phys. Conf. Ser. 2021;1755:012038. doi: 10.1088/1742-6596/1755/1/012038. [DOI] [Google Scholar]
  • 23.Keall P.J., Mageras G.S., Balter J.M., Emery R.S., Forster K.M., Jiang S.B., Kapatoes J.M., Low D.A., Murphy M.J., Murray B.R., et al. The management of respiratory motion in radiation oncology report of AAPM Task Group 76. Med. Phys. 2006;33:3874–3900. doi: 10.1118/1.2349696. [DOI] [PubMed] [Google Scholar]
  • 24.Vilas-Boas M.D.C., Choupina H.M.P., Rocha A.P., Fernandes J.M., Cunha J.P.S. Full-body motion assessment: Concurrent validation of two body tracking depth sensors versus a gold standard system during gait. J. Biomech. 2016;87:189–196. doi: 10.1016/j.jbiomech.2019.03.008. [DOI] [PubMed] [Google Scholar]
  • 25.Shipota M.S., Maksimova I.A. Metody analiza pokhodki v diagnostike sostoyaniya oporno-dvigatel’nogo apparata pri DTsP [Methods of Gait Analysis in Diagnosing the State of the Musculoskeletal System in Cerebral Palsy] Proc. Russ. State Univ. A.N. Kosygin (Technol. Des. Art) 2024:215–217. [Google Scholar]
  • 26.Tsukada H., Satou T., Iwashima A., Souma T. Diagnostic accuracy of CT-guided automated needle biopsy of lung nodules. AJR. Am. J. Roentgenol. 2000;175:239–243. doi: 10.2214/ajr.175.1.1750239. [DOI] [PubMed] [Google Scholar]
  • 27.Zhou G., Chen X., Niu B., Yan Y., Shao F., Fan Y., Wang Y. Intraoperative localization of small pulmonary nodules to assist surgical resection: A novel approach using a surgical navigation puncture robot system. Thorac. Cancer. 2020;11:72–81. doi: 10.1111/1759-7714.13234. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Duan X., He R., Jiang Y., Cui F., Wen H., Chen X., Hao Z., Zeng Y., Liu H., Shi J., et al. Robot-assisted navigation for percutaneous localization of peripheral pulmonary nodule: An in vivo swine study. Quant. Imaging Med. Surg. 2023;13:8020–8030. doi: 10.21037/qims-23-716. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Zheng F., Zhong H., Chen H. Impact of the Varian real-time position management respiratory gating system on dosimetry of radiotherapy plans. Chin. J. Radiol. Med. Prot. 2022;42:685–690. doi: 10.3760/cma.j.cn112271-20220516-00207. [DOI] [Google Scholar]
  • 30.Barud M., Turek B., Dąbrowski W., Siwicka D. Anesthesia for robot-assisted surgery: A review. Anaesthesiol. Intensive Ther. 2025;57:99–107. doi: 10.5114/ait/203168. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Sharp G.C., Jiang S.B., Shimizu S., Shirato H. Prediction of respiratory tumour motion for real-time image-guided radiotherapy. Phys. Med. Biol. 2004;49:425–440. doi: 10.1088/0031-9155/49/3/006. [DOI] [PubMed] [Google Scholar]
  • 32.Lin H., Shi C., Wang B., Chan M.F., Tang X., Ji W. Towards real-time respiratory motion prediction based on long short-term memory neural networks. Phys. Med. Biol. 2019;64:085010. doi: 10.1088/1361-6560/ab13fa. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Vitazkova D., Foltan E., Kosnacova H., Micjan M., Donoval M., Kuzma A., Kopani M., Vavrinsky E. Advances in Respiratory Monitoring: A Comprehensive Review of Wearable and Remote Technologies. Biosensors. 2024;14:90. doi: 10.3390/bios14020090. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Eskandari M., Arvayo A.L., Levenston M.E. Mechanical properties of the airway tree: Heterogeneous and anisotropic pseudoelastic and viscoelastic tissue responses. J. Appl. Physiol. 2018;125:878–888. doi: 10.1152/japplphysiol.00090.2018. [DOI] [PubMed] [Google Scholar]
  • 35.Ertop T.E., Emerson M., Rox M., Granna J., Maldonado F., Gillaspie E., Lester M., Kuntz A., Rucker C., Fu M., et al. Steerable needle trajectory following in the lung: Torsional deadband compensation and full pose estimation with 5DOF feedback for needles passing through flexible endoscopes; Proceedings of the ASME Dynamic Systems and Control Conference. ASME Dynamic Systems and Control Conference; Pittsburgh, PA, USA. 4–7 October 2020; [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Al-Safadi S., Hutapea P. A study on modeling the deflection of surgical needle during insertion into multilayer tissues. J. Mech. Behav. Biomed. Mater. 2023;146:106071. doi: 10.1016/j.jmbbm.2023.106071. [DOI] [PubMed] [Google Scholar]
