Abstract
Ultrasound (US) imaging is a widely used diagnostic method in clinics. Real-time-generated US images are used for rapid diagnosis without harm to patients. However, the quality of US imaging depends heavily on the skill of the individual physician, leading to variability across operators. Techniques for autonomous robotic ultrasound (AU-RUS) acquisitions are expected to become an effective means to improve the level of US diagnosis, reduce the workload of physicians, and improve the standardization of US imaging quality. This paper aims to summarize the current research status of techniques for AU-RUS acquisitions, and to discuss the research trends and challenges regarding related technologies. Firstly, the techniques for AU-RUS acquisitions and systems are outlined: the techniques for teleoperated or autonomous US acquisitions are briefly discussed, and representative RUS acquisition systems are introduced. Then, the current research status of AU-RUS acquisitions is reviewed from four research directions: force sensitivity and control, scanning path-planning and positioning, US treatment guidance, and US image processing technology and quality assessment optimization. This review provides a decision-oriented autonomy perspective by mapping typical methods to workflow components across the stages of perception, decision-making, and execution. We identify major deployment bottlenecks, including safety-verifiable autonomy and failure recovery, motion compensation under deformation, and the lack of standardized, clinically meaningful US image quality metrics. Finally, the shortcomings of current research are summarized and analyzed, and the research trends and challenges for AU-RUS acquisitions are discussed.
Keywords: medical robotics, ultrasound imaging, robotic ultrasound acquisition, autonomous robotic system, medical image processing
1. Introduction
B-mode ultrasound (US) imaging began to be used for medical diagnosis in the 1940s [1]. Its principle is based on transmitting US waves into the human body via a US transducer while simultaneously receiving the reflected echo signals. Since the echoes carry information about the biological tissues, intuitive imaging of the corresponding tissues can be achieved by analyzing the distribution and intensity of the echoes, allowing physicians to assess the health of the tissues for diagnostic purposes [2]. Compared to other medical imaging modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), US imaging offers the distinct advantages of portability, real-time capability, safety, and low cost [3,4]. As a result, it is widely applied in various clinical fields, such as obstetrics and gynecology, cardiology, urology, and gastroenterology [5,6,7,8,9,10,11].
Despite its unique advantages, US imaging also has a notable weakness: the examination must be performed when the US probe is in “close” contact with the patient’s body. Furthermore, image quality is highly dependent on the contact status [12], placing high demands on the physician’s skill and experience. During US examination, the physician must first identify the correct scanning area, and then continuously move the US probe while maintaining adequate contact pressure and constantly adjusting its position to optimize image quality. Additionally, the physician must use their other hand to continuously adjust the parameters of the US machine. This diagnostic process relies entirely on the physician’s skill in integrating anatomical prior knowledge with the US images to assess imaging quality and form a diagnosis. Such a heavily operator-dependent procedure has become a bottleneck, limiting the broader application and adoption of US imaging.
The need for manual operation also makes US imaging difficult to adapt to intraoperative navigation applications, and even harder to integrate with other intelligent surgical tools. Additionally, US image quality highly depends on the physician’s skill and experience, so reliable diagnostic outcomes are often determined by the proficiency of the specialized physician. The current lack of quantitative evaluation standards for US imaging quality can lead to discrepancies in diagnosis for the same case among different physicians [13]. Furthermore, the repetitive, mechanical motions involved in the daily examination of a large number of cases impose significant mental and physical strain on physicians. The most common issues are permanent fatigue-related injuries to local muscles and skeletal structures [14,15], which can also contribute to imaging defects caused by human operation. Moreover, it is also worth noting that during infectious disease outbreaks, the challenge of reducing infection risks associated with contact-based examinations is a prominent issue for conventional manual US procedures [16,17]. Finally, the lengthy and costly training cycle for physicians, coupled with the uneven distribution of medical resources, results in a shortage of highly skilled US physicians in remote and impoverished regions. This situation also occurs in various extreme environments, e.g., disaster zones, scientific expeditions in harsh conditions, and space missions, where deploying a sufficient number of specialized US physicians is exceptionally difficult.
With the rapid advancement of robotics and artificial intelligence (AI), medical robots have gradually demonstrated flexibility and human–robot friendliness distinct from traditional industrial robots. Progress in soft materials and sensor technology has endowed medical robots with a level of dexterity and perceptual capabilities comparable to that of the human arm. Extensive clinical studies have confirmed that medical robots have already shown significant advantages in clinical diagnosis, surgical procedures, postoperative rehabilitation, and home care [18,19,20]. Therefore, integrating robotic technology with US image acquisition holds the potential to overcome many challenges associated with manual US imaging. This approach is expected to realize standardized and consistent US images and has emerged as a prominent research direction in the field of medical-engineering integration. This paper will review relevant research work from three perspectives: robotic ultrasound (RUS) acquisition systems, core enabling technologies, challenges and prospects.
Different from existing surveys focusing on system summaries, this review emphasizes autonomy as decision-making capability across the full US acquisition workflow. We clarify which workflow components are system-led, shared, or human-led at different autonomy levels. We further distinguish engineering autonomy from clinically acceptable diagnostic autonomy and discuss clinical readiness and translation barriers. We summarize the cross-cutting bottlenecks and prioritize near-term barriers to deployment.
2. Overview of RUS Acquisition Systems
The RUS acquisition system aims to replace the physician, holding the US probe with a robotic arm to obtain medical diagnostic-quality US images while ensuring patient safety. In recent years, with the rapid development of collaborative robotic arms, force-sensing and control technologies, and image processing techniques, various RUS systems have been developed. Based on the level of involvement of the US physician, these systems can be categorized into three types: remote-controlled RUS (RC-RUS), semi-autonomous RUS (SA-RUS), and autonomous RUS (AU-RUS). Some representative RUS systems are listed in Table 1.
To move beyond an involvement-based description, we interpret autonomy in a decision-oriented way by identifying who closes the loop for key workflow components. A typical RUS workflow can be decomposed into target localization and initial pose selection, force regulation and acoustic coupling maintenance, image quality assessment and acceptance, scan completion and recovery from failures, and physician supervision. From this perspective, RC-RUS usually relies on the physician for most perception and decision steps, while SA-RUS automates specific components, such as force regulation or partial positioning, while keeping the physician in the loop for acceptance and supervision. AU-RUS aims to close more loops, including localization, quality-aware adjustments, and completion decisions under safety constraints. We further distinguish engineering-level autonomy from clinically acceptable diagnostic autonomy. Engineering autonomy mainly refers to the safe and robust execution of scanning primitives, such as stable contact, force-limited motion, and trajectory tracking. Clinically acceptable diagnostic autonomy additionally requires quality-aware decision-making with clinically aligned acceptance criteria, uncertainty monitoring, validated failure detection and recovery behaviors, and evaluation endpoints such as repeatability and diagnostic agreement. Therefore, a system can be mechanically autonomous without being diagnostically autonomous, which is particularly important for interpreting autonomous scanning results in clinical workflows.
RC-RUS represents a significant research direction. Such systems typically consist of an expert console, a robotic arm to operate the US probe, and a software controller that maps the expert’s motions to the robotic arm. In a decision-oriented view, the physician primarily closes the loop for target localization, scan-planning, and image acceptance, while the robot mainly provides motion reproduction and basic safety/force-limiting functions. In 2009, Nakadate et al. [21] developed the first 2D RUS acquisition system, WTA-1RII, for carotid blood flow measurement. The core components include a US probe, a custom-designed six-degrees-of-freedom (DOF) parallel robotic arm, a passive robotic arm, and a joystick-based master controller. As an initial attempt, this system adopts a teleoperation mode and incorporates a passive structure to provide a safety margin. Since it relies on the operator at the master console to ensure both safety and US image quality, the requirements for autonomous robotic control are relatively low.
As shown in Figure 1a, the ReMeDi robot [22] allows the physician to perform US diagnostics using a set of input devices, including a haptic interface for manipulating the robotic arm and a dedicated keyboard for operating the US machine. The system features a user-friendly human–robot interface and has successfully enabled remote echocardiography examinations. Commercially available system solutions that are currently in use include the MGI system [23] and the MELODY system [24]. The MGI system features high positioning accuracy, high flexibility, safety force protection, and real-time US imaging. Notably, Wang et al. [25] utilized the MGI system to perform RC-RUS examinations on COVID-19 patients, thereby reducing the risk of infection for healthcare providers. The MELODY system features three active DOFs and three passive DOFs, requiring a physician to be on-site for coarse positioning. It has been successfully applied in the US examinations of over 300 patients, covering areas such as cardiac, abdominal, and obstetric imaging [26,27,28]. As shown in Figure 1b, Mathiassen et al. [29] presented an RUS based on a commercial UR5 manipulator. This system integrates robotic motion control with US imaging, demonstrating the feasibility of using a collaborative robot for assisted or automated scanning tasks. Siao et al. [30] developed a system integrating a force sensor, a LiDAR device on the robotic arm, and an adhesive mechanism. This system transforms the robot’s coordinate system into a smartphone coordinate system, enabling physicians to remotely control the robotic arm via their smartphone. A built-in force sensor monitors whether the applied force exceeds safe limits during the procedure, ensuring patient safety and ultimately facilitating RC-RUS examinations.
SA-RUS further incorporates human–robot collaboration and semi-autonomous strategies. It retains the physician’s role in high-level decision-making (e.g., defining the clinical target and acceptance criteria) while delegating specific workflow components such as contact force and coupling maintenance, fine pose adjustment, and safety constraint enforcement to robotic assistance. Therefore, SA-RUS can be mechanically autonomous in its execution while still being physician-led in terms of quality acceptance and diagnostic responsibility. For instance, Mustafa et al. [31] utilized a commercial robotic arm for US acquisition, shifting the research focus towards enhancing the autonomy of the acquisition process. This system established a preliminary overall architecture for an “autonomous” RUS system, i.e., robotic arm + US probe + force sensor + optical sensor. The system utilized an image recognition algorithm to achieve abdominal recognition, thereby determining the starting position and the area requiring scanning for liver US imaging, and preliminarily realized autonomous scanning. Although this work cannot guarantee that the acquired US images meet diagnostic requirements, it validated the feasibility of AU-RUS acquisition. Subsequently, Ma et al. [32] developed a pulmonary RUS system consisting of a 7-DOF robotic arm, a US probe, and an RGB-D camera. This system estimated patient posture and identified the target acquisition area based on DensePose, achieving AU-RUS acquisitions. However, the image quality still required optimization through manual adjustment of the probe’s pose. Huang et al. [13] developed an SA-RUS system for the carotid artery, consisting of a 7-DOF robotic arm, an RGB-D camera, a US probe, and a control console. The system divided the neck scanning procedure into pre-scanning and intraoperative scanning phases. 
The initial pose and trajectory for the pre-scanning phase were determined based on RGB-D images, while the intraoperative scanning employed hybrid force–position control for implementation. This enabled the system to perform transverse/longitudinal scans of the neck. However, the initial pose and the path selection for the pre-scanning phase still required assistance from a specialized physician. As shown in Figure 1c, Akbari et al. [33] developed an RUS system featuring real-time image-based force adjustment, with the aim of enabling safer US examinations during the COVID-19 pandemic. By combining automatic robotic scanning with online US image quality assessment and force regulation, their method enhanced physical distancing while maintaining scanning performance.
AU-RUS aims to achieve autonomous US acquisition by enabling the robot to plan and execute scanning motions without continuous human teleoperation. More importantly, higher autonomy also implies closing additional loops, such as target localization, quality-aware pose adaptation, scan completion decisions, and failure detection and recovery under safety constraints. However, autonomy in motion-planning and execution does not necessarily imply clinically acceptable diagnostic autonomy, which further requires validated quality metrics and clinically aligned endpoints. As shown in Figure 1d, Su et al. [34] proposed an AU-RUS system for thyroid examination, consisting of a 6-DOF robotic arm equipped with a linear array US probe, a 6-DOF force/torque sensor, and an RGB-D camera. This utilizes human skeleton keypoint recognition for initial positioning and combines reinforcement learning with force feedback to accomplish thyroid target searches. It also employs Bayesian optimization to adjust the US probe pose online, thereby enhancing imaging quality and scanning completeness. Experiments on human subjects demonstrated its potential to achieve image quality and nodule information extraction comparable to that of manual examination.
To enhance image quality, Zielke et al. [35] developed an RUS system for thyroid volume measurement consisting of a 7-DOF robotic arm and a US probe. A neural network segments the acquired US images online, and the segmentation results are fed back into the scanning loop to define the trajectory guiding the US probe’s movement. This closed-loop design improved the repeatability and consistency of thyroid lobe volume measurements and significantly reduced measurement errors introduced by physician variability. However, due to the sensitive anatomical location of the thyroid gland, the initial position of the probe still required manual determination by a physician, and the system targets standardized volume estimation rather than a comprehensive diagnostic procedure. Shah et al. [36] developed an RUS system integrated with a 6-DOF robotic arm and a 2D US probe. This system focused on 3D reconstruction technology of the carpal arch based on conventional 2D US imaging, enabling 3D morphological assessment of the carpal arch. The approach of this system holds referential significance for the reconstruction and analysis of other anatomical features in the human body using 2D US images. Tan et al. [37] proposed a system consisting of a US probe, a dual robotic-arm system, a multi-structured light system, a human–robot interaction system, and a flexible US probe clamping device. They designed an end-to-end scanning strategy along with a closed-loop force control strategy to achieve repeatability in breast US imaging.
Table 1.
Summary of robotic systems for US acquisition.
| Ref. | Year | Robotic Arm | US Device | Additional Sensor | Technical Features | Application |
|---|---|---|---|---|---|---|
| [13] | 2024 | Franka Panda (7-DOF) | | | | Carotid artery |
| [23] | 2009 | Parallel robot (6-DOF) | | / | | Carotid artery |
| [30] | 2024 | Not provided | | | | Not specified |
| [31] | 2013 | Mitsubishi MELFA RV-1 (6-DOF) | | | | Liver |
| [32] | 2021 | Franka Panda (7-DOF) | | | | Lung |
| [36] | 2021 | Denso (6-DOF) | | / | | Carpal arch |
| [37] | 2022 | Franka Panda (7-DOF) | | | | Breast |
| [38] | 2017 | KUKA LBR iiwa (7-DOF) | | | | Heart |
| [39] | 2018 | Lechuang mechanism (3-DOF) | | | | Not specified |
| [40] | 2018 | Epson C4-A601S (6-DOF) | | | | Not specified |
| [41] | 2025 | Not provided | | | | Not specified |
| [42] | 2016 | KUKA LBR iiwa (7-DOF) | | | | Abdominal aortic aneurysm |
| [43] | 2025 | Franka Panda (7-DOF) | | | | Not specified |
| [44] | 2023 | Not provided | | | | Not specified |
| [45] | 2022 | KUKA LBR Med7 (7-DOF) | | | | Breast |
| [46] | 2025 | Not provided | | | | Kidney |
| [47] | 2025 | Franka Panda (7-DOF) | | | | Musculoskeletal |
| [48] | 2024 | Diana 7 Med (7-DOF) | | | | Carotid artery |
Compared with 2D US imaging, 3D US imaging enables the 3D visualization of target structures. Any slice obtained from 3D US can be reviewed by physicians, providing more precise 3D morphological measurements [49] and offering more accurate references for diagnosis [50]. However, 3D US is more sensitive to the applied force and tissue deformation [51,52], leading to poor repeatability and poor standardization for physician-held 3D US probes. In contrast, robotic 3D US imaging fulfils an urgent need and shows broad medical application prospects, making it a key research focus.
Figure 1.
Representative robotic systems for US acquisitions. (a) ReMeDi system used in a cardiac exam [22]; (b) teleoperated US system with haptic device [29]; (c) robot-assisted system for automatic tissue scanning [33]; (d) AU-RUS system for thyroid scanning [34]; (e) 3D RUS system [38].
The system developed by Göbl et al. [38] consists of a 7-DOF robotic arm, an RGB-D camera, and a US probe, as shown in Figure 1e. The RGB-D camera is positioned above the patient to register CT images to the world coordinate system. The scanning path is automatically planned in preoperative CT data to cover the region of interest inside the patient, with optimization of the acoustic window based on estimations of internal anatomical structures and acoustic transmission. This system used acoustic window quality as the evaluation criterion, resulting in higher coverage of the anatomical target area compared to simple planning approaches.
To address the poor repeatability in physician-performed US acquisition, Kojcev et al. [53] compared the consistency of thyroid lobe length measurements obtained from 3D US images acquired automatically by a robotic system versus those from 2D US scans performed by a physician. Their system employed a 7-DOF robotic arm with an RGB-D camera mounted at its end-effector. Using the point cloud data of the body surface acquired by the RGB-D camera, a scanning trajectory for the thyroid lobe was defined. Compliant behavior was achieved through impedance control, followed by 3D volume reconstruction, with measurements subsequently performed by a physician. Compared to physician-operated US, the RUS system yielded more repeatable and highly consistent measurement results. Jiang et al. [54] addressed the significant tissue deformation caused by probe pressure in 3D US imaging via a patient-specific stiffness-based method. The system consists of a 7-DOF robotic arm with a torque sensor and a linear US probe attached to its flange. It records the contact force and probe pose during palpation to estimate the stiffness of nonlinear tissues. US images are used to compute pixel displacement, characterizing tissue deformation under varying forces. Ultimately, the acquired US images are calibrated to correct deformation and achieve repeatable image acquisition. Regarding challenges in US examination, such as large variations in 3D fetal pose and poor image quality, Chen et al. [55] proposed a novel 3D fetal pose estimation framework, which effectively improves the accuracy of fetal US localization and pose estimation.
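The stiffness-estimation step in a deformation-compensation pipeline such as Jiang et al.'s can be illustrated with a small sketch: fit a nonlinear force–indentation model from palpation samples, then invert it to predict the indentation to compensate at a given contact force. The quadratic model form, function names, and values are illustrative assumptions, not the authors' published formulation.

```python
import math

def fit_tissue_stiffness(depths_mm, forces_n):
    """Least-squares fit of an assumed nonlinear force-indentation model
    F = k1*d + k2*d^2 from palpation samples (normal equations for the
    two-parameter model, no external dependencies)."""
    s11 = sum(d * d for d in depths_mm)
    s12 = sum(d ** 3 for d in depths_mm)
    s22 = sum(d ** 4 for d in depths_mm)
    b1 = sum(d * f for d, f in zip(depths_mm, forces_n))
    b2 = sum(d * d * f for d, f in zip(depths_mm, forces_n))
    det = s11 * s22 - s12 * s12
    k1 = (b1 * s22 - b2 * s12) / det
    k2 = (s11 * b2 - s12 * b1) / det
    return k1, k2

def indentation_at_force(k1, k2, force_n):
    """Invert the model (positive root of k2*d^2 + k1*d - F = 0) to estimate
    indentation depth, which could then be used to shift image pixels back
    toward their zero-force positions."""
    if abs(k2) < 1e-12:
        return force_n / k1
    return (-k1 + math.sqrt(k1 * k1 + 4.0 * k2 * force_n)) / (2.0 * k2)
```

With noiseless synthetic palpation data generated from k1 = 2, k2 = 0.5, the fit recovers the parameters exactly and the inversion returns the original depth.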
US examination protocols vary significantly across different organs. Consequently, 2D or 3D RUS systems are often tailored to specific, personalized US examination applications, lacking universal evaluation metrics. Hence, we will proceed from a technical research perspective, establishing corresponding evaluations based on a summary of the technical framework to assess and characterize existing RUS systems.
3. Current Developments on AU-RUS Acquisitions
This section will review the current research status of RUS acquisition from a technical perspective. This includes force sensitivity and control, scanning path-planning and positioning, US treatment guidance, and US image processing technology and quality assessment optimization, as detailed in Table 2. Patient safety is the paramount prerequisite during AU-RUS acquisition. Under the condition of ensured safety, balancing patient comfort with imaging quality places higher demands on the precision and sensitivity of robotic force control. The degree of automation in the acquisition process is closely related to the robot’s localization of the scanning area and the planning technology of the scanning path. Furthermore, the effectiveness evaluation of AU-RUS acquisition technology is strongly correlated with the assessment and optimization of imaging quality and the structural accuracy evaluation of 3D reconstruction.
Table 2.
Summary of RUS acquisition techniques.
| Technique | Implementation | Key Features | References |
|---|---|---|---|
| Force sensitivity and control | Probe force sensors maintaining 1~8 N range | Inherent shadowing at image edges | [39,40] |
| | Flange 6-axis force/torque sensor | Precise constant flange force | [56,57] |
| | Built-in force/torque sensors | Force-feedback-based compliant control | [38,53,58,59,60] |
| | Built-in sensors for patient-specific optimal force | Contact force optimization via US confidence maps | [42] |
| | Constant-force spring (passive mechanism) | Mechanically guaranteed force safety | [61] |
| | Force computed via acoustically transparent pad | Mechanically guaranteed force safety | [62] |
| | Clutch joint limiting force range | Mechanically guaranteed force safety | [63] |
| | Linear spring and flexible joint for safe force | Mechanically guaranteed force safety | [64] |
| Scanning path-planning and localization | Real-time US-based path-planning | US image-only control | [65] |
| | Manual path-planning in robot workspace | Fixed pre-planned paths (non-adjustable) | [66] |
| | RGB-D-based adjustment of planned paths | Real-time RGB-D tracking of planned paths | [54] |
| | Preoperative path-planning | CT, MRI, CAD models with image registration | [42,51,59,67,68,69,70] |
| | Automatic path-planning from surface point clouds | Selected ROI surface point clouds | [45,71] |
| US treatment guidance | Catheter tracking via preoperative anatomy and US | 3D vessel registration and interventional 3D US | [72] |
| | US imaging + visual/force-guided needle insertion | Dual-arm coordination + pre-/intra-op image registration + depth vision navigation | [60] |
| | US-guided spinal needle insertion | US-based servoing for spinal needle insertion | [47] |
| Image processing techniques, quality assessment and optimization | Image quality assessment using anatomical priors | Biological parameter comparison | [73,74,75,76,77] |
| | RF data and US attenuation-based quality assessment | US attenuation characteristics | [73,74,75,76] |
| | Acoustic window optimization to reduce shadowing | Acoustic window optimization | [38,78] |
| | Real-time probe adjustment via US confidence maps | US confidence map | [3,79,80,81] |
| | Visual servoing for path adjustment | Preoperative image registration | [82,83,84,85] |
3.1. Force Sensitivity and Control
Based on clinical experience from physician-performed US acquisition, it is understood that continuous contact between the US probe and the patient must be maintained. An appropriate force must be applied to obtain clear images while ensuring patient safety. Chatelain et al. [79,80,86] investigated the relationship between US image confidence values and contact force magnitude. Their research results indicate that inadequate contact forces affect acoustic coupling, resulting in insufficient image confidence and poor image quality. As contact force increases, the confidence value stabilizes after reaching a certain threshold. This suggests that beyond this point, the correlation between contact force magnitude and image quality weakens, while too high a contact force may cause discomfort or even harm to the patient. Zhang et al. proposed a path smoothing optimization method based on the force–strain regression of the breast tissue deformation. Further, they introduced online updating of the contact force by monitoring US image confidence, aiming to maintain good coupling and stable imaging under a lower desired force [87]. Based on the above analyses, precise force-sensing and control constitute a significant research direction in US acquisition systems.
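The confidence-driven force adaptation described above can be sketched as a simple set-point update rule: press slightly harder when image confidence indicates poor coupling, and relax toward a comfort minimum when coupling is good. All thresholds, step sizes, and force limits below are illustrative assumptions, not clinical values or any published controller.

```python
def update_desired_force(f_des, confidence, c_low=0.6, c_high=0.8,
                         f_min=2.0, f_max=8.0, step=0.2):
    """One iteration of a confidence-driven contact-force set-point update:
    raise the desired force when acoustic coupling (mean image confidence)
    is poor, lower it toward the comfort minimum when coupling is good,
    and always clamp to a safe force range."""
    if confidence < c_low:      # poor coupling -> press slightly harder
        f_des += step
    elif confidence > c_high:   # good coupling -> reduce patient load
        f_des -= step
    return min(max(f_des, f_min), f_max)
```

The clamp keeps the set-point inside the safe range regardless of the confidence signal, so a failing confidence estimator cannot drive the force upward without bound.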
Some researchers have integrated a force sensor at the robot’s flange or positioned sensors on the US probe to detect the contact force in real time. Huang et al. [39,40] positioned two force sensors on the sides of the probe; the probe’s pose is automatically adjusted to maintain the contact force within the range of 1~8 N. To address the angular misalignment between the robotic arm and the probe caused by probe vibration, as shown in Figure 2a, Wang et al. [88] employed a dual-side IMU sensor structure. This setup analyzes the angles of both the US probe and the robotic arm during scanning and uses a feedback mechanism for the timely correction of the probe’s imaging angle, ensuring good reproducibility of US images. However, probe-mounted sensor designs can introduce artifacts into the acquired images, compromising image quality. To address this, many researchers have altered the placement of the force sensor. Merouche et al. [56] positioned a force sensor between the flange and the probe for the automatic US scanning of lower-limb arteries, applying a constant force perpendicular to the patient’s body surface. Mustafa et al. [57] employed a similar design for automated liver US imaging, where the force control algorithm relies on real-time feedback from the force sensor. Fu et al. [89] also installed a force sensor between the flange of the robotic arm and the US probe. This setup acquires contact force data along with the robotic arm’s flange position and velocity to perform an online estimation of environmental parameters. Based on these parameters, the actual contact force in the current state is calculated to ensure the safety of remote US examinations. As shown in Figure 2b, Zheng et al. proposed a low-cost force sensor and force control system for RUS imaging, achieving stable contact through hybrid position–force control [41].
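One common way to realize online estimation of environmental parameters of the kind Fu et al. describe is a recursive least-squares (RLS) estimate of contact stiffness from streaming force and position samples. The spring-contact model, the assumption of a known surface position, and all constants in this sketch are illustrative, not the authors' published algorithm.

```python
class StiffnessRLS:
    """Recursive least-squares estimate of environment stiffness k in an
    assumed spring-contact model F = k * penetration, updated online from
    streaming force/position samples. The forgetting factor lets the
    estimate track slowly varying tissue properties."""

    def __init__(self, k0=500.0, p0=1e6, forget=0.98):
        self.k = k0        # stiffness estimate (N/m)
        self.p = p0        # estimate covariance
        self.lam = forget  # forgetting factor in (0, 1]

    def update(self, penetration_m, force_n):
        phi = penetration_m
        gain = self.p * phi / (self.lam + phi * self.p * phi)
        self.k += gain * (force_n - self.k * phi)       # innovation correction
        self.p = (self.p - gain * phi * self.p) / self.lam
        return self.k
```

Fed noiseless samples from a virtual 800 N/m environment, the estimate converges from its 500 N/m initial guess within a few tens of updates; the converged stiffness can then be used to predict the contact force from the flange position alone.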
With the increasing demand for human–robot interaction, collaborative medical robots have been developed. These robots typically have force–torque sensors embedded in their joints, which can be used to estimate the Cartesian forces acting on the flange. Many systems directly utilize such robotic arms and apply a constant contact force along the direction of the probe [38,42,53,58,59,60,90]. Lin et al. constructed an RC-RUS using a 7-DOF robotic arm with an embedded force sensor at its flange, employing Z-direction PD control to maintain a constant contact force of 2.0 N. Comparative experiments showed that both PD and impedance control could maintain the contact force near the desired value, although they differed in repeat-scan stability. This work provides reproducible experimental evidence regarding “constant force setting + controller selection” [43].
To prevent potential harm to patients caused by excessive contact force, some researchers have designed specialized end-effector mechanisms to maintain constant contact force, thereby eliminating the need for force sensors and feedback control. Tsumura et al. [61] developed a passive mechanism incorporating a constant-force spring for fetal US scanning, which applies no active force and ensures safety for the pregnant woman and the fetus. Groenhuis et al. [62] designed an acoustically transparent pad for strain elastography reconstruction. No additional force sensor is required, as the pad’s elasticity is known a priori, and thickness changes can be detected in US images via edge detection. The pressure distribution transmitted through the pad can be calculated and then used to infer pressure within the underlying tissue. Lucas et al. [91,92] designed a soft end-effector consisting of a base and a sensor holder, connected by three soft fluidic actuators arranged in parallel at 120° intervals. This configuration provides two rotations and one translation, achieving passive compliance suitable for US examinations.
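In its simplest linear-elastic form, the pad-based idea of Groenhuis et al. reduces to pressure = E × strain, with strain obtained from the pad thickness measured by edge detection. The function below is such a sketch; the assumption of Hookean pad behavior and the modulus and thickness values in the example are illustrative.

```python
def pad_contact_pressure(thickness_mm, rest_thickness_mm, youngs_modulus_kpa):
    """Estimate contact pressure from the compressed thickness of an
    acoustically transparent pad of known elasticity, assuming linear-elastic
    (Hookean) behavior: pressure = E * strain. The compressed thickness is
    taken from edge detection in the US image."""
    strain = (rest_thickness_mm - thickness_mm) / rest_thickness_mm
    return youngs_modulus_kpa * strain  # pressure in kPa
```

For example, a 10 mm pad compressed to 9 mm with an assumed modulus of 50 kPa implies about 5 kPa of contact pressure; an uncompressed pad implies zero.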
To enhance patient safety and mitigate the risk of injury in case of force-sensor failure, Wang et al. [63] designed a clutch joint (as shown in Figure 2c) that constrains the applied force to a safe range while maintaining the required contact force across different probe poses. As shown in Figure 2d, Sandoval et al. [64] designed a flexible joint coupled to the robot’s flange. This structure is a combination of a linear spring and a multi-joint planar mechanism, providing sufficient compliance to protect patients from excessive contact force. Bao et al. [44] designed a multifunctional end-effector that incorporates a force control mechanism and a force/torque measurement mechanism. This enables the US probe to scan with a constant force while simultaneously measuring the operational forces/torques. The application of a constant force allows for safe US scanning, and the measured forces/torques are used for monitoring and issuing warnings during the procedure. Notably, this multifunctional end-effector operates with independent control, separate from the robotic arm.
The force control methods commonly employed by researchers are predominantly based on admittance controllers, which utilize a predefined relationship between force and position. Carrière et al. [93] used admittance control to ensure compliance in a cooperatively controlled US acquisition system, which regulates the force applied to the tissue and reduces the effort required from the physician. Piwowarczyk et al. [94] proposed an admittance controller to scale the relationship between the force exerted by the physician on the robot and the force applied to the environment. Ferraguti et al. [95] investigated the stability of admittance-controlled robots and their ability to respond to different environmental forces. Dimas et al. [96] analyzed the stability of admittance control by detecting unstable behavior and adjusting the admittance control gains using an adaptive online method to stabilize the robot. Ning et al. [97] proposed a control strategy based on an end-effector admittance controller, achieving simultaneous control of the US probe’s position, pose, and contact force. It is difficult for traditional admittance control to achieve high-precision force control. Therefore, as shown in Figure 2e, Jiang et al. [98] introduced an integral adaptive admittance control strategy, which performs an online estimation of uncertain environmental information and uses the estimated parameters to correct the reference trajectory. While ensuring system stability, it incorporates an integral controller to improve the system’s steady-state response, thereby enabling constant-force scanning in uncertain environments. Xie et al. [99] presented a virtual admittance-based primary–secondary control method, where the controller regulates the contact force between the probe and the tissue, mitigating increases in interactive force to ensure safety. Ning et al. 
[100] introduced inverse reinforcement learning into the active compliance control of an RUS acquisition system, enabling interactive behaviors in uncertain environments that more closely approximate expert strategies. Zhang et al. [87] integrated “dynamic contact force adjustment” with “US image confidence feedback” for robotic breast US scanning, allowing the contact force to be updated online in response to changes in local coupling quality.
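The admittance relationship underlying most of the controllers above can be illustrated with a minimal discrete-time sketch: a virtual mass–damper–spring maps the contact-force error onto a position correction added to the reference trajectory. The class name, gains, and sampling rate below are illustrative assumptions, not parameters from any cited system.

```python
class AdmittanceController1D:
    """Minimal 1-D admittance sketch: a virtual mass-damper-spring
    M*x'' + D*x' + K*x = f_err maps a contact-force error onto a
    position offset added to the commanded probe trajectory."""

    def __init__(self, m=1.0, d=50.0, k=200.0, dt=0.002):
        self.m, self.d, self.k, self.dt = m, d, k, dt
        self.x = 0.0  # position offset from the reference trajectory (m)
        self.v = 0.0  # offset velocity (m/s)

    def step(self, f_measured, f_desired):
        # The force error drives the virtual dynamics (semi-implicit Euler).
        f_err = f_measured - f_desired
        a = (f_err - self.d * self.v - self.k * self.x) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x  # retract (+) or press (-) relative to the reference


# Example: a constant 2 N excess contact force retracts the probe until
# the virtual spring balances it: x -> f_err / K = 2 / 200 = 0.01 m.
ctrl = AdmittanceController1D()
for _ in range(5000):  # simulate 10 s at 500 Hz
    offset = ctrl.step(f_measured=7.0, f_desired=5.0)
print(round(offset, 3))  # -> 0.01
```

Variants such as the integral adaptive scheme of Jiang et al. [98] augment this basic law with online environment estimation and an integral term to remove steady-state force error.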
Figure 2.
Representative force control methods in RUS acquisition systems: (a) dual-side IMU sensor structure [88]; (b) a cost-effective US force sensor and force control system [41]; (c) force-limiting clutch joint [63]; (d) instrumented probe-force measurement [64]; (e) RUS scanning system with integral adaptive admittance control [98].
In summary, current force-sensing technologies can be classified into three approaches: designing passive mechanisms, attaching external force sensors to the flange of robotic arms, and utilizing collaborative/medical robotic arms with built-in force sensors. Regardless of the method, the constant force applied by the system is often based on empirical data. This force value typically meets the requirement for acquiring US images while ensuring no harm to the patient. However, using a fixed value may lead to varying levels of comfort for different patients, or even for different anatomical sites on the same patient. Force control algorithms, predominantly based on admittance control, have achieved favorable results. Nonetheless, it remains necessary to establish a mapping between force control and patient comfort, which plays a crucial role in advancing the acceptance of RUS acquisition. In force controller design, three related but distinct requirements should be met: acoustic coupling defines a minimum contact condition needed to avoid dropouts and maintain usable images, while safety force thresholds define a conservative upper bound to prevent tissue injury under uncertainty. Patient-perceived comfort is identical to neither coupling nor safety, because discomfort may occur below injury thresholds and varies across subjects and anatomical sites; comfort therefore needs explicit consideration and reporting.
3.2. Scanning Path-Planning and Localization
Scanning path-planning and localization typically involves pre-defining a set of probe positions and poses along planned trajectories to fully cover the target imaging plane or region of interest. The level of autonomy of an RUS acquisition system is closely tied to the effectiveness of its scanning path-planning and localization methods. Accurate and comprehensive planning enables efficient coverage of the imaging plane, allowing the required structures to be imaged precisely, and is therefore a prerequisite for high US image quality.
Some approaches operate without real-time image feedback, relying solely on pre-planned paths within the robot’s workspace. Pahl et al. [66] designed a Cartesian robot capable of following a pre-designed scanning path without human intervention during operation. However, the system’s insufficient degrees of freedom limited its flexibility, potentially resulting in suboptimal imaging at certain positions. To locate and assess stenoses when planning treatment for peripheral artery disease, Merouche et al. [56] determined the scanning path by manually delineating it with the robotic arm.
Some researchers focus solely on using US images as feedback for path-planning, which is effective but faces several constraints. The lack of objective quality assessment criteria for US images makes it difficult to establish robust decision rules, and such approaches are often only applicable to specific scenarios. For carotid artery diagnosis, Nakadate et al. [65] developed a real-time, image-processing-based incremental planning approach. Through a pre-designed workflow, it detects carotid artery landmarks, relying solely on US image information for real-time feedback control. Because the image features of the carotid landmarks and the custom-designed scanning strategy are closely tied to the specific characteristics of the carotid artery, the generalizability of this method is limited. Huang et al. [13] divided the carotid artery US scanning process into pre-scanning and scanning stages to improve accuracy and efficiency. During the pre-scanning stage, the physician determines the probe’s initial pose based on RGB-D images. In the scanning stage, the probe’s pose is continuously adjusted by observing real-time US image features to achieve transverse and longitudinal scanning of the carotid artery. Shida et al. [101] modeled the probe’s motion on the body surface as a sequential decision-making problem in the cardiac structure search for transthoracic echocardiography. They employed deep reinforcement learning to generate search behaviors and further utilized a path-generation algorithm to shorten the search path and reduce examination time. This work embodies the concept of end-to-end closed-loop planning, integrating “image–action–path”.
Beyond medical US-specific studies, recent mapless navigation research has shown that deep reinforcement learning policies can make motion decisions using only local sensory feedback, without relying on global environment information, while improving generalization across different scenes and even different hardware platforms [102]. Related studies further highlight practical routes toward robust deployment, such as improved TD3-based mapless navigation [103] and Sim-to-Real transfer with minimal sensing [104]. Demonstrations can also be used to accelerate learning and enhance reliability in mapless navigation [105].
Most studies utilize RGB-D cameras to acquire body surface information, which is then registered with the global information of the target tissue obtained from preoperative MRI/CT images. Following this registration, the scanning path is planned on the MRI/CT images. The core of this method is to plan a 2D scanning path in advance and then project it onto the 3D surface, completing the scan within a manually selected region of interest. From the 3D information obtained in this way, the surface normal of the patient’s body is calculated, ensuring the probe remains perpendicular to the body surface. Such methods typically combine pre-planning with real-time feedback, allowing real-time adjustments to the manually defined path. This avoids the limitation of methods such as that of Reference [66], which do not account for real-time changes in tissue information, and therefore improves both the flexibility and accuracy of AU-RUS. Jiang et al. [54] proposed a vision-based RUS acquisition system that uses an RGB-D camera to extract manually planned scanning trajectories, followed by estimating the normal direction of the target from the extracted 3D trajectory. This system can monitor the target’s movement and automatically update the scanning trajectory, thereby seamlessly providing a 3D synthesized image of the target anatomy. Wang et al. [106] introduced an image-based probe tracking method to achieve AU-RUS acquisition. By identifying and segmenting preoperative MRI images, an estimate of the probe’s initial pose is obtained, and a transformation from MRI coordinates to US image coordinates is defined to register the MRI images to the real US images. Hennersperger et al. [59] manually planned a scanning path in the patient’s preoperative MRI images by defining start and end points.
This path was then transformed into the robot’s workspace through surface registration between the preoperative MRI image and the real-time RGB-D image. Welleweerd et al. [67] proposed using preoperative MRI images to reconstruct the tissue surface, or employing CAD files of a model to reconstruct a surface for path-planning, followed by transferring the planned path onto this surface. Similar approaches have been applied to various medical scenarios: References [68,69] applied them to spinal scanning, and in Reference [70] the scanning path was planned on a thyroid model and then transferred to the robot’s workspace via registration between the model and real-time RGB-D images. Approaches based on MRI/CT images for pre-planning are also popular; for instance, References [42,59] use MRI images for pre-planning, and Reference [38] conducts pre-planning using CT images. Bi et al. [107] addressed the acoustic shadowing problem caused by rib occlusion. They trained a reinforcement learning planner in a virtual environment constructed from CT templates to automatically generate intercostal scanning trajectories that avoid acoustic shadows and cover the target area. This approach aims to enhance coverage efficiency and reproducibility in scenarios with restricted acoustic windows, such as the chest and upper abdomen.
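The surface-normal computation that these registration-based pipelines use to keep the probe perpendicular to the body surface can be sketched as a PCA over a local neighborhood of the RGB-D point cloud. The function name, neighborhood size, and the camera-at-origin orientation convention below are illustrative assumptions, not details from any cited system.

```python
import numpy as np

def estimate_contact_normal(points, contact, k=20):
    """Estimate the body-surface normal at a contact point via PCA:
    the eigenvector of the local covariance matrix associated with
    the smallest eigenvalue approximates the surface normal."""
    # k nearest neighbours of the contact point (brute force for clarity)
    d2 = np.sum((points - contact) ** 2, axis=1)
    nbrs = points[np.argsort(d2)[:k]]
    eigvals, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
    normal = eigvecs[:, 0]  # eigh sorts eigenvalues in ascending order
    # Orient the normal toward the RGB-D camera (assumed at the origin),
    # i.e., away from the body; the probe axis is then the anti-normal.
    if np.dot(normal, -contact) < 0:
        normal = -normal
    return normal / np.linalg.norm(normal)


# Example: noisy samples of the plane z = 0.5 "seen" from a camera at
# the origin; the outward normal should be close to (0, 0, -1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 500),
                       rng.uniform(-1, 1, 500),
                       rng.normal(0.5, 0.01, 500)])
n = estimate_contact_normal(pts, contact=np.array([0.0, 0.0, 0.5]))
```

In practice, libraries such as PCL or Open3D provide equivalent (and faster) normal estimation; the brute-force neighbor search here is kept only for readability.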
From a clinical feasibility perspective, preoperative CT/MRI-based planning with cross-modality registration can offer strong anatomical priors for target localization and acoustic-window optimization, and is particularly attractive when cross-sectional imaging is already available (e.g., selected preoperative or interventional workflows) [38,42,59]. However, this paradigm typically requires additional steps, such as segmentation, calibration, and reliable registration. Further, its performance can degrade under respiration-induced motion and soft-tissue deformation, which increases workflow complexity and limits scalability in routine diagnostic screening [54,67,106]. Alternatively, RGB-D image-based planning combined with online adaptation reduces reliance on preoperative cross-sectional imaging and can better fit near-term clinical workflows due to the lower setup burden and easier integration with real-time perception and compensation [31,32]. Finally, fully autonomous 3D planning and scan completion (including automatic region identification, trajectory generation, and reconstruction in a unified pipeline) is increasingly explored, but in many applications it remains at a research or proof-of-concept stage and still requires stronger robustness, safety-verifiable autonomy, and clinically aligned endpoints before broader deployment [37].
The above path-planning methods all belong to semi-autonomous approaches, such as manually pre-planned paths or manually selected regions of interest, which involve significant human interventions. To enhance the overall system’s autonomy, some researchers have begun to focus on the fully automatic planning of regions of interest or scanning paths. Yang et al. [71] proposed a US system for 3D imaging, which utilizes RGB-D images and a fully convolutional network to achieve the automatic identification of spinal regions of interest and planning of scanning paths. Similarly, Suligoj et al. [108] employed an RGB-D camera to acquire surface point clouds of a human model and performed autonomous path-planning on the curved surface. The obtained images were effectively applied to jugular vein segmentation. Wang et al. [45] aimed to achieve complete and uniform coverage of the breast organ. They utilized an RGB-D camera to acquire point clouds of the breast from multiple angles, registered them together for shape reconstruction, and finally employed an equidistant 3D point cloud search algorithm to complete the path scanning. Tan et al. [37,109,110,111] proposed an end-to-end scanning trajectory generation strategy based on 3D point clouds. Building upon this, they incorporated considerations for trajectory offset strategies and dual-breast synchronous scanning strategies, achieving repeatability in breast scanning. Yang et al. [112] integrated RGB-D and US image information to determine the probe’s path and pose in real-time in spinal scanning. This approach ensures the vertebral structures remain centered in the US image, thereby enhancing the consistency of extended field-of-view imaging and measurements. For abdominal organs such as the kidney, Wu et al. [46] proposed a point-cloud-guided anatomical localization and scanning path-planning strategy, which improved the automation of path generation and its adaptability to individual body surface geometry. 
Additionally, Sun et al. [47] introduced an AU-RUS system for musculoskeletal US imaging, integrating anatomical region localization, automatic trajectory generation, segmentation, and 3D reconstruction into a unified workflow. The system employs hybrid position–force control to enhance scanning stability and image consistency along the trajectory.
Based on the above research, the current state of scanning path-planning can be summarized into the following four categories: planning within the robot’s workspace without image feedback; planning using only US image feedback; planar-planning based on the registration of preoperative images that are then projected onto the curved surface; and fully autonomous 3D surface path-planning. Currently, path-planning based on preoperative image registration is the mainstream approach, offering sufficient adaptability, and planar-planning avoids the complexities of direct 3D surface-planning. However, the lack of acquisition autonomy remains a non-negligible issue. Therefore, research on fully autonomous scanning path-planning will become the driving force advancing the autonomous progression of RUS acquisition systems.
In practice, scanning path-planning and probe localization for AU-RUS can be separated into four paradigms: (i) preplanned coverage paths (e.g., raster/spiral sweeps) executed with force/impedance regulation; (ii) anatomy-/model-informed planning, where surface/organ priors or protocol templates constrain feasible trajectories; (iii) image-driven online servoing, where real-time US feedback (e.g., target centering or quality surrogates) continuously refines the probe pose; and (iv) learning-based policies that map multi-modal observations to actions to reduce manual intervention. These paradigms differ in robustness, required priors, and failure recovery ability.
Preplanned coverage is simple and reproducible, and thus suitable for large and relatively smooth regions, but it is sensitive to patient motion and anatomical variability. Anatomy-/model-informed planning improves repeatability when reliable registration is available, yet it can degrade under soft-tissue deformation and calibration drift. Image-driven online servoing is effective when the imaging objective can be explicitly defined, but may suffer from noisy feedback and local minima. Recently, learning-based autonomy has shown promising cross-subject generalization, e.g., fully autonomous thyroid scanning and learning-based expert-level carotid ultrasonography, while diffusion-policy learning has also been explored with force-aware constraints for carotid scanning. Nevertheless, safety constraints and clinically meaningful quantitative evaluation remain essential for clinical deployment [34,113,114].
3.3. US Treatment Guidance
US-guided therapy enables physicians to focus on the surgical intervention task, allowing automatic imaging-based tracking of tools such as needles and catheters to facilitate human–robot collaborative treatment. Compared with diagnostic scanning, interventional guidance requires stricter safety guarantees, lower-latency feedback, and higher failure tolerance, because errors directly affect tool–tissue interaction and procedural risk. Interventional systems therefore typically require conservative fallback behaviors and rapid physician override. Langsch et al. [72] proposed an autonomous catheter tracking system for endovascular aneurysm treatment, in which the robot’s flange holds a 2D US probe to acquire US images. In the preoperative CT scan, the vascular structures of interest are segmented and then registered to the intraoperative US images. During the intervention, the physician inserts the catheter into the abdominal aorta and guides it to the region of interest. The robot employs a tracking algorithm along with force control to follow the catheter, ensuring the catheter tip remains continuously visible in the US image.
For needle guidance and placement tasks, Kojcev et al. [60] proposed a dual-robot system to simultaneously perform US imaging and needle insertion. First, the region of interest is selected from the RGB-D image of the patient’s body surface. During the insertion process, US image-based visual servoing is employed for target-tracking of the needle. Yan et al. [115] proposed integrating visual tracking with motion prediction for the continuous localization of the needle tip in 2D US images. Compared to detection methods solely relying on single-frame appearance features, this approach can still maintain stable tracking even when the needle tip is temporarily invisible or against strong background interference, making it more suitable as a feedback signal for robotic visual servoing.
For carotid artery reconstruction, Faoro et al. [116] proposed a robotic platform for US-guided endovascular surgery. This platform incorporates both preoperative and intraoperative US images, achieving precise 3D vascular volume reconstruction from 2D US image sequences through robotic probe manipulation and AI-based image analysis. Experimental results demonstrate that its reconstruction accuracy reached the submillimeter level. Esteban et al. [117] performed facet joint injections using US- and robot-guided needle insertion for the treatment of chronic back and spinal pain. This work presents the first clinical data on robot-assisted, US-guided facet joint needle insertion surgery, with the results demonstrating the clinical value of the system. Chen et al. [118] proposed a novel convolutional neural network (CNN) framework for the automatic and accurate detection of inserted needles, aiming to enhance the accuracy and success rate of clinical punctures. The method involves segmenting needle motion information to extract two adjacent US image frames, and then extracts the needle’s region of interest from the US images based on the prediction results from the previous frame. This data is then fed into the network, enabling finer and faster continuous needle localization. Grube et al. [119] proposed a deep learning-based needle tracking method for low-resolution volume US and performed quantitative evaluations on a robotically driven acquisition platform, highlighting the potential value of volume US for interventional navigation. Mazdarani et al. [120] integrated confidence maps with visual servoing, enabling the robot to automatically adjust the probe to maintain needle visibility within the imaging plane under conditions of unknown trajectories and complex backgrounds. Their experiments on phantoms/models reported stable tracking accuracy.
US treatment/intervention guidance in autonomous or semi-autonomous RUS can be grouped into: (i) US visual-servoing/confidence-driven tracking, where the robot adjusts the probe to keep the tool or target structure visible; (ii) registration-based guidance, where targets/plans from preoperative CT/MRI are mapped to the US frame; and (iii) integrated scan–localize–guide pipelines, which combine planning, scanning, segmentation/reconstruction, and guidance into a unified workflow. Visual-servoing approaches are attractive for real-time guidance with minimal preoperative imaging, but rely on robust tool visibility and can be challenged by acoustic shadowing. Registration-based methods provide global spatial context but are sensitive to calibration and deformation. Integrated pipelines are promising for reducing operator workload and standardizing workflow, yet require conservative safety design and clinically aligned validation. Recent studies illustrate these directions. For example, confidence-map-based visual servoing has been validated for maintaining longitudinal needle visibility in robotic US-guided PCNL, while an end-to-end autonomous US-guided CVC pipeline has been reported that integrates scan initialization, region/path-planning, vessel reconstruction and supervised needle guidance on high-fidelity phantoms [120,121].
3.4. US Image Processing Techniques and Quality Assessment Optimization
Due to the working mechanism of acoustic propagation, US imaging cannot penetrate air or bone. Therefore, the US probe must maintain close contact with the skin, and image acquisition requires selecting an optimal acoustic window while avoiding bone obstruction. Consequently, the relative position between the US probe and the human body is a critical factor affecting image quality. Although advancements in electronic technology, image processing techniques [122], and US transducer design [123] have significantly improved the image quality of current clinical devices, there remains no unified standard for evaluating US image quality.
In this review, “US image quality” for AU-RUS acquisition is interpreted as a clinically usable and protocol-compliant image state, which jointly reflects (i) physical signal adequacy (e.g., coupling/contact, attenuation/shadowing, signal-to-noise ratio, and artifact level) and (ii) task-level diagnostic sufficiency, i.e., whether the target anatomy/plane required by a specific protocol is clearly observable and reproducible. Under this definition, most existing quality assessment methods remain task-specific (e.g., plane-specific scoring for fetal, breast, knee-arthroscopy, or other organ protocols) and device- or protocol-dependent (affected by transducer type, frequency, gain, and vendor processing pipelines). They are therefore not directly comparable across studies, owing to inconsistent targets, scoring criteria, and acquisition settings.
Some scholars leverage prior anatomical knowledge of target organs or structures to comparatively assess US image quality, often by referencing the patient’s CT/MRI data [124,125,126]. Another effective approach involves processing the US images directly and evaluating their quality based on the correlation between the processed image content and the target image. Such methods are typically task-oriented, verifying whether the processed images contain the targets necessary for the specific task. For instance, References [73,74,75] proposed automated methods for fetal image quality assessment and fetal biometric measurements in US images. References [76,77] assessed US image quality for breast and knee arthroscopy, respectively. Reference [127] introduced a novel automated conical breast US system for breast cancer detection using a 3D US-MRI fusion method. Experimental analysis indicated that its image quality is comparable to that provided by the Siemens ABVS. Cao et al. [128] proposed and validated an AI-powered automated image quality auditing system for first-trimester screening. This system enables rapid auditing of the imaging quality of key anatomical planes and assists operators in improving acquisition quality. Meanwhile, Liu et al. [129] developed a deep learning-based quality assessment model for the fetal mid-sagittal plane used in nuchal translucency measurement, emphasizing its consistency and usability within multi-source data and clinical workflows.
Furthermore, from the perspective of US physics, more generalized methods for US image quality assessment can be proposed. Some studies estimate the attenuation characteristics of US waves and assess image quality by incorporating radiofrequency data [130,131,132,133]. Göbl et al. [38] utilized CT images of the target organ to plan scanning paths that optimize the acoustic window using the US attenuation model introduced in Reference [125]. By leveraging the relationship between X-ray attenuation coefficients and US propagation, the US intensity at selected points within the patient’s body can be predicted before performing the actual acquisition task, and the probe pose on the tissue surface that minimizes US attenuation can be selected. Based on prior knowledge of the target organ, the planned scanning path can thus avoid strong reflectors such as ribs, optimizing US image quality at the planning stage. Using acoustic-window optimization as a criterion, grounded in US physics, is likewise effective: Sutedjo et al. [78] proposed a pose optimization method to address image shadows caused by bone interference during scanning, which effectively improved US image quality.
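The acoustic-window reasoning described above can be illustrated with a toy attenuation model: accumulate per-segment losses of the form α·f·d along each candidate beam path and select the probe pose with the least total attenuation. The tissue coefficients below are rounded textbook values used purely for illustration, not the CT-derived model of Reference [125].

```python
import numpy as np

# Approximate soft-tissue attenuation coefficients in dB/(cm*MHz)
# (rounded textbook values, for illustration only).
ALPHA = {"fat": 0.6, "muscle": 1.0, "liver": 0.5, "bone": 20.0}

def path_attenuation_db(segments, freq_mhz):
    """Total two-way attenuation (dB) along a beam path given as
    (tissue, thickness_cm) segments, accumulating alpha * f * d."""
    one_way = sum(ALPHA[t] * freq_mhz * d for t, d in segments)
    return 2.0 * one_way  # the echo traverses the path twice

def best_acoustic_window(candidate_paths, freq_mhz=5.0):
    """Select the candidate probe pose whose beam path to the target
    accumulates the least attenuation (e.g., avoiding rib shadowing)."""
    losses = [path_attenuation_db(p, freq_mhz) for p in candidate_paths]
    return int(np.argmin(losses)), losses


# Two hypothetical paths to a liver target: one crossing a rib, one not.
paths = [
    [("fat", 1.0), ("bone", 0.8), ("liver", 3.0)],    # through a rib
    [("fat", 1.5), ("muscle", 1.0), ("liver", 3.5)],  # intercostal window
]
idx, losses = best_acoustic_window(paths)
print(idx)  # -> 1 (the intercostal window wins by a wide margin)
```

Even with crude coefficients, the bone segment dominates the loss budget, which is why window selection in the planning stage can avoid rib shadowing before any image is acquired.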
Some researchers have focused on improving the image quality of RUS acquisition. Part of this work starts from image processing methods, introducing the concept of US confidence maps into US image processing and achieving significant research progress. Karamalis et al. [81] first proposed the concept of US confidence maps, which estimate the confidence level of information described by each pixel in the image. Based on the hypothesis that the probability of US transmission is directly related to the information confidence in the image, and incorporating US-specific constraints, they calculated the probability that a random walk algorithm starting from a given pixel could reach virtual transducer nodes. The random walk algorithm [134] played a crucial role in this process, effectively describing the pixel uncertainty in US images. Building on this foundation, Chatelain et al. [79,80] proposed using the confidence value from the US confidence map as a control signal. They established a link between confidence and the robotic arm within a position-based visual servoing framework, maintaining a constant force between the probe and the patient under an overall redundant control framework to keep the target centered horizontally in the image. Virga et al. [42] employed confidence maps to assess the quality of US images acquired in real-time, enabling the comparison of performances across different control strategies. Welleweerd et al. [67] integrated confidence maps into the visual servoing system of an autonomous breast-scanning robot to adjust the contact between the probe and the patient’s skin, thereby optimizing the quality of the acquired US images. Jiang et al. [3] utilized both US confidence maps and force feedback to estimate the optimal pose of the probe at the contact point, aiming to enhance image quality at a given location. 
The proposed method seeks to improve US propagation within the tissue by optimizing the US probe’s pose, i.e., aligning the probe’s central axis with the surface normal of the patient at the contact point, and thus addressing the challenges encountered in orthopedic applications.
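To convey the intuition behind confidence maps as a feedback signal, the sketch below uses a crude per-column surrogate in which confidence decays with the cumulative echo intensity encountered along each scanline, so pixels beneath strong reflectors (e.g., a bright bone surface) receive low confidence. This is explicitly not the random-walk formulation of Karamalis et al. [81]; the function names and decay parameter are illustrative assumptions.

```python
import numpy as np

def simple_confidence_map(img, beta=0.02):
    """Toy per-column confidence surrogate: confidence is high at the
    transducer (top row) and decays with the cumulative echo intensity
    encountered along each scanline, so pixels beneath strong
    reflectors receive low confidence. NOT the random-walk method of
    Karamalis et al.; a simplified illustration only."""
    img = img.astype(float) / max(float(img.max()), 1e-9)
    cum = np.cumsum(img, axis=0)  # intensity accumulated with depth
    return np.exp(-beta * cum)

def mean_confidence(img, beta=0.02):
    """Scalar score usable as quality-aware servoing feedback."""
    return float(simple_confidence_map(img, beta).mean())


# Example: a bright reflector band in the left half of the image casts
# an acoustic "shadow"; confidence below it drops accordingly.
img = np.full((100, 64), 0.1)
img[30:35, :32] = 1.0  # strong reflector (e.g., bone surface)
conf = simple_confidence_map(img, beta=0.5)
```

A scalar summary such as `mean_confidence` is the kind of signal that servoing frameworks like those of Chatelain et al. [79,80] turn into probe-pose corrections, although their controllers operate on the full random-walk confidence map rather than this toy surrogate.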
Additionally, some research focuses on optimizing US image quality during acquisition through real-time updating of the scanning path and adjustment of the probe’s pose. Abolmaesumi et al. [82,83,84] proposed several feature extraction algorithms to track the carotid artery in US images in real-time and used a visual servo controller to automatically adjust the probe’s in-plane motion during remote US examinations. Since the US image-based servoing can automatically keep the carotid artery centered in the image, it compensates for unintended patient movement during remote operation. In robotic visual tracking tasks, numerous visual features have been proposed for visual servoing to track targets with complex shapes [135,136,137,138]. Sen et al. [85] proposed a registration-based method that dynamically updates the required probe position in a cooperative system based on the differences between real-time US images and reference US images. Fujibayashi et al. [139] introduced a target image search strategy combining visual servoing and deep learning. This strategy controls the US probe to acquire images at various locations, uses YOLACT++ for anatomical segmentation to extract features, and thereby searches for the optimal kidney US image. To achieve autonomous visual-servoing motion of the US probe, Wang et al. [48] proposed a target-tracking method based on an improved Siamese network. This provides real-time dynamic feedback control of the probe by analyzing the differences between a template image and real-time US images, aiming to acquire high-quality images. Furthermore, to prevent image loss during scanning, they utilized the average intensity of the US images to characterize the coupling relationship between the probe and the scanned tissue. Tang et al. [140] integrated pose recognition with image-based servoing control in their autonomous cardiac US acquisition system.
By assessing low-quality regions, the system triggers acoustic window/pose corrections, thereby enhancing the success rate of acquiring high-quality cardiac images. Meanwhile, Lin et al. [43] compared the effects of different control strategies on repeatability and consistency within an autonomous US acquisition system. Their work further validates the critical role of visual servoing and real-time correction in consistently obtaining analyzable images.
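The target-centering behavior common to these visual-servoing schemes can be sketched as a proportional law mapping the horizontal pixel error of a segmented target centroid to an in-plane lateral probe velocity. The gain, pixel scale, and zero-velocity fallback on target loss are illustrative assumptions, not parameters from the cited systems.

```python
import numpy as np

def centering_velocity(mask, img_width, gain=0.5, mm_per_px=0.1):
    """Proportional visual-servoing sketch: command a lateral probe
    velocity that drives the segmented target centroid toward the
    image's vertical midline. `mask` is a binary segmentation of the
    target (e.g., the carotid artery) in the current B-mode frame."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0  # target lost: command zero velocity (safe fallback)
    err_px = xs.mean() - img_width / 2.0  # +: target right of center
    return gain * err_px * mm_per_px      # lateral velocity in mm/s


# Example: a target centroid ~19.5 px right of center yields a positive
# lateral velocity, moving the probe to re-center the vessel.
mask = np.zeros((128, 128), dtype=bool)
mask[60:70, 80:88] = True  # centroid column ~ 83.5
v = centering_velocity(mask, img_width=128)
```

Real systems close this loop at frame rate and combine it with force regulation and out-of-plane corrections; the explicit zero-velocity branch illustrates the kind of conservative fallback behavior emphasized above for safety.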
In summary, current research on US image quality assessment and optimization can be briefly categorized into five types: (1) quality assessment incorporating anatomical prior knowledge; (2) quality assessment integrating US attenuation models; (3) quality assessment and optimization using US confidence maps; (4) image quality improvement through acoustic-window optimization; and (5) real-time path adjustment via visual servoing. Because there is currently no unified, objective evaluation system or set of metrics for US images, each method has its own distinct characteristics, making it difficult to determine absolute superiority or inferiority. Selecting an effective assessment approach based on the specific technical solution and application scenario can meet certain research needs. It remains evident, however, that the absence of a unified, objective US quality evaluation system significantly hinders the advancement of US image quality improvement. More importantly, the absence of standardized, task-agnostic, and cross-device-comparable quality metrics is a fundamental bottleneck for autonomous decision-making, because it limits the design of robust quality-aware feedback rules (e.g., re-positioning, termination, and failure recovery) and hinders fair benchmarking and multi-site clinical validation.
4. Limitations and Challenges of Current Research
Leveraging robotic assistance, US acquisition is now capable of replacing physicians in certain medical scenarios. In terms of effectiveness, current RUS acquisition systems can achieve image quality with higher standardization and consistency than that achieved by physicians, and they are gradually being applied in clinical diagnosis and anatomical structural biometry. However, as current AU-RUS systems remain in an ongoing phase of exploration and research, they still exhibit obvious technical limitations.
From the deployment viewpoint, we prioritize the challenges into near-term critical bottlenecks and longer-term goals. Near-term bottlenecks include safety-verifiable autonomy with reliable failure detection and recovery, standardized and clinically meaningful image quality metrics for objective validation, and motion compensation under non-rigid deformation. Longer-term goals include broad cross-organ generalization and higher-level autonomy toward diagnostic decision support.
4.1. The Balance Between the Need for Enhanced Robot Autonomy and Human Safety
As evidenced by numerous studies discussed above, AU-RUS acquisition technology is capable of replacing traditional physician-held manual operation. Currently, most research focuses on SA-RUS systems, which require collaboration between humans and robots during the acquisition process, with the entire procedure still necessitating physician adjustment and control [67,141,142,143]. In some systems, the initial probe position is also determined by the physician. Even for AU-RUS systems, the initial application of the acoustic coupling gel at the beginning of the examination is performed manually. Compared to semi-autonomous systems, autonomous acquisition systems eliminate the need for human participation, thereby significantly reducing the physician’s workload. Hence, enhancing robot autonomy is essential. With the increasing shortage of medical resources and the ongoing global pandemic, the demand for US examinations continues to rise. However, US examination heavily relies on the physician’s expertise and experience, and the number of skilled sonographers cannot meet clinical needs. It is still difficult for families and individuals with special requirements in remote areas to receive timely, professional US examinations. Therefore, there is an urgent need for fully autonomous RUS examination systems [144]. However, as the level of robot autonomy increases, the demands on the robot extend beyond mere operation to include more sophisticated perception and decision-making. To achieve this, it is necessary to enhance the robot’s perception and decision-making capabilities through algorithmic innovations, enabling it to respond to various sensor data and make correct decisions, thereby better supporting physicians in US acquisition.
The enhancement of robotic autonomy inevitably raises concerns about safety. The people-centered philosophy makes safety the primary issue determining whether AU-RUS systems can be widely adopted. Increased autonomy implies greater medical risks due to a higher possibility of failures [145], requiring the system to possess adaptive adjustment mechanisms. It must be capable of responding to and quickly adapting to unexpected situations, maintaining system stability and ensuring that the force applied by the probe remains safe in the presence of parameter uncertainties and external disturbances [146]. Although many systems incorporate force sensors or passive mechanisms to guarantee safe force application, the inherent possibility of sensor failure inevitably introduces risks. Enhancing real-time fault detection for sensors can improve safety to some extent, but the consequent burden of extensive data processing also becomes a challenge. Zheng et al. proposed a low-cost force sensor and a hybrid position–force control strategy to reduce the system’s cost barrier while maintaining contact safety [41]. Wu et al. developed an adjustable constant-force end-effector [147]. By combining active long-stroke compensation with a passive constant-force compliance buffer, the system enhances contact force stability and tolerance to variations in body surface geometry.
The autonomy–safety trade-off is largely driven by uncertainty in (1) perception (poor acoustic windows, speckle noise, and out-of-distribution anatomy), (2) contact mechanics (patient-dependent stiffness/friction and non-rigid deformation), and (3) system drift/latency (calibration drift and delayed feedback). These uncertainties can lead to unsafe contact forces or loss of acoustic coupling. Therefore, beyond improving nominal performance, a key bottleneck is robust failure detection and recovery, e.g., recognizing target loss, slip, unexpected patient motion, or sensor faults, and triggering conservative fallback behaviors. Recent constant-force end-effector designs with hybrid active–passive mechanisms also highlight the importance of hardware-control co-design for maintaining safe contact under changing surfaces [148].
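The failure detection and conservative fallback behavior described above can be illustrated with a minimal monitoring loop. The sketch below is purely illustrative and not drawn from any cited system; the force limit, staleness threshold, and intensity-based coupling proxy are assumed values.

```python
class SafetyWatchdog:
    """Minimal fallback monitor: flags unsafe contact force, stale sensor
    data, or loss of acoustic coupling, and selects a conservative
    fallback behavior (illustrative sketch, assumed thresholds)."""

    def __init__(self, f_max=10.0, stale_s=0.1, coupling_min=0.3):
        self.f_max = f_max                # maximum allowed probe force [N]
        self.stale_s = stale_s            # maximum sensor data age [s]
        self.coupling_min = coupling_min  # minimum coupling proxy (e.g.,
                                          # normalized mean image intensity)

    def check(self, force, sensor_age_s, coupling_score):
        if sensor_age_s > self.stale_s:
            return "freeze_and_hold"   # sensor fault or latency: stop motion
        if force > self.f_max:
            return "retract_probe"     # unsafe contact: back off immediately
        if coupling_score < self.coupling_min:
            return "pause_and_regel"   # lost acoustic coupling: pause, regel
        return "continue"
```

In practice, such checks would run at the control-loop rate and be ordered so that the most safety-critical condition always dominates; the ordering here (sensor validity first, then force, then coupling) is one reasonable design choice.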
Furthermore, the inherent safety risks associated with the system’s rigid hardware cannot be overlooked. Advances in soft robotics technology offer an effective approach to enhance human–robot interaction safety by integrating flexible materials with US at the end-effector. By meeting the necessary stiffness requirements, the inherent characteristics of soft fluidic actuators can establish a safe and adaptable interaction between the US probe and the patient [92].
In addition to focusing on tangible physical safety, we must also consider intangible psychological safety, e.g., whether patients can accept a robotic physician. Therefore, human safety should also take into account patient comfort and acceptability. Current research has not incorporated patient feedback as a study metric. To further promote the clinical and community acceptance of AU-RUS acquisition, patient feedback represents a crucial factor for consideration.
4.2. Generalization of Application Scenarios
The insufficient compatibility of AU-RUS systems across diverse scenario requirements may be a significant factor limiting their potential to lower the threshold of healthcare access and promote community-wide dissemination. On one hand, different application scenarios share certain common requirements. For instance, the fundamental requirement is the visibility and structural clarity of the imaged target. A universal technique involves image processing to extract structural features. For organs with simple anatomy and favorable locations, this may suffice to meet task demands and yield satisfactory US images. On the other hand, as outlined in the preceding literature review, autonomous robotic systems have been applied for US acquisition in a wide range of medical scenarios. Target anatomical structures include the lung [32], thyroid [34], carotid artery [13], breast [149], liver [150], cervix [151], fetus [152], lower-limb arteries [153], and kidney [139]. Each application possesses distinct examination characteristics and image acquisition requirements. AU-RUS systems developed based on general scanning principles cannot be specifically tailored to all unique medical contexts. This is also a major reason why such systems struggle to achieve full autonomy. Therefore, enhancing the system’s perceptual and decision-making capabilities, enabling it to autonomously adapt to different medical scenarios, is essential for effectively promoting the broader application of RUS systems.
Furthermore, advancements in hardware design present a significant challenge. For different organs, the types of auxiliary sensors required for US acquisition may vary, and the corresponding US transducers may also differ. Current research tends to be highly specific, with individual systems typically applicable only to particular organs and lacking generalization capabilities. Although different organs have distinct requirements for sensor types and imaging parameters, the development of unified, generalized equipment capable of adapting to diverse scenarios holds substantial importance for clinical adoption and widespread implementation.
4.3. Human Motion and Respiratory Compensation
Most existing robotic systems for autonomous US acquisition require patients to hold their breath or remain still during scanning [37]. However, this requirement can be uncomfortable and inconvenient for patients. Although some current systems incorporate compensation functions for small-scale patient movements, they may fail to operate properly if unexpected or substantial movement occurs during the examination.
Real-time observation and tracking compensation for patient motion and tissue deformation are beneficial for enhancing system stability and improving patient comfort and experience during medical procedures, particularly in intraoperative imaging applications. Many systems employ visual servoing methods for real-time control of the US probe, enabling real-time tracking based on visual detection or image information [154] to compensate for tissue motion in medical scenarios. Various features in US images are utilized for tissue tracking, including speckle information [155,156,157], image moments [158], and intensity [159,160], further improving imaging performance during tissue movement.
Motion compensation remains difficult because US acquisition couples probe motion, tissue deformation, and imaging quality in a highly nonlinear manner. Respiratory motion is quasi-periodic but patient-dependent, while soft tissue exhibits non-rigid deformation that cannot be removed by rigid tracking alone. A further bottleneck is real-time coordination between motion prediction, force/admittance control, and volume reconstruction; otherwise, stable contact may conflict with accurate spatial compounding. Recent work has explored respiratory-motion-robust RUS using vision–haptic fusion control with predictive compensation, and breathing-compensated 3D reconstruction using implicit neural representations for RUS screening [161,162].
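Because respiratory motion is quasi-periodic, a simple baseline predictor, as assumed in many predictive compensation schemes, looks back one estimated respiratory period to anticipate the next tissue displacement. The sketch below illustrates this idea only; real systems combine such predictors with force/admittance control and non-rigid tracking.

```python
import numpy as np

def predict_breathing_offset(history, period_samples):
    """One-step-ahead prediction for a quasi-periodic breathing signal:
    the displacement one (estimated) period ago approximates the next
    value. history is a 1-D array of past probe-normal displacements;
    period_samples is the estimated period length in samples."""
    if len(history) <= period_samples:
        return history[-1]          # not enough data yet: hold last value
    return history[-period_samples] # value one period ago ~ next value

# Synthetic 4 s breathing cycle sampled at 10 Hz (period = 40 samples).
t = np.arange(0, 8, 0.1)
sig = np.sin(2 * np.pi * t / 4.0)
pred = predict_breathing_offset(sig, 40)
```

This baseline fails exactly where the text identifies the bottleneck: when breathing is irregular or tissue deforms non-rigidly, the period estimate drifts and rigid look-back prediction must be replaced by adaptive, deformation-aware models.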
4.4. Real-Time Cognition and Quality Assessment of US Images
The rapid development of AI presents new opportunities and challenges for AU-RUS systems. Cutting-edge algorithms from the fields of computer vision and image analysis are increasingly being applied to US acquisition, fostering expectations regarding the intelligent capabilities of US robots. These robots are gradually evolving from precise tools into intelligent agents. In particular, recent studies increasingly treat AU-RUS acquisition as a closed-loop “perception–control” problem, where real-time image understanding directly supports probe motion-planning and adjustment [4]. Data-driven intelligent algorithms such as deep learning are progressively being integrated into image analysis, yielding state-of-the-art results in tasks such as the classification, detection, and segmentation of various anatomical structures [163]. These capabilities can be extended to online view/anatomy recognition and image-quality estimation, providing direct feedback for real-time probe guidance [4]. Image-based probe guidance is increasingly being employed in machine learning methods, with some algorithms already being applied in simulated US acquisition for fetal [164] and cardiac imaging [140]. In addition, multi-modal learning (e.g., combining US video with probe motion and sonographer gaze) has been explored to better model expert scanning behavior and provide guidance signals for navigation [165]. Ning et al. [166] proposed an AU-RUS system that utilizes reinforcement learning to achieve the adaptive constant-force tracking of soft moving targets with the US probe. Building on this, they introduced a force-position control method based on an admittance controller, achieving autonomous control of the probe. Li et al. [167] proposed a deep reinforcement learning solution that controls the US probe towards the desired imaging plane based on real-time US image feedback. 
Related work has further explored reinforcement learning for standard-plane localization in practical scanning tasks. Li et al. proposed an RL-based RUS system for the automated localization of standard liver planes using a DQN-LSTM agent and reported promising image and anatomy localization metrics [168]. In addition, Si et al. proposed a deep multimodal imitation learning framework that fuses RGB and US images, force signals, and robot proprioception to predict desired probe motion and contact force, and executes the learned skill using compliant control and trajectory optimization [169]. In future research, robotic systems are expected to integrate image acquisition, cognition, decision-making, and guided probe movement, ultimately better assisting US physicians in performing US acquisition.
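The admittance-based force–position control mentioned above follows a standard pattern: a virtual mass–damper–spring converts the force-tracking error into a compliant position offset along the probe axis, which an inner position loop then follows. The discrete-time sketch below uses assumed parameters and is a generic illustration, not the controller of any cited work.

```python
def admittance_step(x, v, f_meas, f_des, m=1.0, d=20.0, k=100.0, dt=0.002):
    """One discrete step of the admittance law
        M*a + D*v + K*x = f_meas - f_des,
    where x is the compliant position offset along the probe axis [m],
    v its velocity, and (m, d, k) the virtual mass/damping/stiffness."""
    a = (f_meas - f_des - d * v - k * x) / m   # virtual acceleration
    v = v + a * dt                             # integrate velocity
    x = x + v * dt                             # integrate position offset
    return x, v

# With a 2 N force-tracking error, the offset settles near
# (f_meas - f_des) / k = 0.02 m after a few time constants.
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = admittance_step(x, v, f_meas=7.0, f_des=5.0)
```

With these assumed gains the system is critically damped (d = 2*sqrt(m*k)), a common choice so the probe neither oscillates against the skin nor responds sluggishly to force errors.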
Real-time cognition requires evaluation standards so that quality assessments can function effectively and their results can be interpreted. However, at present, there is no unified evaluation standard, let alone quantitative metrics. Recent clinically oriented studies also emphasize protocol-driven evaluation and reproducibility reporting, which further motivates standardized quantitative metrics for quality assessment [4,114]. Furthermore, many studies only qualitatively validate the effectiveness of the proposed methods or systems, lacking objective, quantitative evaluation of their performance. Therefore, establishing a quantitative and standardized quality assessment system represents a crucial future research direction and challenge, enabling the validation of methods and systems based on clinical application outcomes.
Encouragingly, recent studies have started to move towards more quantitative and benchmark-oriented quality assessments. For instance, fully autonomous thyroid scanning systems reported multiple quantitative indicators related to contact condition and target centering/pose, providing a reproducible way to evaluate autonomous scanning quality. In addition, benchmark efforts such as Ultrasound-QBench have been proposed to assess model capabilities regarding US image quality (classification/scoring/comparative assessment), which may inspire more standardized evaluation pipelines. Moreover, domain-specific quality assessment models for robot screening have also been reported recently, indicating a trend toward quantitative, task-aligned metrics for RUS acquisition [34,170,171].
5. Perspective of Techniques for AU-RUS Acquisitions
The development trends in RUS focus on enhancing autonomy in image acquisition, diagnosis, and treatment guidance. For instance, autonomous planning technologies are replacing the manually planned components required in semi-autonomous systems. Further, the ability to compensate for target motion and deformation should be improved to prevent the loss of target visibility in US images. The integration of US robots into clinical workflows and the promotion of community services are also subjects of ongoing research. In this context, the interaction between the robot and the patient, along with safety aspects, must be ensured. Additionally, AI technologies can be leveraged for image processing and analysis, enabling real-time interpretation and diagnosis.
5.1. Enhancement of Cognitive Judgment and Autonomy Through AI Technology
A current challenge in RUS acquisition systems lies in the cognition and understanding of US images. To achieve more significant acquisition outcomes, it is essential to fully integrate image processing techniques and advance research on technologies for image cognition and judgment that are specifically tailored to US images.
From the perspective of this paper, AI has two primary application domains for improving the acquisition effectiveness of future RUS systems: image understanding and robotic navigation planning. Regarding image understanding, convolutional neural networks (CNNs) have demonstrated exceptional performance in medical image analysis [172,173,174,175,176] and have been successfully applied to US images [177,178,179,180]. Jiang et al. [114] introduced UltraBot, a learning-based autonomous carotid US robot. The system simultaneously acquires anatomical awareness and scanning skills through a unified imitation learning framework trained on large-scale expert demonstrations. In clinically oriented validation, it demonstrated a high success rate and achieved expert-level agreement. Intelligent image understanding can be utilized for image quality assessment [74] or for medical measurement and diagnosis [181,182,183]. Concerning robot navigation and planning, deep reinforcement learning [184,185,186,187] has achieved breakthroughs in robot learning, for example, in perceptive path-planning and navigation [188] and real-time obstacle avoidance in complex dynamic environments [189,190]. Notably, cross-platform mapless DRL navigation without global information has been reported to transfer across different scenes, which provides a useful reference for designing more robust learning-based probe navigation under partial observability [102]. These methods may play a key role in solving the task of autonomous US probe placement and could be applied in US-guided robot navigation and positioning, e.g., in the autonomous localization of standard fetal facial planes [191].
Furthermore, reinforcement learning, with its characteristic of learning policies through interaction with the environment to maximize reward, has garnered significant attention from researchers. It establishes a relationship between the environment and the system. By learning the relationship between contact force and output force, and defining the desired output force within the workspace, reinforcement learning can enable action generation under visually constrained conditions in soft, uncertain environments. Compared to traditional visual planning and force control methods, this allows for simultaneous position, pose, and force control without requiring prior knowledge, while also avoiding issues such as occlusion and parameter tuning [97]. This mechanism highlights the potential of reinforcement learning for training novice physicians. An end-to-end trained model can filter out ambiguous regions and guide novices in selecting optimal acoustic window positions to acquire clear cardiac images.
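A reward function for such probe-control policies typically trades off image quality against force-tracking error while hard-penalizing unsafe contact. The following is a hypothetical sketch of this structure; the weights, target force, and safety threshold are illustrative assumptions, not taken from any cited study.

```python
def probe_reward(quality, f_meas, f_des=5.0, w_q=1.0, w_f=0.1, f_unsafe=12.0):
    """Illustrative RL reward for probe control: rewards a scalar image
    quality in [0, 1], penalizes contact-force tracking error [N], and
    applies a hard penalty for unsafe contact."""
    if f_meas > f_unsafe:
        return -10.0                               # hard safety penalty
    return w_q * quality - w_f * abs(f_meas - f_des)
```

How the `quality` term is computed is exactly where the lack of standardized US quality metrics, discussed in Section 4.4, propagates into policy learning: a non-reproducible quality score yields a non-reproducible policy.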
Additionally, reinforcement learning can be used to imitate the operational patterns of experienced sonographers, enabling robots to progressively learn expert-level skills. By combining imitation learning with reinforcement learning, robots can master more complex techniques, such as fine-tuning probe pressure and optimizing imaging angles, thereby enhancing the stability and precision of examinations. However, given the variations in patient physique and lesion locations, achieving generalization capabilities in such systems would require a multiplicative increase in the data volume and computational load for reinforcement learning, and the scanning strategy would also need to adapt as patients change. In a recent review, Bi et al. [192] systematically summarized the key bottlenecks of machine learning methods in RUS: there are high costs associated with data acquisition and annotation, insufficient generalization across devices and populations, and a lack of safety constraints and interpretability. They further pointed out that future efforts need to more tightly integrate quality assessment, strategy learning, and safety control into a unified closed-loop framework.
Although some AI technologies have demonstrated experimental success during application, their current limited interpretability and reliability make clinical adoption challenging. Nevertheless, these technologies can be integrated with reliable control methods to play a role in image interpretation, thereby enhancing the effectiveness of US imaging.
5.2. Virtual Reality and Augmented Reality Technologies
Regarding virtual reality (VR), US data can be displayed on a graphical user interface for navigation [193]. Virtual scenes, enhanced by incorporating 3D models of the robot, enable robot-controlled US probe guidance for therapeutic purposes [194], as well as the simulation and verification of robotic equipment [44].
Augmented reality (AR) has attracted considerable attention in recent years due to its outstanding information fusion abilities, which enable the effective augmentation of reality. In medical settings, AR can provide physicians with critical real-time information and enhance spatial awareness, thereby improving procedural accuracy and reducing the risk of errors [195]. In US acquisitions, AR can overlay 2D US images [196], 3D US images, and reconstructed US phantoms onto real-world scenes, allowing physicians to acquire US images while simultaneously observing the patient; thus, it has strong potential to improve ergonomics. In particular, in RUS scenarios, AR-based overlays can enhance physicians’ perception of critical anatomical structures, improve 3D understanding, and simplify hand–eye coordination, thereby improving the user experience [197]. Beyond clinical assistance, AR can also be used for skills training and simulation-based practice. By superimposing virtual information onto the real world, AR can create flexible and controllable training scenarios and provide sensory feedback and interaction experiences that more closely resemble real operations, thereby supporting procedural training such as US-guided needle puncture. For example, in training for US-guided renal biopsy, both VR and AR platforms can help alleviate practical challenges, including the high difficulty of training, low efficiency, and the inability to repeatedly practice on human subjects [198]. With advances in US probes and nonlinear image registration, VR/AR presents new opportunities for RUS in visualization, sensor integration, and user interaction.
5.3. Soft Robotics Technology Ensuring Compliance and Safety
In RUS acquisition systems, a critical prerequisite is ensuring the safety of human force interaction. Maintaining a balance between the force needed for effective image acquisition and ensuring human safety is an important research direction. Soft robotics technology represents a key avenue for safe human–robot interaction. The compliant materials of soft robots inherently offer a degree of safety, while research into rigidity–softness coupling and variable stiffness enables them to possess sufficient load-bearing capacity to meet the demands of US acquisition.
The design and control techniques of end-effector grippers incorporating soft materials fully leverage their advantages. Compared to rigid devices, they offer higher compliance; for instance, the operational behavior of parallel mechanisms more closely resembles that of the human wrist [92]. Several researchers have developed various flexible mechanisms [199] to provide passive compliance in driving robotic joints.
In recent years, soft robotics technology, by utilizing deformable materials and structures, has opened up new design paradigms for robotic systems. In medical scenarios such as surgery, soft robots often exhibit highly flexible operational capabilities but lack sufficient load-bearing capacity. To find a balance between mobility and load-bearing capability, some researchers have conducted studies on soft robotic systems with variable stiffness capabilities, and demonstrated some progress. These systems have been applied in delicate surgical procedures, including pericardial puncture [200], providing a solid technical foundation for further integration with US imaging technology.
5.4. Integrated All-in-One Acquisition–Processing–Diagnosis Solution
Most current systems focus solely on the task of US acquisition, yet the ultimate objective of this research should be to reduce human labor. Therefore, if the entire acquisition–processing–diagnosis workflow can be seamlessly integrated, the system will achieve a high level of intelligence, allowing the US physician to transition into the role of a supervisor and evaluator.
The diagnostic component of this workflow is inseparable from AI assistance. Deep learning and reinforcement learning can learn from vast datasets of image cases and, through continuous training, realize diagnostic functions. These include liver disease classification [201], COVID-19 classification [202], vagus nerve detection [203], and breast tumor classification [204], among others. The realization of this technology would significantly lower the barriers to using US, effectively address the issue of the uneven distribution of medical resources, promote the deployment of US examinations at the community level, and achieve high efficiency in resource utilization within smaller regions.
5.5. From Laboratory Systems to Clinical Translation
While AU-RUS systems have shown rapid technical progress, clinical translation requires a pathway beyond algorithmic performance. First, standardization and verification should be established for calibration, safety force limits, image quality targets, and repeatability across operators and sites. In addition, workflow scalability is a practical determinant of clinical adoption: approaches relying on preoperative CT/MRI and cross-modality registration may be best suited to settings where such imaging is already routine, whereas RGB-D surface-based planning with online adaptation is generally more compatible with near-term deployment due to the reduced setup and coordination burden [13,31,32]. Second, human–computer interaction must be designed for the clinical workflow, including the intuitive visualization of system confidence, shared-control modes, and rapid physician override, following usability engineering principles. Third, deployment requires training and competency building for sonographers/physicians to integrate semi-autonomous workflows safely. Finally, regulatory and compliance considerations (risk management, usability engineering evidence, and appropriate submission pathways) are critical barriers. In the European Union and the United States, Medical Device Regulation (MDR) and the Food and Drug Administration (FDA) emphasize safety/performance validation and human-factors engineering, which should be incorporated early in the system design process.
6. Conclusions
Leveraging the complementary strengths of robotics and US imaging, an increasing number of RUS acquisition systems have been developed, achieving the automation of RUS scanning across a wide range of medical applications. The progress in AU-RUS acquisition has demonstrated the potential for robots to autonomously obtain reproducible and diagnostically usable imaging results without the need for specialized operators.
This paper first analyzes the advantages and limitations of current RUS imaging technologies. Following a comparison of the characteristics of teleoperated and autonomous systems, it highlights the research necessity for AU-RUS systems and introduces several representative systems. Subsequently, it provides an overview of the current research landscape from four key technical perspectives: force-sensing and control, scanning path-planning and localization, US-guided therapy, and US image processing techniques for quality assessment and optimization. This review presents the latest systems and technologies for AU-RUS acquisition, applied in various clinical contexts. Based on the state-of-the-art offered by existing robotic systems, the paper discusses the shortcomings and challenges in the current research. Finally, it presents a future outlook for AU-RUS acquisition from multiple perspectives.
Current efforts in AU-RUS acquisition are crucial for promoting the community-based deployment of US healthcare. However, future progress in AU-RUS will depend not only on reliable mechanical execution, but also on robust perception, quality-aware decision-making, and standardized, clinically meaningful US image quality assessments to support safety-verifiable autonomy and multi-site clinical validation. With the advances in other related fields, such as soft robotics and AI-powered US image analysis, AU-RUS is expected to play an increasingly prominent role in a broad spectrum of clinical applications.
Author Contributions
Conceptualization, Y.Q. and H.W.; methodology, L.D. (Lele Dang) and F.R.; formal analysis, Z.L. and L.D. (Lele Dang); investigation, F.R. and L.D. (Lele Dang); resources, Y.Q.; data curation, F.R., L.D. (Lele Dang) and Z.L.; writing—original draft preparation, F.R. and L.D. (Lijun Duan); writing—review and editing, Y.Q. and L.D. (Lijun Duan); visualization, H.W.; supervision, Y.Q. and J.H.; funding acquisition, Y.Q., H.W. and J.H. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study.
Conflicts of Interest
The authors declare no conflicts of interest.
Funding Statement
This work was supported in part by the Beijing–Tianjin–Hebei Basic Research Cooperation Special Project (No. 24JCZXJC00060), and in part by the Shenzhen Science and Technology Program (No. KQTD20210811090143060).
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
- 1. Dussik K. On the possibility of using ultrasound waves as a diagnostic aid. Neurol. Psychiat. 1942;174:153–168.
- 2. Golemati S., Cokkinos D.D. Recent advances in vascular ultrasound imaging technology and their clinical implications. Ultrasonics. 2022;119:106599. doi: 10.1016/j.ultras.2021.106599.
- 3. Jiang Z., Grimm M., Zhou M., Esteban J., Simson W., Zahnd G., Navab N. Automatic Normal Positioning of Robotic Ultrasound Probe Based Only on Confidence Map Optimization and Force Measurement. IEEE Robot. Autom. Lett. 2020;5:1342–1349. doi: 10.1109/lra.2020.2967682.
- 4. Jiang Z., Salcudean S.E., Navab N. Robotic ultrasound imaging: State-of-the-art and future perspectives. Med. Image Anal. 2023;89:102878. doi: 10.1016/j.media.2023.102878.
- 5. Windschall D., Malattia C. Ultrasound imaging in paediatric rheumatology. Best Pract. Res. Clin. Rheumatol. 2020;34:101570. doi: 10.1016/j.berh.2020.101570.
- 6. Li C., Shen E., Wang H., Wang Y., Yuan J., Gong L., Zhao D., Zhang W., Jin Z. Real-Time Volumetric Free-Hand Ultrasound Imaging for Large-Sized Organs: A Study of Imaging the Whole Spine. Ultrasound Med. Biol. 2025;51:598–605. doi: 10.1016/j.ultrasmedbio.2024.12.015.
- 7. Liu G., Dong S., Zhou Y., Yao S., Liu D. MFAR-Net: Multi-level feature interaction and Dual-Dimension adaptive reinforcement network for breast lesion segmentation in ultrasound images. Expert Syst. Appl. 2025;272:126727. doi: 10.1016/j.eswa.2025.126727.
- 8. Zucker N., Le Meur-Diebolt S., Cybis Pereira F., Baranger J., Hurvitz I., Demené C., Osmanski B.-F., Ialy-Radio N., Biran V., Baud O., et al. Physio-fUS: A tissue-motion based method for heart and breathing rate assessment in neurofunctional ultrasound imaging. eBioMedicine. 2025;112:105581. doi: 10.1016/j.ebiom.2025.105581.
- 9. Callen P.W. Ultrasonography in Obstetrics and Gynecology E-Book. Elsevier Health Sciences; Maryland Heights, MO, USA: 2011.
- 10. Safdar H., Sardar M., Tekchandani N., Iftikhar A., Iftikhar H., Ghumman F., Burki J. Clinical Utility of Contrast-Enhanced Ultrasound (CEUS) in Urology: A Multisystem Review. Cureus. 2025;17:e94690. doi: 10.7759/cureus.94690.
- 11. Boccatonda A., Brighenti A., Tiraferri V., Doglioli M., Iazzetta L., De Meis L., Zadeh E.S., Dietrich C.F., Serra C. POCUS for acute abdominal pain: Practical scan protocols on gastrointestinal diseases and an evidence review. J. Ultrasound. 2025;28:851–871. doi: 10.1007/s40477-025-01088-7.
- 12. Berg W.A., Blume J.D., Cormack J.B., Mendelson E.B. Operator dependence of physician-performed whole-breast US: Lesion detection and characterization. Radiology. 2006;241:355–365. doi: 10.1148/radiol.2412051710.
- 13. Huang Q., Gao B., Wang M. Robot-Assisted Autonomous Ultrasound Imaging for Carotid Artery. IEEE Trans. Instrum. Meas. 2024;73:4003009. doi: 10.1109/tim.2024.3353836.
- 14. Harrison G., Harris A. Work-related musculoskeletal disorders in ultrasound: Can you reduce risk? Ultrasound. 2015;23:224–230. doi: 10.1177/1742271X15593575.
- 15. Evans K., Roll S., Baker J. Work-related musculoskeletal disorders (WRMSD) among registered diagnostic medical sonographers and vascular technologists: A representative sample. J. Diagn. Med. Sonogr. 2009;25:287–299. doi: 10.1177/8756479309351748.
- 16. Yang G.-Z., Nelson B.J., Murphy R.R., Choset H., Christensen H., Collins S.H., Dario P., Goldberg K., Ikuta K., Jacobstein N. Combating COVID-19—The role of robotics in managing public health and infectious diseases. Sci. Robot. 2020;5:eabb5589. doi: 10.1126/scirobotics.abb5589.
- 17. Wu S., Wu D., Ye R., Li K., Lu Y., Xu J., Xiong L., Zhao Y., Cui A., Li Y. Pilot study of robot-assisted teleultrasound based on 5G network: A new feasible strategy for early imaging assessment during COVID-19 pandemic. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2020;67:2241–2248. doi: 10.1109/TUFFC.2020.3020721.
- 18. Hinata N., Shiroki R., Tanabe K., Eto M., Takenaka A., Kawakita M., Hara I., Hongo F., Ibuki N., Nasu Y. Robot-assisted partial nephrectomy versus standard laparoscopic partial nephrectomy for renal hilar tumor: A prospective multi-institutional study. Int. J. Urol. 2021;28:382–389. doi: 10.1111/iju.14469.
- 19.Gruionu L.G., Constantinescu C., Iacob A., Gruionu G. Robotic System for Catheter Navigation during Medical Procedures. Appl. Mech. Mater. 2020;896:211–217. doi: 10.4028/www.scientific.net/AMM.896.211. [DOI] [Google Scholar]
- 20.Liang J., Wu J., Huang H., Xu W., Li B., Xi F. Soft sensitive skin for safety control of a nursing robot using proximity and tactile sensors. IEEE Sens. J. 2019;20:3822–3830. doi: 10.1109/JSEN.2019.2959311. [DOI] [Google Scholar]
- 21.Nakadate R., Uda H., Hirano H., Solis J., Takanishi A., Minagawa E., Sugawara M., Niki K. Development of assisted-robotic system designed to measure the wave intensity with an ultrasonic diagnostic device; Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems; St. Louis, MO, USA. 10–15 October 2009; pp. 510–515. [Google Scholar]
- 22.Giuliani M., Szczęśniak-Stańczyk D., Mirnig N., Stollnberger G., Szyszko M., Stańczyk B., Tscheligi M. User-centred design and evaluation of a tele-operated echocardiography robot. Health Technol. 2020;10:649–665. doi: 10.1007/s12553-019-00399-0. [DOI] [Google Scholar]
- 23.Evans K.D., Yang Q., Liu Y., Ye R., Peng C. Sonography of the lungs: Diagnosis and surveillance of patients with COVID-19. J. Diagn. Med. Sonogr. 2020;36:370–376. doi: 10.1177/8756479320917107. [DOI] [Google Scholar]
- 24.Vieyres P., Novales C., Rivas R., Vilcahuaman L., Sandoval J., Clark T., DeStigter K., Josserand L., Morrison Z., Robertson A. The next challenge for WOrld wide Robotized Tele-Echography eXperiment (WORTEX 2012): From engineering success to healthcare delivery; Proceedings of the Congreso Peruano De Ingeniería Biomédica, Bioingeniería, Biotecnología y Física Médica (TUMI II); Lima, Peru. 29–31 May 2013. [Google Scholar]
- 25.Wang J., Peng C., Zhao Y., Ye R., Hong J., Huang H., Chen L. Application of a Robotic Tele-Echography System for COVID-19 Pneumonia. J. Ultrasound Med. 2021;40:385–390. doi: 10.1002/jum.15406. [DOI] [PubMed] [Google Scholar]
- 26.Adams S.J., Burbridge B.E., Badea A., Langford L., Vergara V., Bryce R., Bustamante L., Mendez I.M., Babyn P.S. Initial experience using a telerobotic ultrasound system for adult abdominal sonography. Can. Assoc. Radiol. J. 2017;68:308–314. doi: 10.1016/j.carj.2016.08.002. [DOI] [PubMed] [Google Scholar]
- 27.Georgescu M., Sacccomandi A., Baudron B., Arbeille P.L. Remote sonography in routine clinical practice between two isolated medical centers and the university hospital using a robotic arm: A 1-year study. Telemed. e-Health. 2016;22:276–281. doi: 10.1089/tmj.2015.0100. [DOI] [PubMed] [Google Scholar]
- 28.Avgousti S., Panayides A.S., Jossif A.P., Christoforou E.G., Vieyres P., Novales C., Voskarides S., Pattichis C.S. Cardiac ultrasonography over 4G wireless networks using a tele-operated robot. Healthc. Technol. Lett. 2016;3:212–217. doi: 10.1049/htl.2016.0043. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Mathiassen K., Fjellin J.E., Glette K., Hol P.K., Elle O.J. An ultrasound robotic system using the commercial robot UR5. Front. Robot. AI. 2016;3:1. doi: 10.3389/frobt.2016.00001. [DOI] [Google Scholar]
- 30.Siao C.-Y., Chang R.-G., Huang H.-C. Robotic Arms for Telemedicine System Using Smart Sensors and Ultrasound Robots. Internet Things. 2024;27:101243. doi: 10.1016/j.iot.2024.101243. [DOI] [Google Scholar]
- 31.Mustafa A.S.B., Ishii T., Matsunaga Y., Nakadate R., Ishii H., Ogawa K., Saito A., Sugawara M., Niki K., Takanishi A. Human abdomen recognition using camera and force sensor in medical robot system for automatic ultrasound scan; Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Osaka, Japan. 3–7 July 2013; pp. 4855–4858. [DOI] [PubMed] [Google Scholar]
- 32.Ma X., Zhang Z., Zhang H.K. Autonomous scanning target localization for robotic lung ultrasound imaging; Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Prague, Czech Republic. 27 September–1 October 2021; pp. 9467–9474. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Akbari M., Carriere J., Meyer T., Sloboda R., Husain S., Usmani N., Tavakoli M. Robotic ultrasound scanning with real-time image-based force adjustment: Quick response for enabling physical distancing during the COVID-19 pandemic. Front. Robot. AI. 2021;8:645424. doi: 10.3389/frobt.2021.645424. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Su K., Liu J., Ren X., Huo Y., Du G., Zhao W., Wang X., Liang B., Li D., Liu P.X. A fully autonomous robotic ultrasound system for thyroid scanning. Nat. Commun. 2024;15:4004. doi: 10.1038/s41467-024-48421-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Zielke J., Eilers C., Busam B., Weber W., Navab N., Wendler T. RSV: Robotic sonography for thyroid volumetry. IEEE Robot. Autom. Lett. 2022;7:3342–3348. doi: 10.1109/LRA.2022.3146542. [DOI] [Google Scholar]
- 36.Shah R., Li Z.-M. Three-dimensional carpal arch morphology using robot-assisted ultrasonography. IEEE Trans. Biomed. Eng. 2021;69:894–898. doi: 10.1109/TBME.2021.3108720. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Tan J., Li B., Li Y., Li B., Chen X., Wu J., Luo B., Leng Y., Rong Y., Fu C. A flexible and fully autonomous breast ultrasound scanning system. IEEE Trans. Autom. Sci. Eng. 2022;20:1920–1933. doi: 10.1109/TASE.2022.3189339. [DOI] [Google Scholar]
- 38.Göbl R., Virga S., Rackerseder J., Frisch B., Navab N., Hennersperger C. Acoustic window planning for ultrasound acquisition. Int. J. Comput. Assist. Radiol. Surg. 2017;12:993–1001. doi: 10.1007/s11548-017-1551-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Huang Q., Wu B., Lan J., Li X. Fully automatic three-dimensional ultrasound imaging based on conventional B-scan. IEEE Trans. Biomed. Circuits Syst. 2018;12:426–436. doi: 10.1109/TBCAS.2017.2782815. [DOI] [PubMed] [Google Scholar]
- 40.Huang Q., Lan J., Li X. Robotic arm based automatic ultrasound scanning for three-dimensional imaging. IEEE Trans. Ind. Inform. 2018;15:1173–1182. doi: 10.1109/TII.2018.2871864. [DOI] [Google Scholar]
- 41.Zheng Y., Ning H., Rangarajan E., Merali A., Geale A., Lindenroth L., Xu Z., Wang W., Kruse P., Morris S. Design of a Cost-Effective Ultrasound Force Sensor and Force Control System for Robotic Extra-Body Ultrasound Imaging. Sensors. 2025;25:468. doi: 10.3390/s25020468. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Virga S., Zettinig O., Esposito M., Pfister K., Frisch B., Neff T., Navab N., Hennersperger C. Automatic force-compliant robotic ultrasound screening of abdominal aortic aneurysms; Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Daejeon, Republic of Korea. 9–14 October 2016; pp. 508–513. [Google Scholar]
- 43.Lin X.-X., Li M.-D., Ruan S.-M., Ke W.-P., Zhang H.-R., Huang H., Wu S.-H., Cheng M.-Q., Tong W.-J., Hu H.-T. Autonomous robotic ultrasound scanning system: A key to enhancing image analysis reproducibility and observer consistency in ultrasound imaging. Front. Robot. AI. 2025;12:1527686. doi: 10.3389/frobt.2025.1527686. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Bao X., Wang S., Zheng L., Housden R.J., Hajnal J.V., Rhode K. A novel ultrasound robot with force/torque measurement and control for safe and efficient scanning. IEEE Trans. Instrum. Meas. 2023;72:4002012. doi: 10.1109/TIM.2023.3239925. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Wang Z., Zhao B., Zhang P., Yao L., Wang Q., Li B., Meng M.Q.-H., Hu Y. Full-coverage path planning and stable interaction control for automated robotic breast ultrasound scanning. IEEE Trans. Ind. Electron. 2022;70:7051–7061. doi: 10.1109/TIE.2022.3204967. [DOI] [Google Scholar]
- 46.Wu C., Lan H., Ma J., Li X., Wen J. Point cloud-guided ultrasound robotic scanning path planning for the kidney based on anatomical positioning. Comput. Biol. Med. 2025;192:110191. doi: 10.1016/j.compbiomed.2025.110191. [DOI] [PubMed] [Google Scholar]
- 47.Sun D., Cappellari A., Lan B., Abayazid M., Stramigioli S., Niu K. Automatic Robotic Ultrasound for 3D Musculoskeletal Reconstruction: A Comprehensive Framework. Technologies. 2025;13:70. doi: 10.3390/technologies13020070. [DOI] [Google Scholar]
- 48.Wang Z., Han Y., Zhao B., Xie H., Yao L., Li B., Meng M.Q.-H., Hu Y. Autonomous robotic system for carotid artery ultrasound scanning with visual servo navigation. IEEE Trans. Med. Robot. Bionics. 2024;6:1436–1447. doi: 10.1109/TMRB.2024.3464109. [DOI] [Google Scholar]
- 49.Obst S.J., Newsham-West R., Barrett R.S. In vivo measurement of human achilles tendon morphology using freehand 3-D ultrasound. Ultrasound Med. Biol. 2014;40:62–70. doi: 10.1016/j.ultrasmedbio.2013.08.009. [DOI] [PubMed] [Google Scholar]
- 50.Gee A., Prager R., Treece G., Berman L. Engineering a freehand 3D ultrasound system. Pattern Recognit. Lett. 2003;24:757–777. doi: 10.1016/S0167-8655(02)00180-0. [DOI] [Google Scholar]
- 51.Virga S., Göbl R., Baust M., Navab N., Hennersperger C. Use the force: Deformation correction in robotic 3D ultrasound. Int. J. Comput. Assist. Radiol. Surg. 2018;13:619–627. doi: 10.1007/s11548-018-1716-8. [DOI] [PubMed] [Google Scholar]
- 52.Gilbertson M.W., Anthony B.W. Force and position control system for freehand ultrasound. IEEE Trans. Robot. 2015;31:835–849. doi: 10.1109/tro.2015.2429051. [DOI] [Google Scholar]
- 53.Kojcev R., Khakzar A., Fuerst B., Zettinig O., Fahkry C., DeJong R., Richmon J., Taylor R., Sinibaldi E., Navab N. On the reproducibility of expert-operated and robotic ultrasound acquisitions. Int. J. Comput. Assist. Radiol. Surg. 2017;12:1003–1011. doi: 10.1007/s11548-017-1561-1. [DOI] [PubMed] [Google Scholar]
- 54.Jiang Z., Zhou Y., Bi Y., Zhou M., Wendler T., Navab N. Deformation-aware robotic 3D ultrasound. IEEE Robot. Autom. Lett. 2021;6:7675–7682. doi: 10.1109/LRA.2021.3099080. [DOI] [Google Scholar]
- 55.Chen C., Yang X., Huang Y., Shi W., Cao Y., Luo M., Hu X., Zhu L., Yu L., Yue K. FetusMapV2: Enhanced fetal pose estimation in 3D ultrasound. Med. Image Anal. 2024;91:103013. doi: 10.1016/j.media.2023.103013. [DOI] [PubMed] [Google Scholar]
- 56.Merouche S., Allard L., Montagnon E., Soulez G., Bigras P., Cloutier G. A robotic ultrasound scanner for automatic vessel tracking and three-dimensional reconstruction of B-mode images. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2015;63:35–46. doi: 10.1109/TUFFC.2015.2499084. [DOI] [PubMed] [Google Scholar]
- 57.Mustafa A.S.B., Ishii T., Matsunaga Y., Nakadate R., Ishii H., Ogawa K., Saito A., Sugawara M., Niki K., Takanishi A. Development of robotic system for autonomous liver screening using ultrasound scanning device; Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO); Shenzhen, China. 12–14 December 2013; pp. 804–809. [Google Scholar]
- 58.Graumann C., Fuerst B., Hennersperger C., Bork F., Navab N. Robotic ultrasound trajectory planning for volume of interest coverage; Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA); Stockholm, Sweden. 16–21 May 2016; pp. 736–741. [Google Scholar]
- 59.Hennersperger C., Fuerst B., Virga S., Zettinig O., Frisch B., Neff T., Navab N. Towards MRI-based autonomous robotic US acquisitions: A first feasibility study. IEEE Trans. Med. Imaging. 2016;36:538–548. doi: 10.1109/TMI.2016.2620723. [DOI] [PubMed] [Google Scholar]
- 60.Kojcev R., Fuerst B., Zettinig O., Fotouhi J., Lee S.C., Frisch B., Taylor R., Sinibaldi E., Navab N. Dual-robot ultrasound-guided needle placement: Closing the planning-imaging-action loop. Int. J. Comput. Assist. Radiol. Surg. 2016;11:1173–1181. doi: 10.1007/s11548-016-1408-1. [DOI] [PubMed] [Google Scholar]
- 61.Tsumura R., Iwata H. Robotic fetal ultrasonography platform with a passive scan mechanism. Int. J. Comput. Assist. Radiol. Surg. 2020;15:1323–1333. doi: 10.1007/s11548-020-02130-1. [DOI] [PubMed] [Google Scholar]
- 62.Groenhuis V., Nikolaev A., Nies S.H., Welleweerd M.K., de Jong L., Hansen H.H., Siepel F.J., de Korte C.L., Stramigioli S. 3-D ultrasound elastography reconstruction using acoustically transparent pressure sensor on robotic arm. IEEE Trans. Med. Robot. Bionics. 2020;3:265–268. doi: 10.1109/TMRB.2020.3042982. [DOI] [Google Scholar]
- 63.Wang S., Housden R.J., Noh Y., Singh A., Lindenroth L., Liu H., Althoefer K., Hajnal J., Singh D., Rhode K. Analysis of a customized clutch joint designed for the safety management of an ultrasound robot. Appl. Sci. 2019;9:1900. doi: 10.3390/app9091900. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Sandoval J., Laribi M.A., Zeghloul S., Arsicault M., Guilhem J.-M. Cobot with prismatic compliant joint intended for doppler sonography. Robotics. 2020;9:14. doi: 10.3390/robotics9010014. [DOI] [Google Scholar]
- 65.Nakadate R., Solis J., Takanishi A., Minagawa E., Sugawara M., Niki K. Implementation of an automatic scanning and detection algorithm for the carotid artery by an assisted-robotic measurement system; Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems; Taipei, Taiwan. 18–22 October 2010; pp. 313–318. [Google Scholar]
- 66.Pahl C., Supriyanto E. Design of automatic transabdominal ultrasound imaging system; Proceedings of the 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR); Miedzyzdroje, Poland. 24–27 August 2015; pp. 435–440. [Google Scholar]
- 67.Welleweerd M.K., de Groot A.G., De Looijer S., Siepel F.J., Stramigioli S. Automated robotic breast ultrasound acquisition using ultrasound feedback; Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA); Paris, France. 31 May–31 August 2020; pp. 9946–9952. [Google Scholar]
- 68.Wu B., Huang Q. A Kinect-based automatic ultrasound scanning system; Proceedings of the 2016 International Conference on Advanced Robotics and Mechatronics (ICARM); Macau, China. 18–20 August 2016; pp. 585–590. [Google Scholar]
- 69.Zhang J., Wang Y., Liu T., Yang K., Jin H. A flexible ultrasound scanning system for minimally invasive spinal surgery navigation. IEEE Trans. Med. Robot. Bionics. 2021;3:426–435. doi: 10.1109/TMRB.2021.3075750. [DOI] [Google Scholar]
- 70.Kaminski J.T., Rafatzand K., Zhang H.K. Feasibility of robot-assisted ultrasound imaging with force feedback for assessment of thyroid diseases; Proceedings of the Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling; Houston, TX, USA. 15–20 February 2020; p. 113151D. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71.Yang C., Jiang M., Chen M., Fu M., Li J., Huang Q. Automatic 3-D imaging and measurement of human spines with a robotic ultrasound system. IEEE Trans. Instrum. Meas. 2021;70:7502013. doi: 10.1109/TIM.2021.3085110. [DOI] [Google Scholar]
- 72.Langsch F., Virga S., Esteban J., Göbl R., Navab N. Robotic ultrasound for catheter navigation in endovascular procedures; Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Macau, China. 3–8 November 2019; pp. 5404–5410. [Google Scholar]
- 73.Lin Z., Li S., Ni D., Liao Y., Wen H., Du J., Chen S., Wang T., Lei B. Multi-task learning for quality assessment of fetal head ultrasound images. Med. Image Anal. 2019;58:101548. doi: 10.1016/j.media.2019.101548. [DOI] [PubMed] [Google Scholar]
- 74.Wu L., Cheng J.-Z., Li S., Lei B., Wang T., Ni D. FUIQA: Fetal ultrasound image quality assessment with deep convolutional networks. IEEE Trans. Cybern. 2017;47:1336–1349. doi: 10.1109/TCYB.2017.2671898. [DOI] [PubMed] [Google Scholar]
- 75.Zhang L., Dudley N.J., Lambrou T., Allinson N., Ye X. Automatic image quality assessment and measurement of fetal head in two-dimensional ultrasound image. J. Med. Imaging. 2017;4:024001. doi: 10.1117/1.JMI.4.2.024001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Schwaab J., Diez Y., Oliver A., Martí R., Zelst J.v., Gubern-Mérida A., Mourri A.B., Gregori J., Günther M. Automated quality assessment in three-dimensional breast ultrasound images. J. Med. Imaging. 2016;3:027002. doi: 10.1117/1.JMI.3.2.027002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Antico M., Vukovic D., Camps S.M., Sasazawa F., Takeda Y., Le A.T., Jaiprakash A.T., Roberts J., Crawford R., Fontanarosa D. Deep learning for US image quality assessment based on femoral cartilage boundary detection in autonomous knee arthroscopy. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2020;67:2543–2552. doi: 10.1109/TUFFC.2020.2965291. [DOI] [PubMed] [Google Scholar]
- 78.Sutedjo V., Tirindelli M., Eilers C., Simson W., Busam B., Navab N. Acoustic shadowing aware robotic ultrasound: Lighting up the dark. IEEE Robot. Autom. Lett. 2022;7:1808–1815. doi: 10.1109/LRA.2022.3141451. [DOI] [Google Scholar]
- 79.Chatelain P., Krupa A., Navab N. Confidence-driven control of an ultrasound probe: Target-specific acoustic window optimization; Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA); Stockholm, Sweden. 16–21 May 2016; pp. 3441–3446. [Google Scholar]
- 80.Chatelain P., Krupa A., Navab N. Confidence-driven control of an ultrasound probe. IEEE Trans. Robot. 2017;33:1410–1424. doi: 10.1109/TRO.2017.2723618. [DOI] [Google Scholar]
- 81.Karamalis A., Wein W., Klein T., Navab N. Ultrasound confidence maps using random walks. Med. Image Anal. 2012;16:1101–1112. doi: 10.1016/j.media.2012.07.005. [DOI] [PubMed] [Google Scholar]
- 82.Abolmaesumi P., Salcudean S., Zhu W. Visual servoing for robot-assisted diagnostic ultrasound; Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Cat. No. 00CH37143); Chicago, IL, USA. 23–28 July 2000; pp. 2532–2535. [Google Scholar]
- 83.Abolmaesumi P., Sirouspour M.R., Salcudean S.E. Real-time extraction of carotid artery contours from ultrasound images; Proceedings of the 13th IEEE Symposium on Computer-Based Medical Systems; Houston, TX, USA. 24 June 2000; pp. 181–186. [Google Scholar]
- 84.Abolmaesumi P., Salcudean S.E., Zhu W.-H., Sirouspour M.R., DiMaio S.P. Image-guided control of a robot for medical ultrasound. IEEE Trans. Robot. Autom. 2002;18:11–23. doi: 10.1109/70.988970. [DOI] [Google Scholar]
- 85.Şen H.T., Cheng A., Ding K., Boctor E., Wong J., Iordachita I., Kazanzides P. Cooperative control with ultrasound guidance for radiation therapy. Front. Robot. AI. 2016;3:49. doi: 10.3389/frobt.2016.00049. [DOI] [Google Scholar]
- 86.Chatelain P., Krupa A., Navab N. Optimization of ultrasound image quality via visual servoing; Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA); Seattle, WA, USA. 26–30 May 2015; pp. 5997–6002. [Google Scholar]
- 87.Zhang L., Yang D., Niu B., Yang H., Huang Q., Jiang L., Liu H. Smooth Path Planning and Dynamic Contact Force Regulation for Robotic Ultrasound Scanning. IEEE Robot. Autom. Lett. 2025;10:10570–10577. doi: 10.1109/lra.2025.3604746. [DOI] [Google Scholar]
- 88.Wang K.-J., Chen C.-H., Chen J.-J., Ciou W.-S., Xu C.-B., Du Y.-C. An improved sensing method of a robotic ultrasound system for real-time force and angle calibration. Sensors. 2021;21:2927. doi: 10.3390/s21092927. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89.Fu Y., Lin W., Yu X., Rodríguez-Andina J.J., Gao H. Robot-assisted teleoperation ultrasound system based on fusion of augmented reality and predictive force. IEEE Trans. Ind. Electron. 2022;70:7449–7456. doi: 10.1109/TIE.2022.3201322. [DOI] [Google Scholar]
- 90.Welleweerd M.K., de Groot A.G., Groenhuis V., Siepel F.J., Stramigioli S. Out-of-plane corrections for autonomous robotic breast ultrasound acquisitions; Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA); Xi’an, China. 30 May–5 June 2021; pp. 12515–12521. [Google Scholar]
- 91.Lindenroth L., Soor A., Hutchinson J., Shafi A., Back J., Rhode K., Liu H. Design of a soft, parallel end-effector applied to robot-guided ultrasound interventions; Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Vancouver, BC, Canada. 24–28 September 2017; pp. 3716–3721. [Google Scholar]
- 92.Lindenroth L., Housden R.J., Wang S., Back J., Rhode K., Liu H. Design and Integration of a Parallel, Soft Robotic End-Effector for Extracorporeal Ultrasound. IEEE Trans. Biomed. Eng. 2020;67:2215–2229. doi: 10.1109/TBME.2019.2957609. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93.Carriere J., Fong J., Meyer T., Sloboda R., Husain S., Usmani N., Tavakoli M. An admittance-controlled robotic assistant for semi-autonomous breast ultrasound scanning; Proceedings of the 2019 International Symposium on Medical Robotics (ISMR); Atlanta, GA, USA. 3–5 April 2019; pp. 1–7. [Google Scholar]
- 94.Piwowarczyk J., Carriere J., Adams K., Tavakoli M. An admittance-controlled force-scaling dexterous assistive robotic system. J. Med. Robot. Res. 2020;5:2041002. [Google Scholar]
- 95.Ferraguti F., Talignani Landi C., Sabattini L., Bonfe M., Fantuzzi C., Secchi C. A variable admittance control strategy for stable physical human–robot interaction. Int. J. Robot. Res. 2019;38:747–765. doi: 10.1177/0278364919840415. [DOI] [Google Scholar]
- 96.Dimeas F., Aspragathos N. Online stability in human-robot cooperation with admittance control. IEEE Trans. Haptics. 2016;9:267–278. doi: 10.1109/toh.2016.2518670. [DOI] [PubMed] [Google Scholar]
- 97.Ning G., Chen J., Zhang X., Liao H. Force-guided autonomous robotic ultrasound scanning control method for soft uncertain environment. Int. J. Comput. Assist. Radiol. Surg. 2021;16:2189–2199. doi: 10.1007/s11548-021-02462-6. [DOI] [PubMed] [Google Scholar]
- 98.Jiang J., Luo J., Wang H., Tang X., Nian F., Qi L. Force tracking control method for robotic ultrasound scanning system under soft uncertain environment. Actuators. 2024;13:62. doi: 10.3390/act13020062. [DOI] [Google Scholar]
- 99.Xie Y., Guo J., Deng Z., Hou X., Housden J., Rhode K., Liu H., Hou Z.-G., Wang S. Robot-assisted trans-esophageal ultrasound and the virtual admittance-based master-slave control method thereof. IEEE/ASME Trans. Mechatron. 2023;28:2505–2516. doi: 10.1109/TMECH.2023.3247832. [DOI] [Google Scholar]
- 100.Ning G., Liang H., Zhang X., Liao H. Inverse-reinforcement-learning-based robotic ultrasound active compliance control in uncertain environments. IEEE Trans. Ind. Electron. 2023;71:1686–1696. [Google Scholar]
- 101.Shida Y., Kumagai S., Iwata H. Robotic navigation with deep reinforcement learning in transthoracic echocardiography. Int. J. Comput. Assist. Radiol. Surg. 2025;20:191–202. doi: 10.1007/s11548-024-03275-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 102.Cheng C., Zhang H., Sun Y., Tao H., Chen Y. A cross-platform deep reinforcement learning model for autonomous navigation without global information in different scenes. Control. Eng. Pract. 2024;150:105991. doi: 10.1016/j.conengprac.2024.105991. [DOI] [Google Scholar]
- 103.Nasti S.M., Najar Z.A., Chishti M.A. Adaptive mapless mobile robot navigation using deep reinforcement learning based improved TD3 algorithm. Front. Robot. AI. 2025;12:1625968. doi: 10.3389/frobt.2025.1625968. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 104.Ampuero G.C., Hermosilla G., Varas G., Clark M.T. Deep Reinforcement Learning for Sim-to-Real Robot Navigation with a Minimal Sensor Suite for Beach-Cleaning Applications. Appl. Sci. 2025;15:10719. doi: 10.3390/app151910719. [DOI] [Google Scholar]
- 105.Yang J., Lu S., Han M., Li Y., Ma Y., Lin Z., Li H. Mapless navigation for UAVs via reinforcement learning from demonstrations. Sci. China Technol. Sci. 2023;66:1263–1270. doi: 10.1007/s11431-022-2292-3. [DOI] [Google Scholar]
- 106.Wang S., Singh D., Johnson D., Althoefer K., Rhode K., Housden R.J. Robotic ultrasound: View planning, tracking, and automatic acquisition of transesophageal echocardiography. IEEE Robot. Autom. Mag. 2016;23:118–127. doi: 10.1109/MRA.2016.2580478. [DOI] [Google Scholar]
- 107.Bi Y., Qian C., Zhang Z., Navab N., Jiang Z. Autonomous path planning for intercostal robotic ultrasound imaging using reinforcement learning. arXiv. 2024. arXiv:2404.09927. doi: 10.1038/s41598-026-37702-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 108.Suligoj F., Heunis C.M., Sikorski J., Misra S. RobUSt–an autonomous robotic ultrasound system for medical imaging. IEEE Access. 2021;9:67456–67465. doi: 10.1109/ACCESS.2021.3077037. [DOI] [Google Scholar]
- 109.Tan J., Li B., Leng Y., Li Y., Peng J., Wu J., Luo B., Chen X., Rong Y., Fu C. Fully automatic dual-probe lung ultrasound scanning robot for screening triage. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2022;70:975–988. doi: 10.1109/TUFFC.2022.3211532. [DOI] [PubMed] [Google Scholar]
- 110.Tan J., Li J., Li Y., Li B., Leng Y., Rong Y., Fu C. Autonomous trajectory planning for ultrasound-guided real-time tracking of suspicious breast tumor targets. IEEE Trans. Autom. Sci. Eng. 2023;21:2478–2493. [Google Scholar]
- 111.Tan J., Li Y., Li B., Leng Y., Peng J., Wu J., Luo B., Chen X., Rong Y., Fu C. Automatic generation of autonomous ultrasound scanning trajectory based on 3-D point cloud. IEEE Trans. Med. Robot. Bionics. 2022;4:976–990. doi: 10.1109/TMRB.2022.3214493. [DOI] [Google Scholar]
- 112.Yang C., Chen M., Xu H., Li J., Huang Q. Fully automatic spinal scanning and measurement based on multi-source vision information. Measurement. 2024;224:113955. doi: 10.1016/j.measurement.2023.113955. [DOI] [Google Scholar]
- 113.Chen R., Yan X., Lv K., Huang G., Li Z., Li X. UltraDP: Generalizable Carotid Ultrasound Scanning with Force-Aware Diffusion Policy; Proceedings of the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Hangzhou, China. 19–25 October 2025; pp. 20074–20080. [Google Scholar]
- 114.Jiang H., Zhao A., Yang Q., Yan X., Wang T., Wang Y., Jia N., Wang J., Wu G., Yue Y. Towards expert-level autonomous carotid ultrasonography with large-scale learning-based robotic system. Nat. Commun. 2025;16:7893. doi: 10.1038/s41467-025-62865-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Yan W., Ding Q., Chen J., Yan K., Tang R.S.-Y., Cheng S.S. Learning-based needle tip tracking in 2D ultrasound by fusing visual tracking and motion prediction. Med. Image Anal. 2023;88:102847. doi: 10.1016/j.media.2023.102847. [DOI] [PubMed] [Google Scholar]
- 116.Faoro G., Maglio S., Pane S., Iacovacci V., Menciassi A. An artificial intelligence-aided robotic platform for ultrasound-guided transcarotid revascularization. IEEE Robot. Autom. Lett. 2023;8:2349–2356. [Google Scholar]
- 117.Esteban J., Simson W., Requena Witzig S., Rienmüller A., Virga S., Frisch B., Zettinig O., Sakara D., Ryang Y.-M., Navab N. Robotic ultrasound-guided facet joint insertion. Int. J. Comput. Assist. Radiol. Surg. 2018;13:895–904. doi: 10.1007/s11548-018-1759-x. [DOI] [PubMed] [Google Scholar]
- 118.Chen S., Lin Y., Li Z., Wang F., Cao Q. Automatic and accurate needle detection in 2D ultrasound during robot-assisted needle insertion process. Int. J. Comput. Assist. Radiol. Surg. 2022;17:295–303. doi: 10.1007/s11548-021-02519-6. [DOI] [PubMed] [Google Scholar]
- 119.Grube S., Latus S., Behrendt F., Riabova O., Neidhardt M., Schlaefer A. Needle tracking in low-resolution ultrasound volumes using deep learning. Int. J. Comput. Assist. Radiol. Surg. 2024;19:1975–1981. doi: 10.1007/s11548-024-03234-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 120.Mazdarani H., Watterson J., Hibbert R., Rossa C. Integrating Confidence Maps and Visual Servoing for Needle Tracking in Robotic US-Guided Percutaneous Nephrolithotomy. IEEE Open J. Instrum. Meas. 2025;4:4000409. doi: 10.1109/ojim.2025.3581634. [DOI] [Google Scholar]
- 121.Raina D., Al-Zogbi L., Teixeira B., Singh V., Kapoor A., Fleiter T., Bell M.A.L., Pandian V., Krieger A. AURA-CVC: Autonomous Ultrasound-guided Robotic Assistance for Central Venous Catheterization. arXiv. 2025. arXiv:2507.05979. doi: 10.1007/s11548-026-03572-9. [DOI] [PubMed] [Google Scholar]
- 122.von Haxthausen F., Böttger S., Wulff D., Hagenah J., García-Vázquez V., Ipsen S. Medical robotics for ultrasound imaging: Current systems and future trends. Curr. Robot. Rep. 2021;2:55–71. doi: 10.1007/s43154-020-00037-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 123.Chen D., Wang L., Luo X., Fei C., Li D., Shan G., Yang Y. Recent development and perspectives of optimization design methods for piezoelectric ultrasonic transducers. Micromachines. 2021;12:779. doi: 10.3390/mi12070779. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 124.Kutter O., Shams R., Navab N. Visualization and GPU-accelerated simulation of medical ultrasound from CT images. Comput. Methods Programs Biomed. 2009;94:250–266. doi: 10.1016/j.cmpb.2008.12.011. [DOI] [PubMed] [Google Scholar]
- 125.Wein W., Brunke S., Khamene A., Callstrom M.R., Navab N. Automatic CT-ultrasound registration for diagnostic imaging and image-guided intervention. Med. Image Anal. 2008;12:577–585. doi: 10.1016/j.media.2008.06.006. [DOI] [PubMed] [Google Scholar]
- 126.Zhu Y., Magee D., Ratnalingam R., Kessel D. A virtual ultrasound imaging system for the simulation of ultrasound-guided needle insertion procedures; Proceedings of the Medical Image Understanding and Analysis; Manchester, UK. 4–5 July 2006; pp. 61–65. [Google Scholar]
- 127.Nikolaev A.V., De Jong L., Weijers G., Groenhuis V., Mann R.M., Siepel F.J., Maris B.M., Stramigioli S., Hansen H.H., De Korte C.L. Quantitative evaluation of an automated cone-based breast ultrasound scanner for MRI–3D US image fusion. IEEE Trans. Med. Imaging. 2021;40:1229–1239. doi: 10.1109/TMI.2021.3050525. [DOI] [PubMed] [Google Scholar]
- 128.Cao X., Li B., Zhou Y., Cao Y., Yang X., Hu X., Chen C., Zhu S., Lin H., Wang T. Effectiveness and clinical impact of using deep learning for first-trimester fetal ultrasound image quality auditing. BMC Pregnancy Childbirth. 2025;25:375. doi: 10.1186/s12884-025-07485-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 129.Liu L., Wang T., Zhu W., Zhang H., Tian H., Li Y., Cai W., Yang P. Intelligent quality assessment of ultrasound images for fetal nuchal translucency measurement during the first trimester of pregnancy based on deep learning models. BMC Pregnancy Childbirth. 2025;25:741. doi: 10.1186/s12884-025-07863-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 130.Kim H., Varghese T. Hybrid spectral domain method for attenuation slope estimation. Ultrasound Med. Biol. 2008;34:1808–1819. doi: 10.1016/j.ultrasmedbio.2008.04.011. [DOI] [PubMed] [Google Scholar]
- 131.Penney G.P., Blackall J.M., Hamady M., Sabharwal T., Adam A., Hawkes D.J. Registration of freehand 3D ultrasound and magnetic resonance liver images. Med. Image Anal. 2004;8:81–91. doi: 10.1016/j.media.2003.07.003. [DOI] [PubMed] [Google Scholar]
- 132.Wein W., Roper B., Navab N. Integrating diagnostic B-mode ultrasonography into CT-based radiation treatment planning. IEEE Trans. Med. Imaging. 2007;26:866–879. doi: 10.1109/TMI.2007.895483. [DOI] [PubMed] [Google Scholar]
- 133.Yu Y., Wang J. Backscatter-contour-attenuation joint estimation model for attenuation compensation in ultrasound imagery. IEEE Trans. Image Process. 2010;19:2725–2736. doi: 10.1109/TIP.2010.2050636. [DOI] [PubMed] [Google Scholar]
- 134.Grady L. Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006;28:1768–1783. doi: 10.1109/TPAMI.2006.233. [DOI] [PubMed] [Google Scholar]
- 135.Bachta W., Krupa A. Towards ultrasound image-based visual servoing; Proceedings of the 2006 IEEE International Conference on Robotics and Automation; Orlando, FL, USA. 15–19 May 2006; pp. 4112–4117. [Google Scholar]
- 136.Krupa A., Fichtinger G., Hager G.D. Full motion tracking in ultrasound using image speckle information and visual servoing; Proceedings of the 2007 IEEE International Conference on Robotics and Automation; Rome, Italy. 10–14 April 2007; pp. 2458–2464. [Google Scholar]
- 137. Mebarki R., Krupa A., Chaumette F. 2-D ultrasound probe complete guidance by visual servoing using image moments. IEEE Trans. Robot. 2010;26:296–306. doi: 10.1109/TRO.2010.2042533.
- 138. Nadeau C., Krupa A. Intensity-based ultrasound visual servoing: Modeling and validation with 2-d and 3-d probes. IEEE Trans. Robot. 2013;29:1003–1015. doi: 10.1109/TRO.2013.2256690.
- 139. Fujibayashi T., Koizumi N., Nishiyama Y., Zhou J., Tsukihara H., Yoshinaka K., Tsumura R. Image Search Strategy via Visual Servoing for Robotic Kidney Ultrasound Imaging. J. Robot. Mechatron. 2023;35:1281–1289. doi: 10.20965/jrm.2023.p1281.
- 140. Tang X., Wang H., Luo J., Jiang J., Nian F., Qi L., Sang L., Gan Z. Autonomous ultrasound scanning robotic system based on human posture recognition and image servo control: An application for cardiac imaging. Front. Robot. AI. 2024;11:1383732. doi: 10.3389/frobt.2024.1383732.
- 141. Davoodi A., Li R., Cai Y., Niu K., Borghesan G., Vander Poorten E. A Comparative Study for Control of Semi-Automatic Robotic-assisted Ultrasound System in Spine Surgery; Proceedings of the 2023 21st International Conference on Advanced Robotics (ICAR); Abu Dhabi, United Arab Emirates. 5–8 December 2023; pp. 303–310.
- 142. Mohan P., Patel N. Semi-Autonomous Ultrasound-Guided Robotic System for Percutaneous Intervention; Proceedings of the 2024 10th International Conference on Control, Automation and Robotics (ICCAR); Singapore. 27–29 April 2024; pp. 153–158.
- 143. Zhan J., Cartucho J., Giannarou S. Autonomous tissue scanning under free-form motion for intraoperative tissue characterisation; Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA); Paris, France. 31 May–31 August 2020; pp. 11147–11154.
- 144. Du H., Zhang X., Zhang Y., Zhang F., Lin L., Huang T. A review of robot-assisted ultrasound examination: Systems and technology. Int. J. Med. Robot. Comput. Assist. Surg. 2024;20:e2660. doi: 10.1002/rcs.2660.
- 145. Power D. Ethical considerations in the era of AI, automation, and surgical robots: There are plenty of lessons from the past. Discov. Artif. Intell. 2024;4:65. doi: 10.1007/s44163-024-00166-9.
- 146. Abbas M., Al Issa S., Dwivedy S.K. Event-triggered adaptive hybrid position-force control for robot-assisted ultrasonic examination system. J. Intell. Robot. Syst. 2021;102:84. doi: 10.1007/s10846-021-01428-9.
- 147. Wang Y., Xie Z., Huang H., Liang X. Pioneering healthcare with soft robotic devices: A review. Smart Med. 2024;3:e20230045. doi: 10.1002/SMMD.20230045.
- 148. Wu Z., Wang X., Cao Y., Zhang W., Xu Q. Robotic Ultrasound Scanning End-Effector with Adjustable Constant Contact Force. Cyborg Bionic Syst. 2025;6:0251. doi: 10.34133/cbsystems.0251.
- 149. Tan J., Qin H., Chen X., Li J., Li Y., Li B., Leng Y., Fu C. Point cloud segmentation of breast ultrasound regions to be scanned by fusing 2D image instance segmentation and keypoint detection; Proceedings of the 2023 International Conference on Advanced Robotics and Mechatronics (ICARM); Sanya, China. 8–10 July 2023; pp. 669–674.
- 150. Dai J., He Z., Fang G., Wang X., Li Y., Cheung C.-L., Liang L., Iordachita I., Chang H.-C., Kwok K.-W. A robotic platform to navigate MRI-guided focused ultrasound system. IEEE Robot. Autom. Lett. 2021;6:5137–5144. doi: 10.1109/lra.2021.3068953.
- 151. Ye M., Zhang Q., Li Z., Gu C., Meng Y. Robotic CSP resection and hysterotomy repair. J. Minim. Invasive Gynecol. 2021;28:945–946. doi: 10.1016/j.jmig.2020.11.017.
- 152. Cai Y., Li R., Davoodi A., Ourak M., Deprest J., Vander Poorten E. Autonomous Robotic Ultrasound Approach for Fetoscope Tracking by Fusing Optical and 2D Ultrasound Data. IEEE Robot. Autom. Lett. 2024;9:7573–7580. doi: 10.1109/LRA.2024.3427556.
- 153. Osburg J., Scheibert A., Horn M., Pater R., Ernst F. Automatic robotic doppler sonography of leg arteries. Int. J. Comput. Assist. Radiol. Surg. 2024;19:1965–1974. doi: 10.1007/s11548-024-03235-7.
- 154. Huang Y., Xiao W., Wang C., Liu H., Huang R., Sun Z. Towards fully autonomous ultrasound scanning robot with imitation learning based on clinical protocols. IEEE Robot. Autom. Lett. 2021;6:3671–3678. doi: 10.1109/LRA.2021.3064283.
- 155. Royer L., Krupa A., Dardenne G., Le Bras A., Marchand E., Marchal M. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation. Med. Image Anal. 2017;35:582–598. doi: 10.1016/j.media.2016.09.004.
- 156. Krupa A., Fichtinger G., Hager G.D. Real-time tissue tracking with B-mode ultrasound using speckle and visual servoing; Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007: 10th International Conference; Brisbane, Australia. 29 October–2 November 2007; pp. 1–8.
- 157. Royer L., Marchal M., Le Bras A., Dardenne G., Krupa A. Real-time tracking of deformable target in 3D ultrasound images; Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA); Seattle, WA, USA. 26–30 May 2015; pp. 2430–2435.
- 158. Mebarki R., Krupa A., Chaumette F. Image moments-based ultrasound visual servoing; Proceedings of the 2008 IEEE International Conference on Robotics and Automation; Pasadena, CA, USA. 19–23 May 2008; pp. 113–119.
- 159. Cafarelli A., Mura M., Diodato A., Schiappacasse A., Santoro M., Ciuti G., Menciassi A. A computer-assisted robotic platform for Focused Ultrasound Surgery: Assessment of high intensity focused ultrasound delivery; Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Milan, Italy. 25–29 August 2015; pp. 1311–1314.
- 160. Nadeau C., Ren H., Krupa A., Dupont P. Intensity-based visual servoing for instrument and tissue tracking in 3D ultrasound volumes. IEEE Trans. Autom. Sci. Eng. 2014;12:367–371. doi: 10.1109/TASE.2014.2343652.
- 161. Velikova Y., Azampour M.F., Simson W., Esposito M., Navab N. Implicit neural representations for breathing-compensated volume reconstruction in robotic ultrasound; Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA); Yokohama, Japan. 13–17 May 2024; pp. 1316–1322.
- 162. Cao G., Cao D., Liu H. Respiratory Motion-Robust Robotic Ultrasound Acquisitions via Vision-Haptic Fusion Control and 3D Compensation. IEEE Trans. Instrum. Meas. 2025;74:4018511.
- 163. Wang Y., Ge X., Ma H., Qi S., Zhang G., Yao Y. Deep learning in medical ultrasound image analysis: A review. IEEE Access. 2021;9:54310–54324. doi: 10.1109/ACCESS.2021.3071301.
- 164. Droste R., Drukker L., Papageorghiou A.T., Noble J.A. Automatic probe movement guidance for freehand obstetric ultrasound; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference; Lima, Peru. 4–8 October 2020; pp. 583–592.
- 165. Men Q., Teng C., Drukker L., Papageorghiou A.T., Noble J.A. Gaze-probe joint guidance with multi-task learning in obstetric ultrasound scanning. Med. Image Anal. 2023;90:102981. doi: 10.1016/j.media.2023.102981.
- 166. Ning G., Zhang X., Liao H. Autonomic robotic ultrasound imaging system based on reinforcement learning. IEEE Trans. Biomed. Eng. 2021;68:2787–2797. doi: 10.1109/TBME.2021.3054413.
- 167. Li K., Wang J., Xu Y., Qin H., Liu D., Liu L., Meng M.Q.-H. Autonomous navigation of an ultrasound probe towards standard scan planes with deep reinforcement learning; Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA); Xi’an, China. 30 May–5 June 2021; pp. 8302–8308.
- 168. Li C., Zhang T., Zhou Z., Zhao B., Zhang P., Qi X. Reinforcement Learning for Robot Assisted Live Ultrasound Examination. Electronics. 2025;14:3709. doi: 10.3390/electronics14183709.
- 169. Si W., Wang N., Harris R., Yang C. Deep Multimodal Imitation Learning-Based Framework for Robot-Assisted Medical Examination. IEEE Trans. Ind. Electron. 2025;73:928–936. doi: 10.1109/TIE.2025.3589442.
- 170. He W., Liang L., Ouyang F., Yang G., Ding P., Zhang T., Zhang Z. Ultrasound image quality assessment of robot screening based on dual perspective multi feature collaboration. Displays. 2025;92:103319. doi: 10.1016/j.displa.2025.103319.
- 171. Miao H., Jia J., Cao Y., Zhou Y., Jiang Y., Liu Z., Zhai G. Ultrasound-QBench: Can LLMs aid in quality assessment of ultrasound imaging? arXiv. 2025;arXiv:2501.02751. doi: 10.48550/arXiv.2501.02751.
- 172. Chlap P., Min H., Vandenberg N., Dowling J., Holloway L., Haworth A. A review of medical image data augmentation techniques for deep learning applications. J. Med. Imaging Radiat. Oncol. 2021;65:545–563. doi: 10.1111/1754-9485.13261.
- 173. Xie X., Niu J., Liu X., Chen Z., Tang S., Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med. Image Anal. 2021;69:101985. doi: 10.1016/j.media.2021.101985.
- 174. Salehi A.W., Khan S., Gupta G., Alabduallah B.I., Almjally A., Alsolai H., Siddiqui T., Mellit A. A study of CNN and transfer learning in medical imaging: Advantages, challenges, future scope. Sustainability. 2023;15:5930. doi: 10.3390/su15075930.
- 175. Yuan F., Zhang Z., Fang Z. An effective CNN and Transformer complementary network for medical image segmentation. Pattern Recognit. 2023;136:109228. doi: 10.1016/j.patcog.2022.109228.
- 176. Yao W., Bai J., Liao W., Chen Y., Liu M., Xie Y. From CNN to Transformer: A review of medical image segmentation models. J. Imaging Inform. Med. 2024;37:1529–1547. doi: 10.1007/s10278-024-00981-7.
- 177. Zhang S., Wang Y., Jiang J., Dong J., Yi W., Hou W. CNN-based medical ultrasound image quality assessment. Complexity. 2021;2021:9938367. doi: 10.1155/2021/9938367.
- 178. Balasubramaniam S., Velmurugan Y., Jaganathan D., Dhanasekaran S. A modified LeNet CNN for breast cancer diagnosis in ultrasound images. Diagnostics. 2023;13:2746. doi: 10.3390/diagnostics13172746.
- 179. He Q., Yang Q., Xie M. HCTNet: A hybrid CNN-transformer network for breast ultrasound image segmentation. Comput. Biol. Med. 2023;155:106629. doi: 10.1016/j.compbiomed.2023.106629.
- 180. Yang T.-Y., Zhou L.-Q., Han X.-H., Piao J.-C. An improved CNN-based thyroid nodule screening algorithm in ultrasound images. Biomed. Signal Process. Control. 2024;87:105371. doi: 10.1016/j.bspc.2023.105371.
- 181. Li J., Wang Y., Lei B., Cheng J.-Z., Qin J., Wang T., Li S., Ni D. Automatic fetal head circumference measurement in ultrasound using random forest and fast ellipse fitting. IEEE J. Biomed. Health Inform. 2017;22:215–223. doi: 10.1109/JBHI.2017.2703890.
- 182. Wang Y., Dou H., Hu X., Zhu L., Yang X., Xu M., Qin J., Heng P.-A., Wang T., Ni D. Deep attentive features for prostate segmentation in 3D transrectal ultrasound. IEEE Trans. Med. Imaging. 2019;38:2768–2778. doi: 10.1109/TMI.2019.2913184.
- 183. Wang Y., Wang N., Xu M., Yu J., Qin C., Luo X., Yang X., Wang T., Li A., Ni D. Deeply-supervised networks with threshold loss for cancer detection in automated breast ultrasound. IEEE Trans. Med. Imaging. 2019;39:866–876. doi: 10.1109/TMI.2019.2936500.
- 184. Dargazany A. DRL: Deep Reinforcement Learning for Intelligent Robot Control--Concept, Literature, and Future. arXiv. 2021;arXiv:2105.13806.
- 185. Jang Y., Jeon B. Deep reinforcement learning with explicit spatio-sequential encoding network for coronary ostia identification in CT images. Sensors. 2021;21:6187. doi: 10.3390/s21186187.
- 186. Zhou S.K., Le H.N., Luu K., Nguyen H.V., Ayache N. Deep reinforcement learning in medical imaging: A literature review. Med. Image Anal. 2021;73:102193. doi: 10.1016/j.media.2021.102193.
- 187. Zhu K., Zhang T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Sci. Technol. 2021;26:674–691. doi: 10.26599/tst.2021.9010012.
- 188. Tan J. A method to plan the path of a robot utilizing deep reinforcement learning and multi-sensory information fusion. Appl. Artif. Intell. 2023;37:2224996. doi: 10.1080/08839514.2023.2224996.
- 189. Chen G., Pan L., Chen Y.A., Xu P., Wang Z., Wu P., Ji J., Chen X. Deep reinforcement learning of map-based obstacle avoidance for mobile robot navigation. SN Comput. Sci. 2021;2:417. doi: 10.1007/s42979-021-00817-z.
- 190. Wang Y., He H., Sun C. Learning to navigate through complex dynamic environment with modular deep reinforcement learning. IEEE Trans. Games. 2018;10:400–412. doi: 10.1109/tg.2018.2849942.
- 191. Huang J., Wang R., Jiang W., Shao S., Chen T. Agent Based Fetal Face Segmentation for Standard Plane Localization in 3D Ultrasound; Proceedings of the 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC); Honolulu, HI, USA. 1–4 October 2023; pp. 5317–5322.
- 192. Bi Y., Jiang Z., Duelmer F., Huang D., Navab N. Machine learning in robotic ultrasound imaging: Challenges and perspectives. Annu. Rev. Control. Robot. Auton. Syst. 2024;7:335–357. doi: 10.1146/annurev-control-091523-100042.
- 193. Samei G., Tsang K., Kesch C., Lobo J., Hor S., Mohareri O., Chang S., Goldenberg S.L., Black P.C., Salcudean S. A partial augmented reality system with live ultrasound and registered preoperative MRI for guiding robot-assisted radical prostatectomy. Med. Image Anal. 2020;60:101588. doi: 10.1016/j.media.2019.101588.
- 194. Lim S., Jun C., Chang D., Petrisor D., Han M., Stoianovici D. Robotic transrectal ultrasound guided prostate biopsy. IEEE Trans. Biomed. Eng. 2019;66:2527–2537. doi: 10.1109/tbme.2019.2891240.
- 195. Song T., Eck U., Navab N. Optimizing In-Contact Force Planning in Robotic Ultrasound with Augmented Reality Visualization Techniques; Proceedings of the 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR); Bellevue, WA, USA. 21–25 October 2024; pp. 554–563.
- 196. Hughes-Hallett A., Pratt P., Mayer E., Di Marco A., Yang G.-Z., Vale J., Darzi A. Intraoperative ultrasound overlay in robot-assisted partial nephrectomy: First clinical experience. Eur. Urol. 2014;65:671–672. doi: 10.1016/j.eururo.2013.11.001.
- 197. Nguyen T., Plishker W., Matisoff A., Sharma K., Shekhar R. HoloUS: Augmented reality visualization of live ultrasound images using HoloLens for ultrasound-guided procedures. Int. J. Comput. Assist. Radiol. Surg. 2022;17:385–391. doi: 10.1007/s11548-021-02526-7.
- 198. Guo Z., Tai Y., Du J., Chen Z., Li Q., Shi J. Automatically addressing system for ultrasound-guided renal biopsy training based on augmented reality. IEEE J. Biomed. Health Inform. 2021;25:1495–1507. doi: 10.1109/JBHI.2021.3064308.
- 199. Wang Y., Liu T., Hu X., Yang K., Zhu Y., Jin H. Compliant joint based robotic ultrasound scanning system for imaging human spine. IEEE Robot. Autom. Lett. 2023;8:5966–5973. doi: 10.1109/LRA.2023.3300592.
- 200. Yan K., Yan W., Zeng W., Ding Q., Chen J., Yan J., Lam C.P., Wan S., Cheng S.S. Towards a wristed percutaneous robot with variable stiffness for pericardiocentesis. IEEE Robot. Autom. Lett. 2021;6:2993–3000. doi: 10.1109/lra.2021.3062583.
- 201. Che H., Brown L.G., Foran D.J., Nosher J.L., Hacihaliloglu I. Liver disease classification from ultrasound using multi-scale CNN. Int. J. Comput. Assist. Radiol. Surg. 2021;16:1537–1548. doi: 10.1007/s11548-021-02414-0.
- 202. Che H., Radbel J., Sunderram J., Nosher J.L., Patel V.M., Hacihaliloglu I. Multi-feature multi-scale CNN-derived COVID-19 classification from lung ultrasound data; Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Virtual. 1–5 November 2021; pp. 2618–2621.
- 203. Al-Battal A.F., Gong Y., Xu L., Morton T., Du C., Bu Y., Lerman I.R., Madhavan R., Nguyen T.Q. A CNN segmentation-based approach to object detection and tracking in ultrasound scans with application to the vagus nerve detection; Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Virtual. 1–5 November 2021; pp. 3322–3327.
- 204. Sivanandan R., Jayakumari J. A new CNN architecture for efficient classification of ultrasound breast tumor images with activation map clustering based prediction validation. Med. Biol. Eng. Comput. 2021;59:957–968. doi: 10.1007/s11517-021-02357-3.
Associated Data
Data Availability Statement
No new data were created or analyzed in this study.


