Abstract
This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. Performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough video coding standards comparison for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessment demonstrate that H.264/AVC and high efficiency video coding standards can achieve diagnostically-lossless video quality at bitrates well within the LTE supported data rates. Most importantly, reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide for reliable, remote diagnosis, achieving comparable quality of experience levels with in-hospital ultrasound examinations.
Keywords: biomedical ultrasonics, cardiology, medical robotics, telemedicine, Long Term Evolution
1. Introduction
Providing the desired quality levels of specialised healthcare across the population is a challenging task. This task becomes even more daunting in remote, often isolated areas and developing countries, where the physical presence of specialised physicians is limited. At the same time, already dispersed rural hospitals, lacking appropriate infrastructure and personnel, cannot meet the ever-growing demand, forcing patients to travel long distances to the nearest regional hospital. For this purpose, the European Union (EU) and the World Health Organization (WHO) have identified the development of mobile health (mHealth) systems and services as a top priority [1, 2]. This initiative aims at reducing healthcare expenditures while increasing the quality of healthcare services, and hence patients' quality of life (QoL).
Ultrasound imaging is an integral part of many clinical applications, ranging from typical ultrasound examinations (cardiac, foetal, carotid artery, abdominal aortic aneurysm, etc.) to medical emergencies and surgical decision making [3]. As a result, there is a great demand for this technique to be available even in the absence of ultrasound specialists, as in the case of most isolated areas, developing countries, and emergency/disaster incidents [4]. On the other hand, this specialised healthcare method has traditionally been expert-dependent, involving a high level of training.
Over the past decades, numerous mHealth medical video communication systems have been developed to address this issue [5]. Such systems rely on trained ultrasound technicians and/or paramedical staff to acquire an ultrasound video (perform the on-site examination), while a remote medical expert assesses and provides the diagnosis based on the communicated ultrasound video in real time. Over the same period, tele-operated robots for remote ultrasound examination have also witnessed a significant growth [6]. Such systems further benefit from the fact that no additional specialised personnel are required to remotely perform an examination and provide a clinical diagnosis.
Despite the plethora of such systems and services, however, their adoption in standard clinical practice remains limited. As debated in [5], the primary reason is that they do not provide a quality of experience (QoE) comparable to that of in-hospital ultrasound examinations. The latter is largely attributed to the underlying video compression and wireless network technologies. Mobile tele-manipulated robots performing remote ultrasound examinations typically convey the following data: (i) robot control commands in the form of x, y, z vectors, which have low bandwidth requirements but are delay sensitive and loss intolerant; (ii) ultrasound video data, which are bandwidth demanding (as they convey the clinical information), delay sensitive to a lesser extent, and tolerant of minor losses; and (iii) ambient audio and video information, which require moderate bandwidth and, being less critical, can in most cases tolerate certain delays and losses. Based on the aforementioned, the data rates and latencies of early 3G wireless networks, together with the encoding capabilities of video coding standards prior to H.264/AVC, essentially prevented the wider adoption of mHealth tele-robotic systems in standard clinical practice.
Such an example was documented within the context of the OTELO project [7], which investigated real-time mobile tele-echography using an early version of the MELODY system considered in this Letter. Performance evaluation based on clinical expert ratings and quality of service (QoS) parameters concluded that low-resolution cardiac ultrasound video transmission over low-bitrate 3G channels can be effectively used for pre-diagnosis purposes. Remote ultrasound examination was mostly limited by the capabilities of the underlying technologies of the time, which essentially imposed video resolution and frame rate reductions to successfully communicate the medical video.
A wearable tele-echography robot with four degrees-of-freedom (4 DOF), developed for the remote assessment of trauma injuries, was presented in [8]. Experimental evaluation involved FAST (focused assessment with sonography for trauma) ultrasound image acquisition timings. The remote medical expert successfully completed the task on four different FAST areas and positively commented on the ultrasound image quality. However, as the experimentation was performed using wired infrastructure, the authors plan to perform remote FAST assessment both from the ambulance and the incident site using the tele-operated robot. An mHealth medical video communication system that can be used for remote trauma assessment scenarios from a moving ambulance appears in [9]. Near diagnostically-lossless, high-resolution ultrasound video transmission was demonstrated over a low-delay simulated mobile WiMAX topology.
Tele-robotic ultrasonography was also investigated in [3] using a customised lightweight robotic arm with 7 DOF. The objective of the study was to demonstrate the feasibility of long-distance, remote vascular ultrasound examinations in intercity and trans-Atlantic scenarios. The experiments were carried out over high-bandwidth, non-dedicated wired Internet connections using a training phantom. By the end of the experiments, visualisation and localisation of the ultrasound vascular phantom required less than 10 s in the intercity scenario (30 simulation runs: <100 Mbps) and less than 40 s in the trans-Atlantic scenario (15 simulation runs: <50 Mbps). Interesting findings were also reported on the learning curve of advanced versus early trainees in cardiovascular ultrasound, with the advanced trainee adapting faster to the remote examination using the tele-operated robot.
The use of a tele-operated robot for remote ultrasound examination is also discussed in [10, 11]. The primary focus of these studies concerns robot development and kinematics evaluation (6 DOF parallel robot [10] and hand controller with 4 DOF [11]). However, the ultimate goal of both studies is to exploit the constructed robots for mobile tele-ultrasonography applications in a context similar to this Letter.
In this Letter, the objective is to develop a reliable, robotised tele-echography platform over emerging 4G and beyond wireless networks. The goal is to investigate the hypothesis in [5] that wider adoption of such systems and services in standard clinical practice can be achieved once clinicians are provided with a QoE comparable to in-hospital examinations. The approach is to exploit open-source video encoding technologies for low-delay, diagnostically-lossless ultrasound video communication and responsive robot tele-manipulation over commercially available 4G long-term evolution (LTE) networks. The contributions of this Letter, which significantly extend prior work in [12], are summarised in the following areas:
Video coding standards comparison for cardiac ultrasonography: Open-source video coding standards comparison for cardiac ultrasound video wireless transmission. Here, the objective is to examine the capabilities of different video compression standards for use in robotised tele-echography systems. Performance evaluation is based on objective metrics (e.g. peak-signal-to-noise ratio (PSNR) and BD-rate algorithm) and clinical assessment by a senior cardiologist. Clinical evaluation involves assessment of both B-mode and colour Doppler mode cardiac videos.
Robot-control and ultrasound video data communications over 4G wireless networks: The capability of commercially deployed 4G-LTE networks to simultaneously accommodate (i) responsive tele-manipulation of the robot situated at the patient's site by the remote medical expert and (ii) low-delay, ultrasound video communication at the acquired video resolution and frame rate encoded at different compression levels is examined.
Real-life scenarios using the proposed end-to-end mobile tele-echography platform: The primary focus of this Letter concerns the development of a robotised tele-echography platform that will be evaluated using realistic real-life scenarios. For this purpose, experiments are performed using two healthy volunteers.
The rest of the Letter is organised as follows: Section 2 describes the mobile tele-echography platform and the undertaken methodology, whereas Section 3 discusses the obtained results. Finally, Section 4 provides the concluding remarks and ongoing work.
2. Materials and methods
The objective here is to investigate the use of a tele-operated robot for performing remote cardiac ultrasonography under real clinical settings. More specifically, the aim is to examine different clinical scenarios likely to be facilitated by the proposed mobile tele-echography platform depicted in Fig. 1. For this purpose, wireless communication is considered, catering for scenarios where wired infrastructure is not available. Such scenarios range from emergency and disaster incidents, to specialised healthcare provision in remote areas and mass-population screening in developing countries. In what follows, we provide a more detailed description of each individual component of the system architecture shown in Fig. 1, in view of the investigated scenarios.
Fig. 1.
End-to-end mobile tele-echography platform. At the expert side, the medical expert uses a dummy probe to remotely control and manipulate the actual probe via the MELODY system's robotic arms. At the patient side, the resulting ultrasound video is captured using a frame grabber and then wirelessly communicated to the expert site by the open-source mHealth medical video communication system. The medical expert provides a diagnosis based on the communicated ultrasound video displayed on the mobile tele-echography platform's monitor
2.1. Robot control and manipulation
For the needs of this Letter a portable robot with 3 DOF was used. The MELODY system is based on the earlier Teresa system [12] and was commercialised by AdEchoTech [13]. A comprehensive MELODY system description appears in [4, 6].
The remote medical expert uses a dummy probe to control the actual probe, which is held and manipulated by the robotic arms at the patient side. This operation involves low-bandwidth control and force commands, such as moving the probe along the x and y axes (i.e. sideways and vertically, respectively), replicating the probe positioning movements of an on-site examination on the patient's body. The key concept here is low-delay and hence responsive control, achieving a QoE level comparable to typical examinations in standard clinical practice. The data are conveyed between the robot master-slave stations over the LTE wireless infrastructure.
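The Letter does not detail the MELODY control protocol itself; purely as an illustration of the low-bandwidth, delay-sensitive nature of this traffic class, the following Python sketch streams hypothetical probe-displacement commands over UDP. The packet layout, field names, update rate, and addresses are illustrative assumptions, not the actual MELODY format.

```python
import random
import socket
import struct
import time

# Hypothetical control packet: sequence number plus (dx, dy, dz) displacement and
# (rx, ry) tilt commands as 32-bit floats. This is NOT the actual MELODY format.
PACKET_FMT = "!I5f"  # network byte order, 24 bytes per packet


def read_dummy_probe():
    """Stand-in for the haptic dummy-probe driver (returns small random motions)."""
    return tuple(random.uniform(-1.0, 1.0) for _ in range(5))


def stream_probe_commands(host: str, port: int, rate_hz: int = 50) -> None:
    """Send dummy-probe displacement vectors to the patient-side robot over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(10 * rate_hz):  # 10 s of control traffic for illustration
        dx, dy, dz, rx, ry = read_dummy_probe()
        sock.sendto(struct.pack(PACKET_FMT, seq, dx, dy, dz, rx, ry), (host, port))
        time.sleep(1.0 / rate_hz)  # 50 packets/s x 24 bytes is roughly 10 kbit/s


if __name__ == "__main__":
    stream_probe_commands("192.0.2.10", 5005)  # placeholder patient-side address
```

Even a generous packet rate such as the one sketched here stays well below the LTE uplink rates reported later (Table 3), which is why the responsiveness of this traffic class is dominated by latency rather than bandwidth.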
2.2. Ultrasound video encoding
The ultrasound device at the patient's side is connected to a frame grabber [14] which allows capturing raw ultrasound video at the video resolution and frame rate that is displayed on the device's monitor. The video is then fed to the open-source mHealth video communication platform designed for medical video pre-processing, encoding, and communication [15]. The software allows frame rate reduction (for lowering bandwidth demands and encoding time where applicable) and encoding at different compression levels (e.g. bitrate demands) using different video compression standards. Based on FFMPEG software [16], the platform allows encoding the raw ultrasound video using the MPEG-2, MPEG-4 part-2, H.264/AVC, and high efficiency video coding (HEVC) compression standards, using open-source implementations, namely mpeg2video, xvid, x264, and x265, respectively.
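The exact FFmpeg invocations used by the platform are not listed in the Letter; the sketch below illustrates, under assumed settings, how a captured raw ultrasound video could be encoded with each of the four open-source encoders at a target bitrate and a reduced frame rate. The file names and the choice of a Matroska container are hypothetical.

```python
import subprocess

# FFmpeg encoder names for the four video coding standards used in this Letter.
CODECS = {
    "HEVC": "libx265",
    "H.264/AVC": "libx264",
    "MPEG-4 Part 2": "libxvid",
    "MPEG-2": "mpeg2video",
}


def encode(src: str, dst: str, encoder: str, bitrate_kbps: int, fps: int = 30) -> None:
    """One-pass encode of the captured ultrasound video at a target bitrate."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-r", str(fps),              # frame-rate reduction (e.g. 44 -> 30 fps)
        "-c:v", encoder,             # libx265, libx264, libxvid or mpeg2video
        "-b:v", f"{bitrate_kbps}k",  # single-pass target bitrate
        "-an",                       # clinical video only, no audio track
        dst,
    ]
    subprocess.run(cmd, check=True)


# Example: the 4 standards x 4 compression levels grid used in the comparison
# ("cardiac_raw.avi" is a hypothetical captured file name).
for encoder in CODECS.values():
    for kbps in (256, 512, 1024, 2048):
        encode("cardiac_raw.avi", f"cardiac_{encoder}_{kbps}k.mkv", encoder, kbps)
```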
The software is installed on a laptop at the patient's side. At the remote end (expert's side), the same software is used for post-processing, decoding, and rendering of the clinical content using the VLC media player [17]. The medical expert reviews the displayed ultrasound video on the laptop's monitor for navigating the probe to the suitable positions for performing the appropriate diagnosis according to the examined ultrasound video modality.
2.3. Ambient video
As depicted in Fig. 1, ambient video and audio may be also communicated from patient to expert side and vice versa. The latter is particularly helpful for the medical expert to provide further instructions to the carer with respect to the robot position while communicating directly with the patient. The proposed system supports commercially available software for communicating audio and ambient video (e.g. Skype). Moreover, it provides customisable interfaces for real-time ambient video streaming of well-known high-quality outdoor video cameras such as the GoPro camera (particularly useful for disaster incidents scenarios, not examined here).
2.4. Wireless transmission over emerging wireless networks
The objective here is to investigate the capability of commercially available 4G-LTE wireless networks to communicate diagnostically lossless medical video at the acquired video resolution and frame rate, while conforming to strict delay requirements. To overcome a design limitation of the MELODY robot, which requires a dedicated channel for communicating the robot-control-specific data, a 4G router (D-Link DWR-921) equipped with a machine-to-machine (M2M) enabled card is used [18]. This configuration allows both robot control and ultrasound video data (in addition to ambient video and audio) to be communicated over the same wireless channel, and hence constitutes a significant enhancement over previous experiments using the MELODY robot [12]. The hub, which is connected to the 4G router and provides Ethernet connections to the robot and the laptop(s) equipped with the mHealth video communication software, is introduced for experimental purposes in this series of experiments (see the Wireshark protocol analyser below) and is not an essential element of the proposed mHealth tele-echography platform.
Wireshark [19], an open-source, cross-platform network protocol analyser, is used for monitoring both robot control and video data across the patient and expert sites. It allows for QoS measurements, including throughput analysis, estimation of packet end-to-end delay and delay jitter, as well as total packet losses.
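As a rough illustration of how such QoS figures can be derived offline, the sketch below computes average throughput and a simple inter-arrival jitter estimate from a packet list exported by Wireshark to CSV. It assumes the default 'Time' and 'Length' columns of a CSV export; the RFC 3550 jitter formula strictly uses sender timestamps, so the inter-arrival variant here is a simplification, not the exact procedure followed in the Letter.

```python
import csv


def qos_from_capture(csv_path: str):
    """Estimate average throughput (kbps) and inter-arrival jitter (ms) from a
    Wireshark CSV export containing 'Time' (seconds) and 'Length' (bytes) columns."""
    times, sizes = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(float(row["Time"]))
            sizes.append(int(row["Length"]))

    duration = times[-1] - times[0]
    throughput_kbps = 8 * sum(sizes) / duration / 1000  # average bitrate

    # Smoothed jitter over packet inter-arrival times (simplified RFC 3550 style).
    jitter, prev_gap = 0.0, None
    for t0, t1 in zip(times, times[1:]):
        gap = t1 - t0
        if prev_gap is not None:
            jitter += (abs(gap - prev_gap) - jitter) / 16.0
        prev_gap = gap
    return throughput_kbps, jitter * 1000
```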
2.5. Mobile tele-echography platform quality assessment
2.5.1. Scenario 1
The first scenario aims to demonstrate the significant benefits of employing emerging video coding standards both for video communication and storage. Most ultrasound devices today still depend on successful, but soon obsolete, video coding standards such as MPEG-1 and MPEG-2 for storing the necessary video loops during an ultrasound examination. The adoption of the DICOM recommendation for using the H.264/AVC standard for storage and network transmission remains extremely limited [20]. Instead, they rely on high-quality, often uncompressed, images to revisit a clinical case.
A video coding standards comparison, including the latest HEVC standard, is performed using open-source implementations that can meet the encoding requirements of real-time medical video communications. The objective is to investigate the bitrate reductions for equivalent clinical quality achieved by the emerging video coding standards.
2.5.2. Scenario 2
The second scenario aims to assess the readiness level of adopting the mobile tele-echography platform in standard clinical practice, for scenarios where wireless infrastructure is required. For this purpose, both technical and clinical evaluation is performed. The scenario includes: (i) a training phase, where the medical expert familiarises themselves with the proposed platform and robot tele-operation, and (ii) a remote cardiac ultrasound examination using the tele-operated robot following the clinically established protocol. During the training phase, the medical expert is asked to rate the responsiveness of the tele-operated robot while maintaining line-of-sight with the patient (and hence the robot). The second subtask of the training phase is for the medical expert to familiarise with and assess the delay of the wirelessly communicated ultrasound video with respect to the ultrasound video displayed on the device's screen (which has no delay and therefore serves as the ground truth). The dual objective is (i) for the medical expert to become accustomed to the delays of tele-echography attributed to robot control data and ultrasound video communication, and then (ii) to assess the clinical QoE compared with in-hospital ultrasound examinations.
3. Results and discussion
This section provides preliminary experimental evaluation results of the proposed mobile tele-echography system over currently deployed 4G-LTE networks in Cyprus. A data set composed of ten cardiac ultrasound videos (2 healthy male subjects × 5 videos per subject), including B-mode, colour-Doppler mode, M-mode, and pulsed/continuous Doppler mode video loops, is considered. The ultrasound videos maintain the original 800 × 600 resolution output by the ultrasound device monitor, while the frame rate is reduced from the original 44 to 30 frames per second. The videos were acquired during two live tele-echography sessions. We first provide and discuss the results obtained from the video coding standards comparison, followed by the performance evaluation of the end-to-end mobile tele-echography platform.
3.1. Video coding standards comparison for cardiac ultrasonography
3.1.1. Objective video quality evaluation
A video coding standards comparison in terms of output clinical quality and associated bitrate demands was performed. More specifically, the most widely used standards were investigated, namely MPEG-2, MPEG-4 part 2, MPEG-4 part 10 (or H.264/AVC), and the emerging HEVC. All videos comprising the data set were encoded at the following bitrates: 256, 512, 1024, and 2048 kbps, using the default profiles and encoding settings found in FFMPEG, producing a total of 160 video instances (4 compression levels × 4 video coding standards × 10 ultrasound videos).
Fig. 2 depicts video quality boxplots of all investigated video coding standards for a given target bitrate. Each of the illustrated boxplots corresponds to the PSNR scores of the ten cardiac ultrasound videos of the examined data set. Moreover, Fig. 3 shows a rate-distortion graph of all investigated video coding standards and bitrate points using values averaged over the whole data set. As evident from both graphs, newer video coding standards such as HEVC and H.264/AVC significantly outperform older standards such as MPEG-4 part 2 and MPEG-2. They achieve higher PSNR ratings while requiring lower bitrates. The trend is the same for all investigated videos. This observation strongly highlights the potential gains in storage capacity once ultrasound machine vendors adopt recommendations such as DICOM Sup. 149 [20] and rely on the compression efficiency of the HEVC standard. More importantly, mHealth systems such as the proposed platform can capitalise on this coding efficiency to communicate larger volumes of clinical content, providing clinical quality equivalent to in-hospital examinations.
Fig. 2.
Boxplots of investigated video coding standards depicting PSNR video quality values as a function of bitrate for the ten cardiac ultrasound videos of the examined data set. PSNR boxplots of HEVC (x265), H.264/AVC (x264), MPEG-4 part 2 (xvid), and MPEG-2 (mpeg2v) for (a) 256 kbps, (b) 512 kbps, (c) 1024 kbps, and (d) 2048 kbps. HEVC and H.264/AVC significantly outperform the earlier MPEG-4 part 2 and MPEG-2 video coding standards for equivalent bitrate demands. [Bitrate demands refer to target bitrate values during encoding. Resulting values are slightly different.]
Fig. 3.
Video coding standards comparison for cardiac ultrasonography. Rate-distortion curves (average values of the ten 800 × 600 at 30 fps ultrasound videos for all examined rate points). The HEVC standard achieves higher PSNR ratings at lower bitrate demands than all prior standards
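For reference, the PSNR values in Figs. 2 and 3 are computed frame by frame against the uncompressed capture. A minimal sketch of this computation is shown below, assuming 8-bit luma frames already decoded into NumPy arrays (how the frames are decoded is left out of the sketch).

```python
import numpy as np


def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR of one decoded frame against the uncompressed reference (8-bit luma assumed)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)


def sequence_psnr(ref_frames, test_frames) -> float:
    """Average frame-level PSNR over a whole video sequence."""
    return float(np.mean([psnr(r, t) for r, t in zip(ref_frames, test_frames)]))
```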
Table 1 documents bitrate gains for equivalent video quality (in terms of PSNR) for all investigated video coding standards using the BD-rate metric [21]. HEVC achieves average bitrate gains of 25.6 and 68.8% compared with the H.264/AVC and MPEG-4 ASP standards, respectively. Bitrate gains extend up to 96.5% when compared with MPEG-2/H.262 MP, which translates into approximately double coding efficiency. It is important to note that the default FFMPEG encoding settings were selected and all encodings employ one-pass rate control. One-pass rate control is the standard mode for real-time video streaming scenarios; however, it lacks accuracy compared with two-pass encoding. This is most evident for low-bitrate encodings and the earlier video coding standards, namely MPEG-4 and MPEG-2. As evident in Fig. 3, for the aforementioned standards, the actual bitrate demands are significantly higher than the input target bitrate. Bitrate reductions from using H.264/AVC or other later standards compared with earlier ones are also summarised in Table 1. Overall, the depicted bitrate gains are comparable to those reported in [22] for general-purpose videos and [23] for carotid artery ultrasound videos. Optimising the video encoding settings of each standard would yield values closer to those studies. However, the objective here is to demonstrate the trend for real-time video communications.
Table 1.
Bitrate savings of different video coding standards relative to earlier video coding standards
| Encoding | Bitrate savings relative to H.264/MPEG-4 AVC HP, % | Bitrate savings relative to MPEG-4 ASP, % | Bitrate savings relative to MPEG-2/H.262 MP, % |
| --- | --- | --- | --- |
| HEVC MP | 25.6 | 68.8 | 96.5 |
| H.264/MPEG-4 AVC HP | – | 56.6 | 94.9 |
| MPEG-4 ASP | – | – | 46.7 |
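The BD-rate figures above follow Bjøntegaard's model [21]: a third-order polynomial of log-bitrate as a function of PSNR is fitted for each codec and integrated over the overlapping PSNR range. A minimal sketch of that computation, assuming four (bitrate, PSNR) pairs per codec such as the rate points used in this comparison, is given below.

```python
import numpy as np


def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test) -> float:
    """Average bitrate difference (%) of the test codec vs. the reference codec at
    equal PSNR, per Bjontegaard's model: fit cubic polynomials of log-rate over PSNR
    and integrate over the overlapping PSNR range. Negative values mean savings."""
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)

    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)

    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100
```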
3.1.2. Clinical video quality evaluation
The clinical evaluation of the video coding standards comparison appears in Table 2. Evaluation is demonstrated for a single video including both B-mode and colour-Doppler mode video loops, but the trend is the same throughout the data set. All evaluations were performed on the laptop where the mHealth medical video communication platform was installed. The laptop monitor had a 1600 × 900 spatial resolution, tuned to maximum screen brightness, and the rendered video instances were displayed in full-screen mode. Overall, the viewing conditions were comparable to a routine clinical exam. As evident in Table 2, the clinical ratings are largely in agreement with the objective quality assessment. Less compression (i.e. more bits per pixel) translates into higher PSNR measurements, and in turn into enhanced clinical quality. At the same time, the coding efficiency of the newer video standards also translates into superior clinical quality, as they are able to accommodate more clinical information for the same compression levels when compared with earlier video coding standards. An important observation highlighted by the medical expert is that clinical quality is compromised by the reduced frame rate of 30 fps, restraining the depiction of clinical motion and hence assessment. Beyond the afore-described limitation, cardiac ultrasound videos at 2 Mbps, using both the H.264/AVC and the HEVC standards, provide the required clinical quality levels for a confident diagnosis. While both standards attain equivalent ratings, the HEVC standard requires a lower bitrate than H.264/AVC, as depicted in Fig. 3 and Table 1. Competing earlier video coding standards cannot be considered for clinical diagnosis at these rates as they exhibit a certain loss of clinical detail, with MPEG-4 outperforming MPEG-2 (Table 3).
Table 2.
Clinical validation of a single video for all investigated video coding standards
Clinical evaluation ratings^a as a function of target bitrate demands^b and cardiac ultrasound mode:

| Encoding | 512 kbps, B-mode | 512 kbps, Colour mode | 1024 kbps, B-mode | 1024 kbps, Colour mode | 2048 kbps, B-mode | 2048 kbps, Colour mode |
| --- | --- | --- | --- | --- | --- | --- |
| HEVC MP (x265) | 4 | 4 | 4 | 4.5 | 4.5 | 4.5 |
| H.264/AVC HP (x264) | 4 | 4 | 4 | 4.5 | 4.5 | 4.5 |
| MPEG-4 ASP (xvid) | 3 | 3 | 3.5 | 3.5 | 3.5 | 4 |
| MPEG-2 (mpeg2v) | 3 | 3 | 3 | 3 | 3.5 | 3.5 |
^a Clinical evaluation ratings from 1 (lowest) to 5 (highest). Ratings ≥ 4.5 refer to diagnostically lossless clinical quality.
^b Bitrate demands refer to target bitrate values during encoding. Resulting values are slightly different (see Fig. 3).
Table 3.
LTE network QoS measurements during experiments
Average QoS measurements of the LTE network^a:

| Download speed | Upload speed | Latency | Jitter |
| --- | --- | --- | --- |
| 10 Mbps | 9 Mbps | 29 ms | 8 ms |

^a Average values of 30 speed test measurements during the experiments.
3.2. End-to-end mobile tele-echography platform performance evaluation
3.2.1. Robot control and ultrasound probe manipulation
The medical expert assessed the robot control and ultrasound probe manipulation both during the training phase and the tele-echography sessions. In the former (chronologically preceding) case, the medical expert assessed the responsiveness of the robot (e.g. how long it takes for a dummy probe movement at the expert side to be translated into an actual probe movement at the patient side) both visually (implying line-of-sight) and with respect to the actual ultrasound video displayed on the ultrasound machine screen [see Table 4(a)]. At this point, wirelessly communicated cardiac ultrasound video was not considered. The medical expert highly appreciated the proposed platform's responsiveness in conveying the control commands from the dummy to the actual probe, as documented in Table 4(a). This is attributed to the low end-to-end delay (latency) facilitated by the LTE channels, which accounted for approximately 30 ms throughout the sessions (see Table 3). Assessment included typical movements during an ultrasound examination, such as sideways movements and back-and-forth tilting of the probe.
Table 4.
(a) Tele-operated robot assessment over LTE networks

Robot responsiveness and tele-manipulation ratings:

| Line of sight | Ultrasound device monitor | Mobile tele-echography monitor |
| --- | --- | --- |
| 9 | 9 | 8.5 |

(b) Clinical assessment of remote tele-echography over LTE networks

Clinical ratings as a function of ultrasound video bitrate demands^a:

| Encoding standard | 512 kbps | 1024 kbps | 2048 kbps |
| --- | --- | --- | --- |
| MPEG-2 | 5 | 5 | 6 |

^a Ultrasound video resolution 800 × 600 at 30 fps.
3.2.2. Limitations of the tele-operated robot control
The most important issue raised by the medical expert, which prevented a higher overall rating and hence clinical QoE, is the robot's initial positioning on the patient's chest and navigation towards obtaining the cardiac ultrasound. During routine clinical practice, this step involves small movements whereby the medical expert repositions the ultrasound probe on the patient's chest to obtain a clear view of the heart (and hence ultrasound video). This view may be obstructed by the chest bones and differs slightly per patient. The step does not extend over a few seconds. However, during remote ultrasound examination, the medical expert has to instruct the carer holding the remote robot as to where to (re)position the robot holding the probe on the patient's chest, which is inherently slower. The afore-described process was the major drawback documented throughout this preliminary experimentation cycle and is primarily attributed to the robot design.
3.2.3. Tele-echography over 4G-LTE wireless networks using a tele-operated robot
Following the training session, two live tele-echography examinations were performed to evaluate the proposed system's performance. For a more realistic assessment of clinical video quality and experienced network-attributed delays, the clinical evaluation setting involved the laptop and ultrasound device monitors situated side by side. The physician, however, was not in line-of-sight with the patient.
The medical expert first proceeded with assessing the delay induced by the LTE network in communicating the cardiac ultrasound video. As deduced from the medical expert's rating [see Table 4(a)], the incurred delay was within acceptable levels to appropriately navigate the ultrasound probe and proceed with the ultrasound examination. This observation strongly highlights the capacity of commercial LTE channels to accommodate medical video communication systems and services in a manner that was only feasible using wired infrastructure up to a few years ago. As the medical expert noted, having completed the initial training, video communication delay is not expected to be an issue during remote tele-echography examinations.
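The latency and jitter figures in Table 3 come from repeated speed tests; purely as an illustrative alternative, a simple UDP echo probe such as the sketch below (hypothetical addresses, port, and probe count) could cross-check the round-trip time between the expert and patient sides. This is not the speed-test tool actually used in the experiments.

```python
import socket
import statistics
import struct
import time


def rtt_probe(host: str, port: int, count: int = 30) -> None:
    """Measure UDP round-trip times against an echo service at the patient side."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for seq in range(count):
        t0 = time.perf_counter()
        sock.sendto(struct.pack("!Id", seq, t0), (host, port))
        try:
            sock.recvfrom(64)                              # echoed payload
            rtts.append((time.perf_counter() - t0) * 1000.0)
        except socket.timeout:
            pass                                           # count as a lost probe
        time.sleep(0.1)
    if len(rtts) >= 2:
        print(f"avg RTT {statistics.mean(rtts):.1f} ms, "
              f"jitter {statistics.stdev(rtts):.1f} ms, "
              f"loss {count - len(rtts)}/{count}")


if __name__ == "__main__":
    rtt_probe("192.0.2.10", 7)  # placeholder echo endpoint at the patient side
```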
For the following preliminary clinical video quality evaluation, the MPEG-2 standard was selected. The rationale was to employ the same encoding standard used for storing the ultrasound video clips on the ultrasound device. All video coding standards considered in this Letter will ultimately be assessed. The videos were encoded at 512, 1024, and 2048 kbps. Lower bitrates do not provide the required coding efficiency to convey the necessary clinical information (see Figs. 2 and 3).
The clinical ratings per rate point appear in Table 4(b). As the medical expert commented, cardiac ultrasound videos communicated at bitrates of 512 and 1024 kbps using the MPEG-2 standard do not qualify for clinical practice, as there is an evident loss of clinical information. Video quality is significantly improved at 2 Mbps; however, some loss of clinical information still exists. Overall, the MPEG-2 standard at the investigated video compression levels, at the clinically acquired video resolution and 30 fps, can only be used for pre-diagnosis purposes. It is important to note that the medical expert emphasised that a safe conclusion can only be drawn using a larger number of clinical cases.
4. Concluding remarks
Preliminary results provide a strong indication that the proposed robotised tele-echography platform can be used to provide reliable, remote diagnosis over emerging 4G and beyond wireless networks. Commercially available 4G long-term evolution wireless networks in Cyprus facilitate packet round-trip times (latencies) that are well within the stringent, low-delay requirements for responsive robot tele-operation and real-time medical video streaming. At the same time, the available data rates, both in the uplink and downlink, coupled with the efficiency of the new video coding standards, allow for diagnostically lossless video communication at the acquired video resolution and frame rate. Based on the afore-described observations, it is envisioned that mHealth tele-echography systems will be adopted in standard clinical practice, increasing the level of healthcare provision, especially for rural hospitals, emergency incidents, and developing countries, where the presence of specialised medical personnel is not feasible. Achieving this, however, requires that both paramedical personnel and medical experts undergo appropriate training that will ultimately be integrated into medical education.
Ongoing work focuses on concluding the assessment of open-source implementations of all video coding standards, including the new HEVC standard, and investigating the efficiency of the proposed system over a larger number of clinical cases. Extending the proposed research to different ultrasound video modalities is currently planned.
5. Acknowledgement
This work was partly supported by the University of Cyprus project ‘Dynamically Reconfigurable mHealth Video Communications for Real-time Adaptation to Time-Varying Constraints'. The authors thank MTN Cyprus for providing the M2M cards used in this Letter.
6. Funding and declaration of interests
Conflict of interest: none declared.
7. References
- 1. ‘eHealth Action Plan 2012–2020 - Innovative healthcare for the 21st century’
- 2. WHO: ‘mHealth: new horizons for health through mobile technologies’, vol. 3 of Global Observatory for eHealth Series, 2011
- 3. Sengupta P.P., Narula N., Modesto K., et al.: ‘Feasibility of intercity and trans-Atlantic telerobotic remote ultrasound’, JACC Cardiovasc. Imaging, 2014, 7, (8), pp. 804–809
- 4. Vieyres P., Novales C., Rivas R.: ‘The next challenge for Worldwide Robotized Tele-Echography eXperiment (WORTEX 2012): from engineering success to healthcare delivery’, Congreso, Its.Uvm.Edu, 2012, vol. TUMI II
- 5. Panayides A.S., Pattichis M.S., Pattichis C.S.: ‘Mobile-health systems use diagnostically driven medical video technologies’, IEEE Signal Process. Mag., 2013, 30, (6), pp. 163–172
- 6. Avgousti S., Christoforou E.G., Panayides A.S., et al.: ‘Medical telerobotic systems: current status and future trends’, Biomed. Eng. Online, 2016, 15, (1), http://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-016-0217-7
- 7. Garawi S., Istepanian R.S.H., Abu-Rgheff M.A.: ‘3G wireless communications for mobile robotic tele-ultrasonography systems’, IEEE Commun. Mag., 2006, 44, (4), pp. 91–96
- 8. Ito K., Sugano S., Takeuchi R., et al.: ‘Usability and performance of a wearable tele-echography robot for focused assessment of trauma using sonography’, Med. Eng. Phys., 2013, 35, (2), pp. 165–171
- 9. Panayides A., Antoniou Z., Mylonas Y., et al.: ‘High-resolution, low-delay, and error-resilient medical ultrasound video communication using H.264/AVC over mobile WiMAX networks’, IEEE J. Biomed. Health Inf., 2013, 17, (3), pp. 619–628
- 10. Monfaredi R., Wilson E., Azizi Koutenaei B., et al.: ‘Robot-assisted ultrasound imaging: overview and development of a parallel telerobotic system’, Minim. Invasive Ther. Allied Technol., 2015, 24, (1), pp. 54–62
- 11. Najafi F., Sepehri N.: ‘A novel hand-controller for remote ultrasound imaging’, Mechatronics, 2008, 18, (10), pp. 578–590
- 12. Vieyres P., Poisson G., Courreges F., et al.: ‘The TERESA project: from space research to ground tele-echography’, Ind. Rob., 2003, 30, (1), pp. 77–82
- 13. AdEchoTech. Available at http://www.adechotech.com/
- 14. ‘Epiphan DVI2USB 3.0 frame grabber’. Available at http://www.epiphan.com/ [Accessed: 07-Apr-2016]
- 15. Panayides A., Eleftheriou I., Pantziaris M.: ‘Open-source telemedicine platform for wireless medical video communication’, Int. J. Telemed. Appl., 2013, 2013, pp. 1–12
- 16. Ffmpeg.org: ‘FFMPEG software’. Available at http://ffmpeg.org/ [Accessed: 07-Apr-2016]
- 17. VideoLan: ‘VLC media player’. Available at http://www.videolan.org/ [Accessed: 07-Apr-2016]
- 18. Chen M., Wan J., Li F.: ‘Machine-to-machine communications: architectures, standards and applications’, KSII Trans. Internet Inf. Syst., 2012, 6, (2), pp. 480–497
- 19. Wireshark.org: ‘Wireshark network protocol analyzer’. Available at http://www.wireshark.org/ [Accessed: 07-Apr-2016]
- 20. ‘DICOM Supplement 149: MPEG-4 AVC/H.264 transfer syntax’, April 2011
- 21. Bjøntegaard G.: ‘Improvements of the BD-PSNR model’, ITU-T SG16 Q.6 Document VCEG-AI11, Berlin, Germany, July 2008
- 22. Ohm J.-R., Sullivan G.J., Schwarz H., et al.: ‘Comparison of the coding efficiency of video coding standards – including High Efficiency Video Coding (HEVC)’, IEEE Trans. Circuits Syst. Video Technol., 2012, 22, (12), pp. 1669–1684
- 23. Panayides A.S., Pattichis M.S., Loizou C.P., et al.: ‘An effective ultrasound video communication system using despeckle filtering and HEVC’, IEEE J. Biomed. Health Inf., 2015, 19, (2), pp. 668–676