Health Information Science and Systems. 2018 Sep 18;6(1):10. doi: 10.1007/s13755-018-0053-1

Computer assisted system for precise lung surgery based on medical image computing and mixed reality

Wenjun Tan, Wen Ge, Yucheng Hang, Simeng Wu, Sixing Liu, Ming Liu
PMCID: PMC6143497  PMID: 30279980

Abstract

The key to surgical treatment of lung cancer is to remove the diseased part with the least excision while retaining as much healthy lung tissue as possible. Traditional computer-assisted surgery systems show the patient's CT images or three-dimensional structure on a PC screen. Such a system is not truly three-dimensional and cannot clearly display the positions of the patient's pulmonary vessels and trachea to the surgeon. To solve this problem, a computer-assisted system for precise lung surgery based on medical image computing and mixed reality is developed in this paper. Firstly, region growing and filling algorithms are designed to segment the lung trachea and lung vessels. Then, a surface mesh construction algorithm based on center-of-gravity boundary processing is used to build models of the segmented trachea and lung vessels, and the models are saved as STL files that 3D software can identify. Finally, following the analysis of the specific system functions, the computer-assisted system is implemented to display the three-dimensional pulmonary vessels and trachea on a mixed reality device. Surgeons can observe and interact precisely with the patient's real three-dimensional lung structure, which helps them perform lung surgery accurately.

Keywords: Medical image, Computer assisted system, Mixed reality

Introduction

Lung cancer is a serious threat to human health and is the leading cause of cancer death in the world, with approximately 1.6 million new cases and 1.4 million deaths each year. Surgical treatment is the preferred method for treating lung cancer. In order to perform surgery safely and accurately, the surgeon needs to observe and understand the three-dimensional structure of the patient's lung trachea, arteries, veins and other tissues from CT images before surgery. Traditional computer-assisted surgery systems use visualization technology to show the results of lung tissue segmentation in the image [1, 2]. The doctor only sees a view of the lungs from a certain angle, which has the following disadvantages: (1) There is no stereoscopic effect: the doctor sees only a picture of the model from one perspective, which does not give a clear three-dimensional perception of the patient's entire lung structure. (2) Poor interaction: moving the model by clicking the mouse does not make the doctor feel that he or she is manipulating the model, and does not let the doctor experience the kind of interaction that occurs during surgery.

Mixed reality (MR) combines simulation of the environment, perception, natural skills, and sensing devices [3–5]. The simulated environment is a computer-generated, real-time, dynamic, three-dimensional, realistic image. A mixed reality device recognizes human gestures through gesture recognition technology to determine the body language the user wants to express, and then changes the model accordingly, so that people can see the model clearly and stereoscopically rather than merely as a picture on a flat plane [6]. Before surgery, doctors can better observe the realistic three-dimensional lung tissue structure through mixed reality techniques. At the same time, doctors can hold the trachea and blood vessels in their hands and turn them around at any time to understand the three-dimensional structure of the lungs more accurately before surgery. MR technology thus enables surgeons to effectively improve the safety of surgery.

This paper implements a precision lung surgery assisted system based on mixed reality. The main task is to segment the patient's trachea, left pulmonary blood vessels, and right pulmonary blood vessels from the patient's CT images. The segmented data are then modeled to reconstruct the patient's lung tissue, and the model is imported into a mixed reality device for use by the doctor.

System analysis and design

With only two-dimensional CT lung image data, it is difficult for a doctor to observe the internal physiological structure of the patient's lungs and to locate the lesion precisely, which increases the risk when performing surgery on the patient. To address this problem, the system provides the following functions:

  • The pulmonary blood vessels and trachea are extracted from the patient's lung CT images.

    CT images show the following parts: pulmonary blood vessels, trachea, heart, thoracic aorta, bone, and lung nodules. The pulmonary blood vessels and trachea are interspersed among the lung segments inside the lungs, so segmenting the lung trachea and blood vessels and showing the results is the most important part of the segmental resection assisted system. If the segmentation result is poor, it will not only be of no help to the surgery but may be counterproductive, causing the doctor to cut the wrong lung segment. The doctor's need is therefore translated into a functional requirement: segment the trachea and blood vessels from the lung CT images.

  • The patient's lung tissue is displayed in three dimensions.

    The segmented data is still a three-dimensional point set. Referring to the Marching Cubes algorithm, triangular surfaces are constructed from the data points. The triangulated polygon data is saved to an STL file, which any 3D software can open to show the model. This user need therefore translates into a functional requirement: model the data points and produce a model file that 3D software can identify, for display on the mobile-side mixed reality device.

  • The doctor can control the model through gestures and observe the patient model from different perspectives.

    This requires the design and implementation of human–computer interaction in a mixed reality system. The model built from the CT image data of the patient's lungs is imported into Unity for system development and then deployed to a mixed reality device. The mixed reality device automatically recognizes user gestures, and the doctor can control the state of the model through gestures. This user need is therefore mixed reality human–computer interaction.

From the analysis of user needs, we obtained the specific functions required by the user, shown in Fig. 1. The segmental resection medical assist system is divided into two parts, as shown in Fig. 2. One part is image segmentation processing and model construction, completed with the PC-side MIC system. The other part is the system's mobile-side mixed reality device. The model STL file segmented from the CT images on the PC side is placed into the Unity software to develop the mixed reality application, allowing doctors to move away from the PC and obtain a stereoscopic perception of the structure of the patient's lungs in the mixed reality device.

Fig. 1. The system functional requirement

Fig. 2. System framework

Method

Each element of image data is called a voxel (volume element) and can be identified by its row, column, and layer indices [7, 8]. Two classical segmentation methods, threshold segmentation and region growing, are designed to segment the trachea and lung vessels. A surface mesh construction algorithm based on center-of-gravity boundary processing is then designed in this paper to build the model of the segmented trachea and lung vessels.
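As a concrete illustration of this voxel indexing, the following minimal C++ sketch addresses a voxel in a CT volume stored as a flat row-major array. The Volume struct, the gray-level type, and the 512 × 512 × 378 dimensions (taken from the experiments section below) are assumptions for illustration, not the MIC platform's actual data structure.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: a CT volume stored as one flat row-major array,
// using the 512 x 512 x 378 dimensions reported in the experiments.
struct Volume {
    int width = 512, height = 512, depth = 378;  // columns, rows, layers
    std::vector<std::int16_t> data;              // one gray value per voxel

    Volume()
        : data(static_cast<std::size_t>(width) * height * depth, 0) {}

    // A voxel is identified by its column (x), row (y) and layer (z) index.
    std::int16_t& at(int x, int y, int z) {
        return data[(static_cast<std::size_t>(z) * height + y) * width + x];
    }
};
```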

Threshold segmentation

Threshold segmentation is a region-based segmentation method in which the threshold is expressed as a function $T$:

$$T = T\big(p(i,j),\, f(i,j)\big) \tag{1}$$

Here $f(i,j)$ is the gray level of the point $(i,j)$, and $p(i,j)$ is a local property of the point. Image segmentation then uses the threshold as follows:

$$p(i,j) = \begin{cases} 1, & f(i,j) > T \\ 0, & f(i,j) \le T \end{cases} \tag{2}$$

In this formula, $p(i,j)=1$ represents pixels belonging to the object, and $p(i,j)=0$ represents pixels belonging to the background. The effectiveness of threshold segmentation depends largely on the selection of the threshold, and different methods of determining the threshold produce different specific segmentation methods. If the threshold $T$ depends only on the gray level $f(i,j)$, it is called a global threshold; if it depends not only on $f(i,j)$ but also on the local property $p(i,j)$, it is called a local threshold; if it further depends on the spatial coordinates $(i,j)$, it is called an adaptive threshold. The corresponding methods are distinguished by how the threshold is determined.
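The global-threshold case of Eq. (2) reduces to a single comparison per pixel. The sketch below is a minimal illustration; the thresholdSegment name and the integer gray-level type are assumptions, and the threshold value would in practice be chosen for the tissue of interest.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal sketch of Eq. (2): mark a pixel as object (1) when its gray
// level exceeds the global threshold T, otherwise as background (0).
std::vector<std::uint8_t> thresholdSegment(const std::vector<std::int16_t>& f,
                                           std::int16_t T) {
    std::vector<std::uint8_t> p(f.size());
    for (std::size_t k = 0; k < f.size(); ++k)
        p[k] = (f[k] > T) ? 1 : 0;  // p = 1: object, p = 0: background
    return p;
}
```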

Regional growth method

The principle of the region growing method for segmenting an image is to treat a set of pixels that satisfy a certain condition as the same class and merge them into one region [9]. The method first selects an initial seed point in the target area to be segmented, and then merges the pixels in the N-neighborhood (N = 4, 8, 26) around the seed point that satisfy the growth condition into the seed point's region. The newly merged pixels then serve as seed points and repeat the growth process. When no more pixels in the area can be added, the growth ends and the target area has been segmented [10, 11].

Three key issues need to be solved in the region growing method:

  1. The quality of the image segmentation depends on the selection of the seed point. If the selected position deviates from the target area, the segmentation will fail.

  2. The growth criterion used during growth is critical, and different growth criteria produce slightly different target regions. The difference in gray level between the target area and the background, the shape of the target area, or statistical information about the target area can be used as the growth criterion.

  3. A stopping criterion must be determined: without a stopping condition, the region growing method would continue to grow without end. This is undesirable, so the method must include a stopping criterion.

Because the lung parenchyma is a three-dimensional object, a 3D region growing algorithm is used here, with a queue storing the pixels during growth. The steps are as follows (a code sketch is given after the list):

  1. Select one pixel in the lung parenchyma as the initial seed point (x0, y0, z0) according to the seed point determination condition;

  2. Taking (x0, y0, z0) as the center, traverse its 26-neighborhood pixels (x, y, z); if (x, y, z) satisfies the growth criterion, treat it as a pixel in the lung parenchyma, merge it with the region of (x0, y0, z0), and push (x, y, z) into the queue;

  3. Take a pixel from the head of the queue, treat it as the initial seed point (x0, y0, z0), and return to step 2);

  4. When the queue is empty, growth ends; otherwise repeat steps 2)–3).
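A minimal C++ sketch of these four steps follows. The flat array layout, the Seed struct, and the intensity-interval growth criterion [lo, hi] are assumptions for illustration; the paper does not specify its exact growth criterion.

```cpp
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

struct Seed { int x, y, z; };

// Queue-based 3D region growing over a 26-neighborhood, as in steps 1-4.
// W, H, D are the volume dimensions; mark is the output label array.
void regionGrow3D(const std::vector<std::int16_t>& vol,
                  std::vector<std::uint8_t>& mark,  // 1 = in region
                  int W, int H, int D,
                  Seed seed, std::int16_t lo, std::int16_t hi) {
    auto idx = [&](int x, int y, int z) {
        return (static_cast<std::size_t>(z) * H + y) * W + x;
    };
    std::queue<Seed> q;
    mark[idx(seed.x, seed.y, seed.z)] = 1;  // step 1: initial seed point
    q.push(seed);
    while (!q.empty()) {
        Seed s = q.front();                 // step 3: next seed from queue
        q.pop();
        for (int dz = -1; dz <= 1; ++dz)    // step 2: 26-neighborhood scan
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0 && dz == 0) continue;
                    int x = s.x + dx, y = s.y + dy, z = s.z + dz;
                    if (x < 0 || y < 0 || z < 0 ||
                        x >= W || y >= H || z >= D) continue;
                    std::size_t k = idx(x, y, z);
                    if (!mark[k] && vol[k] >= lo && vol[k] <= hi) {
                        mark[k] = 1;        // merge into the region
                        q.push({x, y, z});
                    }
                }
    }                                       // step 4: queue empty, growth ends
}
```

Pixels are marked when they are pushed rather than when they are popped, so no voxel enters the queue twice.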

Surface mesh construction algorithm based on center of gravity boundary processing

The 8 vertices of a hexahedron at the boundary are each marked 0 or 1, where a point marked 1 is a point inside the model. If the two endpoints of an edge are marked 0 and 1 respectively, the edge is a critical edge; its midpoint is taken to represent the center of the critical edge. For each hexahedron, we obtain the center points of all its critical edges and take their mean as the center of gravity of that critical hexahedron. Finally, linking the centers of gravity at the boundary to form multiple triangular patches gives the result we want [12–14].

Firstly, the method of obtaining the center of gravity is studied on a 2D image, and then extended to 3D surface mesh construction. Suppose point C is marked and point D is unmarked, so edge CD is a critical edge. Edge CD is shared by two critical squares, each of which has its own center of gravity, so there are two centers of gravity around each critical edge. For each critical edge, the two surrounding centers of gravity are connected; extended to 3D as described below, this construction produces the triangular patches.

Then we extend the 2D boundary acquisition method to 3D, as shown in Fig. 3. In 2D, each critical edge has two adjacent critical squares and two centers of gravity. Generalizing, each critical edge in a three-dimensional image has four adjacent boundary hexahedrons and four boundary centers of gravity around it. The four centers of gravity do not necessarily lie in one plane, so they do not form a planar quadrilateral, but they can form two triangles. Constructing each boundary hexahedron in this way, the surface of the three-dimensional object is finally covered by multiple triangular patches. Note that when constructing two triangles from four points, one edge must be shared by both triangles, and this shared edge should connect two diagonally opposite points; the case where it connects the other two points must be avoided. In Fig. 3, the common edge can be CE or FD, but CF and DE are not acceptable.

Fig. 3. Hexahedral structure triangle

The data is binarized: since the trachea has already been segmented, the tracheal portion is marked 1 and the non-tracheal portion is marked 0. The Mark array produced by segmentation is used directly as the binarized data.

For each hexahedron on the isosurface boundary [15, 16], each of the 8 vertices is marked 0 or 1, which gives three cases. In the first case all 8 points are marked 1, meaning the data is inside the object; in the second case all 8 points are marked 0, meaning the data is outside the object. Nothing is done in these two cases. If some of the 8 points are marked 0 and others are marked 1, the hexahedron is the boundary hexahedron we are concerned with; it must be processed and its center of gravity found. First find the (0,1) edges: startPoint is the point marked 1, endPoint is the point marked 0, EdgePoint represents the boundary point, and Num is the number of edge points of each boundary hexahedron.

Each hexahedron obtained in the previous step is then processed. A hexahedron has 8 points and 12 edges, and the pair of labels on each edge has 4 possible cases: (0,1), (1,0), (0,0), (1,1). Of these, the first two are the edges we care about and process, because an edge with one point marked 0 and the other marked 1 best represents the boundary between the inside and the outside of the model.

The center of gravity of each hexahedron is acquired from the boundary obtained in the previous step. For every 0–1 edge obtained in the third step, there are 4 adjacent hexahedrons, each with its own center of gravity, so each 0–1 edge has 4 nearby centers of gravity. These four centers of gravity form two triangular patches.
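The per-hexahedron center-of-gravity step described above can be sketched as follows. The Point type, the corner and edge numbering in kEdge, and the function signature are assumptions for illustration.

```cpp
#include <array>

struct Point { double x = 0, y = 0, z = 0; };

// The 12 edges of a hexahedron as index pairs into its 8 corners
// (the corner numbering is an assumption for illustration).
static const int kEdge[12][2] = {
    {0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},
    {0,4},{1,5},{2,6},{3,7}};

// Average the midpoints of all critical (0/1) edges of one hexahedron.
// Returns false for the all-0 / all-1 cases, in which nothing is done.
bool hexCentroid(const std::array<Point, 8>& corner,
                 const std::array<int, 8>& mark,
                 Point& centroid) {
    Point sum;
    int num = 0;  // number of critical-edge midpoints found
    for (const auto& e : kEdge) {
        if (mark[e[0]] != mark[e[1]]) {  // (0,1) or (1,0): a critical edge
            sum.x += (corner[e[0]].x + corner[e[1]].x) * 0.5;
            sum.y += (corner[e[0]].y + corner[e[1]].y) * 0.5;
            sum.z += (corner[e[0]].z + corner[e[1]].z) * 0.5;
            ++num;
        }
    }
    if (num == 0) return false;  // not a boundary hexahedron
    centroid = {sum.x / num, sum.y / num, sum.z / num};
    return true;
}
```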

Finally, the triangular patch data is stored in the STL file. The model construction algorithm flow is shown in Fig. 4.

Fig. 4. Model construction algorithm flow chart

Analysis of experimental results

Experimental environment

The hardware and software environment based on the MIC medical imaging platform system is as follows:

  • Hardware environment:
    1. Computer: Intel(R) Core(TM) i5-4210M CPU @ 2.60 GHz, Memory: 4 GB
    2. Mixed reality device: Microsoft HoloLens
  • Software environment:
    1. Operating system: Windows 7 64-bit
    2. Development platform: MIC Medical Imaging Platform
    3. Development language: C++
    4. Programming environment and compiler: Visual Studio 2010

Segmentation results and analysis of lung tissue

The DICOM image size of each layer of the test data is 512 × 512, with a total of 378 layers and a layer thickness of 1.0 mm. The patient's lung images are read, and a seed point for the trachea is taken on any layer in the middle. The trachea is first segmented by the region growing algorithm, and the holes are then removed by the filling algorithm. The more accurate the segmentation of the trachea and blood vessels, the more clearly the doctor understands the patient's lung structure, and the more accurately surgical treatment can be performed. Figure 5 shows the results for four layers of data.

Fig. 5. Trachea segmentation results: a original image, b region growing result, c result after filling

Algorithm evaluation: It can be seen from the region growing extraction results in Fig. 5 that there are a few holes in the trachea. After the filling algorithm is applied, the holes disappear. The specific data of the trachea segmentation results are shown in Table 1.

Table 1. Trachea segmentation result

Algorithm step                   Segmented data points   Average time (s)
Region growing                   164,958                 0.98
Filling (after region growing)   3,722                   0.37

The following analysis is based on the blood vessel segmentation results. In Fig. 6 we can see that the lung parenchyma and trachea have lower pixel values in the image, so the lung parenchyma and the trachea can be extracted together by region growing. The yellow part shown in Fig. 6 is the lung parenchyma and trachea extracted by the region growing algorithm. The white area in the middle of the two-dimensional image is the heart, the white parts on the left and right sides are the blood vessels inside the lungs, and the outer ring of the image is bone. The trachea and lung parenchyma are marked yellow, and the lung vessels obtained by filling the lungs are marked red. To facilitate observation and comparison, the yellow is hidden in graph (b) of Fig. 7, and only the red is displayed for comparison with the lung vessels in (a).

Fig. 6. Pulmonary parenchyma extraction: a original, b pulmonary parenchyma extraction

Fig. 7. Pulmonary parenchyma: a original, b two-dimensional vascular result, c three-dimensional result

From the results, we can clearly see that almost all the blood vessels inside the lungs were extracted through multiple fillings. The accuracy of the segmentation is high, so the true structure of the patient's lungs can be displayed to the doctor more accurately.

A remaining defect is that the large blood vessels connecting the edges of the lungs to the heart are not fully extracted, because they are too large and are not surrounded by lung parenchyma. However, this does not affect the doctor's observation of the patient model.

The results of the blood vessel segmentation algorithm are shown in Table 2.

Table 2. Vascular segmentation result

Algorithm step                             Algorithm        Segmented data points   Average time (s)
Lung parenchyma and trachea segmentation   Region growing   17,985,731              7.32
Pulmonary blood vessel extraction          First fill       974,380                 1.64
                                           Second fill      40,213                  0.35
                                           Third fill       1,452                   0.32
                                           Fourth fill      83                      0.31

Model construction algorithm results

Opening the STL file in Notepad shows the triangular facet data. As shown in Fig. 8, the STL file stores the data of each triangular patch in sequence: first the normal vector of the triangular patch, then the coordinates of its three vertices. In Fig. 9, the STL file is displayed in 3D software.
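For reference, below is a minimal sketch of writing such facet data in the standard ASCII STL layout (normal vector, then three vertices per facet). The Vec3/Triangle types, the writeStl name, and the solid name "trachea" are assumptions for illustration.

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 normal; Vec3 v[3]; };  // one facet: normal + 3 vertices

// Write triangular patches to an ASCII STL file: each facet records its
// normal vector followed by the coordinates of its three vertices.
void writeStl(const char* path, const std::vector<Triangle>& tris) {
    std::FILE* f = std::fopen(path, "w");
    if (!f) return;
    std::fprintf(f, "solid trachea\n");
    for (const Triangle& t : tris) {
        std::fprintf(f, "facet normal %g %g %g\n",
                     t.normal.x, t.normal.y, t.normal.z);
        std::fprintf(f, "  outer loop\n");
        for (const Vec3& p : t.v)
            std::fprintf(f, "    vertex %g %g %g\n", p.x, p.y, p.z);
        std::fprintf(f, "  endloop\nendfacet\n");
    }
    std::fprintf(f, "endsolid trachea\n");
    std::fclose(f);
}
```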

Fig. 8. STL file data of the trachea

Fig. 9. Model display of the STL files: a trachea model, b vascular model, c model combination

The model construction algorithm results are shown in Table 3.

Table 3. Model construction result

Algorithm step                Number of triangles   Average time (s)
Tracheal model construction   48,788                3.23
Vascular model construction   283,874               19.74

System implementation results

The constructed three-dimensional trachea and blood vessel models were imported into the HoloLens system to provide the doctor with observation and interaction functions.

Organization’s display and hide functions

In the initial state of the system, left pulmonary vessels, trachea, and right pulmonary vessels are all set to display state.

When the doctor clicks the hide function key for a tissue, the corresponding function is triggered to hide the model controlled by that key; the key's text changes to indicate display, and the hide key becomes a display key.

When the doctor clicks a display function key, the corresponding function is triggered to display the model controlled by that key; at the same time, the key's text changes to indicate hide, and the display key becomes a hide key. Figure 10 shows the result of hiding and showing the model.

Fig. 10. Hiding and showing the model: a shown, b hidden

System interaction function

The three result graphs in Fig. 11 show the three steps of the operation flow: Fig. 11a shows the moment the preparatory gesture is recognized; Fig. 11b shows the user clicking the model and moving it; and Fig. 11c shows the user returning to the preparatory gesture, which releases control of the model.

Fig. 11. Interactive results: a preparatory gesture recognition, b clicking gesture recognition, c finished operation

Conclusion

In this paper, a computer-assisted surgery system based on MR is designed and implemented for precise lung surgery. A region growing algorithm is designed to segment the lung trachea and lung vessels. Then, the surface mesh construction algorithm based on center-of-gravity boundary processing is used to build models of the segmented trachea and lung vessels. Finally, the computer-assisted system is implemented to display the three-dimensional pulmonary vessels and trachea on a mixed reality device. Surgeons can observe and interact precisely with the patient's real three-dimensional lung structure, which helps them perform lung surgery accurately.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61302012, and the Fundamental Research Funds for the Central Universities under Grants N150408001 and N161604006.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Hu S, Hoffman EA, Reinhardt JM. Accurate lung segmentation for accurate quantification of volumetric X-ray CT images. IEEE Trans Med Imaging. 2001;20(6):490–498. doi: 10.1109/42.929615.
2. Armato SG, Sensakovic WF. Automated lung segmentation for thoracic CT: impact on computer-aided diagnosis. Acad Radiol. 2004;11(9):1011–1021. doi: 10.1016/j.acra.2004.06.005.
3. Pan Z, Cheok AD, Yang H, Zhu J, Shi J. Virtual reality and mixed reality for virtual learning environments. In: Technologies for E-Learning and Digital Entertainment. Berlin, Heidelberg: Springer; 2006. p. 6.
4. Ricci A, Piunti M, Tummolini L, et al. The mirror world: preparing for mixed-reality living. IEEE Pervasive Comput. 2015;14(2):60–63. doi: 10.1109/MPRV.2015.44.
5. Wei G. Characteristics of virtual reality technology and application prospect. Inf Comput Theor Version. 2010;5:29.
6. Zhang L, Zhou R, Jiang H, Wang H, Zhang Y. Item group recommendation: a method based on game theory. In: Proceedings of the 26th International Conference on World Wide Web Companion; 2017, p. 1405–11.
7. Ohishi S, Takemoto H. Medical image processing apparatus. US Patent US9036777; 2015.
8. Rosset A, Spadola L, Ratib O. OsiriX: an open-source software for navigating in multidimensional DICOM images. J Digit Imaging. 2004;17(3):205–216. doi: 10.1007/s10278-004-1014-6.
9. Jiang HY, Si YP, Luo XG. Medical image segmentation based on improved Ostu algorithm and regional growth algorithm. J Northeast Univ. 2006;27(4):398–401.
10. Li H, Wang Y, Wang H, Zhou B. Multi-window based ensemble learning for classification of imbalanced streaming data. World Wide Web. 2017;20(6):1507–1525. doi: 10.1007/s11280-017-0449-x.
11. Peng M, Zhu J, Wang H, Li X, Zhang Y, Zhang X, Tian G. Mining event-oriented topics in microblog stream with unsupervised multi-view hierarchical embedding. ACM Trans Knowl Discov Data (TKDD). 2018;12(3):1–26.
12. Kabir E, Siuly S, Cao J, Wang H. A computer aided analysis scheme for detecting epileptic seizure from EEG data. Int J Comput Intell Syst. 2018;11:663–671. doi: 10.2991/ijcis.11.1.51.
13. Peng M, Gao W, Wang H, Zhang Y, Huang J, Xie Q, Hu G, Tian G. Parallelization of massive textstream compression based on compressed sensing. ACM Trans Inf Syst (TOIS). 2017;17:1–18.
14. Huang J, Peng M, Wang H, Cao J, Gao W, Zhang X. A probabilistic method for emerging topic tracking in microblog stream. World Wide Web. 2017;20(2):325–350. doi: 10.1007/s11280-016-0390-4.
15. Borsci S, Lawson G, Broome S. Empirical evidence, evaluation criteria and challenges for the effectiveness of virtual and mixed reality tools for training operators of car service maintenance. Comput Ind. 2015;67:17–26. doi: 10.1016/j.compind.2014.12.002.
16. Wang Y, Olano M. Isosurface smoothing using marching cubes and PN-triangles. In: Symposium on Interactive 3D Graphics and Games. ACM; 2015, p. 132.
