Abstract
The investigation of microcirculation is a critical task in biomedical and physiological research. To monitor a patient's condition and develop effective therapies for certain diseases, microcirculation information such as flow velocity and vessel density must be evaluated noninvasively. As one task in microcirculation investigation, automatic blood cell tracking provides an effective way to estimate blood flow velocity. Currently, the most common approach to blood cell tracking is based on spatiotemporal image analysis, which has many limitations: the diameter of the microvessels cannot be much larger than the blood cells or tracers, the cells or tracers must move with a fixed velocity, and high-quality images are required. In this paper, we propose an optical flow method for automatic cell tracking. Its key algorithm was originally designed to align an image to its neighbors in a large image collection covering a variety of scenes. Since this method cannot handle all cases of cell movement, another optical flow method, SIFT (Scale Invariant Feature Transform) flow, is also presented. The experimental results show that both methods track cells accurately. Optical flow is especially robust when the cell velocity is unstable, while SIFT flow works well when there are large cell displacements between adjacent frames. Our proposed methods outperform other methods for in vivo cell tracking; they can be used to estimate blood flow directly and help evaluate other microcirculation parameters.
Keywords: Optical flow, SIFT Flow, blood cell, in vivo cell tracking, microscopy imaging
1. Introduction
Microcirculation is pivotal for the maintenance of human tissues and organs, since the exchange of oxygen and nutrients between blood and tissue occurs at this level. Many studies have shown that diseases such as diabetes [1], Raynaud’s phenomenon [2], hypertension [3], and blackfoot disease [4] are related to changes in the microcirculation. To obtain microcirculation information such as vessel diameter, flow velocity, vessel density, permeability, and tracer appearance times, the most common noninvasive approach is to image the microvessels with a microscope, relying on cellular markers tagged with fluorescent protein. Videos or image series created by microscopy present more useful information than a single image [5]. In particular, we can easily capture dynamic imaging data of moving cells by microscopy, and analyzing these data yields many microcirculation parameters, such as flow velocity and permeability.
Currently, several methods are used for automatic cell tracking and blood flow analysis, such as the level set method [6, 7], the cross-correlation method [8, 9], and filter methods applied directly to spatiotemporal images [10, 11]. However, the level set and cross-correlation methods are difficult to apply when the blood cells cannot be distinguished from the microvessel contour, and the filter methods work only when cells have a fixed velocity. Furthermore, in vivo cell tracking poses some unavoidable challenges. First, vessels can be much larger than cells, and cells moving in such vessels may not be tracked clearly, since they can move either toward or away from the camera, causing some cells to disappear in some frames. Second, the imaging area is not stationary, owing to the heartbeat and movement of the subject, so the acquired image series contain spatial and temporal distortions. Third, the layout of some blood vessels is tangled: blood flows within a single frame may run in different directions. The common methods mentioned above do not work well in these cases, so more effective methods are needed.
In this paper, we obtain in vivo blood flow images of mice with a high-resolution microscope, using red blood cells (RBCs) as tracers, and propose two methods for automatically tracking and measuring the motion of blood cells in microvessels based on the optical flow algorithm [12, 13]. This algorithm is usually used to align an image to its neighbors in a large image collection covering a variety of scenes; here we use it to calculate the displacement of a cell between frames of a blood cell movement video. In our method, we first use mathematical morphology to obtain the location of a cell in the first frame. We can then calculate the displacement of the cell in each of the following frames, and thus its location in every frame, recording the trajectory of the cell’s movement. This information enables us to calculate flow-related parameters such as velocity and viscosity. Since this method cannot handle all cases of cell movement, another optical flow method, SIFT flow, is also presented. The experimental results show that both methods track cells accurately. Optical flow is especially robust when the cell velocity is unstable, while SIFT flow works well when there are large cell displacements between adjacent frames. Common cell tracking methods fail in these cases, so our proposed methods outperform them for in vivo cell tracking.
The remainder of this paper is organized as follows. Section 2 describes how the cell tracking data are obtained. Section 3 introduces the optical flow and SIFT flow algorithms and their application to blood cell tracking. The design of the experiments is presented in Section 4. Section 5 presents the experimental results and the corresponding discussion. Finally, Section 6 offers our conclusions.
2. Data Description
Intravital microscopy (IVM) is used to track individual nanoparticles in blood flow. Images are acquired using an upright Nikon laser scanning confocal microscope equipped with a resonant scanner for high-speed imaging [14]. Both noninvasive and acute tissue preparations can be visualized with high spatial and temporal resolution. The imaging areas are the ears of mice in which M21 melanoma tumors have been implanted. The mice received a one-time injection of fluorescently labeled autologous red blood cells (RBCs) 1–3 days prior to imaging, to allow visualization of blood flow dynamics [14]. One frame of a video is presented in Fig. 1. In the video, the green particles visualize the blood vessels and the blue particles are RBCs. To obtain flow information for the vessels near tumors, we analyze the motion of RBCs inside the blood vessels.
Fig. 1.
One of the frames of a video
3. Method
Two optical flow based methods, optical flow and SIFT flow, are introduced in this section. These methods are commonly used in vehicle tracking and robot navigation [15, 13]. Since RBCs moving in blood vessels have movement patterns similar to those of vehicles and robots, we can apply the algorithms to RBC tracking.
3.1. Optical Flow
The study of optical flow has a long history in visual perception and computer vision. Optical flow describes the local motion of each visible point in an image sequence by a local displacement vector field [16]. For a digital image sequence, it specifies how far each image pixel moves between adjacent images. In computer vision, optical flow can be computed efficiently by successive image matching. Many algorithms exist to solve or optimize optical flow [12, 13, 17, 18, 19, 20, 21]. All of them are based on the assumption that the grey value of a pixel is not changed by the displacement from time t to time t + 1, which is represented by
I(x + u, y + v, t + 1) = I(x, y, t),    (1)
where I(x, y, t) denotes a rectangular image sequence and w := (u, v, 1)^T is the sought displacement vector between the images at time t and time t + 1; u and v are the horizontal and vertical components of the flow field, respectively.
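The constancy assumption in Equ. 1 is usually linearized before solving; the following standard derivation (not specific to this paper) shows where the computable quantities come from:

```latex
% Brightness constancy between two frames:
%   I(x + u,\; y + v,\; t + 1) = I(x, y, t)
% First-order Taylor expansion of the left-hand side around (x, y, t):
I(x + u, y + v, t + 1) \approx I(x, y, t) + I_x u + I_y v + I_t
% Substituting into the constancy assumption yields the
% linearized optical flow constraint:
I_x u + I_y v + I_t = 0
```

This single equation has two unknowns per pixel (the aperture problem), which is why practical solvers add either local windows (Lucas-Kanade) or a global smoothness term (Horn-Schunck and the variational methods cited above).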
However, not all optical flow methods are suitable for the cell tracking problem discussed in this paper. In the traditional optical flow methods, i.e. Lucas-Kanade [17] and Horn-Schunck [22], the extracted vector field is either not dense or is lost at discontinuities. In recent variational optical flow methods such as [19] and [12], discontinuities in the vector field are well preserved by introducing sophisticated measures for edge regions. Unfortunately, the performance of matching Equ. 1 is limited in cell videos because of noise and the small size of the cells appearing in the video. In this paper, we recommend the algorithm of Liu [23] to obtain the vector fields u and v. Liu’s algorithm also relies on the brightness constancy assumption.
Unlike previous methods, which match image pixels, Liu’s method matches SIFT-transformed images. The advantages of using the SIFT transform in optical flow calculation are: 1) significant image features are well encoded in the SIFT feature vectors; 2) SIFT features are insensitive to large distortions caused by cell motion or noise; 3) long-term cell motion can still be recovered by SIFT matching; and 4) SIFT-based optical flow can be computed with a fast discrete optimization method.
3.2. SIFT Flow
SIFT flow exploits SIFT feature descriptors, which are reasonably invariant to changes in illumination, scaling, rotation, image noise, and small changes in viewpoint. The SIFT descriptor was first proposed by Lowe in 1999 [24], and then further developed and improved by Bay [25] and Dalal [26]. The SIFT descriptor, like its successors, is a vector encoding the local orientation information, such as edges and corners, of a local area. The orientation information is invariant to illumination and changes of scale, so SIFT is insensitive to light, shadow, and even most affine transforms encountered in natural images. SIFT flow matching can therefore cope with noise and changes of cell scale in an image sequence.
The SIFT flow method includes two major steps: 1) SIFT extraction and 2) belief propagation optimization. We briefly introduce them in the following sections.
3.2.1. The Computation of Scale Invariant Feature Transform
The goal of SIFT computation is to extract local orientation histograms. SIFT computation generally takes four steps: extrema detection, keypoint localization, edge elimination, and orientation assignment. The first three steps extract significant pixel candidates that are robust to scale changes. The last step extracts the local orientation information and stores the orientations in histogram vectors, as Fig. 2 shows [27].
Fig. 2.
SIFT feature histogram. Each pixel in a local area is assigned a specific orientation (left); the orientations are then accumulated into 4-bin directional histograms (right). The histogram vectors can be used directly for keypoint matching.
In SIFT flow computation, the major steps of the SIFT transform are simplified for computational efficiency. The orientation histograms are extracted directly from edge images obtained from consecutive difference-of-Gaussian (DoG) convolutions.
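The DoG step above can be sketched in a few lines; this is an illustrative re-implementation in Python rather than the code used in the paper, and the filter scales in `sigmas` are assumed, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigmas=(1.0, 1.6, 2.56)):
    """Return DoG layers: consecutive differences of Gaussian blurs.

    Each layer responds to edges at the scale band between two
    adjacent sigmas, which is the band-pass information the
    simplified SIFT transform builds its orientation histograms from.
    """
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```

A constant image has no edges at any scale, so every DoG layer is (numerically) zero; only intensity changes survive the subtraction.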
3.2.2. Formulation of SIFT Flow Computation
SIFT flow computation is formulated as the minimization over the SIFT-transformed image pair. For a given image pair, the objective function is
E(w) = Σ_p min(‖s1(p) − s2(p + w(p))‖₁, t) + Σ_p γ(|u(p)| + |v(p)|) + Σ_{(p,q)∈ε} [min(α|u(p) − u(q)|, d) + min(α|v(p) − v(q)|, d)],    (2)
where s1(p) is the SIFT descriptor at pixel p in the first frame and s2 is the SIFT descriptor at the corresponding pixel in the second frame. w(p) = (u(p), v(p)) is the flow vector at p, which represents the warping that aligns the two frames.
Unlike in general variational approaches such as [19] and [12], the objective in Equ. 2 cannot be solved directly via the Euler-Lagrange equation.
Instead, [13] applies a special model of belief propagation (BP) to solve Equ. 2. The BP model does not require the calculation of a function gradient, and the resulting w is updated iteratively by the message-passing algorithm of the BP model.
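Although the minimization itself needs BP, the energy in Equ. 2 is straightforward to evaluate for a candidate flow, which is useful for checking implementations. The sketch below is illustrative only: the parameter names alpha, d, and gamma follow Section 4.3, while the data-term truncation `t` and its default value are our assumption:

```python
import numpy as np

def sift_flow_energy(s1, s2, u, v, alpha=5.0, d=20.0, gamma=0.01, t=200.0):
    """Evaluate the SIFT flow objective for an integer flow field (u, v).

    s1, s2: (H, W, D) dense SIFT descriptor images.
    u, v:   (H, W) integer horizontal/vertical flow components.
    """
    H, W, _ = s1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Warped target coordinates, clamped to the image grid.
    yq = np.clip(ys + v, 0, H - 1)
    xq = np.clip(xs + u, 0, W - 1)
    # Truncated L1 data term: descriptor distance after warping.
    data = np.minimum(np.abs(s1 - s2[yq, xq]).sum(axis=2), t).sum()
    # Small-displacement prior on the flow magnitude.
    mag = gamma * (np.abs(u) + np.abs(v)).sum()
    # Truncated L1 smoothness over horizontal and vertical neighbor pairs.
    smooth = 0.0
    for f in (u, v):
        smooth += np.minimum(alpha * np.abs(np.diff(f, axis=0)), d).sum()
        smooth += np.minimum(alpha * np.abs(np.diff(f, axis=1)), d).sum()
    return data + mag + smooth
```

For identical descriptor images and zero flow, every term vanishes, which matches the intuition that a perfect alignment with no motion costs nothing.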
4. Experiments
The experiments are conducted in Matlab 7.1 on a desktop with an Intel Core2 Quad 3.0 GHz CPU and 8 GB of memory. The basic structure of our proposed method is presented in Fig. 3. For a specific cell, we first use mathematical morphology to obtain the cell’s location in the first frame. We then find the displacement of the cell between the first and second frames using the optical flow and SIFT flow algorithms separately. For a cell moving in a blood vessel, if we know its location in the first frame and its displacement between the first and second frames, it is easy to calculate its location in the second frame. This process is iterated until the location of the cell in the last frame is found.
Fig. 3.
The structure of the proposed method.
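The iteration described above can be sketched as a simple loop; this is a simplified illustration in Python, where `flow_between` is an assumed placeholder standing in for either flow solver, not an API from the paper's packages:

```python
import numpy as np

def track_cell(frames, start_rc, flow_between):
    """Follow one cell through a frame sequence.

    frames:       list of 2-D grayscale images.
    start_rc:     (row, col) location found in the first frame.
    flow_between: callable(frame_a, frame_b) -> (u, v) flow fields,
                  u horizontal (columns), v vertical (rows).
    Returns the list of (row, col) locations, one per frame.
    """
    trajectory = [start_rc]
    for a, b in zip(frames, frames[1:]):
        u, v = flow_between(a, b)
        r, c = trajectory[-1]
        # The displacement at the current location gives the next location.
        r2 = int(round(r + v[r, c]))
        c2 = int(round(c + u[r, c]))
        # Keep the estimate inside the image bounds.
        h, w = b.shape
        trajectory.append((min(max(r2, 0), h - 1), min(max(c2, 0), w - 1)))
    return trajectory
```

Each step only reads the flow at the cell's current location, so the cost per frame is dominated by the flow computation itself.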
4.1. Locating Cells
In our method, we first use mathematical morphology to locate cells in the first frame. In detail, for an input image I, we first find the regional maxima of I. The output BW, which has the same size as I, identifies the locations of the regional maxima in I: pixels set to 1 mark regional maxima, and all other pixels are set to 0. Then, we perform morphological operations on BW to shrink each object to a point.
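The morphology pipeline above (regional maxima, then shrinking each region to a point) can be sketched as follows. This is an illustrative Python re-implementation under the assumption of a simple background threshold, not the MATLAB code used in the paper:

```python
import numpy as np
from scipy import ndimage

def locate_cells(image, min_intensity=0.0):
    """Find regional maxima of `image` and shrink each to one point.

    BW marks pixels that equal the maximum of their 3x3 neighborhood
    (and exceed a background threshold); each maximal region is then
    reduced to a single (row, col) coordinate via its center of mass.
    """
    img = image.astype(float)
    bw = (img == ndimage.maximum_filter(img, size=3)) & (img > min_intensity)
    labels, n = ndimage.label(bw)
    centers = ndimage.center_of_mass(bw, labels, range(1, n + 1))
    return [(int(round(r)), int(round(c))) for r, c in centers]
```

The threshold is needed because a flat background region trivially equals its own neighborhood maximum; in practice it would be set just above the background fluorescence level.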
Fig. 4 shows the cell locations found with the above method. Many RBCs, visualized by blue fluorescence, are moving through a blood vessel; we mark their locations with red circles. We then randomly select one cell and apply the optical flow and SIFT flow algorithms to obtain its location in the next frame.
Fig. 4.
Locate cells using mathematical morphology method.
4.2. Optical Flow
This algorithm is used to calculate the displacement of a given cell between two frames. The MATLAB package for the algorithm is provided by [28]. Based on our previous experimental results, the following parameters yield the best tracking results: the regularization weight alpha is set to 0.04, the downsampling ratio to 0.55, the width of the coarsest level minWidth to 5, the number of outer fixed-point iterations nOuterFPIterations to 7, the number of inner fixed-point iterations nInnerFPIterations to 1, and the number of SOR iterations nSORIterations to 30 when we prefer accuracy over running time (nSORIterations can be set to 15 for a faster estimate). The horizontal and vertical components of the flow field are then computed with the optical flow algorithm, from which the locations of the cell throughout the image sets or videos are obtained.
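The paper uses Liu's coarse-to-fine MATLAB implementation; as a language-neutral illustration of how the flow components u and v arise from the brightness-constancy assumption, here is a minimal single-window Lucas-Kanade estimator in Python. It is a didactic sketch, not the algorithm of [28], and the parameters listed above do not apply to it:

```python
import numpy as np

def lk_displacement(frame1, frame2):
    """Estimate one global (u, v) displacement between two frames by
    least squares on the linearized brightness-constancy constraint
    Ix*u + Iy*v + It = 0 over all pixels."""
    Iy, Ix = np.gradient(frame1.astype(float))   # spatial gradients
    It = frame2.astype(float) - frame1.astype(float)  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Applied to a smooth bright blob translated by one pixel, the estimate lands close to the true shift; the linearization error grows with the displacement, which is exactly why coarse-to-fine pyramids are used in practice.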
4.3. SIFT Flow
The SIFT flow algorithm is provided by [29]. The following parameters give good estimates: the weight of the truncated L1-norm regularization on the flow, alpha, is set to 5; the truncation threshold d to 20; the weight of the flow magnitude, gamma, to 0.01; the number of iterations, nIterations, to 30; the number of hierarchies in the efficient BP implementation, nHierarchy, to 2; and the half-size of the search window, wsize, to 5.
4.4. Validation
To test whether the locations estimated by the above algorithms are good enough, we designed a validation experiment: find the true cell location in each frame and compare it with the estimated value. In our validation experiments, ten people (three biologists, three mathematicians, and four specialists in medical image processing) were asked to locate the cells frame by frame by eye. Thus, for a cell in a video, we have the observation matrix:
X = (x_{i,j}),  i = 1, …, 10,  j = 1, …, n,    (3)
Assume the ten observers’ observations of the same frame are independent. Then, for a fixed frame j, the x_{i,j} (i = 1, …, 10, indexing the observers) follow a normal distribution. We use a p-value threshold (p = 0.05) to reject invalid observations among the x_{i,j} and average the remaining valid values as the true location of the cell in frame j.
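The screening step above can be sketched as follows. The paper only states that a p-value threshold of 0.05 is used; the choice of a two-sided z-test against the per-frame sample mean and standard deviation is our assumption:

```python
import math
import numpy as np

def consensus_location(observations, alpha=0.05):
    """Fuse one frame's observations (one coordinate) from several observers.

    Observations whose two-sided normal-test p-value falls below
    `alpha` are rejected as invalid; the remaining ones are averaged.
    NOTE: the exact statistical test is an assumption of this sketch.
    """
    obs = np.asarray(observations, dtype=float)
    mu, sigma = obs.mean(), obs.std()
    if sigma == 0.0:                      # all observers agree exactly
        return mu, np.ones(obs.size, dtype=bool)
    z = (obs - mu) / sigma
    # Two-sided p-value under the standard normal: p = erfc(|z| / sqrt(2)).
    p = np.array([math.erfc(abs(zi) / math.sqrt(2.0)) for zi in z])
    keep = p >= alpha
    return obs[keep].mean(), keep
```

A single gross outlier among ten consistent readings inflates the sample standard deviation, yet still lands roughly three standard deviations from the mean, so it is rejected while the consistent readings are kept and averaged.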
5. Results and Discussion
This section describes three experiments on different videos, analyzed with optical flow and SIFT flow, and the corresponding validations.
5.1. Experiment 1: tracking a single cell against a complicated background
This experiment shows cell tracking against a very unclear background. In in vivo cell tracking, even though the cells are visualized by fluorescence, the dark blood vessels make tracking quite difficult. Moreover, some vessels have a large diameter, within which cells move freely, and not all cells can be tracked clearly, since they can move either toward or away from the camera. When a cell moves away from the camera, its trajectory is lost in some frames of the video. In such cases, common methods such as the Gabor filter method fail to recover the missing cell locations.
We therefore use our proposed methods to track cells in this case. Fig. 5 (a) and (b) show the optical flow and SIFT flow tracking results, respectively. Both track the cell’s movement very well, but it is hard to compare their performance from the two figures alone. We therefore conducted the validation experiment, recording the cell’s locations in each frame obtained by optical flow and SIFT flow simultaneously. The validation results are shown in Fig. 6, where the horizontal and vertical axes represent the cell’s location in a frame. The video has 90 frames in total. The green stars denote the cell’s locations computed by SIFT flow, the blue squares the same cell’s locations computed by optical flow, and the red dotted line the true locations obtained by the ten observers, as introduced in Section 4.4. For clarity, we zoom in on two parts of the original curve, marked by ellipses A and B. Ellipse A covers five frames, and a red dot is missing in one of them, meaning that none of the ten observers could tell the cell’s location in that frame. In fact, the cell disappears in that frame and reappears in the next. Nevertheless, both SIFT flow and optical flow predict the cell’s location in all five frames, since both a green star and a blue square appear in each. Ellipse B shows a similar case. This indicates that our proposed methods can compute a cell’s location accurately even in frames where the cell cannot be seen by eye.
Fig. 5.
Tracking singular cell in complicated background. (a) optical flow, (b) SIFT flow.
Fig. 6.
Validation of optical flow and SIFT flow.
Table 1 presents the accuracy of the two proposed methods compared with the true values obtained by the ten observers. In the same video, we randomly track ten cells with the two methods. The accuracy is computed from the Mean Squared Error (MSE) between the true and predicted values. Note that in frames where a cell is missing, we have no true location and cannot judge whether the predictions of the two methods are good, so we exclude the predicted values in these frames when calculating the accuracy.
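The masking step described above can be sketched as follows. The paper does not spell out how the MSE is mapped to an accuracy percentage, so this sketch only implements the part it does describe: frames without ground truth are excluded before computing the MSE (function and variable names are ours):

```python
import numpy as np

def tracking_mse(true_xy, pred_xy):
    """Mean squared location error between true and predicted tracks,
    skipping frames whose ground truth is missing (encoded as NaN)."""
    true_xy = np.asarray(true_xy, dtype=float)
    pred_xy = np.asarray(pred_xy, dtype=float)
    # A frame is valid only if its consensus location exists.
    valid = ~np.isnan(true_xy).any(axis=1)
    diff = true_xy[valid] - pred_xy[valid]
    return (diff ** 2).sum(axis=1).mean()
```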
Table 1.
Accuracy of two proposed methods
Cell ID | SIFT flow | Optical flow |
---|---|---|
1 | 98.13% | 98.13% |
2 | 97.30% | 97.30% |
3 | 94.67% | 94.67% |
4 | 97.16% | 97.16% |
5 | 96.88% | 96.88% |
6 | 91.58% | 94.71% |
7 | 97.77% | 97.77% |
8 | 92.16% | 92.16% |
9 | 94.49% | 94.48% |
10 | 92.96% | 92.96% |
Average | 95.31% | 95.62% |
In this table, optical flow and SIFT flow give identical predictions for eight of the ten cells. However, optical flow outperforms SIFT flow on cell No. 6, while SIFT flow performs very slightly better on cell No. 9. We therefore conclude that optical flow works better than SIFT flow when cells move in a complicated environment, because optical flow is quite robust to abrupt movement of the object as long as the difference between two adjacent frames is not large.
5.2. Experiment 2: two flows with different directions
This experiment covers the case of two blood flows with different directions in the same video. Common cell tracking methods do not work well here, since they assume that all particles in the flow have the same velocity and a similar movement tendency.
Fig. 7 shows an example in which the blood flows have different movement tendencies. Fig. 7 (a) shows the result of the SIFT flow algorithm, and Fig. 7 (b) the result of optical flow on the same video. The arrows in both figures indicate the directions of the blood flows. Both SIFT flow and optical flow track the movement of the cells accurately, even though the flow directions are quite different. This situation is common in blood flow velocity measurement, since the layout of microvessels, especially angiogenic vessels, is quite tangled, and the blood flow velocity can hardly be measured with normal blood velocimetry. The application of SIFT flow and optical flow makes accurate measurement possible.
Fig. 7.
Tracking cells with two different movement tendency. (a) SIFT flow, (b) optical flow.
To compare the two algorithms, we give the validation results of this experiment in Figs. 8 and 9, respectively. Fig. 8 shows the tracking result for cell A in Fig. 7. The green-star line shows the SIFT flow result, the blue-square line the optical flow result, and the red dotted line the true cell locations, obtained manually as introduced in Section 4.4. Both algorithms track the cell exactly; they perform excellently because the trajectory is quite simple.
Fig. 8.
Tracking cell A: results and validation.
Fig. 9.
Tracking cell B: results and validation.
Fig. 9 shows the tracking results for cell B in Fig. 7. We zoom in on part of the original curve (the ellipse) to see the detailed tracking results. The cell’s movement is abrupt in places, and the two algorithms perform differently when predicting the cell’s locations in these frames. SIFT flow tracks better than optical flow, especially where the cell’s route bends sharply, because SIFT flow is much more robust to large changes of scene between two adjacent frames. Table 2 presents the accuracy of the two proposed methods compared with the true values obtained by the ten observers; the accuracy is calculated exactly as in the previous experiment. Two points should be noted in this experiment. First, both algorithms perform well when the cell’s trajectory is quite simple, while only SIFT flow performs well where the route bends sharply. Second, both algorithms track cells with high accuracy when a series of cells move in different directions, whereas other methods, such as filter-based and level set methods, cannot even work in this case.
Table 2.
Accuracy of two proposed methods
Cell ID | SIFT flow | Optical flow |
---|---|---|
1 | 86.58% | 86.58% |
2 | 87.30% | 87.71% |
3 | 91.68% | 91.68% |
4 | 92.56% | 92.56% |
5 | 89.22% | 87.54% |
6 | 93.47% | 93.47% |
7 | 91.09% | 91.09% |
8 | 89.65% | 89.65% |
9 | 91.39% | 91.39% |
10 | 90.46% | 88.32% |
Average | 90.34% | 89.90% |
5.3. Experiment 3: large deviation between two adjacent frames
This experiment addresses the case in which the deviation between two adjacent frames is very large. This case is common because our data are in vivo microscopy videos: the pulsating heart of the mouse causes frequent wobble, which strongly affects tracking.
Fig. 10 shows the experimental results of both algorithms. In this video, the location of the cell in some frames is distorted at the places marked by the two ellipses, because a heartbeat occurs as the cell passes there, as shown in the figure. Normally, other methods would fail to track the cell in the subsequent frames, since the cell’s location in the distorted frame is wrong; however, both of our proposed methods track the cell very well. For a clearer comparison, we conducted validation experiments, shown in Fig. 11: the green-star line shows the SIFT flow result, the blue-square line the optical flow result, and the red dotted line the true cell locations, obtained manually as in the first experiment. To assess the two methods accurately, we randomly select ten cells to track; the accuracies are listed in Table 3. SIFT flow outperforms optical flow in this case, because SIFT flow is much more robust to large changes of scene between two adjacent frames.
Fig. 10.
Tracking a cell under large deviation between adjacent frames. (a) SIFT flow, (b) optical flow.
Fig. 11.
Tracking results and validation.
Table 3.
Accuracy of two proposed methods
Cell ID | SIFT flow | Optical flow |
---|---|---|
1 | 92.13% | 92.13% |
2 | 94.82% | 94.82% |
3 | 91.79% | 91.48% |
4 | 91.45% | 89.21% |
5 | 90.89% | 90.89% |
6 | 94.05% | 90.89% |
7 | 91.67% | 91.67% |
8 | 90.46% | 90.59% |
9 | 89.78% | 89.78% |
10 | 93.22% | 93.22% |
Average | 92.03% | 91.47% |
6. Conclusion
In this paper, two flow calculation methods, SIFT flow and optical flow, are used to track the movement of cells. Three different cases of blood flow were examined, and the corresponding experiments were conducted with the two methods. The results indicate that, although each method has its own merits and drawbacks, both can handle quite complicated cases, such as a complicated background, a tangled blood vessel layout, unstable blood flow velocity, and deviation between frames caused by the heartbeat of the mouse. The proposed methods are not limited to tracking a single cell; they can also be used for multi-cell tracking. Their disadvantage is that the displacement calculation is time-consuming. Improving the prediction accuracy and the efficiency of the prediction is our future work.
Acknowledgment
This work was supported by NIH U01HL111560-01 (Zhou), NIH 1U01CA166886-01 (Zhou), and NIH 1R01DE022676-01 (Zhou).
References
- 1. Chang C, Tsai R, Wu W, Kuo S, Yu H. Use of dynamic capillaroscopy for studying cutaneous microcirculation in patients with diabetes mellitus. Microvascular Research. 1997;53:121–127. doi: 10.1006/mvre.1996.2003.
- 2. Cutolo M, Grassi W, Matucci Cerinic M. Raynaud’s phenomenon and the role of capillaroscopy. Arthritis & Rheumatism. 2003;48:3023–3030. doi: 10.1002/art.11310.
- 3. Levy B, Ambrosio G, Pries A, Struijker-Boudier H. Microcirculation in hypertension. Circulation. 2001;104:735–740. doi: 10.1161/hc3101.091158.
- 4. Tseng C, Chong C, Chen C, Lin B, Tai T. Abnormal peripheral microcirculation in seemingly normal subjects living in blackfoot-disease-hyperendemic villages in Taiwan. Journal of Vascular Research. 1995;15:21–27. doi: 10.1159/000178945.
- 5. Corpetti T, Heitz D, Arroyo G, Memin E, Santa-Cruz A. Fluid experimental flow estimation based on an optical-flow scheme. Experiments in Fluids. 2006;40:80–97.
- 6. Padfield D, Rittscher J, Thomas N, Roysam B, et al. Spatio-temporal cell cycle phase analysis using level sets and fast marching methods. Medical Image Analysis. 2009;13:143–155. doi: 10.1016/j.media.2008.06.018.
- 7. Ray N, Acton S, Ley K. Tracking leukocytes in vivo with shape and size constrained active contours. IEEE Transactions on Medical Imaging. 2002;21:1222–1235. doi: 10.1109/TMI.2002.806291.
- 8. Watanabe M, Matsubara M, Sanada T, Kuroda H, Iribe M, Furue M. High speed digital video capillaroscopy: nailfold capillary shape analysis and red blood cell velocity measurement. Journal of Biomechanical Science and Engineering. 2007;2:81–92.
- 9. Mawson D, Shore A. Comparison of CapiFlow and frame by frame analysis for the assessment of capillary red blood cell velocity. Journal of Medical Engineering & Technology. 1998;22:53–63. doi: 10.3109/03091909809010000.
- 10. Yamamoto S, Nakajima Y, Tamura S, Sato Y, Harino S. Extraction of fluorescent dot traces from a scanning laser ophthalmoscope image sequence by spatio-temporal image analysis: Gabor filter and Radon transform filtering. IEEE Transactions on Biomedical Engineering. 1999;46:1357–1363. doi: 10.1109/10.797996.
- 11. Chen Y, Zhao Z, Liu L, Li H. Automatic tracking and measurement of the motion of blood cells in microvessels based on analysis of multiple spatiotemporal images. Measurement Science and Technology. 2011;22:045803.
- 12. Brox T, Bruhn A, Papenberg N, Weickert J. High accuracy optical flow estimation based on a theory for warping. Computer Vision–ECCV 2004. 2004:25–36.
- 13. Liu C, Yuen J, Torralba A. SIFT flow: dense correspondence across scenes and its applications. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2011;33:978–994. doi: 10.1109/TPAMI.2010.147.
- 14. van de Ven AL, Wu M, Lowengrub J, McDougall SR, Chaplain MA, Cristini V, Ferrari M, Frieboes HB. Integrated intravital microscopy and mathematical modeling to optimize nanotherapeutics delivery to tumors. 2012. doi: 10.1063/1.3699060.
- 15. Aires KR, Santana AM, Medeiros AA. Optical flow using color information: preliminary results. Proceedings of the 2008 ACM Symposium on Applied Computing. pp. 1607–1611.
- 16. Gibson JJ. The Perception of the Visual World. 1950.
- 17. Lucas BD, Kanade T. An iterative image registration technique with an application to stereo vision. Proceedings of the 7th International Joint Conference on Artificial Intelligence.
- 18. Heeger DJ. Optical flow using spatiotemporal filters. International Journal of Computer Vision. 1988;1:279–302.
- 19. Aubert G, Deriche R, Kornprobst P. Computing optical flow via variational techniques. SIAM Journal on Applied Mathematics. 1999;60:156–182.
- 20. Barron J, Thacker N. Tutorial: computing 2d and 3d optical flow. Imaging Science and Biomedical Engineering Division, Medical School, University of Manchester. 2005.
- 21. Bruhn A, Weickert J, Schnörr C. Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods. International Journal of Computer Vision. 2005;61:211–231.
- 22. Horn BK, Schunck BG. Determining optical flow. Artificial Intelligence. 1981;17:185–203.
- 23. Liu C. Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. thesis, Massachusetts Institute of Technology, 2009.
- 24. Lowe D. Object recognition from local scale-invariant features. Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, volume 2, pp. 1150–1157.
- 25. Bay H, Tuytelaars T, Van Gool L. SURF: speeded up robust features. Computer Vision–ECCV 2006. 2006:404–417.
- 26. Dalal N, Triggs B. Histograms of oriented gradients for human detection. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2005, volume 1, pp. 886–893.
- 27. Zhu C. Video object tracking using SIFT and mean shift. 2011.
- 28. Optical flow MATLAB package, http://people.csail.mit.edu/celiu/OpticalFlow/.
- 29. SIFT flow MATLAB package, http://people.csail.mit.edu/celiu/SIFTflow/.