Author manuscript; available in PMC 2017 Nov 21. Published in final edited form as: Crit Care Med. 2017 Apr;45(4):630–636. doi: 10.1097/CCM.0000000000002265

Measuring Patient Mobility in the ICU Using a Novel Noninvasive Sensor

Andy J Ma 1, Nishi Rawat 2,3, Austin Reiter 1, Christine Shrock 4, Andong Zhan 1, Alex Stone 4, Anahita Rabiee 5,6, Stephanie Griffin 7, Dale M Needham 1,3,5,6,8, Suchi Saria 1,3,9
PMCID: PMC5697716  NIHMSID: NIHMS917770  PMID: 28291092

Abstract

Objectives

To develop and validate a noninvasive mobility sensor to automatically and continuously detect and measure patient mobility in the ICU.

Design

Prospective, observational study.

Setting

Surgical ICU at an academic hospital.

Patients

Three hundred sixty-two hours of sensor color and depth image data from eight patients were recorded and curated into 109 segments, each containing 1,000 images.

Interventions

None.

Measurements and Main Results

Three Microsoft Kinect sensors (Microsoft, Beijing, China) were deployed in one ICU room to collect continuous patient mobility data. We developed software that automatically analyzes the sensor data to measure mobility and assign the highest level within a time period. To characterize the highest mobility level, a validated 11-point mobility scale was collapsed into four categories: nothing in bed, in-bed activity, out-of-bed activity, and walking. Of the 109 sensor segments, the noninvasive mobility sensor was developed using 26 segments from three ICU patients and validated on the remaining 83 segments from five different patients. Three physicians annotated each segment for the highest mobility level. The weighted Kappa (κ) statistic for agreement between the automated noninvasive mobility sensor output and manual physician annotation was 0.86 (95% CI, 0.72–1.00). Disagreement occurred primarily between the "nothing in bed" and "in-bed activity" categories because the sensor assessed movement continuously and was therefore more sensitive to motion than physician annotation using a discrete manual scale.

Conclusions

A noninvasive mobility sensor is a novel and feasible method for automating the evaluation of ICU patient mobility.

Keywords: artificial intelligence, early ambulation, intensive care unit, machine learning, rehabilitation


Early mobilization and rehabilitation of critically ill patients help reduce physical impairments and decrease mechanical ventilation duration and length of stay (1–6). Accurate measurement of patient mobility, as part of routine care, assists in evaluating early mobilization and rehabilitation and helps quantify patient exposure to the harmful effects of bed rest (7, 8).

In research, patient mobility measurement may be performed via direct observation by a trained and dedicated researcher. Direct observation techniques, such as behavioral mapping, provide comprehensive descriptive datasets and are more accurate than retrospective report, but are labor intensive, limiting the amount and duration of data collection (9). When evaluated as part of routine clinical care, mobility status is typically estimated using a mobility scale and recorded once or twice daily (9–11). However, such discrete subjective recordings of a patient's maximal level of mobility over a 12- or 24-hour period are subject to recall bias and are not truly representative of a patient's overall mobility (e.g., a patient may achieve a maximal mobility level, such as standing, for only a few minutes in a day). Thus, accurate manual measurement and recording of mobility level are not feasible for whole-day observation.

Currently, few techniques exist to automatically and accurately monitor critically ill patients' mobility. Accelerometry is one method that has been validated in ambulatory populations (12, 13), but it has had limited evaluation in critically ill populations (9). Accelerometry may correlate well with direct observation for reporting the frequency and duration of activity, but it cannot accurately differentiate between levels of activity intensity or whether movements are voluntary versus involuntary (e.g., passive range of motion exercises) (12). An accelerometer also must be worn and cannot record activity in the scene at large. Recently, noninvasive and low-cost video systems have shown promise for use in clinical settings (14–16) and home assistance (17). However, to our knowledge, no such system has yet been evaluated for measuring patient mobility in the ICU setting.

An automated approach to measuring patient mobility and care processes is timely and feasible due to: 1) the advent of inexpensive sensing hardware and low-cost data storage and 2) the maturation of machine learning and computer vision algorithms for analysis. Hence, our objective was to develop and validate a noninvasive mobility sensor (NIMS) to automatically and continuously measure patient mobility.

METHODS

This was a prospective observational study carried out in a surgical ICU at Johns Hopkins Hospital. This study was approved by the Johns Hopkins University Institutional Review Board (IRB). ICU staff consented to participate in the study, and written informed consent was obtained from patients and/or their surrogates upon admission to the ICU.

Data Collection From Sensors

Data were collected from May 2014 to August 2014. Three Kinect sensors were mounted on the walls of a private patient room to permit views of the entire room without obstructing clinical activities (Fig. 1). The sensors were activated and continuously captured color and depth image data from the time of patient consent until ICU discharge. Example color and depth images obtained from the sensor are shown in Figure 1. Each sensor was connected to a dedicated encrypted computer containing a storage drive. The data were deidentified at the local storage drive, and then transferred, using a secure encrypted protocol, to the server for a second level of obfuscation, storage, and analysis. Ultimately, data collected from one sensor were used for analysis.

Figure 1. ICU noninvasive mobility sensor system. This diagram depicts the sensor system in an ICU room and example color (converted to grayscale for demonstration) and depth images captured by the sensors. The grayscale image on the left provides texture information for human/object detection. Faces are obscured, and the image is blurred for identity protection. The depth image on the right shows the distance from the camera to the human/object, with darker gray pixels indicating areas closer to the camera, lighter gray pixels indicating areas farther away, and black pixels indicating regions where the depth camera could not capture distance values. The depth image provides complementary information for better human/object detection.

Mobility Scale

NIMS automatically analyzes the color and depth image data to measure patient mobility and activity duration. To compare NIMS measurements with provider assessments in this study, the NIMS assigned the highest level of mobility within each segment of sensor data. The continuous stream of sensor data was divided into discrete segments, each consisting of 1,000 images and lasting approximately 33 seconds.
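
To make the segmentation step concrete, the sketch below groups a continuous frame stream into 1,000-image segments (a minimal sketch; the ~30 images/s rate is inferred from 1,000 images lasting about 33 seconds, and the generator-based reader is our own illustration, not the study software):

```python
from typing import Iterable, Iterator, List

SEGMENT_SIZE = 1_000  # images per segment (~33 s at an assumed ~30 images/s)

def segment_stream(frames: Iterable, segment_size: int = SEGMENT_SIZE) -> Iterator[List]:
    """Group a continuous stream of (color, depth) frames into fixed-size segments."""
    buffer: List = []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == segment_size:
            yield buffer
            buffer = []
    if buffer:  # trailing partial segment at the end of the recording
        yield buffer
```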

To characterize the highest level of mobility numerically, a validated 11-point mobility scale (10) was collapsed into four mutually exclusive mobility categories for the purpose of this study (Table 1). Walking categories were collapsed because the sensor only captured patients as they walked inside the room or crossed the threshold separating the room from the hallway. Though theoretically possible, we could not record a patient walking outside of the room because we could not install a sensor in the hallway due to IRB limitations (it was not feasible to consent every person who walks in the hallway). The remaining categories were collapsed because our dataset, though it included a significant number of hours of sensed data, did not include a sufficient number of mobility events specific to each mobility category in the 11-point scale.

TABLE 1.

Sensor Scale Crosswalk With ICU Mobility Scale (10)

Sensor Scale            | ICU Mobility Scale (10)                                    | Sensor Label
A. Nothing in bed       | 0. Nothing, lying in bed (i)                               | i
B. In-bed activity      | 1. Sitting in bed (ii), exercises in bed (iii)             | ii, iii
C. Out-of-bed activity  | 2. Passively moved to chair (no standing)                  | iii→v/vii→iv
                        | 3. Sitting over edge of bed (iv)                           | iv
                        | 4. Standing (v)                                            | v
                        | 5. Transferring bed to chair (with standing)               | iii→v/vii→iv
                        | 6. Marching in place (at bedside) for short duration (vi)  | vi
D. Walking (vii)        | 7. Walking with assistance of two or more people           | vii
                        | 8. Walking with assistance of one person                   |
                        | 9. Walking independently with a gait aid                   |
                        | 10. Walking independently without a gait aid               |

This table compares our sensor scale, containing the four discrete levels of mobility that the noninvasive mobility sensor (NIMS) is trained to categorize from image segments of a patient in an ICU room, to the standardized ICU Mobility Scale, used by clinicians in practice today. The right column shows a descriptive mapping between the sensor scale and the ICU Mobility Scale using the NIMS “Patient Motion Status” (Table A.1, Supplemental Digital Content 1, http://links.lww.com/CCM/C382).
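
For concreteness, the collapse of the 11-point ICU Mobility Scale into the four sensor categories of Table 1 can be written as a simple lookup (a hypothetical sketch; the dictionary and function names are ours, not part of the NIMS code):

```python
# Map ICU Mobility Scale levels (0-10) to the collapsed sensor scale (Table 1).
ICU_MOBILITY_TO_SENSOR = {
    0: "A. Nothing in bed",
    1: "B. In-bed activity",
    2: "C. Out-of-bed activity",
    3: "C. Out-of-bed activity",
    4: "C. Out-of-bed activity",
    5: "C. Out-of-bed activity",
    6: "C. Out-of-bed activity",
    7: "D. Walking",
    8: "D. Walking",
    9: "D. Walking",
    10: "D. Walking",
}

def collapse_mobility_level(icu_mobility_level: int) -> str:
    """Return the collapsed sensor-scale category for an ICU Mobility Scale level."""
    return ICU_MOBILITY_TO_SENSOR[icu_mobility_level]
```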

Patient Data

Three hundred sixty-two hours of sensor color and depth image data from eight patients were recorded and curated into 109 segments, each containing 1,000 images. These segments were specifically sampled to ensure representation of each of the mobility levels. Of the 109 segments, the NIMS was developed using 26 segments from three ICU patients (development data) and validated on the remaining 83 segments from five different ICU patients (validation data).

Development of NIMS for Automating Mobility Measurement

The algorithmic procedures performed by the NIMS for mobility measurement are shown in Figure 2 and described below. The NIMS algorithm employs the following five steps: 1) analyze individual images to locate the regions containing every person in the scene (person localization); 2) for each person region, assign an identity to distinguish “patient” versus “not patient” (patient identification); 3) determine the pose of the patient, with the help of contextual information (patient pose classification and context detection); 4) measure the degree of patient motion (motion analysis); and 5) infer the highest mobility level using the combination of pose and motion characteristics (mobility classification).
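
Read as a pipeline, the five stages chain together roughly as sketched below (an illustrative skeleton only; the stage functions are placeholders passed in as arguments, and none of the names correspond to the study's actual implementation):

```python
from typing import Callable, Sequence

def classify_segment_mobility(
    segment: Sequence,
    localize_persons: Callable,
    identify_patient: Callable,
    classify_pose_and_context: Callable,
    analyze_motion: Callable,
    classify_mobility: Callable,
) -> str:
    """Chain the five NIMS stages over one segment of images (sketch)."""
    person_boxes = localize_persons(segment)                            # 1) person localization
    patient_boxes = identify_patient(segment, person_boxes)             # 2) patient identification
    pose, context = classify_pose_and_context(segment, patient_boxes)   # 3) pose + context detection
    motion = analyze_motion(segment, patient_boxes)                     # 4) motion analysis
    return classify_mobility(pose, context, motion)                     # 5) mobility classification
```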

Figure 2. Algorithmic stages for automated mobility measurement. The images on the left include overlaid bounding boxes to indicate the positions of people in the patient room as detected by the sensor. The flow chart on the right shows the stages of the noninvasive mobility sensor algorithm.

The NIMS was developed using “bounding boxes” of people and objects in the development data (Fig. 2). A bounding box is defined as a region in an image containing a person or object. A researcher annotated who the people were in each image (patient vs not patient) and where objects were located (bed or chair) in the development data. Using these annotations, the NIMS was trained to automate each of the five steps described below.

Person Localization

Given a segment of images, each image was analyzed independently and in order. For each image, NIMS identified all regions containing persons in three steps (18). First, a collection of person-detection algorithms was used to identify candidate person locations in each image. Second, these outputs were combined to obtain high-likelihood locations. Finally, false detections were removed by imposing consistency checks on locations found in consecutive images. The result of this step was a set of bounding boxes around the persons in each image (Fig. 2).
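
The temporal consistency check can be illustrated with a toy example (a simplified sketch; the real system uses the detector fusion method of reference 18, whereas here a detection is merely kept when a sufficiently overlapping box appears in an adjacent frame):

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def temporally_consistent(per_frame_boxes: List[List[Box]], min_iou: float = 0.3) -> List[List[Box]]:
    """Keep a detection only if a similar box exists in an adjacent frame."""
    kept: List[List[Box]] = []
    for t, boxes in enumerate(per_frame_boxes):
        neighbors: List[Box] = []
        if t > 0:
            neighbors.extend(per_frame_boxes[t - 1])
        if t + 1 < len(per_frame_boxes):
            neighbors.extend(per_frame_boxes[t + 1])
        kept.append([b for b in boxes if any(iou(b, n) >= min_iou for n in neighbors)])
    return kept
```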

Patient Identification

Next, for each person identified in an image, NIMS determined whether they were the patient or not the patient (e.g., a caregiver or family member). This was done using a Convolutional Neural Network (CNN) (19) algorithm. A CNN is a machine learning algorithm that can be trained to classify inputs into a specific class of outputs (e.g., image regions into person vs not). Given a bounded region containing a person, the trained CNN automatically determines whether or not that person is the patient (Fig. 2). The CNN achieves this by learning characteristics of people's appearance based on color and geometry.
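
The published system used a CNN built with the Caffe framework (19); the minimal PyTorch-style classifier below is only a stand-in to illustrate the idea of labeling a cropped person region as patient versus not patient (the architecture, layer sizes, and input resolution are our assumptions):

```python
import torch
import torch.nn as nn

class PatientClassifier(nn.Module):
    """Small CNN that labels a cropped person region as patient vs. not patient (toy example)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # logits: [not patient, patient]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) color crop of a detected person box
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# usage sketch: class probabilities for a batch of 64x64 person crops
model = PatientClassifier()
crops = torch.rand(4, 3, 64, 64)
probs = torch.softmax(model(crops), dim=1)
```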

Patient Pose Classification and Context Detection

Once the location and identity of the people in each image were established, their pose was characterized for mobility labeling. We classified pose into four categories: 1) lying in bed, 2) sitting in bed, 3) sitting in chair, and 4) standing. We trained a pose detector using the CNN algorithm to automatically learn person pose. Using annotations from the development data, the CNN was trained to determine whether a bounded region of an image contained a person who was "lying down," "sitting," or "standing." In addition to pose, we used an object detection algorithm (20) to automatically locate the bounded regions of the images corresponding to "beds" and "chairs" (also called "object detections"). These were then combined to obtain the patient's overall pose (Fig. 2).
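
A toy illustration of combining the coarse pose label with bed/chair detections is sketched below (the overlap rule, threshold, and function names are our assumptions, not the study's rule):

```python
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def _overlap_fraction(a: Box, b: Box) -> float:
    """Fraction of box a's area that overlaps box b."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    area_a = a[2] * a[3]
    return (iw * ih) / area_a if area_a > 0 else 0.0

def overall_pose(pose: str, patient: Box,
                 bed: Optional[Box] = None, chair: Optional[Box] = None,
                 min_overlap: float = 0.3) -> str:
    """Combine a coarse pose label ('lying', 'sitting', 'standing') with
    detected bed/chair regions to obtain the overall patient pose."""
    in_bed = bed is not None and _overlap_fraction(patient, bed) >= min_overlap
    in_chair = chair is not None and _overlap_fraction(patient, chair) >= min_overlap
    if pose == "lying" and in_bed:
        return "lying in bed"
    if pose == "sitting" and in_bed:
        return "sitting in bed"
    if pose == "sitting" and in_chair:
        return "sitting in chair"
    if pose == "standing":
        return "standing"
    return "undetermined"
```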

Motion Analysis

After classifying the pose and context of the person identified as the patient, information about their movement was extracted by analyzing consecutive images to measure motion. Specifically, for a bounding region containing a person in a given image, we analyzed the subsequent images in the segment within the same region and measured the mean and variance of the change in image intensity per pixel. In addition, we computed the speed of movement as the total distance that the person moved (measured by the center of the bounding regions) divided by the duration over which the movement was made (Fig. 2).
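
The motion statistics can be sketched with NumPy as follows (a minimal sketch; the fixed-region assumption, grayscale input, and ~30 frames/s rate are our simplifications):

```python
import numpy as np

def motion_features(frames: np.ndarray, box) -> tuple:
    """Mean and variance of per-pixel intensity change within one person region.

    frames: array of grayscale images with shape (num_frames, height, width)
    box:    (x, y, w, h) region, held fixed across the segment for simplicity
    """
    x, y, w, h = box
    region = frames[:, y:y + h, x:x + w].astype(float)
    diffs = np.abs(np.diff(region, axis=0))  # per-pixel intensity change between consecutive frames
    return diffs.mean(), diffs.var()

def movement_speed(centers: np.ndarray, frame_rate: float = 30.0) -> float:
    """Speed = total distance traveled by the box center / elapsed time.

    centers: array of (x, y) box centers with shape (num_frames, 2)
    """
    step_distances = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    elapsed_seconds = (len(centers) - 1) / frame_rate
    return step_distances.sum() / elapsed_seconds if elapsed_seconds > 0 else 0.0
```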

Mobility Classification

In this final step, we aggregated the information related to pose, context, and motion computed in the steps above into a series of numbers (often termed a "feature vector") to determine the final mobility level according to the scale in Table 1. Our feature vector contained the following values: 1) was a patient detected in the image? 2) what was the patient's pose? 3) was a chair found? 4) was the patient in a bed? 5) was the patient in a chair? 6) what was the average patient motion value? and 7) how many caregivers were present? These feature vectors were used to train a support vector machine (21) classifier to automatically map each feature vector to the corresponding mobility level in Table 1.
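
A toy version of this final classification step, using scikit-learn's support vector machine, might look like the sketch below (the feature values and labels are invented for illustration; the actual classifier was trained on annotated development segments):

```python
import numpy as np
from sklearn.svm import SVC

# Each row follows the feature described above:
# [patient detected, pose code, chair found, patient in bed, patient in chair,
#  average motion value, number of caregivers]
X_train = np.array([
    [1, 0, 0, 1, 0, 0.01, 1],   # lying still in bed        -> A
    [1, 1, 0, 1, 0, 0.40, 1],   # sitting/moving in bed     -> B
    [1, 1, 1, 0, 1, 0.30, 2],   # sitting in a chair        -> C
    [1, 2, 0, 0, 0, 1.50, 2],   # standing/walking          -> D
])
y_train = np.array(["A", "B", "C", "D"])

clf = SVC(kernel="linear").fit(X_train, y_train)
predicted_level = clf.predict([[1, 0, 0, 1, 0, 0.02, 1]])  # -> likely "A"
```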

NIMS Validation

The validation data consisted of 83 segments from five patients. We determined this sample size to be sufficient, with a 5.22% margin of error calculated based on the Kappa statistic. For validation, two junior physicians and one senior physician, blinded to the NIMS output, independently reviewed the same segments and reported the highest level of patient mobility visualized during each segment according to the sensor scale (Table 1). The 27% of segments with disagreement were rereviewed, and the majority opinion was taken as the gold standard annotation.
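
The majority-vote adjudication can be sketched as follows (an illustrative helper; in the study, segments with disagreement were rereviewed rather than resolved automatically):

```python
from collections import Counter
from typing import List, Optional

def majority_label(annotations: List[str]) -> Optional[str]:
    """Return the label chosen by most annotators, or None on a tie
    (tied or discordant segments were rereviewed in the study)."""
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

# e.g., three physicians labeling one segment on the sensor scale
majority_label(["B", "B", "A"])  # -> "B"
```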

Statistical Analysis

The performance of NIMS was assessed using a weighted Kappa statistic that measured agreement between the NIMS mobility level output and the gold standard annotation. A standard linear weighting scheme was applied that penalized errors according to the number of levels of disagreement (e.g., predicting "A" when expecting "B" yielded a 33% weight on the error, predicting "A" when expecting "C" yielded a 67% weight, and predicting "A" when expecting "D" yielded a 100% weight) (22, 23). The percentage of segments on which NIMS agreed with the gold standard annotation was calculated. The weighted percent observed agreement was computed as one minus the linearly weighted disagreement across levels (24). A contingency table (25) was created to report the inter-rater agreement for each mobility level.
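
A linearly weighted kappa and the weighted percent observed agreement can be computed as in the sketch below (the example labels are invented; only the weighting scheme mirrors the one described above):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Categories A-D encoded as 0-3 so the linear weighting reflects the distance
# between mobility levels (the values below are illustrative, not the study data).
gold = np.array([0, 1, 1, 2, 3, 0, 2, 1])
nims = np.array([0, 1, 0, 2, 3, 0, 2, 1])

kappa = cohen_kappa_score(gold, nims, weights="linear")

# Weighted percent observed agreement: 1 minus the mean linearly weighted disagreement,
# where one-level errors count 1/3, two-level errors 2/3, and three-level errors 1.
n_levels = 4
weighted_disagreement = np.abs(gold - nims) / (n_levels - 1)
weighted_agreement = 1.0 - weighted_disagreement.mean()
```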

RESULTS

Patient demographics are detailed in Table 2. The number of segments annotated at each mobility level by the physicians (gold standard) was 21 (25%) for nothing in bed, 30 (36%) for in-bed activity, 27 (33%) for out-of-bed activity, and 5 (6.0%) for walking.

TABLE 2.

Patient Demographics

Characteristics                                                         | Development (n = 3) | Validation (n = 5)
Age (yr), median (IQR)                                                  | 67 (60–71)          | 67 (52–77)
Male, n (%)                                                             | 1 (33)              | 2 (40)
ICU length of stay (d), median (IQR)                                    | 5 (1–5)             | 3 (2–5)
Type of surgery, n                                                      |                     |
 Endocrine (pancreatic)                                                 | 1                   | 1
 Gastrointestinal                                                       | 1                   | 2
 Gynecologic                                                            | 1                   | 0
 Thoracic                                                               | 0                   | 1
 Orthopedics                                                            | 0                   | 1
Acute Physiology and Chronic Health Evaluation II score, median (IQR)   | 13 (12–28)          | 16 (10–21)

IQR = interquartile range.

Eight nonmechanically ventilated patients were included in this study. Data from three patients were used for development, and data from five patients were used for validation.

Table 3 reports gold standard versus NIMS agreement for each mobility level. In 72 (87%) of the 83 segments, there was exact agreement between the gold standard and the NIMS automated score. Of the 11 discrepancies, seven were due to disagreement between nothing in bed and in-bed activity. The weighted percent observed agreement was 96%, with a weighted Kappa of 0.86 (95% CI, 0.72–1.00).

TABLE 3.

Contingency Table for Mobility Levels

                     | Physician Score
Sensor Score         | A. Nothing | B. In-Bed | C. Out-of-Bed | D. Walking | Total
A. Nothing           | 18 (22)    | 4 (5)     | 0             | 0          | 22 (27)
B. In-Bed            | 3 (4)      | 25 (30)   | 2 (2)         | 0          | 30 (36)
C. Out-of-Bed        | 0          | 1 (1)     | 25 (30)       | 1 (1)      | 27 (32)
D. Walking           | 0          | 0         | 0             | 4 (5)      | 4 (5)
Total                | 21 (26)    | 30 (36)   | 27 (32)       | 5 (6)      | 83 (100)

The contingency table demonstrates physician and sensor agreement for each mobility level. Each cell represents the n (%) of segments automatically labeled with a given mobility level (from "A" nothing in bed to "D" walking) by our sensor, compared with the expected mobility level as determined by the physicians. Diagonal entries indicate the number of segments on which the sensor and physicians agree, whereas off-diagonal entries show disagreements between the sensor and physicians.

DISCUSSION

We developed and validated a noninvasive sensor system to continuously measure patient mobility in the ICU (26). Physician and NIMS agreement regarding mobility level was very high. The sensor system passively senses the clinical environment and automatically measures and reports a patient's level of mobility. It represents a significant departure from the current manual measurement and reporting used in clinical care and research. In the clinical setting, mobility is directly observed and estimated retrospectively by providers; measurements, if not recorded in real time, are subject to recall bias. In research, mobility is measured using labor-intensive direct observation techniques. To our knowledge, this study is the first to leverage novel, inexpensive sensors such as the Kinect, together with machine learning and computer vision algorithms, to evaluate patient activity for the purpose of measuring mobility in the ICU.

Although physician and NIMS agreement was very high, the main source of disagreement lies in differentiating nothing in bed from in-bed activity. The difference was due, in large part, to segments in which patient motion was subtle. This is because the sensor assesses mobility on a continuous speed scale, but for comparison, we confined the sensor's measurement of mobility to a discrete human scale. When applying the discrete scale, if the patient's total body speed signature exceeds a threshold, the sensor labels the mobility level as in-bed activity; below this threshold, subtle body activity is labeled as nothing in bed. The physician's activity threshold differentiating nothing in bed from in-bed activity is subjective and differs from that of the sensor, which is quantitative and thus more reproducible. Some segments were challenging due to subtle patient movement; therefore, speed judgment discrepancies are not necessarily sensor errors. In the rare cases in which pose detection or patient identification errors occur (Fig. A.1, Supplemental Digital Content 1, http://links.lww.com/CCM/C382), our future work will leverage novel algorithmic developments, from our own work and from the computer vision and machine learning field more broadly, to reduce these further.

We envision that patient mobility data derived from the NIMS could autopopulate the health record in real time, such that nurses would no longer have to subjectively assess and document the highest level of mobility. The 1:1 to 1:2 nurse-to-patient ratio in the ICU suggests that an ICU nurse has time to adjudicate a patient's mobility; however, ICU nurses are busy managing the patient, liaising with family, reviewing orders, and documenting assessments of the patient's overall status, each organ system, the ventilator, and medication administration. NIMS could decrease the burden of nursing assessment and documentation, leaving more time for patient care. Furthermore, NIMS can improve the reliability of the measurements collected. Currently, it is only practical to assess and record mobility at discrete and infrequent intervals. NIMS provides an approach to increase measurement frequency by continuously monitoring mobility and reporting a numeric value representing the patient's highest mobility level within each hour (Fig. A.2, Supplemental Digital Content 1, http://links.lww.com/CCM/C382). We speculate that this value, akin to a vital sign, and its trend could be used to stimulate mobility quality improvement activities. For example, NIMS could provide real-time feedback to providers, patients, and caregivers on patient mobility status relative to predetermined activity goals, prompting care changes to ensure that patients are on track to achieve them.

There are limitations to this technology. First, cost may be a consideration. We estimate that the devices and installation for a 16-bed ICU cost $8,000, and the monthly cost of a cloud server to stream and process the data is less than $400. These costs continue to decline as hardware and storage become cheaper. Second, privacy may pose a concern. However, in our experience, patients and providers often expressed comfort with the presence of sensors, given their ubiquity in public areas, and gratitude that we were using the sensors to improve quality of care. Further evaluation is necessary to establish provider and patient comfort with sensing technologies.

There are limitations to this study. First, it was conducted using data from eight patients in one ICU room at one tertiary hospital, thus limiting its generalizability. The study's sample size is small with respect to patient number. However, the number of data hours collected is at least 100 times larger than in current studies of this kind (14, 16). Furthermore, our analysis is at the segment level rather than the patient level, and as Table 3 demonstrates, NIMS was exposed to many different variations of human activity, from lying motionless in bed to small motions in bed, sitting up, and walking. Second, we do not know whether a continuous representation of patient mobility is more useful clinically than discrete and infrequent measurements, specifically, whether it will change provider and/or patient behavior and improve patient outcomes. This should be the subject of further research.

Sensor technology and deep machine learning techniques are used in other industries but have only recently been explored in healthcare. We have repurposed off-the-shelf, inexpensive technology and developed novel machine learning and computer vision-based algorithms to capture patient mobility in the ICU. The NIMS addresses the need for effective and continuous evaluation of patient mobility to assist with optimizing mobility in the ICU. Our results suggest that new deep learning techniques (27, 28) in machine learning hold promise for automating activity recognition and scene understanding. Other potential applications include delirium assessment (e.g., delirium motoric subtype), patient-provider interactions, and evaluation of patient turning in bed (e.g., as part of pressure ulcer prevention). Adapting these techniques for clinical intervention monitoring offers the potential to improve care measurement and delivery. Our next steps include algorithmic refinements, applying NIMS to measure mobility and provide feedback to providers, and extending our repertoire of clinical tasks.

CONCLUSION

In conclusion, we have developed and validated a NIMS to automatically and continuously measure patient mobility in the ICU using Kinect sensors, machine learning, and computer vision technologies.

Supplementary Material

Supplemental Digital Content

Acknowledgments

Dr. Rawat's institution received funding from a K23 award from the National Heart, Lung, and Blood Institute. Dr. Reiter's institution received funding from the Gordon and Betty Moore Foundation. Dr. Shrock's institution received funding from the Moore Foundation. Dr. Needham's institution received funding from the National Institutes of Health, the Agency for Healthcare Research and Quality, the National Health and Medical Research Council (Australia), and the Gordon and Betty Moore Foundation (all peer-reviewed grants). He disclosed off-label use of existing sensor technology for monitoring mobility in the ICU. Dr. Saria received support for article research from the Gordon and Betty Moore Foundation.

We thank the Gordon and Betty Moore Foundation for funding this study and Drs. George Bo-Linn, Peter Pronovost, and Deborah Perrone for valuable discussions that helped frame this study. We also thank Rhonda Wyskiel and Dr. Pedro Mendez Tellez for their help during the initial stages of the study implementation and Weinberg ICU staff at Johns Hopkins Hospital for their participation and support in this study.

Supported, in part, by the Gordon and Betty Moore Foundation. This work was also supported by a Patient-Oriented Mentored Career Development Award (K23) from the National Heart, Lung, and Blood Institute.

Footnotes

This work was performed at the School of Medicine and Department of Computer Science, Johns Hopkins University, Baltimore, MD.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal).

The remaining authors have disclosed that they do not have any potential conflicts of interest.

For information regarding this article, E-mail: suchi.saria@gmail.com

References

1. Schweickert WD, Pohlman MC, Pohlman AS, et al. Early physical and occupational therapy in mechanically ventilated, critically ill patients: A randomised controlled trial. Lancet. 2009;373:1874–1882. doi: 10.1016/S0140-6736(09)60658-9.
2. Lord RK, Mayhew CR, Korupolu R, et al. ICU early physical rehabilitation programs: Financial modeling of cost savings. Crit Care Med. 2013;41:717–724. doi: 10.1097/CCM.0b013e3182711de2.
3. Kayambu G, Boots R, Paratz J. Physical therapy for the critically ill in the ICU: A systematic review and meta-analysis. Crit Care Med. 2013;41:1543–1554. doi: 10.1097/CCM.0b013e31827ca637.
4. Hashem MD, Nelliot A, Needham DM. Early mobilization and rehabilitation in the intensive care unit: Moving back to the future. Respir Care. 2016;61:971–979. doi: 10.4187/respcare.04741.
5. Morris PE, Goad A, Thompson C, et al. Early intensive care unit mobility therapy in the treatment of acute respiratory failure. Crit Care Med. 2008;36:2238–2243. doi: 10.1097/CCM.0b013e318180b90e.
6. Ota H, Kawai H, Sato M, et al. Effect of early mobilization on discharge disposition of mechanically ventilated patients. J Phys Ther Sci. 2015;27:859–864. doi: 10.1589/jpts.27.859.
7. Brower RG. Consequences of bed rest. Crit Care Med. 2009;37:S422–S428. doi: 10.1097/CCM.0b013e3181b6e30a.
8. Kress JP, Hall JB. ICU-acquired weakness and recovery from critical illness. N Engl J Med. 2014;370:1626–1635. doi: 10.1056/NEJMra1209390.
9. Berney SC, Rose JW, Bernhardt J, et al. Prospective observation of physical activity in critically ill patients who were intubated for more than 48 hours. J Crit Care. 2015;30:658–663. doi: 10.1016/j.jcrc.2015.03.006.
10. Hodgson C, Needham D, Haines K, et al. Feasibility and inter-rater reliability of the ICU Mobility Scale. Heart Lung. 2014;43:19–24. doi: 10.1016/j.hrtlng.2013.11.003.
11. Nydahl P, Ruhl AP, Bartoszek G, et al. Early mobilization of mechanically ventilated patients: A 1-day point-prevalence study in Germany. Crit Care Med. 2014;42:1178–1186. doi: 10.1097/CCM.0000000000000149.
12. Verceles AC, Hager ER. Use of accelerometry to monitor physical activity in critically ill subjects: A systematic review. Respir Care. 2015;60:1330–1336. doi: 10.4187/respcare.03677.
13. Sallis R, Roddy-Sturm Y, Chijioke E, et al. Stepping toward discharge: Level of ambulation in hospitalized patients. J Hosp Med. 2015;10:384–389. doi: 10.1002/jhm.2343.
14. Chakraborty I, Elgammal A, Burd RS. Video based activity recognition in trauma resuscitation. IEEE International Conference and Workshops on Automatic Face and Gesture Recognition; Shanghai, China. April 22–26, 2013.
15. Twinanda AP, Alkan EO, Gangi A, et al. Data-driven spatio-temporal RGBD feature encoding for action recognition in operating rooms. Int J Comput Assist Radiol Surg. 2015;10:737–747. doi: 10.1007/s11548-015-1186-1.
16. Lea C, Facker J, Hager G, et al. 3D sensing algorithms towards building an intelligent intensive care unit. AMIA Jt Summits Transl Sci Proc. 2013;2013:136–140.
17. Stone EE, Skubic M. Fall detection in homes of older adults using the Microsoft Kinect. IEEE J Biomed Health Inform. 2015;19:290–301. doi: 10.1109/JBHI.2014.2312180.
18. Ma AJ, Yuen PC, Saria S. Deformable distributed multiple detector fusion for multi-person tracking. arXiv preprint arXiv:1512.05990, 2015.
19. Jia Y, Shelhamer E, Donahue J, et al. Caffe: Convolutional architecture for fast feature embedding. Proceedings of the ACM International Conference on Multimedia; Orlando, FL. November 3–7, 2014.
20. Girshick R. Fast R-CNN. arXiv preprint arXiv:1504.08083, 2015.
21. Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20:273–297.
22. Cohen J. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70:213–220. doi: 10.1037/h0026256.
23. Warrens MJ. Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa. Br J Math Stat Psychol. 2011;64:355–365. doi: 10.1348/2044-8317.002001.
24. Domingos P. MetaCost: A general method for making classifiers cost-sensitive. Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; San Diego, CA. August 15–18, 1999.
25. Stehman SV. Selecting and interpreting measures of thematic classification accuracy. Remote Sens Environ. 1997;62:77–89.
26. Reiter A, Ma AJ, Rawat N, et al. Process monitoring in the intensive care unit: Assessing patient mobility through activity analysis with a non-invasive mobility sensor. 19th International Conference, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Proceedings, Part I; Athens, Greece. October 17–21, 2016.
27. Karpathy A, Toderici G, Shetty S, et al. Large-scale video classification with convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Columbus, OH. June 23–28, 2014.
28. Farabet C, Couprie C, Najman L, et al. Learning hierarchical features for scene labeling. IEEE Trans Pattern Anal Mach Intell. 2013;35:1915–1929. doi: 10.1109/TPAMI.2012.231.
