Abstract
The objectives of this study were to evaluate the effectiveness and accuracy of monitoring feeding behavior patterns using cumulative summation (CUSUM) procedures to predict the onset of bovine respiratory disease (BRD) in beef cattle. Growing bulls (N = 231) on a 70-d growth and efficiency trial were used in this study. Between days 28 and 38 of the study, 30 bulls were treated for BRD based on observed clinical signs and elevated rectal temperature (>39.5 °C); the remaining bulls (n = 201) were considered healthy. Clinically-ill and healthy bulls were used to evaluate sensitivity and specificity of CUSUM models, with accuracy calculated as the average of sensitivity and specificity. All data were standardized prior to generating CUSUM charts in a daily accumulative manner. Eight univariate CUSUM models were evaluated: DMI, bunk visit (BV) frequency, BV duration, head down (HD) duration, eating rate, maximal nonfeeding interval (NFI Max), SD of nonfeeding interval (NFI SD), and time to bunk (TTB). Accuracies for detection of BRD were 80.1, 69.4, 72.4, 79.1, 63.7, 64.6, 73.2, and 48.7%, respectively, and average days of detection prior to observed symptoms of BRD were 1.0, 3.2, 3.2, 4.8, 10.2, 2.7, 1.5, and 0.6 d, respectively. Principal component analysis (PCA) of all 8 univariate traits (full model) was used to construct multivariate factors that were similarly monitored with CUSUM. Two reduced multivariate models were also constructed that included the 3 best-performing feeding behavior traits (BV duration, HD duration, NFI SD), with DMI (RBD) and without DMI (RB). Accuracy of the full multivariate model (75.0%) was similar to that of the best univariate models. However, both of the reduced multivariate models (RB and RBD) were more accurate (84.0%) than the full multivariate model. All 3 of the multivariate models signaled (P < 0.05) 2.0 to 2.1 d prior to clinical observation.
These results demonstrate that the use of PCA-derived multivariate factors in CUSUM charts was more accurate than univariate CUSUM charts for preclinical detection of BRD. Furthermore, adding DMI to the RB model did not further improve accuracy or signal day of BRD detection. The use of PCA-based multivariate models to monitor feeding behavior traits should be more robust than relying on univariate trait models for preclinical detection of BRD. Results from this study demonstrate the value of using CUSUM procedures to monitor feeding behavior patterns to more accurately detect BRD prior to clinical symptoms in feedlot cattle.
Keywords: disease detection, dry matter intake, feeding behavior, statistical process control
INTRODUCTION
Mortality and morbidity challenges associated with the bovine respiratory disease (BRD) complex continue to negatively impact the feedyard sector of the beef industry from both economic and animal welfare perspectives (Galyean et al., 1999; Duff and Galyean, 2007; Schneider et al., 2009). Respiratory disease accounts for 67 to 87% of the morbidity events within feedyards (Edwards, 1996), and is exacerbated when animals are stressed due to weaning, heat events, shipping, or comingling (Schneider et al., 2009). Early detection of BRD is difficult, and current industry approaches rely on visual appraisal of clinical signs (Broom, 2006), which are not always accurate. Detection of BRD based on visual observations of clinical signs has been shown to be highly specific but not very sensitive (92 and 27%, respectively; Timsit et al., 2016). A limitation of relying on clinical observations for disease detection is that prey animals have developed inherent instincts to conceal clinical signs of illness as an evolutionary adaptation for survival (Noffsinger and Locatelli, 2004).
Recent developments in sensor technologies have enabled real-time measurements of behavior patterns (Theurer et al., 2013) and feed intake on an individual-animal basis (Lancaster et al., 2009; Kayser and Hill, 2013), which have been shown to be predictive of morbidity events in beef cattle (Sowell et al., 1999; Quimby et al., 2001). These data collection systems, coupled with robust mathematical models, have utility in the development of an animal health monitoring system for more accurate preclinical detection of BRD. Adoption of accurate animal health monitoring systems would improve the efficacy of antimicrobial treatment (Cusack et al., 2003), resulting in more judicious administration of antimicrobial therapy (Schaefer et al., 2007) and improved animal welfare.
Statistical process control (SPC) charts were first proposed by Shewhart (1931) to monitor and detect abnormal process variation in manufacturing industries. SPC procedures are now being widely used in service, financial, and health care industries, although there has been limited application of SPC procedures within the livestock industries (Montgomery, 2009; De Vries and Reneau, 2010).
The primary objective of this study was to evaluate the effectiveness and accuracy of cumulative summation (CUSUM) procedures to detect/predict the onset of BRD in beef cattle based on deviations in DMI or 1 of 7 feeding behavior traits (univariate models). The secondary objective was to evaluate the effectiveness and accuracy of multivariate CUSUM models derived from principal component analysis (PCA).
MATERIALS AND METHODS
Animals and Experimental Design
All animal care and use procedures were in accordance with the guidelines for use of Animals in Agricultural Teaching and Research as approved by the Texas A&M University Institutional Animal Care and Use Committee. The experimental animals utilized in this study have been previously described by Jackson et al. (2016). Briefly, growing purebred Angus bulls (N = 231) consigned from independent producers for the purpose of evaluating performance and feeding efficiency were used in this study. Although the bulls were previously vaccinated against viral and bacterial pathogens using various vaccine products, all bulls were revaccinated upon arrival at the test facility for bovine herpes virus, parainfluenza-3 virus, bovine viral diarrhea, bovine respiratory syncytial virus (Pyramid 5, Boehringer Ingelheim), and Haemophilus somnus, Pasteurella multocida, and clostridial diseases (Ultrabac7, Zoetis Animal Health), and treated for internal parasites (Valbazen, Zoetis Animal Health). Bulls were fitted with passive electronic identification (EID) ear tags (Half duplex; Allflex USA Inc., Dallas, TX), and adapted to the test diet for 28 d prior to the start of a 70-d trial. During the trial, bulls were evaluated at least twice daily for clinical signs of illness and weighed at 14-d intervals.
Experimental Cohorts
Within a 10-d period beginning on day 28 of the trial, 30 bulls were identified as being morbid due to clinical observations of nasal discharge, lethargy, and/or anorexia. Based on elevated (>39.5 °C) rectal temperatures [mean = 40.5 °C (SD 0.05), range 39.7 to 42.1 °C] during a physical examination, these bulls were diagnosed with BRD, treated with an antimicrobial (enrofloxacin; Baytril 100; Bayer Animal Health LLC, Shawnee Mission, KS), and returned to their respective home pens. The number of BRD cases was equally distributed across all 9 pens used in this trial. These clinically-ill bulls were used to assess the true positive rate, or sensitivity, in evaluating the effectiveness of the CUSUM procedures. The remaining bulls (n = 201) were considered healthy, and their feeding behavior responses were monitored to assess the true negative rate, or specificity, of the CUSUM chart. Although diagnostic tests were not conducted to confirm the presence of pathogens associated with BRD in this study, the observed clinical signs combined with elevated rectal temperatures in the clinically-ill cohort were indicative of an acute outbreak of respiratory disease in these bulls.
Data Collection System and Behaviors
The bulls were housed in 1 of 9 adjacent pens across 2 alleys, with each pen (18.3 × 36.6 m) equipped with 4 electronic feed bunks (GrowSafe Systems Ltd., Airdrie, Alberta, Canada) to measure daily DMI and feeding behavior traits. The GrowSafe system consisted of feed bunks equipped with load bars to measure feed disappearance, and an antenna located within each feed bunk to record animal presence via detection of EID tags. Assigned feed disappearance (AFD) rates were computed daily for each feed bunk to assess data quality. During the first 50 d of feed intake data collection for this study, the mean AFD rates exceeded 99%; therefore, no data were deleted due to system malfunction, power outage, or low AFD rates. Feeding behavior traits evaluated in this study were based on frequency and duration of bunk visit (BV) events, duration of nonfeeding intervals (NFI), head down (HD) duration, and time to bunk (TTB). A BV event commenced when the EID of an animal was first detected, and ended when the time between consecutive EID recordings exceeded 100 s, when the animal was detected at another feed bunk, or when the EID of another animal was detected at the same feed bunk (Mendes et al., 2011). Bunk visit frequency was defined as the number of independent events recorded regardless of whether or not feed was consumed, and BV duration as the sum of the lengths of all BV events recorded during a 24-h period. The interval lengths between BV events were defined as NFI. Maximal NFI was defined as the longest NFI, and NFI SD as the SD of all NFI within each day. HD duration was computed as the number of times the EID for an animal was detected each day multiplied by the scan rate of the GrowSafe system (1.0 s). TTB was computed daily as the interval length between the time of feed-truck delivery within a pen and each animal's first BV event following feed delivery.
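As a simple illustration of the BV event definition above, the 100-s gap rule can be sketched in a few lines of Python (a hypothetical sketch; the actual processing was performed by the GrowSafe software, and the rule that ends an event when another animal's EID appears at the same bunk is omitted here for brevity):

```python
from datetime import datetime, timedelta

def segment_bv_events(detections, max_gap_s=100):
    """Group one animal's raw (timestamp, bunk_id) EID detections into bunk
    visit (BV) events: a new event starts when consecutive detections are
    more than max_gap_s apart, or when the animal moves to another bunk.
    Returns a list of (start, end, bunk_id) tuples."""
    events = []
    start = prev_t = prev_bunk = None
    for t, bunk in sorted(detections):
        if start is None:
            start, prev_bunk = t, bunk
        elif (t - prev_t).total_seconds() > max_gap_s or bunk != prev_bunk:
            events.append((start, prev_t, prev_bunk))  # close current event
            start, prev_bunk = t, bunk
        prev_t = t
    if start is not None:
        events.append((start, prev_t, prev_bunk))
    return events
```

BV frequency is then the number of events per day, BV duration the summed event lengths, and the gaps between consecutive events give the NFI series.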
Feed intake was allocated to individual animals based on continuous recordings of feed disappearance during each BV event. A subroutine of the GrowSafe 6000E software (Process Feed Intakes) was used to compute daily feed intake. Eating rate was computed as the ratio of daily DMI to daily BV duration.
Description of CUSUM
A SPC chart is a graphic display of a process over a time period. Shewhart (1931) first proposed the use of control chart methodology to monitor the variance and mean of a process and identify when a system goes out of control, with the goal of improving product quality. Control charts contain a centerline, which represents the mean or target value of the process while in control, and upper or lower control limits that are based on the variance of the process (Fig. 1). The control limits are set by the architect of the chart based upon the behavior of the process, and the process is deemed out of control when plotted observations exceed the bounds of these control limits (Quimby et al., 2001; Montgomery, 2009). Although the study lasted 70 d, the CUSUM charts were analyzed for approximately 35 d, until all of the bulls in the clinically-ill cohort were treated. This study utilized a 1-way charting strategy, in which each chart only signaled in one direction. The upper threshold of the 1-way CUSUM chart was used to monitor NFI Max, NFI SD, TTB, and eating rate, as Jackson et al. (2016) demonstrated that these response variables increased prior to onset of morbidity. All of the other traits were monitored with the lower threshold of the 1-way CUSUM chart.
Figure 1.
Example of a CUSUM chart for BV frequency that remained within (above) and exceeded (below) the control limit of −3.5 H-parameter value.
The performance of a CUSUM chart is predicated upon its design, which includes the selection of the K and H-values. The parameter and CUSUM equations used in this study are presented in Table 1. H-parameter values ranging from 1 to 5 were evaluated to identify the optimal H-value for use in this study. Based on this evaluation, the H-value for all of the CUSUM charts in this analysis was set at 3.5. A K-value of 0.5 was used for all of the CUSUM charts in this analysis based upon the recommendation of Montgomery (2009). All CUSUM charts were generated using PROC CUSUM in SAS 9.4 (Cary, NC).
Table 1.
Definitions and equations of parameters used in calculating CUSUM charts
| Parameter | Equation | Description |
| K1 | K = kσ | The magnitude of the deviation of an observation in SD units to be detected. |
| H1 | H = hσ | The magnitude in SD units to conclude that the process is out of control. This is referred to as the decision interval. |
| Upper CUSUM statistic | Ci+ = max[0, yi − K + Ci−1+] | The accumulation of deviations that are greater than the target μ. |
| Lower CUSUM statistic | Ci− = min[0, yi + K + Ci−1−] | The accumulation of deviations that are less than the target μ. |
1Observations were standardized; therefore, σ = 1 and the parameter equations default to their selected values.
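Assuming standardized observations (σ = 1, target μ = 0), the tabular CUSUM recursions in Table 1 can be sketched as follows (an illustrative Python version; the study itself used PROC CUSUM in SAS):

```python
def cusum_first_signal(y, k=0.5, h=3.5, direction="lower"):
    """One-way tabular CUSUM on a standardized daily series y.
    Returns the index of the first out-of-control day, or None."""
    c = 0.0
    for i, yi in enumerate(y):
        if direction == "upper":
            c = max(0.0, yi - k + c)   # C+_i = max[0, y_i - K + C+_(i-1)]
            if c > h:                  # signal when C+ exceeds H
                return i
        else:
            c = min(0.0, yi + k + c)   # C-_i = min[0, y_i + K + C-_(i-1)]
            if c < -h:                 # signal when C- falls below -H
                return i
    return None
```

With k = 0.5 and h = 3.5, a sustained 1.5-SD drop in a lower-threshold trait such as DMI would signal on the fourth day, because the statistic accumulates 1 SD per day.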
Post-Hoc vs. Daily Accumulative Parameter Estimation
Previously reported methods of parameter estimation for CUSUM construction are conducted in a post-hoc manner, meaning that parameter estimates are calculated after the data have been collected (Quimby et al., 2001). It is difficult to assess the accuracy and validity of these models because they utilize future information that is unavailable in real-time application. For this study, initial parameter estimation was accomplished using the first 4 d as a reference period to calculate baseline μ and σ for each animal. Thereafter, daily observations were used to recompute parameter estimates in an accumulative manner (Mertens et al., 2008), using all of the available data up to that time point in the data set. Therefore, every sequential time-series parameter estimate includes more data than previous estimates, which enables the chart to behave as it would in real-time application. This daily accumulative procedure allows for more rapid implementation of the monitoring process due to the relatively short reference period required for initial parameter estimation. Furthermore, the accuracy of these parameter estimates increases each day the process is monitored, which allows the model to provide information about the feeding behavior of individual animals shortly after feedlot arrival.
Standardization of the Observations
All data were standardized on an individual-animal basis prior to monitoring with CUSUM. When standardized variables are used, the procedure is known as standardized CUSUM, and it is the primary methodology used to monitor biological processes (Quimby et al., 2001; Mertens et al., 2008; Montgomery, 2009). Furthermore, transforming all of the variables to the same scale minimizes the potential of over-weighting the importance of variables when included in PCA, especially when the range in magnitude of variables is large and/or the unit of measurement differs across the variables (Johnson and Wichern, 2002). For this study, data transformation was achieved using the following equation.
z_ik = (y_ik − μ_ij) / σ_ij
where z_ik = the transformed daily observation for animal i on day k, y_ik = the daily observation for animal i on day k, μ_ij = the accumulative mean of the variable for animal i during time window j, σ_ij = the accumulative SD of the variable for animal i during time window j, and day k = the maximal day during time window j. The time window j represents the number of days used to estimate μ_ij and σ_ij. The number of observations within time window j increased by 1 d in an iterative manner for every additional day the animal was monitored.
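A minimal sketch of this daily accumulative standardization (illustrative only; this version assumes each day's z-score uses the μ and σ accumulated through the previous day, seeded by the 4-d reference period):

```python
import statistics

def standardize_accumulative(series, ref_days=4):
    """Standardize one animal's daily series as z = (y - mu) / sigma, where
    mu and sigma are recomputed each day from all observations collected so
    far; the first ref_days observations seed the baseline estimates."""
    z = []
    for day in range(ref_days, len(series)):
        window = series[:day]              # time window j: all prior days
        mu = statistics.mean(window)
        sd = statistics.stdev(window)      # sample SD; assumes sd > 0
        z.append((series[day] - mu) / sd)
    return z
```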
Principal Component Analysis
Three multivariate models based on PCA were constructed using PROC FACTOR in SAS 9.4 (Cary, NC), with the eigenvalues calculated from the covariance matrix on an individual-animal basis. Dimension reduction was achieved by evaluating scree plots of the variance explained by the factors, ultimately resulting in the selection of 2 factors for each model. These 2 factors were monitored using the same CUSUM methods outlined for the univariate traits. Based upon the performance of the individual factors, an "either or" rule was developed, such that if either of the factors signaled, the system was deemed "out of control". Likewise, a false positive was counted when either of the factors signaled "out of control" for an animal in the healthy cohort. The 3 multivariate models evaluated were a full model that incorporated all 8 univariate traits; a reduced feeding behavior model (RB) that included the 3 most accurate feeding behavior traits (BV duration, HD duration, and NFI SD) based on sensitivity and accuracy; and a third model (RBD) that included DMI along with the 3 feeding behavior traits in the RB model, to assess the relative value of DMI for predicting onset of BRD.
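The dimension reduction and "either or" signaling rule might be sketched as follows (a hypothetical illustration using a plain covariance eigendecomposition; the study used PROC FACTOR, whose loadings and rotation may differ):

```python
import numpy as np

def pca_factors(X, n_factors=2):
    """Project daily trait records X (days x traits) onto the first
    n_factors eigenvectors of the covariance matrix, returning factor
    scores that can be monitored with CUSUM like any univariate trait."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return Xc @ eigvecs[:, order[:n_factors]]

def either_or_out_of_control(factor1_signal, factor2_signal):
    """'Either or' rule: flag the animal if either factor's chart signals
    (signals are day indices, or None if the chart never signaled)."""
    return factor1_signal is not None or factor2_signal is not None
```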
Model Accuracy Determinations
The clinically-ill cohort was used to assess the sensitivity of the CUSUM procedure in monitoring the onset of BRD; all animals in this cohort were considered to be morbid on the day they were treated. In order for a signal to be deemed a true positive (TP; signaled when the animal was morbid), the signal had to exceed the bounds of the control limits prior to visual observation of illness and remain out of control until 3 d before visual observation of illness. The healthy cohort was used to assess the specificity of the CUSUM procedure. Within the healthy cohort, if the CUSUM statistic exceeded the H-value at any time, the signal was classified as a false positive (FP; signaled the animal was sick when healthy). The inverses of these determinations were categorized as false negatives (FN; chart fails to signal when the animal is morbid) and true negatives (TN; chart fails to signal when the animal is considered healthy), respectively. Ratios of these classifications (sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and accuracy) were used to evaluate the efficacy of the univariate and multivariate models for predicting onset of BRD (Fig. 2). These diagnostic measures were calculated using PROC FREQ (SAS 9.4) for all univariate and multivariate models. Furthermore, 95% confidence intervals (CI) were computed within the FREQ procedure to identify statistical differences between the models. This method is analogous to pairwise t-tests, except that the CI were computed using a chi-square distribution instead of a Gaussian distribution. A high-performing model will exhibit high sensitivity, specificity, NPV, and PPV (close to 1.0), and therefore high accuracy. The signal day is the average number of days prior to visual observation that the chart signaled the system was out of control. PROC UNIVARIATE (SAS 9.4) was used to construct 95% CI for signal day.
Mean signal day for a trait was considered different from zero if the CI did not span zero.
Figure 2.
Calculations for sensitivity, specificity, and other diagnostic measurements used to determine the efficacy of the statistical process control procedures.
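The diagnostic measures in Fig. 2 follow the standard definitions, and can be computed directly from the four classification counts. A minimal sketch (illustrative; the study computed these with PROC FREQ), with accuracy taken as the average of sensitivity and specificity as defined above:

```python
def diagnostics(tp, fp, fn, tn):
    """Diagnostic measures from the four classification counts:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    PPV = TP/(TP+FP), NPV = TN/(TN+FN), and accuracy reported in this
    study as the average of sensitivity and specificity."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens,
            "specificity": spec,
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (sens + spec) / 2}
```

For example, 20 true positives among the 30 clinically-ill bulls and 188 true negatives among the 201 healthy bulls would give sensitivity 66.7%, specificity 93.5%, and accuracy 80.1%.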
RESULTS
H-Threshold
To determine the optimal H-parameter value for this study, CUSUM model sensitivity, specificity, and accuracy of several behavior response variables were evaluated at multiple H-parameter values ranging from 1 to 5, to identify the H-value at which accuracy was maximized. Figure 3 graphically displays the relationship between sensitivity, specificity, and accuracy for HD duration. As expected, sensitivity decreased and specificity increased as the H-parameter value increased from 1 to 5. The optimal H-parameter values based on sensitivity and specificity of the most accurate traits (BV duration, HD duration, NFI Max, and DMI) ranged from 3.0 to 4.0. Thus, an H-parameter value of 3.5 was selected for this study.
Figure 3.
Effects of the H-parameter value on the sensitivity, specificity, and accuracy of the CUSUM model for head down duration.
Univariate Trait Models
Summary statistics for DMI and the feeding behavior traits evaluated in univariate CUSUM models for this study have been previously reported by Jackson et al. (2016). The CUSUM model performance of the 6 feeding behavior and 2 DMI-based traits (DMI, eating rate) is presented in Table 2. The univariate trait with the greatest sensitivity was HD duration (76.7%), followed by BV duration and DMI, which were equally sensitive (66.7%). These 3 traits had greater (P < 0.05) sensitivities than TTB (23.3%), which had the lowest sensitivity. HD duration also had higher (P < 0.05) sensitivity than NFI max (36.7%), which was the second least sensitive univariate trait. The remaining 3 traits (BV frequency, NFI SD, and eating rate) had intermediate sensitivities that ranged from 46.7 to 53.3%, and did not differ from the other traits. There was greater segmentation of the univariate traits for specificity, which is in part due to the greater degrees of freedom available for the CI calculation.
Table 2.
Performance of univariate CUSUM models for preclinical detection of morbidity in clinically-ill and healthy bulls
| Trait1 | Threshold2 | PPV3 | NPV4 | Sensitivity | Specificity | Accuracy |
| DMI | Lower | 91.2b | 73.7c,d | 66.7b,c | 93.5b | 80.1 |
| BV frequency | Lower | 85.4b | 63.3b,c | 46.7a,b,c | 92.0b | 69.4 |
| BV duration | Lower | 75.3b | 70.1c | 66.7b,c | 78.1a | 72.4 |
| HD duration | Lower | 80.6b | 77.8d | 76.7c | 81.6a | 79.1 |
| NFI maximum | Upper | 83.1b | 59.4a,b | 36.7a,b | 92.5b | 64.6 |
| NFI SD | Upper | 88.4b | 66.6b,c | 53.3a,b,c | 93.0b | 73.2 |
| TTB | Upper | 47.4a | 49.2a | 23.3a | 74.1a | 48.7 |
| Eating rate | Upper | 67.3b | 61.4a,b | 53.3a,b,c | 74.1a | 63.7 |
a-dEstimates within a column that have unlike superscripts differ at P < 0.05.
1BV = bunk visit, HD = head down, NFI = nonfeeding interval, TTB = time to bunk.
2Threshold = the direction that the one-way CUSUM chart was set to signal that the process was out of control.
3PPV = positive predictive value.
4NPV = negative predictive value.
There were only 30 animals in the clinically-ill cohort compared to 201 in the healthy group; therefore, there is greater confidence in the specificity estimates as well as narrower CI. Of the 8 univariate traits monitored, DMI, NFI SD, NFI Max, and BV frequency had greater (P < 0.05) specificity (92.0 to 93.5%) than eating rate, TTB, BV duration, and HD duration (74.1 to 81.6%). Overall, CUSUM performance assessed as accuracy was greatest for DMI (80.1%), closely followed by HD duration (79.1%).
The accuracies of NFI SD and BV duration (73.2 and 72.4%) were slightly less than DMI, although still predictive of BRD. The remaining univariate traits (BV frequency, NFI max, eating rate, and TTB) were slightly less predictive, with accuracies ranging from 48.7 to 69.4%.
Average signal days prior to observed clinical illness (Fig. 4) were compared using 95% CI. Time to bunk was the only univariate trait with an average signal day that did not differ from 0. Eating rate signaled the earliest, at −10 d, and was different (P < 0.05) from all other univariate traits. The average signal days for DMI and TTB (−1.0 and −0.6 d, respectively) were less (P < 0.05) than the average signal days for HD duration (−4.8 d) and BV duration (−3.2 d), but did not differ from the other univariate traits. Although the average signal day for DMI and those of the other feeding behavior traits did not differ, the average signal days for BV frequency and NFI max were numerically 2.2 and 1.7 d earlier, respectively, compared to DMI.
Figure 4.
Mean signal day and 95% upper and lower confidence intervals for the univariate traits. BV = bunk visit, HD duration = head down duration, NFI Max = maximal nonfeeding interval per 24 h, NFI SD = standard deviation of nonfeeding interval within 24 h, TTB = time to bunk. Univariate traits with unlike superscripts differ (P < 0.05).
Multivariate Trait Models
The model performance of the 3 multivariate models and their corresponding factors is presented in Table 3. Sensitivity, specificity, and accuracy of the full model that used the "either or" rule were 70.0, 80.1, and 75.0%, respectively. Compared with the full model, both reduced models were more sensitive (83.3%), more specific (84.6%), and therefore more accurate (84.0%). For all 3 of the multivariate models, the sensitivities and specificities of the 2 independent PCA factors were lower and higher, respectively, compared with the corresponding combined models that incorporated an "either or" rule of the 2 factors to detect when the system was out of control. As the magnitude of the increase in sensitivity was greater than the reduction in specificity, the accuracies of the combined models exceeded those of the corresponding independent factors generated for the full and reduced multivariate models. The signal days for the 3 multivariate models (Table 3) were not statistically different from each other, ranging from −2.0 d for the RB model to −2.1 d for the RBD model.
Table 3.
Performance of multivariate CUSUM models for preclinical detection of morbidity in clinically-ill and healthy bulls
| Trait | Signal day1 | PPV2 | NPV3 | Sensitivity | Specificity | Accuracy |
| Full4 model | −2.05a | 77.9a | 72.8cd | 70.0bc | 80.1a | 75.0 |
| Factor 1 | −1.10a | 86.5a | 72.9cd | 66.7bc | 89.6abc | 78.1 |
| Factor 2 | −3.43a | 65.2a | 53.3a | 23.3a | 87.6abc | 55.4 |
| RBD5 model | −2.08a | 84.4a | 83.5e | 83.3c | 84.6ab | 84.0 |
| Factor 1 | −1.23a | 89.7a | 77.4d | 73.3bc | 91.5bc | 82.4 |
| Factor 2 | −1.94a | 87.0a | 67.9bc | 56.7bc | 91.5bc | 74.1 |
| RB6 model | −2.04a | 84.4a | 83.5e | 83.3c | 84.6ab | 84.0 |
| Factor 1 | −1.90a | 85.9a | 72.8cd | 66.7bc | 89.1abc | 77.9 |
| Factor 2 | −1.27a | 88.5a | 65.2b | 50.0ab | 93.5c | 71.8 |
a-eEstimates within a column that have unlike superscripts differ at P < 0.05.
1Signal day = average day of detection prior to observed clinical signs of disease.
2PPV = positive predictive value.
3NPV = negative predictive value.
4Full model = Full principal component analysis (PCA) based model that included all univariate traits. Results are presented separately for factors 1 and 2, and for both factors using an “either or” rule.
5RBD model = Reduced PCA-based model that included bunk visit (BV) duration, head down (HD) duration, nonfeeding interval (NFI) SD, and DMI. Results are presented separately for factors 1 and 2, and for both factors using an "either or" rule.
6RB model = Reduced PCA-based model that included BV duration, HD duration, and NFI SD. Results are presented separately for factors 1 and 2, and for both factors using an "either or" rule.
DISCUSSION
The objectives of this study were to evaluate the effectiveness of monitoring deviations in individual-animal feeding behavior traits and DMI using CUSUM procedures to predict the onset of BRD. Additionally, multivariate CUSUM models constructed using PCA were evaluated to determine if they were more sensitive and specific compared to the univariate traits.
Although there were no diagnostic tests performed to confirm the presence of specific pathogens associated with BRD in this study, the observed clinical signs combined with the elevated rectal temperatures were consistent with an acute outbreak of respiratory disease in these bulls. Furthermore, the results from Jackson et al. (2016) demonstrated that there was a marked reduction in DMI the day clinical illness was observed, as well as deviations in several feeding behavior traits in the clinically-ill compared to the healthy cohort, which is consistent with the onset of BRD.
There is widespread agreement that accurate preclinical detection of BRD is crucial for effective intervention of this disease (Apley, 1997), which is the most prevalent and costly of disease complexes in beef cattle. The BRD complex accounts for the majority of morbidity challenges within feedyards (Edwards, 1996) and approximately 70% of the mortalities (Galyean et al., 1999). Identification of morbid cattle earlier in the disease process through objective behavioral monitoring would potentially improve the efficacy of antimicrobial therapy (Ferran et al., 2011), and therefore reduce mortalities as well as the duration of time animals are in a morbid state. The economic burden associated with BRD is more extensive than the direct losses associated with mortality; morbidity is the largest nonfeeding cost associated with feeder cattle production (Pinchak et al., 2004), due to the increases in labor associated with treatment and production losses pre- and postillness (Smith, 1998). When animals are morbid they are unproductive (Smith, 2015), because they divert energy from creating products (milk, muscle, and fiber) to mounting a defensive response to the disease. Many cases of BRD go untreated, as diagnosis is difficult and relies on visual appraisal of clinical illness (Broom, 2006). Numerous studies have documented poor to fair associations between observational detection of BRD and prevalence of lung lesions at harvest. In fact, White and Renter (2009) reported that the sensitivity of BRD detection based on subjective observation of clinical signs was only 62%, which indicates that BRD cases often go undetected, or do not get detected until later in the disease process when successful intervention is less likely (Janzen et al., 1984).
Moreover, the clinical signs of illness (e.g., lethargy, elevated temperature) typically associated with BRD are not exclusive to this disease, which limits the specificity of BRD detection (63%; White and Renter, 2009) and results in overuse of antimicrobial drugs in feedlot cattle. Thus, there is a critical need to develop robust animal health monitoring systems that are more sensitive in detecting BRD, to improve the efficacy of antimicrobial intervention, and more specific, to limit unnecessary use of antimicrobials, which would ultimately lead to improvements in animal welfare, profitability, and the perception of the industry.
Altered behavioral patterns associated with consumption of feed and water are among the earliest indicators of the onset of infectious disease. In calves at high risk for BRD upon feedlot arrival, Daniels et al. (2000) and Sowell et al. (1999) found that calves diagnosed and treated for BRD spent 23 to 42% less time at the feed bunk and had 10 to 36% fewer feeding and drinking bouts compared with untreated calves that did not display clinical signs of BRD. Frequency and duration of feeding bouts are known to be positively correlated with feed intake in beef cattle (Lancaster et al., 2009; Kayser et al., 2013). Thus, the calves that had BRD in these studies likely consumed less feed, as evidenced by their lower daily gains. Carroll and Forsberg (2007) concluded that the increase in energy required to produce proinflammatory cytokines, acute-phase proteins, and antibodies and to mount febrile responses to infectious disease creates a state of hyper-metabolism. As such, animals compensate for this increased energy demand by altering various behavioral responses, such as increasing time spent sleeping and reducing social activity, sexual behavior, and feed intake, in order to conserve energy. Jackson et al. (2016) used a 2-slope broken-line regression model to characterize deviations in DMI and feeding behavior patterns preceding the onset of observed clinical signs associated with BRD in cattle. The model-detected breakpoint for DMI occurred 6.8 d prior to observed clinical illness, whereas breakpoints for BV frequency and BV duration occurred 7.6 and 7.2 d prior to observed clinical illness. In the current study, the CUSUM charts also detected deviations in behavior prior to observed illness; however, the signals were generated closer to the day of observation. The average signal days for DMI, BV frequency, and BV duration were −1.0, −3.2, and −3.2 d, respectively. The differences between this study and Jackson et al.
(2016) are likely due to the analysis method used; a 2-slope broken-line regression model fit retrospectively would be expected to detect mean shifts earlier than a CUSUM that is monitoring the mean in "real time". Lukas et al. (2008) reported preclinical reductions in DMI in dairy cows with mastitis, and reductions in water intake associated with the febrile response. Using pattern recognition techniques, Moya et al. (2015) reported that morbid cattle exhibit distinctive deviations in feeding behavior patterns prior to displaying overt clinical signs of BRD, and can be differentiated from healthy cattle by their feeding patterns. Based on discrete survival time analysis of DMI and feeding behavior data, Wolfger et al. (2015) reported that increases in DMI per meal, meal frequency, and intermeal interval were associated with a decreased hazard of developing BRD up to 7 d prior to observed clinical signs of disease. The results from these studies suggest that deviations in feeding behavior patterns preceding the display of clinical signs of illness in beef cattle may be useful in the development of predictive algorithms for preclinical detection of BRD.
Few studies have monitored changes in behavioral or physiological patterns of individual animals to predict the onset of disease. Other attempts at early identification have used hazard analyses (Wolfger et al., 2015), mean comparisons (Sowell et al., 1999), logistic regression (Schaefer et al., 2007), and cluster analyses (Moya et al., 2015). There are also studies and commercial products that have published results without clearly defining the method or describing the algorithm used to predict BRD. Using a novel high-frequency active integrated electronic system that measured BV frequency and duration, MacGregor et al. (2015) monitored the health of high-risk calves and reported a reduction in morbidity and an increase in treatment success rate compared with visual observation of clinical illness. White et al. (2015), using the remote early disease identification (REDI) system, agreed with visual observation on morbid animals 94% of the time and identified morbidity 0.75 d prior to visual observation. The algorithms in the REDI system are proprietary and therefore were not described, although it appears the predictions are based on a suite of behavior measurements.
Statistical process control (SPC) procedures have been employed with success in multiple livestock species. De Vries and Reneau (2010) reviewed 28 studies that employed SPC procedures to monitor various aspects of livestock production. Of the 28 studies, only 2 were conducted in beef cattle, with the remainder conducted in swine, poultry, or dairy production systems. The traits monitored in these studies included morbidity, electrical conductivity of milk, muscle pH, and conception rates. Furthermore, the majority of the studies were conducted on a herd or pen level rather than on an individual-animal basis. De Vries and Reneau (2010) concluded that carefully constructed control charts are powerful methods to monitor animal production systems, and that application of these methods will grow with advancement of biological sensor and computer technologies to monitor individual-animal health status and performance.
The H-parameter setting in CUSUM models is used to differentiate between abnormal and normal variation. The typical H-parameter value used in CUSUM models to monitor industrial processes is 5 (Montgomery, 2009). In this study, an H-parameter value of 3.5 was used, which was similar to the parameter settings used by Lukas et al. (2005) and Quimby et al. (2001) to monitor subclinical mastitis in dairy cattle and BRD in beef steers, respectively. Not all univariate traits were most accurate at H = 3.5; rather, this was the value at which the best-performing traits were, on average, most accurate. The reduced H-threshold utilized in this study and others suggests that the behavior changes are identifiable but exhibit less statistical distance from the mean than processes monitored outside of biology. These parameter differences illustrate the ability of CUSUM procedures to accurately monitor processes with different underlying variation, and that no one set of parameters fits all systems; the parameters need to be tailored to the process being monitored.
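The role of the H-parameter can be made concrete with a minimal tabular CUSUM sketch in Python. This is an illustration only: the reference (allowance) value k = 0.5 is a conventional default and an assumption here, and the example data are synthetic, not the study's measurements.

```python
def cusum_signal_day(z, k=0.5, h=3.5):
    """Two-sided tabular CUSUM on standardized daily observations z.

    Returns the index of the first day the chart signals, or None.
    k is the reference (allowance) value -- a conventional default and
    an assumption here -- and h is the decision threshold; h = 3.5
    mirrors the reduced threshold discussed above.
    """
    c_pos = c_neg = 0.0
    for day, z_t in enumerate(z):
        c_pos = max(0.0, c_pos + z_t - k)  # accumulates upward shifts
        c_neg = max(0.0, c_neg - z_t - k)  # accumulates downward shifts
        if c_pos > h or c_neg > h:
            return day
    return None

# A sustained 1-SD drop (e.g., in standardized DMI) beginning on day 20
z = [0.0] * 20 + [-1.0] * 20
print(cusum_signal_day(z))  # -> 27, i.e., 7 d after the shift begins
```

Lowering h from the industrial default of 5 toward 3.5 shortens the delay between the onset of a sustained shift and the signal, at the cost of a higher false-alarm rate.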
Unique to this study was the accumulative methodology of parameter estimation; most CUSUM parameters (μ, σ) are estimated using post-hoc methods (Mertens et al., 2008), that is, once the study has been completed. The accumulative method does not compare an animal to its peers, thereby avoiding the inherent risk of flagging healthy outliers in the tails of the population distribution that arises when peer comparison is used. The CUSUM statistic calculated in this study is instead driven by the daily fluctuations of an animal's behavior relative to that individual's own average (normal) behavior.
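An accumulative parameterization of this kind can be sketched as follows; the 7-d warm-up and the expanding-window formulation are illustrative assumptions, since the study's exact settings are not restated here.

```python
import statistics

def accumulative_standardize(x, warmup=7):
    """Standardize each day's observation against the running mean and
    SD of the animal's OWN prior observations (no peer comparison).

    The warm-up length and expanding-window form are illustrative
    assumptions, not the study's published settings.
    """
    z = []
    for t, x_t in enumerate(x):
        if t < warmup:
            z.append(0.0)  # too little history to standardize reliably
            continue
        mu = statistics.fmean(x[:t])   # mean of days 0..t-1 only
        sd = statistics.stdev(x[:t])
        z.append((x_t - mu) / sd if sd > 0 else 0.0)
    return z

# A bull eating ~9-11 kg/d whose intake collapses to 4 kg on day 7
z = accumulative_standardize([9, 11, 9, 11, 9, 11, 9, 4])
```

Because μ and σ accumulate from the individual's own history, the resulting z-scores feed directly into the CUSUM chart in "real time" without waiting for post-hoc population estimates.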
Most studies that have monitored animal health have used high-risk cattle; Quimby et al. (2001) monitored cattle that had an overall pull rate of 67%, compared with 13% in the current study. Other reported morbidity rates range from 77% (Wolfger et al., 2015) to 29% (Moya et al., 2015), which are much higher than normal (mean morbidity rate = 14%; Irsik et al., 2006). It is unknown whether using high-risk cattle to develop a predictive model would affect the ability of the model to select morbid animals in a normal population, although using high-risk cattle certainly increases the probability of true positives.
Of the feeding behavior traits evaluated in this study, HD duration, BV duration, and NFI SD had the highest sensitivity and accuracy. Furthermore, HD duration was similar in model accuracy to DMI. The only other study that monitored BV traits with a CUSUM procedure was that of Quimby et al. (2001), who reported greater accuracy and sensitivity for BV duration than the results of the current study. They were able to detect morbidity on average 4.1 d prior to clinical signs, with an overall accuracy, PPV, and sensitivity of 87, 91, and 90%, respectively; their results are the most accurate reported to date. Despite this accuracy, it is difficult to draw conclusions from a comparison of the 2 studies because the parameters were estimated differently. Quimby et al. (2001) monitored deviations in duration measured over 3 h and used the post-hoc mean and SD from the healthy population of calves to parameterize their CUSUM chart. Their results are certainly an improvement over visual observation, but would be difficult to reproduce in application: when a signal is obtained on a chart designed with parameters that are independent of the observations, it is impossible to know whether the signal is caused by a process change or by an incorrect parameter value (Quesenberry, 1997; De Vries and Reneau, 2010).
Unique to this study was the monitoring of eating rate, NFI max, NFI SD, and TTB relative to the onset of BRD. One of the many benefits of the CUSUM procedure is its constant monitoring and ability to detect small, sustained mean shifts. In the current study, eating rate, NFI max, NFI SD, and TTB all tended to increase prior to the onset of BRD. An increase in eating rate was also observed by Jackson et al. (2016), who reported that the inflection of the slope change occurred 1.3 d prior to observed clinical illness, whereupon the coefficient for the slope increased. They also reported similar increases in TTB, with the slope change occurring 1.5 d prior to the onset of clinical disease. Eating rate signaled the earliest of all the traits but was not very accurate compared with the other traits, and TTB was the only trait monitored in this study whose signal day did not differ from zero. In the current study, increases in NFI max and NFI SD were observed before the animal became morbid, which is in agreement with Jackson et al. (2016). Contrary to these results, Wolfger et al. (2015) reported a reduction in the hazard for BRD with a 1-h increase in mean time between meals, or intermeal interval. The differences between these studies reflect differences in the feeding behavior traits being monitored or the methods used to detect shifts in response variables prior to onset of disease. However, increases in NFI max and NFI SD would be consistent with the observed behavior of morbid animals, which eat on a more irregular basis. These results demonstrate that, prior to the onset of disease, animals took longer to respond to first feeding, consumed feed at a faster rate, spent more time not eating, and had greater variation in intervals between BV events compared with “healthy” cohorts.
To the authors' knowledge, this is the first study to evaluate the use of principal component factors within a CUSUM procedure to identify morbidity in beef cattle. Principal component analysis is among the most popular methods for extracting information from multivariate data (Bakshi, 1998). Principal components are linear combinations of variables that have been geometrically rotated to maximize the variability explained while removing the collinearity of the process variables (Montgomery, 2009). The intent of this analysis is to define the orthogonal directions that explain the variability in the data, in order to reduce the number of original variables that need to be monitored. The general objectives of PCA are to reduce the data being monitored and to assist in interpretation (Johnson and Wichern, 2002). The value of PCA in the deployment of monitoring procedures is 3-fold: it reduces the number of charts that need to be monitored (8 vs. 1 in the current study); it captures the change and interaction of multiple traits; and it minimizes multicollinearity among traits through orthogonal transformation.
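Under the assumptions that the correlation matrix is estimated from standardized daily trait records and that components are retained by eigenvalue size, the construction of such multivariate factors can be sketched as:

```python
import numpy as np

def pca_factor_scores(X, n_components=2):
    """Project standardized daily trait records onto leading principal
    components, yielding uncorrelated factor scores that can each be
    monitored with a single CUSUM chart.

    X: days x traits array (e.g., columns for BV duration, HD duration,
    and NFI SD). Retaining 2 components is an illustrative assumption;
    in practice retention would follow an eigenvalue criterion.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Eigendecomposition of the correlation matrix; eigenvectors give
    # the orthogonal directions of maximal variance.
    eigvals, vecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return Z @ vecs[:, order[:n_components]]   # uncorrelated scores
```

Each retained factor score series would then be monitored with the same univariate CUSUM machinery, collapsing several trait charts into one or two factor charts.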
The a priori expectation for this modeling procedure was that the full model would outperform the reduced models: the interactions and changes of all traits relative to the onset of BRD would act collectively, and the chart would signal earlier and with greater accuracy than the reduced models. Although there were no statistical differences between the full and reduced models with regard to sensitivity and specificity, the reduced models were numerically greater for both measures and were therefore more accurate. One of the assumptions of PCA is that large variances are important and represent the underlying structure of the process, whereas structures with lower variance represent error or “noise” (Shlens, 2005). The full model contained 8 variables, 4 of which were not very accurate for monitoring BRD, as evident in the univariate trait analysis. These traits had greater variance than the traits that signaled during a BRD event and reduced the sensitivity of the full model; consequently, the reduced models slightly outperformed the full model.
The reduced models had greater sensitivity than the full model and the univariate traits, but they also had slightly reduced specificity. A delicate balance needs to be achieved between these 2 metrics when evaluating model performance. If specificity is too low, users will abandon or lose faith in the procedure because the chart will signal that an animal is morbid when it is healthy (De Vries and Reneau, 2010). If sensitivity is too low, the charting procedure will suffer a similar fate because visibly sick animals will go unflagged by the chart. Consideration must be given to the value or cost of an incorrect decision compared with the cost of intervention. Caulcutt (1995) described this as the ratio of selling price to inspection cost and compared 2 types of production processes to illustrate the point: the “widget” process (low-value items produced at a very fast rate) and the “high-value” process (high-value items produced at a very slow rate). As the cost of a health inspection is relatively low compared with the value of the animal, beef cattle production fits the second scenario. In the current study, we placed equal value on sensitivity and specificity by selecting an H-value that maximized accuracy. In application, however, it may be appropriate to place more relative value on sensitivity than on specificity when constructing CUSUM charts, as well as when deciding which behaviors should be monitored. Even though favoring sensitivity will decrease specificity, the need for visual monitoring and/or human intervention would still be substantially reduced compared with the current practice of visual appraisal of health, which requires monitoring every animal every day.
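This trade-off can be made explicit by weighting the two metrics when selecting H. The following sketch uses hypothetical sensitivity/specificity pairs, not values from this study.

```python
def weighted_accuracy(sens, spec, w_sens=0.5):
    """w_sens = 0.5 reproduces accuracy as the simple average of
    sensitivity and specificity; w_sens > 0.5 favors sensitivity."""
    return w_sens * sens + (1.0 - w_sens) * spec

def best_h(results, w_sens=0.5):
    """Return the H-value maximizing weighted accuracy from a dict
    mapping H -> (sensitivity, specificity); values are hypothetical."""
    return max(results, key=lambda h: weighted_accuracy(*results[h], w_sens))

# Hypothetical performance at three candidate H-values
results = {3.0: (0.90, 0.72), 3.5: (0.87, 0.81), 4.0: (0.78, 0.88)}
print(best_h(results))               # -> 3.5 under equal weighting
print(best_h(results, w_sens=0.8))   # -> 3.0 once sensitivity is favored
```

In an applied setting, w_sens would encode the relative cost of a missed sick animal versus an unnecessary pull, in the spirit of Caulcutt's (1995) selling-price-to-inspection-cost ratio.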
Therefore, benefits of this monitoring process would include not only improved application of antimicrobial therapy, but also a potential reduction in labor associated with animal health activities, which accounts for a little over one-third of the costs associated with operating a feedyard (Jensen and Mark, 2010). Another benefit of using PCA methodology in SPC models is that additional sensor technologies can be easily incorporated once they become available.
There has been increased interest in the development of sensor technologies for the preclinical detection of disease in livestock. The feed-intake measurement system used in the current study has been widely used in research and production facilities to measure feed intake and feeding behaviors, but its widespread use in commercial cattle feeding systems is limited by the cost and labor associated with operating the intake monitoring system (Lukas et al., 2008). Currently, there are multiple commercially available data collection systems that quantify animal behavior and physiology and that have been used to predict morbidity. These systems measure variables such as time spent at the feed bunk and water source (Buhman et al., 2000), real-time position through ultra-wide band tag transmitters (White et al., 2015), animal activity through 3-dimensional accelerometers (Bayne et al., 2016), and infrared thermography (Schaefer et al., 2007). The SPC methods examined in this study can be applied to any of these biosensor collection systems.
As data collection systems become more robust and less expensive, more behavior traits could potentially be monitored, adding to the accuracy of the predictions. Certain behaviors have been shown to be good predictors of morbidity; for instance, Lukas et al. (2008) concluded that water intake can serve as an alternative to DMI for monitoring the health and estrus status of Holstein cows. Deviations in drinking behavior have also been shown to be effective predictors of diarrhea outbreaks in growing pigs, signaling 1 d prior to the outbreak (Madsen and Kristensen, 2005). A monitoring program that included more variables could increase the robustness and accuracy of the prediction, which in turn should increase industry adoption.
There were no differences in sensitivity, specificity, or signal day between the RBD and RB models, suggesting that deviations in DMI contributed little additional value to the PCA model that used only feeding behavior traits. This is relevant because the collection of individual-animal DMI data is relatively more expensive than the collection of behavioral traits. Both of the reduced models outperformed all of the single-trait models, illustrating the value of the PCA approach compared with single-trait models. The reduced multivariate models were also more sensitive and specific than published estimates for visual appraisal (White and Renter, 2009).
CONCLUSION
These results demonstrate that the use of PCA-derived factors in CUSUM charts was more accurate than the use of univariate CUSUM charts. Furthermore, the removal of DMI from the RBD model had no effect on accuracy or signal day prior to observed symptoms, demonstrating the validity of monitoring feeding behavior traits alone. Moreover, because of the multivariate nature of PCA, the use of PCA-based CUSUM charts to monitor feeding behavior patterns should be more robust in applications for preclinical detection of BRD. These results illustrate the value of an electronic data collection system coupled with robust prediction procedures to identify morbid animals prior to overt clinical signs of disease. Future research is needed to evaluate other behavioral traits that may improve disease detection, as well as to develop an economic model for selection of parameters so that producers can tailor the SPC procedures to their operations.
Conflict of interest statement. None declared.
LITERATURE CITED
- Apley M. 1997. Antimicrobial therapy of bovine respiratory disease. Vet. Clin. North Am. Food Anim. Pract. 13:549–574. doi: 10.1016/S0749-0720(15)30313-3
- Bakshi B. R. 1998. Multiscale PCA with application to multivariate statistical process monitoring. AIChE J. 44:1596–1610. doi: 10.1002/aic.690440712
- Bayne J. E., Walz P. H., Passler T., White B. J., Theurer M. E., and van Santen E.. 2016. Use of three-dimensional accelerometers to evaluate behavioral changes in cattle experimentally infected with bovine viral diarrhea virus. Am. J. Vet. Res. 77:589–596. doi: 10.2460/ajvr.77.6.589
- Broom D. M. 2006. Behaviour and welfare in relation to pathology. Appl. Anim. Behav. Sci. 97:73–83. doi: 10.1016/j.applanim.2005.11.019
- Buhman M. J., Perino L. J., Galyean M. L., Wittum T. E., Montgomery T. H., and Swingle R. S.. 2000. Association between changes in eating and drinking behaviors and respiratory tract disease in newly arrived calves at a feedlot. Am. J. Vet. Res. 61:1163–1168. doi: 10.2460/ajvr.2000.61.1163
- Carroll J. A., and Forsberg N. E.. 2007. Influence of stress and nutrition on cattle immunity. Vet. Clin. North Am. Food Anim. Pract. 23:105–149. doi: 10.1016/j.cvfa.2007.01.003
- Caulcutt R. 1995. The rights and wrongs of control charts. Appl. Stat. 44:279–288. doi: 10.2307/2986037
- Cusack P. M., McMeniman N., and Lean I. J.. 2003. The medicine and epidemiology of bovine respiratory disease in feedlots. Aust. Vet. J. 81:480–487. doi: 10.1111/j.1751-0813.2003.tb13367.x
- Daniels T., Bowman J., Sowell B., Branine M., and Hubbert M.. 2000. Effects of metaphylactic antibiotics on behavior of feedlot calves. Prof. Anim. Sci. 16:247–253. doi: 10.15232/S1080-7446(15)31707-1
- De Vries A., and Reneau J. K.. 2010. Application of statistical process control charts to monitor changes in animal production systems. J. Anim. Sci. 88(13 Suppl):E11–E24. doi: 10.2527/jas.2009-2622
- Duff G. C., and Galyean M. L.. 2007. Board-invited review: Recent advances in management of highly stressed, newly received feedlot cattle. J. Anim. Sci. 85:823–840. doi: 10.2527/jas.2006-501
- Edwards A. 1996. Respiratory diseases of feedlot cattle in central USA. Bovine Pract. 30:5–10.
- Ferran A. A., Toutain P. L., and Bousquet-Mélou A.. 2011. Impact of early versus later fluoroquinolone treatment on the clinical, microbiological and resistance outcomes in a mouse-lung model of Pasteurella multocida infection. Vet. Microbiol. 148:292–297. doi: 10.1016/j.vetmic.2010.09.005
- Galyean M. L., Perino L. J., and Duff G. C.. 1999. Interaction of cattle health/immunity and nutrition. J. Anim. Sci. 77:1120–1134.
- Irsik M., Langemeier M., Schroeder T., Spire M., and Roder J. D.. 2006. Estimating the effects of animal health on the performance of feedlot cattle. Bovine Pract. 40:65–74.
- Jackson K. S., Carstens G. E., Tedeschi L. O., and Pinchak W. E.. 2016. Changes in feeding behavior patterns and dry matter intake before clinical symptoms associated with bovine respiratory disease in growing bulls. J. Anim. Sci. 94:1644–1652. doi: 10.2527/jas.2015-9993
- Janzen E. D., Stockdale P. H., Acres S. D., and Babiuk L. A.. 1984. Therapeutic and prophylactic effects of some antibiotics on experimental pneumonic pasteurellosis. Can. Vet. J. 25:78–81.
- Jensen R., and Mark D. R.. 2010. 2010 Nebraska feedyard labor cost benchmark and historical trends. Ext. Publ., Univ. Nebraska–Lincoln, Lincoln, NE.
- Johnson R. A., and Wichern D. W.. 2002. Applied multivariate statistical analysis. Prentice Hall, Upper Saddle River, NJ.
- Kayser W., and Hill R. A.. 2013. Relationship between feed intake, feeding behaviors, performance, and ultrasound carcass measurements in growing purebred Angus and Hereford bulls. J. Anim. Sci. 91:5492–5499. doi: 10.2527/jas.2013-6611
- Lancaster P. A., Carstens G. E., Ribeiro F. R. B., Tedeschi L. O., and Crews D. H.. 2009. Characterization of feed efficiency traits and relationships with feeding behavior and ultrasound carcass traits in growing bulls. J. Anim. Sci. 87:1528–1539. doi: 10.2527/jas.2008-1352
- Lukas J. M., Hawkins D. M., Kinsel M. L., and Reneau J. K.. 2005. Bulk tank somatic cell counts analyzed by statistical process control tools to identify and monitor subclinical mastitis incidence. J. Dairy Sci. 88:3944–3952. doi: 10.3168/jds.S0022-0302(05)73080-0
- Lukas J. M., Reneau J. K., and Linn J. G.. 2008. Water intake and dry matter intake changes as a feeding management tool and indicator of health and estrus status in dairy cows. J. Dairy Sci. 91:3385–3394. doi: 10.3168/jds.2007-0926
- MacGregor D. S., Sjeklocha D., and Holland R.. 2015. Eating behavior in the feedlot: a tool to assist in the detection of bovine respiratory disease. Bovine Pract. 48:58–60.
- Madsen T. N., and Kristensen A. R.. 2005. A model for monitoring the condition of young pigs by their drinking behaviour. Comput. Electron. Agric. 48:138–154. doi: 10.1016/j.compag.2005.02.014
- Mendes E. D., Carstens G. E., Tedeschi L. O., Pinchak W. E., and Friend T. H.. 2011. Validation of a system for monitoring feeding behavior in beef cattle. J. Anim. Sci. 89:2904–2910. doi: 10.2527/jas.2010-3489
- Mertens K., Vaesen I., Löffel J., Ostyn B., Kemps B., Kamers B., Bamelis F., Zoons J., Darius P., and Decuypere E.. 2008. Data-based design of an intelligent control chart for the daily monitoring of the average egg weight. Comput. Electron. Agric. 61:222–232. doi: 10.1016/j.compag.2007.11.010
- Montgomery D. C. 2009. Statistical quality control. Wiley, New York, NY.
- Moya D., Silasi R., McAllister T. A., Genswein B., Crowe T., Marti S., and Schwartzkopf-Genswein K. S.. 2015. Use of pattern recognition techniques for early detection of morbidity in receiving feedlot cattle. J. Anim. Sci. 93:3623–3638. doi: 10.2527/jas.2015-8907
- Noffsinger T., and Locatelli L.. 2004. Low-stress cattle handling: An overlooked dimension of management. Proc. Meet. Academy of Veterinary Consultants. 32:65–78.
- Pinchak W. E., Tolleson D. R., McCloy M., Hunt L. J., Gill R. J., Ansley R. J., and Bevers S. J.. 2004. Morbidity effects on productivity and profitability of stocker cattle grazing in the southern plains. J. Anim. Sci. 82:2773–2779. doi: 10.2527/2004.8292773x
- Quesenberry C. P. 1997. SPC methods for quality improvement. John Wiley & Sons, New York, NY.
- Quimby W. F., Sowell B. F., Bowman J. G. P., Branine M. E., Hubbert M. E., and Sherwood H. W.. 2001. Application of feeding behaviour to predict morbidity of newly received calves in a commercial feedlot. Can. J. Anim. Sci. 81:315–320. doi: 10.4141/A00-098
- Schaefer A. L., Cook N. J., Church J. S., Basarab J., Perry B., Miller C., and Tong A. K.. 2007. The use of infrared thermography as an early indicator of bovine respiratory disease complex in calves. Res. Vet. Sci. 83:376–384. doi: 10.1016/j.rvsc.2007.01.008
- Schneider M. J., Tait R. G. Jr, Busby W. D., and Reecy J. M.. 2009. An evaluation of bovine respiratory disease complex in feedlot cattle: Impact on performance and carcass traits using treatment records and lung lesion scores. J. Anim. Sci. 87:1821–1827. doi: 10.2527/jas.2008-1283
- Shewhart W. A. 1931. Economic control of quality of manufactured product. D. Van Nostrand Company, New York, NY.
- Shlens J. 2005. A tutorial on principal component analysis. Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA.
- Smith R. A. 1998. Impact of disease on feedlot performance: A review. J. Anim. Sci. 76:272–274. doi: 10.2527/1998.761272x
- Smith D. R. 2015. Investigating outbreaks of disease or impaired productivity in feedlot cattle. Vet. Clin. North Am. Food Anim. Pract. 31:391–406. doi: 10.1016/j.cvfa.2015.05.003
- Sowell B. F., Branine M. E., Bowman J. G., Hubbert M. E., Sherwood H. E., and Quimby W.. 1999. Feeding and watering behavior of healthy and morbid steers in a commercial feedlot. J. Anim. Sci. 77:1105–1112. doi: 10.2527/1999.7751105x
- Theurer M. E., Anderson D. E., White B. J., Miesner M. D., Mosier D. A., Coetzee J. F., Lakritz J., and Amrine D. E.. 2013. Effect of pneumonia on behavior and physiologic responses of calves during high ambient environmental temperatures. J. Anim. Sci. 91:3917–3929. doi: 10.2527/jas.2012-5823
- Timsit E., Dendukuri N., Schiller I., and Buczinski S.. 2016. Diagnostic accuracy of clinical illness for bovine respiratory disease (BRD) diagnosis in beef cattle placed in feedlots: A systematic literature review and hierarchical Bayesian latent-class meta-analysis. Prev. Vet. Med. 135:67–73. doi: 10.1016/j.prevetmed.2016.11.006
- White B. J., Goehl D. R., and Amrine D. E.. 2015. Comparison of a remote early disease identification (REDI) system to visual observations to identify cattle with bovine respiratory diseases. Int. J. Appl. Res. Vet. Med. 13:23–30.
- White B. J., and Renter D. G.. 2009. Bayesian estimation of the performance of using clinical observations and harvest lung lesions for diagnosing bovine respiratory disease in post-weaned beef calves. J. Vet. Diagn. Invest. 21:446–453. doi: 10.1177/104063870902100405
- Wolfger B., Schwartzkopf-Genswein K. S., Barkema H. W., Pajor E. A., Levy M., and Orsel K.. 2015. Feeding behavior as an early predictor of bovine respiratory disease in North American feedlot systems. J. Anim. Sci. 93:377–385. doi: 10.2527/jas.2013-8030




