Poultry Science. 2019 Dec 30;99(1):637–646. doi: 10.3382/ps/pez564

Broiler stunned state detection based on an improved fast region-based convolutional neural network algorithm

Chang-wen Ye *, Khurram Yousaf *, Chao Qi *, Chao Liu *, Kun-jie Chen *,1
PMCID: PMC7587773; PMID: 32416852

Abstract

An improved fast region-based convolutional neural network (RCNN) algorithm is proposed to improve the accuracy and efficiency of recognizing broilers in a stunned state. The algorithm recognizes 3 stunned state conditions: insufficiently stunned, moderately stunned, and excessively stunned. Image samples of stunned broilers were collected from a slaughter line using an image acquisition platform. According to the format of the PASCAL VOC (pattern analysis, statistical modeling, and computational learning visual object classes) dataset, a dataset for each broiler stunned state condition was obtained using an annotation tool to mark the chicken head and wing area in the original image. A rotation and flip data augmentation method was used to enhance the effectiveness of the datasets. Based on the principle of a residual network, a multi-layer residual module (MRM) was constructed to facilitate more detailed feature extraction. A model was then developed (named here Faster-RCNN+MRMnet) and used to detect broiler stunned state conditions. When applied to an augmented dataset containing 27,828 images of chickens in a stunned state, the identification accuracy of the model was 98.06%. This was significantly higher than both the established back propagation neural network model (90.11%) and another Faster-RCNN model (96.86%). The proposed algorithm can complete the inspection of the stunned state of more than 40,000 broilers per hour. The approach can be used for online inspection applications to increase efficiency, reduce labor and cost, and yield significant benefits for poultry processing plants.

Key words: broiler, convolutional neural network, deep learning, electrical stunning, stunned state detection

INTRODUCTION

Electric stunning (Siqueira et al., 2017; Sirri et al., 2017) is an important aspect of poultry slaughtering and processing (Berg and Raj, 2015). Moderate stunning can render broilers unconscious for 40 to 52 s. This condition provides the best bloodletting rate (Huang et al., 2014), easier feather removal, minimal carcass damage (Lines et al., 2011), and more tender meat (Xu et al., 2011). However, when broilers are insufficiently stunned (i.e., the electrical current reaching the brain is too low), they remain sensitive to pain and stress (Devos et al., 2018), and clonic-tonic convulsions (the death struggle) occur, leading to carcass damage such as broken wings and clavicles (Bourassa et al., 2017). Excessive stunning can also result in quality defects, such as clavicle rupture, bleeding from arteries and capillaries, and a large number of needle-like blood spots near the top of the chest (Ciobanu et al., 2013). Therefore, moderate stunning plays an important role in ensuring the quality of chicken meat.

The amount of current, the electrical frequency, the electrical waveform, and the stunning time are the most common parameters that can be optimized to improve stunning effectiveness (Girasole et al., 2016). Exactly the same electric stunning conditions can have widely divergent effects on broilers, depending on their breed, age, and body weight (Prinz et al., 2009, 2010). To obtain an optimal stunning effect, the frequency and voltage of the electric stunning machine have to be adjusted in real time, according to the condition of each broiler and its stunned state after inspection. However, owing to these individual differences, moderate stunning of broilers is not currently verified in many small and medium broiler processing plants. Most broiler slaughtering companies do not apply objective criteria or online detection methods to ensure that broilers are moderately stunned. Workers preset the stun voltage and frequency according to their experience, and these settings then remain fixed and are not adjusted according to the breed, weight, age, or stunned condition of the broilers being slaughtered. This results in a significant number of insufficiently or excessively stunned broilers being slaughtered in broiler slaughterhouses (Sabow et al., 2017).

Previous studies conducted by Sams and McKee (2010) found that, after moderate stunning, chickens hold their wings in close contact with their bodies and their necks are arched and stiff. When they are improperly stunned, their appearance is significantly different. These characteristics make it possible to clearly discriminate whether a broiler is in an appropriate stunned condition.

To overcome the existing problems and variable quality in the chicken industry, the frequency and voltage of electric stunning need to be properly adjusted by correctly detecting and identifying the stunned state of each individual broiler. Currently, the stunned state of shocked broilers is usually assessed by manual visual inspection. This is not only time-consuming, inconvenient, and subjective, but it also does not allow for rapid adjustment of voltage and frequency. Over the past 2 decades, machine vision and image processing technology has developed rapidly and has begun to be used more and more frequently in agriculture for the detection and identification of a range of phenomena (Mahlein, 2016; Bai et al., 2017). A technology has been developed that uses modified pressure and imaging to detect microcracks in eggs; research has shown the system to have an accuracy of 99.6% in distinguishing cracked from intact eggs (Jones et al., 2010). In relation to broilers, a line-scan machine vision system and multispectral inspection algorithm were developed and evaluated for differentiating wholesome and systemically diseased chickens on a high-speed processing line, correctly identifying 97.1% of systemically diseased chickens (Yang et al., 2010). Ye et al. (2018) recently proposed a method to identify the stunned condition of broilers using machine vision and a back propagation neural network (BP-NN). This provides good recognition accuracy (90.11%). However, the method is inefficient and the recognition accuracy is not ideal, so further improvement is required.

Image recognition accuracy largely depends on feature extraction and selection (Amara et al., 2017). To accurately determine and identify the stunned state of a broiler, it is necessary to accurately extract the features of its stunned state and then select the features that are meaningful. In recent years, deep learning has produced outstanding results in the field of image recognition. Among a range of possible approaches, convolutional neural networks (CNN) are particularly effective at automatically extracting the appropriate features from a training dataset without the need for manual feature extraction (McCool et al., 2017; Rahnemoonfar and Sheppard, 2017). Although the training period is long, inference takes less time than for other methods based on machine learning (Chen et al., 2014), and the approach is widely recognized as one of the best for image recognition (Dyrmann et al., 2016).

When using machine vision technology to identify the stunned state of broilers, the recognition target is a unitary broiler and the features to be identified remain largely the same. In this paper, we propose using a multi-layer residual module (MRM) to obtain detailed feature extraction. Based on this, we have developed an improved and optimized fast region-based convolutional neural network (Faster-RCNN+MRMnet) model that can precisely identify the stunned state of broilers. Development of the model has involved the creation of training image datasets containing 3 types of stunned condition: insufficiently stunned, moderately stunned, and excessively stunned.

MATERIALS AND METHODS

Image Datasets

Electric stun testing and image collection were carried out at the Dongtai Poultry Slaughter Factory of Jiangsu Yueda Agricultural Group Poultry Technology Co., Ltd. The birds used in the test were 42-day-old white-feather broilers produced by the same company. The electric stunning machine was an SQ05 series variable-frequency stunner manufactured by Jiangsu Wujiang Aneng Electronic Technology Co., Ltd, Suzhou, China. This machine uses water bath-based electric stunning, and it was set to an output frequency of 700 Hz. The shock duration was 10 s, and 3 voltages (5, 15, and 25 V) were tested. During the test, the broilers were hung in the slaughter line and stunned for 10 s at the preset frequency at 1 of the 3 selected voltages. Images of the stunned broilers were captured using a CMOS camera (Microvision EM130C, Shanxi, China). A total of 2,319 images, at 240 × 320 pixels, covering the different stunned states were collected. Then, using an annotation labeling tool, the broiler heads and wings in the original images were marked in accordance with the format used in the PASCAL VOC (pattern analysis, statistical modeling, and computational learning visual object classes) database (PASCAL VOC Project, 2012). This yielded a dataset covering the 3 stunned states; a sketch of reading one such annotation is shown below.
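The VOC format stores each image's bounding boxes and class labels in an XML file. As a minimal illustration, the following Python sketch reads one such annotation; the class-label strings and file layout are assumptions, since the paper does not specify them.

```python
# Minimal sketch of reading one PASCAL VOC annotation for this dataset.
# Label strings such as "moderately_stunned" are assumed, not taken from the paper.
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Return (filename, [(label, xmin, ymin, xmax, ymax), ...])."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        bb = obj.find("bndbox")
        boxes.append((obj.findtext("name"),          # e.g., "moderately_stunned"
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return root.findtext("filename"), boxes
```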

The stunned states of the shocked broilers were divided into 3 categories: insufficiently stunned, moderately stunned, and excessively stunned (Sams and McKee, 2010; Ye et al., 2018). Figure 1 shows these 3 stunned conditions. Insufficiently stunned broilers (Figure 1a–d) are still vaguely conscious. After applying the current, the broilers flutter or raise their heads. The moderately stunned broilers (Figure 1e and f) temporarily lose consciousness and appear to be still, with their wings tucked in and their necks arched and stiff. Excessively stunned broilers (Figure 1g and h) have completely lost consciousness or are dead, and their nerves are no longer in control of their bodies. Thus, their heads hang loosely and their wings are open.

Figure 1. Sample images of broilers in the 3 stunned conditions.

On the basis of the above observations, the 2,319 image samples were divided into the 3 categories of insufficiently stunned, moderately stunned, and excessively stunned, with 1,075, 626, and 618 images in each category, respectively. The dataset was then divided into training and test sets at a ratio of 8:2, with the images in each set being randomly selected. Because of the small overall size of the dataset, overfitting could occur during training. Data augmentation can expand a dataset and reduce the likelihood of this happening (Sladojevic et al., 2016), thereby improving the learning process and performance (Grinblat et al., 2016). Data augmentation has to be done before any training, and common techniques include random cropping, scaling, rotation, transposition, flipping, and PCA (Dyrmann et al., 2016; Chen et al., 2017). In this study, the method proposed by Ma et al. (2018) was used to enhance the dataset by rotating the original images by 90°, 180°, and 270°, and by flipping them horizontally and vertically, as sketched below.
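A minimal sketch of this augmentation step, using Pillow, follows. The 5 transforms are the ones named in the text; since Table 1 implies a 12-fold expansion overall, the rotations were presumably also applied to the flipped copies, which is an assumption here. The matching transformation of the VOC bounding boxes is omitted for brevity.

```python
# Rotation-and-flip augmentation as described above (a sketch; file handling
# and the exact composition scheme are assumptions).
from PIL import Image

def augment(image):
    """Return the 5 rotated/flipped variants of a PIL image named in the text."""
    transforms = [Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_180,
                  Image.Transpose.ROTATE_270, Image.Transpose.FLIP_LEFT_RIGHT,
                  Image.Transpose.FLIP_TOP_BOTTOM]
    return [image.transpose(t) for t in transforms]
```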

Faster-RCNN+MRMnet Model Development

Multi-Layer Residual Module

For broiler stunned state recognition, the principal objects to be identified are broilers, rather than different attributes and other categories. This results in the extracted features for each stunned state having many identical principal parts. Special attention, therefore, has to be paid to subtle feature differences in the images for each stunned condition. To capture more comprehensive and finer-grained image features, an MRM was constructed, based on the principle of residual networks (Liu et al., 2019). Its structure is shown in Figure 2.

Figure 2. Structure of the multi-layer residual module.

The MRM consisted of 3 convolutional layers (CONV1, CONV2, and CONV3), 3 ReLU activation functions, and a dimension-matching shortcut connection. CONV1, CONV2, and CONV3 all used 3 × 3 filters with a stride of 1, with CONV2 containing X filters, where X is the channel number of the module. After convolution through the 3 convolutional layers, the low-level feature input X and the high-level detailed features X3 were linked by means of the dimension-matching shortcut connection, and the output was passed to the network structure below to continue with finer feature extraction. X and X3 were added together to give the output H(X) = X3 + WiX, where Wi is the identity mapping when X and X3 are dimension-matched, so that the addition can be performed directly. If the dimensions do not match, an equivalent mapping is used to increase the dimensions of X directly through zero padding.
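A minimal PyTorch sketch of the MRM as described above follows. Giving all 3 convolutions X output channels is an assumption, since the per-layer filter counts are not fully specified; the shortcut and zero padding follow the text.

```python
# A PyTorch sketch of the MRM: three 3x3 convolutions (stride 1), 3 ReLUs, and a
# shortcut that adds the input X to the third convolution's output X3, zero-
# padding the channel dimension when X and X3 do not match.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MRM(nn.Module):
    def __init__(self, in_channels, channels):  # channels = X in the paper
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, channels, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = F.relu(self.conv2(out))
        out = self.conv3(out)              # X3 in the paper's notation
        if x.shape[1] != out.shape[1]:     # dimension mismatch on channels:
            x = F.pad(x, (0, 0, 0, 0, 0, out.shape[1] - x.shape[1]))
        return F.relu(out + x)             # H(X) = X3 + WiX
```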

MRMnet

To extract the basic features of the stunned state of the broilers, MRMnet, a dedicated feature extraction network, was constructed. The MRMnet architecture is shown in Figure 3. MRMnet consists of 2 convolutional layers, 7 MRMs, and 4 max-pooling layers. Each max-pooling layer uses a 2 × 2 filter with a stride of 2, which halves the width and height of the feature map (Barré et al., 2017). MRMnet is composed of 5 modules. The first module consists of the 2 convolutional layers and 1 max-pooling layer; the convolutional layers have 64 3 × 3 filters, with a stride of 1. The second module consists of 1 MRM and 1 max-pooling layer, where the number of channels in the MRM, X, is 64. The third module consists of 2 identical MRMs and a max-pooling layer, where X is 128. The fourth module consists of 2 identical MRMs and a max-pooling layer, where X is 256. The fifth module consists of 2 identical MRMs, where X is 256. Figure 4 shows, from left to right, an arbitrarily sized color image being passed through the network after being resized to a fixed size of M × N. After the 2 convolutional layers and the first max-pooling layer, the low-level feature information of the image is acquired. Then, through the 7 MRMs and the remaining 3 max-pooling layers, the convolution feature maps of the stunned state of the broiler are obtained. For further detail, see the MRMnet flow diagram in Figure 4; a code sketch of this assembly follows the figure captions below.

Figure 3. MRMnet architecture.

Figure 4. MRMnet flow diagram.
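The following sketch assembles MRMnet from the 5 modules described above, reusing the MRM class from the previous listing; with a 224 × 224 input, the 4 pooling layers reduce the feature map to 14 × 14, matching the size used by the detection stage. The channel counts between modules are inferred from the text.

```python
# MRMnet assembled per the five-module description (a sketch under the
# assumptions stated above).
import torch.nn as nn

def make_mrmnet():
    pool = lambda: nn.MaxPool2d(kernel_size=2, stride=2)
    return nn.Sequential(
        # Module 1: two 3x3 conv layers (64 filters each) + max pooling
        nn.Conv2d(3, 64, 3, stride=1, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(),
        pool(),
        # Module 2: one MRM (X = 64) + max pooling
        MRM(64, 64), pool(),
        # Module 3: two MRMs (X = 128) + max pooling
        MRM(64, 128), MRM(128, 128), pool(),
        # Module 4: two MRMs (X = 256) + max pooling
        MRM(128, 256), MRM(256, 256), pool(),
        # Module 5: two MRMs (X = 256), no pooling
        MRM(256, 256), MRM(256, 256),
    )
```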

Faster-RCNN+MRMnet

The stunned state classification of broilers requires both high accuracy and temporal efficiency. On the basis of the Faster-RCNN network architecture proposed by Ren et al. (2017), and with MRMnet used as the basic feature extraction network, we designed a broiler stunned state classification model entitled Faster-RCNN+MRMnet. The network structure of the model is shown in Figure 5. An input image of any size is first resized to 224 × 224 pixels, and MRMnet is then used to extract 14 × 14 convolution feature maps. After this, a region proposal network (Sun et al., 2018; Yang et al., 2018) is used to extract a set of object proposals according to the regions of the objects. Each object proposal is mapped onto the convolution feature map to obtain a corresponding feature map, which is then passed through a region of interest pooling layer (Quan et al., 2019) to produce a fixed-length feature vector. Finally, the feature vector is fed into a sequence of fully connected layers leading to 2 sibling output layers. The first generates the respective Softmax probability estimates for the 3 broiler stunned state categories and the background class. The second outputs 4 coordinate parameters indicating the position of the bounding box for each of the 4 categories. The details of the network model are shown in the flow diagram in Figure 6; a code sketch of this detection head follows the figure captions below.

Figure 5. The Faster-RCNN+MRMnet architecture.

Figure 6. The developed MRMnet flow diagram.
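As a rough illustration of the head described above, the sketch below pools each proposal to a fixed-length vector and feeds it to the 2 sibling output layers. The fully connected sizes (1,024) and the pooled size (7 × 7) are assumptions, as the paper does not report them; the RoI pooling uses torchvision's standard operator.

```python
# Detection-head sketch: RoI pooling to a fixed-length vector, then a softmax
# branch (3 stunned states + background) and a box-regression branch.
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class DetectionHead(nn.Module):
    def __init__(self, channels=256, pool_size=7, num_classes=4):  # 3 states + background
        super().__init__()
        self.pool_size = pool_size
        self.fc = nn.Sequential(
            nn.Linear(channels * pool_size * pool_size, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.cls_score = nn.Linear(1024, num_classes)      # Softmax scores branch
        self.bbox_pred = nn.Linear(1024, num_classes * 4)  # 4 box coords per class

    def forward(self, feature_map, proposals):
        # proposals: list of (N_i, 4) boxes in image coordinates; spatial_scale
        # maps them onto the 14x14 feature map (224 / 14 gives a scale of 1/16).
        rois = roi_pool(feature_map, proposals, self.pool_size, spatial_scale=1 / 16)
        x = self.fc(rois.flatten(start_dim=1))
        return self.cls_score(x), self.bbox_pred(x)
```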

For this study, Faster-RCNN+MRMnet was implemented under Ubuntu 16.04 with Python 2.7.12 and the CUDA 8.0 parallel computing framework. Training was conducted with the Caffe framework on a GTX 1070Ti AERO GPU. Because the broiler stunned state dataset is comparatively small, transfer learning (Pan and Yang, 2010) was used to adapt the model quickly to the new task: a model pre-trained on the large ImageNet dataset (1,000 classes, 10 million images) was used to provide the shared underlying weight parameters, followed by modification and fine-tuning of the top-level network structure of the model (Sa et al., 2016).

Faster-RCNN+MRMnet was trained using approximate joint training (Ren et al., 2017). A dropout layer, with a dropout factor of 0.5, was used to reduce the overfitting of the deep neural network. A step learning rate policy was used when optimizing the network weights: the initial learning rate was 0.001, and it was multiplied by a gamma of 0.1 after every 10,000 iterations. The display interval was 20 iterations. The momentum was 0.9 and remained unchanged during training. The weight decay term was 0.0005 and the total number of iterations was 120,000.
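The training schedule above corresponds, in PyTorch terms, to the configuration sketched below (the original work used Caffe, so this restatement is an assumption).

```python
import torch

model = make_mrmnet()  # backbone sketch from above; the full detector is omitted
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.001,             # initial learning rate
    momentum=0.9,         # fixed momentum
    weight_decay=0.0005,  # weight-decay term
)
# Multiply the learning rate by gamma = 0.1 every 10,000 iterations;
# step the scheduler once per iteration for 120,000 iterations in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10000, gamma=0.1)
```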

Faster-RCNN+MRMnet Evaluation

Based on a confusion matrix (Powers, 2011), the performance of the classifier was evaluated according to its sensitivity, precision, F1 score, and accuracy. The calculations for these 4 indicators are as follows:

Sensitivity = (number of correct predictions / number of true cases) × 100% (1)
Precision = (number of correct predictions / number of predictions) × 100% (2)
F1 score = (2 × sensitivity × precision / (sensitivity + precision)) × 100% (3)
Accuracy = (total number of correct predictions / all samples) × 100% (4)
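These 4 indicators can be computed directly from a confusion matrix whose rows hold the true classes and whose columns hold the predictions, as in the sketch below; the Table 2 values are used as a check.

```python
# Computing sensitivity, precision, F1 score, and accuracy from a confusion
# matrix (rows = true class, columns = predicted class).
import numpy as np

def metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    sensitivity = tp / cm.sum(axis=1)   # correct predictions / true cases
    precision = tp / cm.sum(axis=0)     # correct predictions / predictions
    f1 = 2 * sensitivity * precision / (sensitivity + precision)
    accuracy = tp.sum() / cm.sum()      # all correct predictions / all samples
    return sensitivity, precision, f1, accuracy

# Example with the unbalanced-data confusion matrix from Table 2:
cm = [[2539, 18, 23], [17, 1473, 10], [25, 15, 1448]]
print(metrics(cm))  # sensitivities ~ (0.9841, 0.9820, 0.9731); accuracy ~ 0.9806
```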

RESULTS AND DISCUSSION

Broiler Stunned State Recognition

An augmented dataset containing 27,828 stunned state images was constructed using the data augmentation method described above. The composition of the datasets is listed in Table 1.

Table 1.

Details of the datasets used to construct the model.

Stunned state Original dataset Augmented dataset Training Validation Test
Insufficiently stunned 1,075 12,900 8,256 2,064 2,580
Moderately stunned 626 7,512 4,812 1,200 1,500
Excessively stunned 618 7,416 4,740 1,188 1,488
Total 2,319 27,828 17,808 4,452 5,568

MRMnet was used to extract features from the input images and to visualize the feature maps. The results are shown in Figure 7. The feature maps extracted from each convolution layer show that the low-level convolution layers captured the shape and color features of the image, while more abstract features were obtained from the high-level convolution layers. MRMnet thus automatically extracts features that are richer than artificially designed features, which are difficult to craft by hand.

Figure 7. Partial feature maps extracted from the convolution layers.
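One way such feature maps can be produced is with forward hooks on the convolutional layers, as in the sketch below; the choice of layers and the display layout are assumptions.

```python
# Capture and plot each Conv2d layer's output for one input image, using the
# MRMnet sketch above (a visualization sketch, not the paper's exact tooling).
import torch
import matplotlib.pyplot as plt

def show_feature_maps(model, image, n_channels=8):
    """Plot the first few channels of every convolutional layer's output."""
    captured = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: captured.append(out))
             for m in model.modules() if isinstance(m, torch.nn.Conv2d)]
    with torch.no_grad():
        model(image.unsqueeze(0))          # image: a (3, 224, 224) tensor
    for h in hooks:
        h.remove()
    for i, fmap in enumerate(captured):
        fig, axes = plt.subplots(1, n_channels, figsize=(2 * n_channels, 2))
        for c, ax in enumerate(axes):
            ax.imshow(fmap[0, c].cpu(), cmap="gray")
            ax.axis("off")
        fig.suptitle(f"conv layer {i}")
    plt.show()
```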

The developed model was tested using the test dataset. The results are shown in Figure 8 and Table 2. It can be seen from the confusion matrix that the accuracy of Faster-RCNN+MRMnet reached 98.06%, indicating that the predictions closely matched the real situation. The average detection time for a single image was 0.0822 s, so 43,700 broilers can be detected per hour. With regard to the detection performance for each category, Faster-RCNN+MRMnet had the highest detection sensitivity for the insufficiently stunned category (F1 = 98.39%), with 98.41% of the 2,580 real samples being correctly predicted. The prediction sensitivities for the moderately stunned and excessively stunned categories were 98.20 and 97.31%, respectively. Faster-RCNN+MRMnet was thus especially effective at detecting the insufficiently stunned category. This may be because the amount of data in this category was greater than in the other categories, making Faster-RCNN+MRMnet more inclined to detect it.

Figure 8. Prediction results for a partial test set.

Table 2.

Faster-RCNN+MRMnet confusion matrix statistics with unbalanced data.

Stun states Insufficiently stunned Moderately stunned Excessively stunned Sensitivity (%) Precision (%) F1 score (%) Accuracy (%) Time on GPU (s)
Insufficiently stunned 2,539 18 23 98.41 98.37 98.39
Moderately stunned 17 1,473 10 98.20 97.81 98.00 98.06 0.0822
Excessively stunned 25 15 1,448 97.31 97.78 97.54

A balanced dataset was used to test whether the amount of data can affect the detection performance of Faster-RCNN+MRMnet. A total of 4,000 stunned state images from each category in the augmented training dataset were randomly selected, together with 1,000 stunned state images from each category in the augmented test dataset. In total, 15,000 stunned state images were collected to build the balanced dataset, which contained the same number of image samples for each category. A total of 9,600 samples were used for model training, 2,400 samples were used to verify the model, and 3,000 samples were used for testing. The test results are shown in Table 3.

Table 3.

Faster-RCNN+MRMnet confusion matrix statistics with balanced data.

Stun states Insufficiently stunned Moderately stunned Excessively stunned Sensitivity (%) Precision (%) F1 score (%) Accuracy (%)
Insufficiently stunned 967 12 21 96.70 96.60 96.65
Moderately stunned 16 979 5 97.90 97.80 97.85 97.27
Excessively stunned 18 10 972 97.20 97.34 97.27

When compared with the unbalanced data, the detection accuracy of Faster-RCNN+MRMnet declined slightly. This suggests that the amount of data in the dataset does have an impact on the performance of Faster-RCNN+MRMnet, which is consistent with prior observations that CNNs get better results when given more training data (Kamilaris and Prenafeta-Boldú, 2018). Unlike with the unbalanced data, Faster-RCNN+MRMnet had the best detection performance for the moderately stunned category (F1 = 97.85%), with approximately 97.90% of the 1,000 real samples being correctly predicted. Faster-RCNN+MRMnet had the lowest sensitivity for the 1,000 insufficiently stunned samples, with only 96.70% being correctly predicted. This may be because the morphological characteristics of insufficiently stunned broilers in the images are more complicated than those of the other 2 types. This implies that the complexity of the morphological characteristics of each stunned state may also affect the accuracy of the Faster-RCNN+MRMnet classification.

Comparison Between Faster-RCNN+MRMnet, Faster-RCNN, and BP-NN

To compare the detection performance of Faster-RCNN+MRMnet with that of Faster-RCNN under the same pre-training parameters presented above, Faster-RCNN was tested using both the unbalanced dataset and the balanced dataset. The confusion matrix for the Faster-RCNN test results is shown in Table 4.

Table 4.

Faster-RCNN confusion matrix statistics.

Stun states Insufficiently stunned Moderately stunned Excessively stunned Sensitivity (%) Precision (%) F1 score (%) Accuracy (%) Time on GPU (s)
Faster-RCNN confusion matrix statistics with unbalanced data
 Insufficiently stunned 2,509 32 39 97.25 97.55 97.40
 Moderately stunned 28 1,453 19 96.87 96.42 96.64 96.86 0.0954
 Excessively stunned 35 22 1,431 96.17 96.10 96.13
Faster-RCNN confusion matrix statistics with balanced data
 Insufficiently stunned 955 19 26 95.50 95.21 95.35
 Moderately stunned 23 967 10 96.70 96.89 96.79 96.17 0.0954
 Excessively stunned 25 12 963 96.30 96.40 96.35

The results show that the accuracy of Faster-RCNN across the 2 datasets was 96.86 and 96.17%, respectively, which is lower than the accuracy of Faster-RCNN+MRMnet. The average detection time for Faster-RCNN was 0.0954 s, about 16% longer than that of Faster-RCNN+MRMnet. Faster-RCNN+MRMnet is therefore able to identify the stunned state of broilers both more accurately and more quickly. Table 4 also shows that Faster-RCNN trained with unbalanced data achieved its best detection performance for the insufficiently stunned category (F1 = 97.40%), but, when trained with balanced data, its best detection performance was for the moderately stunned category (F1 = 96.79%). At the same time, the accuracy for the balanced dataset was lower than for the unbalanced dataset, which is consistent with the results for Faster-RCNN+MRMnet. This also shows that both the size of the dataset and the proportion of each category within it affect CNN performance.

Compared with the broiler stunned state recognition accuracy of 90.11% obtained using BP-NN, as documented by Ye et al. (2018), both Faster-RCNN+MRMnet and Faster-RCNN significantly improve recognition accuracy. This suggests that fast region-based convolutional neural networks will provide better prediction accuracy for identifying the stunned state of broilers than traditional classifiers.

CONCLUSION

By using the improved fast region-based convolutional neural network algorithm proposed in this paper to detect the stunned state of broilers, better results can be achieved than with previously proposed methods. The detection accuracy reached 98.06% (for the unbalanced dataset) and 97.27% (for the balanced dataset), and 43,700 broilers can be inspected every hour. The amount of data in the dataset and the complexity of the morphological characteristics of the detected objects may affect the classification accuracy of Faster-RCNN+MRMnet. When compared with the performance of Faster-RCNN, the introduction of the MRM into Faster-RCNN+MRMnet further enhanced performance: the average detection time for Faster-RCNN was 0.0954 s, about 16% longer than that of Faster-RCNN+MRMnet. We have also found that, whether using Faster-RCNN+MRMnet or Faster-RCNN, the detection results for the stunned state of broilers are significantly better than those produced by traditional classifiers such as BP-NN. In future work, we intend to use the Faster-RCNN+MRMnet method developed in this research to design a smart electric stun control system that integrates stunned state recognition and automatic stun optimization. Our goal is to promote this approach for the electric stunning of broilers in the poultry slaughter industry, thereby replacing the currently flawed processes of manual detection and adjustment. This should help to alleviate the problem of insufficiently and excessively stunned broilers and the concomitant carcass damage caused by improper stunning.

ACKNOWLEDGEMENTS

The authors would like to express their gratitude to EditSprings (https://www.editsprings.com/) for the expert linguistic services provided. Financial support for this research was received from the China National Science and Technology Support Program, 2015BAD19806 and the China National Broiler Industry Technology System, CARS-42–5. Our thanks to Professor Chen Kunjie of Nanjing Agricultural University for his technical support.

REFERENCES

1. Amara J., Bouaziz B., Algergawy A. A deep learning-based approach for banana leaf diseases classification. Datenbanksysteme für Business, Technologie und Web (BTW) Workshop, Stuttgart, Germany. 2017:79–88.
2. Bai X., Li X., Fu Z., Lv X., Zhang L. A fuzzy clustering segmentation method based on neighborhood grayscale information for defining cucumber leaf spot disease images. Comput. Electron. Agric. 2017;136:157–165.
3. Barré P., Stöver B.C., Müller K.F., Steinhage V. LeafNet: a computer vision system for automatic plant species identification. Ecol. Inform. 2017;40:50–56.
4. Berg C., Raj M. A review of different stunning methods for poultry-animal welfare aspects (stunning methods for poultry). Animals. 2015;5:1207–1219. doi: 10.3390/ani5040407.
5. Bourassa D.V., Bowker B.C., Zhuang H., Wilson K.M., Harris C.E., Buhr R.J. Impact of alternative electrical stunning parameters on the ability of broilers to recover consciousness and meat quality. Poult. Sci. 2017;96:3495–3501. doi: 10.3382/ps/pex120.
6. Chen Y.S., Lin Z.H., Zhao X., Wang G., Gu Y.F. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014;7:2094–2107.
7. Chen S.W., Shivakumar S.S., Dcunha S., Das J., Okon E., Qu C., Taylor C.J., Kumar V. Counting apples and oranges with deep learning: a data-driven approach. IEEE Robot. Autom. Lett. 2017;2:781–788.
8. Ciobanu M.M., Corneliu B.P., Roxana L., Narcisa P.A., Elena S., Teodor B., Edi P. Influence of electrical stunning voltage on bleed out, sensory parameters and color in chicken meat quality. Curr. Opin. Biotechnol. 2013;24:S89.
9. Devos G., Moons C.P.H., Houf K. Diversity, not uniformity: slaughter and electrical waterbath stunning procedures in Belgian slaughterhouses. Poult. Sci. 2018;97:3369–3379. doi: 10.3382/ps/pey181.
10. Dyrmann M., Karstoft H., Midtiby H.S. Plant species classification using deep convolutional neural network. Biosyst. Eng. 2016;151:72–80.
11. Girasole M., Marrone R., Anastasio A., Chianese A., Mercogliano R., Cortesi M.L. Effect of electrical water bath stunning on physical reflexes of broilers: evaluation of stunning efficacy under field conditions. Poult. Sci. 2016;95:1205–1210. doi: 10.3382/ps/pew017.
12. Grinblat G.L., Uzal L.C., Larese M.G., Granitto P.M. Deep learning for plant identification using vein morphological patterns. Comput. Electron. Agric. 2016;127:418–424.
13. Huang J.C., Huang M., Yang J., Wang P., Xu X.L., Zhou G.H. The effects of electrical stunning methods on broiler meat quality: effect on stress, glycolysis, water distribution, and myofibrillar ultrastructures. Poult. Sci. 2014;93:2087–2095. doi: 10.3382/ps.2013-03248.
14. Jones D.R., Lawrence K.C., Yoon S.C., Heitschmidt G.W. Modified pressure imaging for egg crack detection and resulting egg quality. Poult. Sci. 2010;89:761–765. doi: 10.3382/ps.2009-00450.
15. Kamilaris A., Prenafeta-Boldú F.X. Deep learning in agriculture: a survey. Comput. Electron. Agric. 2018;147:70–90.
16. Lines J.A., Wotton S.B., Barker R., Spence J., Wilkins L., Knowles T.G. Broiler carcass quality using head-only electrical stunning in a waterbath. Br. Poult. Sci. 2011;52:439–445. doi: 10.1080/00071668.2011.587181.
17. Liu S.P., Tian G.H., Xu Y. A novel scene classification model combining ResNet based transfer learning and data augmentation with a filter. Neurocomputing. 2019;338:191–206.
18. Ma J.C., Du K.M., Zheng F.X., Zhang L.X., Gong Z.H., Sun Z.F. A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network. Comput. Electron. Agric. 2018;154:18–24.
19. Mahlein A.K. Plant disease detection by imaging sensors - parallels and specific demands for precision agriculture and plant phenotyping. Plant Dis. 2016;100:241–251. doi: 10.1094/PDIS-03-15-0340-FE.
20. McCool C., Perez T., Upcroft B. Mixtures of lightweight deep convolutional neural networks: applied to agricultural robotics. IEEE Robot. Autom. Lett. 2017;2:1344–1351.
21. Pan S.J., Yang Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010;22:1345–1359.
22. PASCAL VOC Project. The PASCAL Visual Object Classes. 2012. http://host.robots.ox.ac.uk/pascal/VOC/
23. Powers D.M.W. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. J. Mach. Learn. Technol. 2011;2:37–63.
24. Prinz S., van Oijen G., Bessei W., Ehinger F., Coenen A. The electroencephalogram of broilers before and after DC and AC electrical stunning. Arch. Geflugelkd. 2009;73:67–70.
25. Prinz S., van Oijen G., Ehinger F., Coenen A., Bessei W. Electroencephalograms and physical reflexes of broilers after electrical waterbath stunning using an alternating current. Poult. Sci. 2010;89:1265–1274. doi: 10.3382/ps.2009-00135.
26. Quan L.Z., Feng H.Q., Li Y.J., Wang Q., Zhang C.B., Liu J.G., Yuan Z.Y. Maize seedling detection under different growth stages and complex field environments based on an improved Faster R-CNN. Biosyst. Eng. 2019;184:1–23.
27. Rahnemoonfar M., Sheppard C. Deep count: fruit counting based on deep simulated learning. Sensors. 2017;17. doi: 10.3390/s17040905.
28. Ren S., He K., Girshick R., Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017;39:1137–1149. doi: 10.1109/TPAMI.2016.2577031.
29. Sa I., Ge Z.Y., Dayoub F., Upcroft B., Perez T., McCool C. DeepFruits: a fruit detection system using deep neural networks. Sensors. 2016;16:1222. doi: 10.3390/s16081222.
30. Sabow A.B., Nakyinsige K., Adeyemi K.D., Sazili A.Q., Johnson C.B., Webster J., Farouk M.M. High frequency pre-slaughter electrical stunning in ruminants and poultry for halal meat production: a review. Livest. Sci. 2017;202:124–134.
31. Sams A.R., McKee S.R. First processing—slaughter through chilling. In: Owens C.M., Alvarado C.Z., Sams A.R., editors. Poultry Meat Processing. 2nd ed. CRC Press; Boca Raton, FL: 2010. pp. 25–50.
32. Siqueira T.S., Borges T.D., Rocha R.M.M., Figueira P.T., Luciano F.B., Macedo R.E.F. Effect of electrical stunning frequency and current waveform in poultry welfare and meat quality. Poult. Sci. 2017;96:2956–2964. doi: 10.3382/ps/pex046.
33. Sirri F., Petracci M., Zampiga M., Meluzzi A. Effect of EU electrical stunning conditions on breast meat quality of broiler chickens. Poult. Sci. 2017;96:3000–3004. doi: 10.3382/ps/pex048.
34. Sladojevic S., Arsenovic M., Anderla A., Culibrk D., Stefanovic D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016;2016. doi: 10.1155/2016/3289801.
35. Sun X.D., Wu P.C., Hoi S.C.H. Face detection using deep learning: an improved faster RCNN approach. Neurocomputing. 2018;299:42–50.
36. Xu L., Zhang L., Yue H.Y., Wu S.G., Zhang H.J., Ji F., Qi G.H. Effect of electrical stunning current and frequency on meat quality, plasma parameters, and glycolytic potential in broilers. Poult. Sci. 2011;90:1823–1830. doi: 10.3382/ps.2010-01249.
37. Yang C.C., Chao K., Kim M.S., Chan D.E., Early H.L., Bell M. Machine vision system for on-line wholesomeness inspection of poultry carcasses. Poult. Sci. 2010;89:1252–1264. doi: 10.3382/ps.2008-00561.
38. Yang Q., Xiao D., Lin S. Feeding behavior recognition for group-housed pigs with the faster R-CNN. Comput. Electron. Agric. 2018;155:453–460.
39. Ye C., Yousaf K., Zhao Y., Chen K. Effectiveness of computer vision system and back propagation neural network in poultry stunning prediction. Int. Agric. Eng. J. 2018;27:289–297.
