2021 Apr 3;38(7):1627–1639. doi: 10.1007/s10815-021-02123-2

Table 1.

Comparison of studies associated with the cell-counting procedure

| Paper | AI architecture | Input data type | Training data size | CNN function | Annotation limit | Overall accuracy |
|---|---|---|---|---|---|---|
| Khan et al. (2016) [33] | Custom CNN architecture [39] | Dark-field time-lapse image sequences | 265 sequences / 148,993 frames | Automatically count the number of cells | Up to 5-cell stage | ± 92.18% |
| Ng and McAuley et al. (2018) [35] | Modified ResNet model [40] | EmbryoScope time-lapse videos | 1309 videos / 191,449 frames | Automatically detect embryo developmental stage | Up to 4+ cell stage | ± 87% |
| Malmsten et al. (2019) [36] | InceptionV3 model [41] | Time-lapse microscopy images | 47,584 images | Predict/detect cell division | Up to 4-cell stage | ± 90.77% |
| Liu et al. (2019) [42] | Multi-task deep learning with dynamic programming (MTDL-DP) | Time-lapse microscopy videos | 59,500 extracted frames in total | Classify embryo development stages | From initialization (tStart) to 4+ cells (t4+) | ± 86% |
| Leahy et al. (2020) [37] | ResNeXt101 [43] | EmbryoScope time-lapse images | 73 embryos with 23,850 labels | Multi-function pipeline, one of which detects embryo developmental stage | From start of incubation to finish | ± 87.9% |
| Dirvanauskas et al. (2019) [38] | AlexNet [39] + second classifier | Miri TL-captured images | 3000 images from 6 embryo sets + 600 images of embryo fragmentation | Classify the development stage of an embryo | 1 cell, 2 cells, 4 cells, 8 cells, and no embryo | ± 97.5% |
| Lau et al. (2019) [44] | RPN + ResNet-50 [40] | EmbryoScope-captured videos | 1309 time-lapse videos extracted into frames | Locate cells and classify development stage | tStart, tPnf, t2, t3, t4, t4+ | ± 90% |
| Raudonis et al. (2019) [45] | VGG [46] vs. AlexNet [39] | Miri TL-captured images | 300 TL sequences for a total of 114,793 frames | Locate cells and classify development stage | 1 cell, 2 cells, 3 cells, 4 cells, > 4 cells | VGG: ± 93.6%; AlexNet: ± 92.7% |
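Several of the studies above classify the developmental stage frame by frame, and the Liu et al. (2019) entry combines per-frame CNN outputs with dynamic programming. Their exact formulation is not reproduced in this table; the sketch below is only a minimal illustration of the general idea, under the assumption that per-frame stage scores are decoded subject to the biological constraint that an embryo's stage never decreases over time. All function and variable names here are hypothetical.

```python
def monotonic_decode(frame_scores):
    """Viterbi-style dynamic program: pick one stage label per frame so
    that the total per-frame score is maximized, subject to the labels
    being non-decreasing over time (an embryo cannot revert to an
    earlier stage). frame_scores[t][s] is the score of stage s at frame t.
    """
    n_frames = len(frame_scores)
    n_stages = len(frame_scores[0])
    neg_inf = float("-inf")

    # dp[t][s] = best total score ending at frame t in stage s
    dp = [[neg_inf] * n_stages for _ in range(n_frames)]
    back = [[0] * n_stages for _ in range(n_frames)]  # backpointers
    dp[0] = list(frame_scores[0])

    for t in range(1, n_frames):
        best_prev, best_stage = neg_inf, 0
        for s in range(n_stages):
            # running maximum over all previous stages s' <= s
            if dp[t - 1][s] > best_prev:
                best_prev, best_stage = dp[t - 1][s], s
            dp[t][s] = frame_scores[t][s] + best_prev
            back[t][s] = best_stage

    # backtrack from the best final stage
    s = max(range(n_stages), key=lambda k: dp[-1][k])
    path = [s]
    for t in range(n_frames - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    return path[::-1]
```

For example, if a noisy classifier briefly "flickers" back to the 1-cell stage after a division, the raw per-frame argmax would be non-monotonic, whereas the decoded path trades a small amount of per-frame score for a globally consistent stage sequence.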