Table 1.
A comparison of prior research on deepfake video detection.
Research | Dataset(s) | Architecture / Approach |
---|---|---|
A. Jaiswal [24] | FF++ | Bidirectional recurrent neural networks (RNNs) with DenseNet/ResNet50 backbones analyze the spatiotemporal properties of video streams. |
P. Dongare [18] | Hollywood-2 Human Actions | Inception-V3 + LSTM; accounts for temporal inconsistencies in deepfake videos. |
S. Lyu [25] | Closed Eyes in the Wild (CEW) | VGG16 + LSTM + FC; long-term recurrent convolutional network that detects abnormal eye-blinking frequency. |
A. Irtaza [14] | Fusion of datasets | Neural network with a logistic regression model; exploits differences in facial structure and missing detail in the eyes and mouth. |
Nguyen [23] | Four major datasets | VGG-19 + Capsule Network. |
Hashmi [26] | Full DFDC dataset | CNN + LSTM; uses facial landmarks and convolutional features. |
Ganiyusufoglu [27] | FF++, Celeb-DF | Triplet architecture with a metric-learning approach. |
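Several rows of Table 1 share the same pipeline: a CNN backbone extracts per-frame features, and a recurrent layer (typically an LSTM) aggregates them over time before a real/fake classifier. The sketch below illustrates that pattern in NumPy with random, untrained weights; the `extract_features` stub, the feature and hidden dimensions, and the toy frames are illustrative assumptions, not any cited paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 8    # per-frame feature size (a real system uses a CNN backbone)
HIDDEN = 16     # LSTM hidden size

def extract_features(frame):
    """Stand-in for a CNN backbone (e.g. VGG16/ResNet50): flatten + fixed projection."""
    W = np.ones((frame.size, FEAT_DIM)) / frame.size  # toy projection, not learned
    return frame.ravel() @ W

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell with the standard input/forget/cell/output gates."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        self.W = rng.uniform(-s, s, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)      # update cell state
        h = o * np.tanh(c)              # new hidden state
        return h, c

def classify_video(frames):
    """Frame-wise features -> LSTM over time -> fake-probability from last state."""
    cell = LSTMCell(FEAT_DIM, HIDDEN)
    w_out = rng.uniform(-0.1, 0.1, HIDDEN)
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for frame in frames:
        h, c = cell.step(extract_features(frame), h, c)
    return sigmoid(w_out @ h)           # probability the clip is fake

video = rng.random((10, 4, 4))          # 10 toy 4x4 "frames"
p_fake = classify_video(video)
print(float(p_fake))
```

With trained weights, the LSTM's recurrent state lets the classifier pick up temporal cues (blinking rhythm, frame-to-frame warping) that a single-frame CNN cannot see, which is the motivation behind the CNN + LSTM entries above.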