Sensors. 2023 May 15;23(10):4774. doi: 10.3390/s23104774

Table 2.

The table reports the hyperparameter tuning experiment: the sensitivity of the architecture as a function of the number of skipped convolution blocks and the width of the head model, and the number of recurrent layers and the width of the ensemble network. Each row gives the best result obtained for that configuration. We performed extensive experimentation with various hyperparameter configurations to find the optimal settings for our ensemble deep neural network architecture. As described in the paper, the proposed model is a nonlinear ensemble of a convolutional neural network (CNN) and a recurrent neural network (RNN).

| Head Model: Skipped Conv. Blocks | Head Model: Width | Ensemble Model: Recurrent Layers | Ensemble Model: Width | Sensitivity: Non-Fall | Sensitivity: Pre-Fall | Sensitivity: Fall |
|---|---|---|---|---|---|---|
| 2 | (16,16,16) | 1 | (16) | 0.89 | 0.87 | 0.94 |
| 2 | (16,16,32) | 1 | (32) | 0.88 | 0.85 | 0.91 |
| 2 | (16,16,16) | 2 | (16,16) | 0.90 | 0.88 | 0.94 |
| 2 | (16,16,32) | 2 | (32,64) | 0.88 | 0.87 | 0.91 |
| 2 | (16,16,16) | 3 | (32,32,64) | 0.88 | 0.88 | 0.90 |
| 2 | (16,32,64) | 3 | (64,64,128) | 0.87 | 0.86 | 0.89 |
| 2 | (16,16,16) | 4 | (16,16,32,32) | 0.89 | 0.88 | 0.90 |
| 2 | (16,16,32) | 4 | (32,32,64,64) | 0.89 | 0.82 | 0.88 |
| 3 | (16,16,16) | 2 | (16,16) | 0.91 | 0.89 | 0.96 |
| 3 | (16,32,32) | 2 | (32,32) | 0.90 | 0.87 | 0.94 |
| 3 | (16,32,64) | 3 | (64,64,128) | 0.90 | 0.88 | 0.94 |
| 3 | (16,16,16) | 3 | (16,16,16) | 0.91 | 0.89 | 0.97 |
| 3 | (16,16,32) | 4 | (32,32,64,64) | 0.91 | 0.87 | 0.94 |
| 3 | (16,32,64) | 4 | (64,64,128,128) | 0.90 | 0.88 | 0.93 |
| 4 | (16,16,16) | 2 | (16,16) | 0.92 | 0.90 | 0.94 |
| 4 | (16,16,32) | 2 | (32,64) | 0.90 | 0.89 | 0.92 |
| 4 | (16,32,64) | 3 | (64,64,128) | 0.89 | 0.87 | 0.89 |
| 4 | (16,16,32) | 3 | (32,32,64) | 0.90 | 0.88 | 0.90 |
| 4 | (16,16,16) | 4 | (16,16,16,16) | 0.92 | 0.90 | 0.95 |
| 4 | (16,32,64) | 4 | (64,64,128,128) | 0.89 | 0.88 | 0.91 |
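A tuning grid like the one above lends itself to a simple programmatic search. The following sketch (illustrative Python, not the authors' code; the `Config` type and field names are assumptions) encodes a few of the table's rows and selects the configuration with the highest fall-class sensitivity — per Table 2, three skipped blocks with three recurrent layers of width 16.

```python
# Illustrative sketch of selecting the best hyperparameter configuration
# from the Table 2 grid by fall-class sensitivity. All names are
# hypothetical; only the row values come from the table.
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    skipped_blocks: int      # no. of skipped convolutional blocks (head model)
    head_width: tuple        # channel widths of the head model
    recurrent_layers: int    # no. of recurrent layers (ensemble model)
    ensemble_width: tuple    # widths of the recurrent layers
    sensitivity: tuple       # (non-fall, pre-fall, fall)

# Three rows from Table 2 (the full 20-row grid is omitted for brevity).
GRID = [
    Config(3, (16, 16, 16), 3, (16, 16, 16), (0.91, 0.89, 0.97)),
    Config(4, (16, 16, 16), 2, (16, 16), (0.92, 0.90, 0.94)),
    Config(4, (16, 16, 16), 4, (16, 16, 16, 16), (0.92, 0.90, 0.95)),
]

def best_by_fall_sensitivity(grid):
    """Return the configuration with the highest fall-class sensitivity."""
    return max(grid, key=lambda c: c.sensitivity[2])

best = best_by_fall_sensitivity(GRID)
print(best.skipped_blocks, best.recurrent_layers, best.sensitivity[2])
# → 3 3 0.97
```

If a different class were the priority (e.g., minimizing missed pre-fall events), only the key index would change; this is why reporting per-class sensitivity, as the table does, is more informative than a single aggregate score.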