Table 10.
Comparison of TinyML models trained on the Google Speech Commands dataset.
| Model | Accuracy (%) | Latency (ms) | Power consumption (relative) | Suitability for TinyML |
|---|---|---|---|---|
| Transformer-based Model43 | 92.0 | 80–150 | High | Not ideal due to computational cost |
| Hybrid CNN-RNN39 | 89.5 | 100–250 | Moderate | Suitable for noise-resilient applications |
| DNN (Deep Neural Network)36 | 99.0 | 50–100 | High | Requires optimization before deployment |
| CNN38 | 94.0 | 30–70 | Moderate | Highly suitable for real-time applications |
| RNN37 | 95.0 | 100–200 | High | Effective for sequential learning tasks |
| Decision Tree41 | 90.0 | 10–30 | Low | Best for ultra-low-power applications |
| SVM42 | 91.5 | 90–180 | Moderate | Balanced approach for edge deployment |
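To make the trade-offs in Table 10 concrete, the comparison can be sketched as a small selection helper: given a latency budget and a power ceiling, it returns the models from the table that fit, ranked by accuracy. The numeric values are taken directly from the table (using each model's worst-case latency); the `candidates` function, its parameter names, and the ranking logic are illustrative, not part of the source.

```python
# Figures from Table 10: name -> (accuracy %, worst-case latency ms, relative power).
MODELS = {
    "Transformer-based": (92.0, 150, "High"),
    "Hybrid CNN-RNN":    (89.5, 250, "Moderate"),
    "DNN":               (99.0, 100, "High"),
    "CNN":               (94.0, 70,  "Moderate"),
    "RNN":               (95.0, 200, "High"),
    "Decision Tree":     (90.0, 30,  "Low"),
    "SVM":               (91.5, 180, "Moderate"),
}

# Order the qualitative power labels so they can be compared against a ceiling.
POWER_RANK = {"Low": 0, "Moderate": 1, "High": 2}

def candidates(max_latency_ms: float, max_power: str) -> list[str]:
    """Models within the latency and power budget, sorted by accuracy (best first)."""
    fit = [
        (acc, name)
        for name, (acc, lat, pwr) in MODELS.items()
        if lat <= max_latency_ms and POWER_RANK[pwr] <= POWER_RANK[max_power]
    ]
    return [name for _, name in sorted(fit, reverse=True)]

if __name__ == "__main__":
    # Example: a real-time budget of 100 ms at moderate power.
    print(candidates(100, "Moderate"))  # → ['CNN', 'Decision Tree']
```

Under this budget the DNN is excluded despite its 99.0% accuracy because of its High power draw, which matches the table's note that it requires optimization before deployment.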