Table 4. Classification accuracies and optimized parameters of the SVM, LR, RF, CNN, and ResNet models for the three grape categories based on Vis-NIR and NIR spectra.
| Models | Categ | Parameters (Vis-NIR) | Train ᵃ (%) | Val ᵇ (%) | Test ᶜ (%) | Parameters (NIR) | Train (%) | Val (%) | Test (%) |
|---|---|---|---|---|---|---|---|---|---|
| SVM | 0 | 2.0, 0.1, poly | 95.9 | 94.8 | 91.4 | 6.6, 1.0, linear | 99.4 | 100.0 | 96.6 |
|  | 1 | 1.2, 0.1, poly | 98.4 | 96.3 | 92.7 | 1.0, 1.0, poly | 100.0 | 100.0 | 96.3 |
|  | 2 | 1.0, 1.0, poly | 100.0 | 88.0 | 93.2 | 1.0, 1.0, poly | 100.0 | 100.0 | 95.9 |
| LR | 0 | 1 × 10⁵, liblinear | 100.0 | 89.7 | 93.1 | 100, lbfgs | 99.4 | 93.1 | 98.3 |
|  | 1 | 1 × 10⁵, liblinear | 100.0 | 98.8 | 93.9 | 1 × 10⁵, liblinear | 100.0 | 100.0 | 100.0 |
|  | 2 | 1 × 10⁴, liblinear | 100.0 | 92.0 | 95.9 | 100, newton-cg | 100.0 | 98.7 | 97.3 |
| RF | 0 | 8, 450 | 100.0 | 77.6 | 79.3 | 6, 750 | 100.0 | 74.1 | 81.0 |
|  | 1 | 7, 500 | 99.6 | 72.3 | 73.2 | 5, 550 | 98.8 | 86.7 | 87.8 |
|  | 2 | 8, 200 | 100.0 | 66.7 | 75.7 | 4, 250 | 99.1 | 98.7 | 93.2 |
| CNN | 0 | 500, 32, 0.001 | 99.4 | 98.3 | 93.1 | 500, 32, 0.001 | 100.0 | 100.0 | 98.3 |
|  | 1 | 500, 32, 0.001 | 97.6 | 97.6 | 92.7 | 500, 32, 0.001 | 100.0 | 100.0 | 98.8 |
|  | 2 | 500, 32, 0.001 | 100.0 | 98.7 | 93.2 | 500, 32, 0.001 | 99.5 | 100.0 | 98.6 |
| ResNet | 0 | 1000, 32, 0.005 | 100.0 | 94.8 | 93.1 | 600, 32, 0.005 | 100.0 | 93.1 | 86.2 |
|  | 1 | 1000, 32, 0.005 | 100.0 | 100.0 | 98.8 | 1000, 32, 0.005 | 100.0 | 100.0 | 97.6 |
|  | 2 | 1000, 32, 0.005 | 100.0 | 97.3 | 94.6 | 600, 32, 0.005 | 97.7 | 100.0 | 97.3 |
ᵃ ᵇ ᶜ denote the training, validation, and test sets, respectively; 0, 1, and 2 denote Cabernet, Red grape, and Munage, respectively; Categ denotes the grape category. The optimized parameters are listed as (C, gamma, kernel) for the SVM, (C, solver) for the LR, (max_depth, n_estimators) for the RF, and (epochs, batch size, learning rate) for the CNN and ResNet.
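As an aid to reproduction, the sketch below shows how the classical models might be instantiated with the tuned hyperparameters from the Cabernet/Vis-NIR rows of Table 4, assuming scikit-learn implementations. The spectral preprocessing, the actual train/validation/test split, and the CNN/ResNet architectures are not specified by the table, so placeholder random data stands in for the Vis-NIR feature matrix.

```python
# Illustrative sketch only: instantiating the classical models with the
# Cabernet (category 0) Vis-NIR hyperparameters from Table 4.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: 100 samples x 200 spectral bands with binary labels
# (real spectra and the paper's data split would replace this).
rng = np.random.default_rng(0)
X = rng.random((100, 200))
y = rng.integers(0, 2, size=100)

# SVM: C = 2.0, gamma = 0.1, polynomial kernel.
svm = SVC(C=2.0, gamma=0.1, kernel="poly")

# LR: C = 1e5 with the liblinear solver.
lr = LogisticRegression(C=1e5, solver="liblinear", max_iter=1000)

# RF: max_depth = 8, n_estimators = 450 (assuming the tree depth is the
# first value listed in the table, which matches the magnitudes shown).
rf = RandomForestClassifier(max_depth=8, n_estimators=450, random_state=0)

for name, model in [("SVM", svm), ("LR", lr), ("RF", rf)]:
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```

The same pattern applies to the other rows by substituting the corresponding parameter triplets or pairs for each grape category and spectral range.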