Sensors. 2018 Dec 24;19(1):59. doi: 10.3390/s19010059

Figure 13.

A sketch of our two-stage MLP model (top) and the affiliated training procedure (bottom). For an input histogram of size n=625 and an output layer of m=10 neurons, the secondary MLP receives the primary MLP's output activations plus another histogram as input. In our training procedure, we split our dataset D into three parts D1, D2, and D3, such that we may train the two MLPs on independent parts of the dataset and test them on unseen data.
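The data flow of this two-stage architecture can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the hidden-layer width, the ReLU activation, and the random-weight initialization are assumptions; only the input size (n=625), output size (m=10), and the concatenation of the primary MLP's output with a second histogram follow from the caption.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP: ReLU hidden layer, linear output."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

n, m = 625, 10        # histogram size and output size from the caption
hidden = 64           # hidden width is a hypothetical choice

# Primary MLP: maps a histogram (n) to m output activations.
W1a = rng.normal(scale=0.01, size=(n, hidden)); b1a = np.zeros(hidden)
W2a = rng.normal(scale=0.01, size=(hidden, m)); b2a = np.zeros(m)

# Secondary MLP: maps [primary output (m), second histogram (n)] to m outputs.
W1b = rng.normal(scale=0.01, size=(m + n, hidden)); b1b = np.zeros(hidden)
W2b = rng.normal(scale=0.01, size=(hidden, m)); b2b = np.zeros(m)

hist1 = rng.random(n)   # histogram fed to the primary MLP
hist2 = rng.random(n)   # second histogram fed to the secondary MLP

primary_out = mlp_forward(hist1, W1a, b1a, W2a, b2a)
secondary_in = np.concatenate([primary_out, hist2])   # stage-2 input of size m + n
secondary_out = mlp_forward(secondary_in, W1b, b1b, W2b, b2b)
```

Training as described would then fit the primary MLP on D1 and the secondary MLP on D2, holding out D3 for evaluation, so that neither stage is tested on data it was trained on.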