Algorithm 2: SCDNN
Input: dataset, number of clusters k, number of hidden-layer nodes HLN, number of hidden layers HL.
Output: Final prediction results
/*Note: the symbols "/*" and "*/" delimit comments in this algorithm.*/
1 Divide the raw dataset into two components: a training dataset and a testing dataset.
/*Get the eigenvectors with the largest eigenvalues and the training data subsets*/
2 Obtain the cluster centres and SC results using Algorithm 1. Here, the clustering results are regarded as training data subsets.
/*Train each DNN with each training data subset*/
3 Set the learning rate, denoising and sparsity parameters, and randomly initialise the weights and biases.
4 Set HL to two hidden layers and HLN to 40 nodes in the first hidden layer and 20 nodes in the second.
5 Compute the sparsity cost function $J_{\text{sparse}}(W,b) = J(W,b) + \beta \sum_{j=1}^{s_2} \mathrm{KL}(\rho \,\|\, \hat{\rho}_j)$.
6 Update the weights and biases as $W_{ij}^{(l)} = W_{ij}^{(l)} - \varepsilon \frac{\partial}{\partial W_{ij}^{(l)}} J_{\text{sparse}}(W,b)$ and $b_{i}^{(l)} = b_{i}^{(l)} - \varepsilon \frac{\partial}{\partial b_{i}^{(l)}} J_{\text{sparse}}(W,b)$.
7 Train the k sub-DNNs, one on each training data subset.
8 Fine-tune the sub-DNNs with backpropagation.
9 Obtain the final structure of each trained sub-DNN and label it with its training data subset.
10 Divide the testing dataset into subsets with SC, reusing the cluster centre parameters obtained from the training data clusters.
11 Feed each testing data subset to the sub-DNN whose training data subset shares the same cluster centre.
/*Aggregate the prediction results*/
12 Integrate the results generated by each sub-DNN to obtain the final outputs.
13 return classification result = final output
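In the standard sparse-autoencoder formulation that steps 5 and 6 appear to follow (the excerpt does not define these terms, so the expansion below is an assumption), $\rho$ is the target sparsity, $\hat{\rho}_j$ is the average activation of hidden unit $j$ over the training subset, $s_2$ is the number of hidden units, and $\beta$ weights the penalty, which is the Kullback-Leibler divergence

$$\mathrm{KL}(\rho \,\|\, \hat{\rho}_j) = \rho \log \frac{\rho}{\hat{\rho}_j} + (1 - \rho) \log \frac{1 - \rho}{1 - \hat{\rho}_j}.$$

The divergence is zero when $\hat{\rho}_j = \rho$ and grows as the average activation drifts from the target, so minimising $J_{\text{sparse}}$ drives the hidden units toward sparse activations.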
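The following Python sketch walks through Algorithm 2 end to end. It is a minimal illustration under stated assumptions, not the authors' implementation: k-means stands in for the spectral clustering of Algorithm 1 (so that explicit cluster centres are available for routing the testing subsets), and scikit-learn's MLPClassifier, configured with the (40, 20) hidden layers of step 4, stands in for the layer-wise pre-trained sparse/denoising sub-DNNs. The function name, the learning rate value, and all variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def scdnn_fit_predict(X_train, y_train, X_test, k=6):
    y_train = np.asarray(y_train)

    # Step 2: cluster the training data; k-means is a stand-in for the
    # spectral clustering of Algorithm 1, and its centres are kept so the
    # testing data can be routed to the matching sub-DNN later (step 10).
    clusterer = KMeans(n_clusters=k, n_init=10, random_state=0)
    train_clusters = clusterer.fit_predict(X_train)
    centres = clusterer.cluster_centers_

    # Steps 3-9: train one sub-DNN per training data subset. The (40, 20)
    # hidden layers follow step 4; the learning rate is an assumed value.
    sub_dnns = {}
    for c in range(k):
        subset = train_clusters == c
        net = MLPClassifier(hidden_layer_sizes=(40, 20),
                            learning_rate_init=1e-3,
                            max_iter=500, random_state=0)
        net.fit(X_train[subset], y_train[subset])
        sub_dnns[c] = net

    # Step 10: assign each testing sample to its nearest training cluster centre.
    dists = np.linalg.norm(X_test[:, None, :] - centres[None, :, :], axis=2)
    test_clusters = dists.argmin(axis=1)

    # Steps 11-13: run each testing subset through its sub-DNN and
    # aggregate the per-subset predictions into the final output.
    y_pred = np.empty(len(X_test), dtype=y_train.dtype)
    for c in range(k):
        subset = test_clusters == c
        if subset.any():
            y_pred[subset] = sub_dnns[c].predict(X_test[subset])
    return y_pred
```

Routing each testing sample to the sub-DNN of its nearest training cluster centre mirrors steps 10 and 11: because every sub-DNN only ever sees data from one cluster, the aggregation in step 12 reduces to concatenating the per-subset predictions.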