Sensors. 2022 Aug 10;22(16):5986. doi: 10.3390/s22165986
Algorithm 2. Proposed HNIDS method (IRF working)
Input: NSL dataset ds for training, variable set v, n1 (the total number of nodes in a tree)
Output: Random Forest ensemble tree RFTree(k1)
Step1: Construct an initial tree
1.1 for each j = 1 to k, repeat
1.2 construct a bootstrap sample set of size n from the original IDS dataset
1.3 feed the new bootstrapped dataset to a decision tree DFTree
Step2: Choose the best fit
2.1 for each node, while n1 is above the minimum node size, repeat
2.2 feed the new bootstrapped dataset to a random forest tree RFTree
2.3 randomly select a subset of variables v′ from the variable set v
2.4 choose the best split variable among the selected variables v′
2.5 split the parent node into new child nodes
2.6 return the ensemble tree RFTree(k1)
2.7 End
Step 3: Verify a constructed Random Forest ensemble tree RFTree(k1)
3.1 if (ensemble tree RFTree(k1) == best-fit tree BFTree(k1))
3.2 proceed with the best-fit ensemble tree
Step 4: Determine the best class
4.1 apply classification to find the best fit from BFTree(k1)
4.2 calculate the number of votes for each class
4.3 if the number of votes for a class is the maximum (majority vote)
4.4 return BFTree(k1)
4.5 End
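The four steps above (bootstrap sampling, random feature selection with best-split search, ensemble construction, and majority voting) can be sketched in plain Python. This is a minimal illustration of the general random-forest procedure, not the authors' implementation: single-level decision stumps stand in for full trees, Gini impurity is assumed as the split criterion, and the toy dataset and all function names are hypothetical.

```python
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_stump(X, y, feature_subset):
    """Step 2.3-2.4: among the randomly chosen features v', pick the
    (feature, threshold) split minimizing weighted Gini impurity."""
    best = None  # (score, feature, threshold)
    for f in feature_subset:
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def fit_forest(X, y, k=10, n_features=1, seed=0):
    """Steps 1-2: build k stumps, each on a bootstrap sample of size n."""
    rng = random.Random(seed)
    n, forest = len(X), []
    for _ in range(k):
        # Step 1.2: bootstrap sample (draw n rows with replacement)
        idx = [rng.randrange(n) for _ in range(n)]
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        # Step 2.3: random feature subset v'
        feats = rng.sample(range(len(X[0])), n_features)
        _, f, t = best_stump(Xb, yb, feats)
        # Step 2.5: each child node predicts its majority class
        left = [yb[i] for i, row in enumerate(Xb) if row[f] <= t]
        right = [yb[i] for i, row in enumerate(Xb) if row[f] > t]
        fallback = Counter(yb).most_common(1)[0][0]
        forest.append((f, t,
                       Counter(left).most_common(1)[0][0] if left else fallback,
                       Counter(right).most_common(1)[0][0] if right else fallback))
    return forest

def predict(forest, row):
    """Step 4: majority vote over all trees in the ensemble."""
    votes = [(l if row[f] <= t else r) for f, t, l, r in forest]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical toy data: one feature, two well-separated classes.
X = [[0], [1], [2], [3], [10], [11], [12], [13]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
forest = fit_forest(X, y, k=15, n_features=1, seed=0)
print(predict(forest, [1]), predict(forest, [12]))
```

In a real HNIDS pipeline the stumps would be full decision trees grown until the minimum node size (Step 2.1), and the rows of X would be preprocessed NSL feature vectors rather than single values.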