Table 3.
Comparison of existing research on dimensionality reduction in IDS.
Reference | Method Used | Major Contribution | Challenges/Limitations |
---|---|---|---|
[23] | Principal component analysis | Handles large datasets efficiently. | Cannot model nonlinear relationships in the data. |
[24] | Auto-encoders | Requires no prior assumptions about the data for the reduction. | Computationally slow to train. |
[25] | Missing value ratio | Discards features whose proportion of missing or NULL values exceeds a threshold. | Limited to specific data formats. |
[26] | Low variance filter | Eliminates dimensions (features) whose variance falls below a threshold (see the sketch after this table). | Effective only on limited amounts of data. |
[27] | Factor analysis | Groups correlated variables into a smaller set of latent factors. | Computationally slow. |
[28] | Forward feature selection | Starts from an empty feature set and iteratively adds the most informative feature. | Limited to specific data formats. |
[29] | Uniform manifold approximation and projection (UMAP) | Built on a theoretical foundation of Riemannian geometry and algebraic topology. | Effective only on limited amounts of data. |
[30] | Random forest | Ranks and selects features by importance using an ensemble of decision trees. | Effective only on limited amounts of data. |
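
To make the filter- and projection-based entries concrete, the following is a minimal sketch of how the missing value ratio [25], low variance filter [26], and PCA [23] steps could be chained on an IDS-style feature matrix. It assumes scikit-learn and pandas are available and uses a hypothetical synthetic DataFrame `X` in place of a real intrusion dataset; it is an illustrative example, not the implementation evaluated in the cited works.

```python
# A minimal sketch (assumptions: scikit-learn and pandas are installed, and the
# IDS features are numeric columns of a DataFrame named X).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.feature_selection import VarianceThreshold

# Hypothetical feature matrix standing in for an IDS dataset (e.g., flow statistics).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 10)),
                 columns=[f"f{i}" for i in range(10)])
X.loc[rng.random(500) < 0.4, "f9"] = np.nan   # feature with many missing values
X["f8"] = 1.0                                  # constant (low-variance) feature

# Missing value ratio [25]: drop features whose share of NULL values exceeds a threshold.
missing_ratio = X.isna().mean()
X = X.loc[:, missing_ratio < 0.3]

# Low variance filter [26]: drop dimensions whose variance falls below a threshold.
selector = VarianceThreshold(threshold=0.01)
X_filtered = selector.fit_transform(X.fillna(X.mean()))

# PCA [23]: project the remaining features onto a few principal components.
X_reduced = PCA(n_components=3).fit_transform(X_filtered)
print(X_reduced.shape)  # (500, 3)
```

In this sketch the simple, threshold-based filters are applied first to remove uninformative features cheaply, and the linear projection (PCA) is applied last, mirroring the typical ordering of these steps in a preprocessing pipeline.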