Algorithm 1. BidLSTM implementation process

▹ Obtain the test scores for each feature using a statistical model
▹ Rank (sort) the features in descending order based on their test scores
 1: S ← {}
 2: for i ← 1 to n do
 3:     s_i ← score(f_i, L)              ▹ Compute the score between feature f_i in the dataset D and the class labels L
 4:     append (i, s_i) to S
 5: end for
 6: rank the features of S               ▹ Sort the features in descending order based on their test scores
 7: store the ranked feature scores of S to D
 8: return S                             ▹ Return the features with the highest test values from the ranked features

▹ Obtain the best feature subset for training using forward search
 9: F ← {}
10: best ← −1
11: R ← ranked feature indices of D
12: while R != NULL do
13:     index ← NULL
14:     for i ← 0 to length(R) do
15:         candidate ← F ∪ {R[i]}
16:         s ← evaluate(candidate)
17:         if s > best then
18:             best ← s; index ← i
19:         end if
20:     end for
21:     if index == NULL then
22:         break
23:     else
24:         append R[index] to F
25:         remove R[index] from R
26:     end if
27: end while
28: return F as the optimal set

▹ Model training with K-fold cross-validation using the optimal set
29: for f = 1 to k do
30:     Training_set = New_List[]
31:     Testing_set = New_List[]         ▹ Construct the training set
32:     for m = 1 to k do
33:         if m == f then
34:             continue
35:         end if
36:         for v = 1 to |fold_m| do
37:             Train[v] ← Train[v] + fold[v][m]
38:         end for
39:     end for                          ▹ Construct the testing set
40:     for v = 1 to |fold_f| do
41:         Test[v] ← Test[v] + fold[v][f]
42:     end for                          ▹ Fit the BidLSTM model for training and testing
43:     model = BidLSTM()
44:     model.Fit(Train)                 ▹ Train the model with K−1 folds
45:     evaluate model performance on the remaining Kth fold
46:     scores = cross_val_scores()
47:     return scores                    ▹ Return the classification accuracy and validation scores
48: end for
49: test the model with an unseen test dataset
50: return test scores
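
A minimal Python sketch of the scoring and ranking stage (steps 1-8) follows, assuming a tabular feature matrix X with non-negative values and class labels y. The listing does not name the statistical test, so the chi-square score (scikit-learn's chi2) is an illustrative choice rather than the authors' exact method.

import numpy as np
from sklearn.feature_selection import chi2

def rank_features(X, y):
    # Steps 1-8: score every feature against the class labels, then sort
    # the feature indices by their test scores in descending order.
    scores, _ = chi2(X, y)            # chi2 assumes non-negative feature values
    order = np.argsort(scores)[::-1]  # feature indices, highest score first
    return order, scores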
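
The forward-search stage (steps 9-28) can be sketched as below. The listing leaves the subset-evaluation criterion unspecified, so a cross-validated logistic-regression accuracy is used here purely as a stand-in for evaluate(candidate); X, y, and the ranked index order come from the previous stage.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_search(X, y, ranked, cv=3):
    # Steps 9-28: greedily grow the feature subset F, adding the feature that
    # most improves the evaluation score; stop when no feature improves it.
    selected, remaining, best = [], list(ranked), -1.0
    while remaining:
        best_index = None
        for i, f in enumerate(remaining):
            candidate = selected + [f]               # F ∪ {R[i]}
            s = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, candidate], y, cv=cv).mean()
            if s > best:                             # candidate improves the current best
                best, best_index = s, i
        if best_index is None:                       # no improvement found: stop the search
            break
        selected.append(remaining.pop(best_index))   # move the winning feature into F
    return selected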
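
Finally, a sketch of the K-fold training stage (steps 29-50), assuming a Keras (tensorflow.keras) bidirectional LSTM applied to the tabular features as a single time step; the layer width, number of epochs, and batch size are illustrative values, not the authors' settings.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

def build_bidlstm(n_features):
    # Steps 43-44: a small bidirectional LSTM over a single time step.
    model = Sequential([
        Bidirectional(LSTM(64), input_shape=(1, n_features)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def kfold_train(X, y, k=5, epochs=20):
    # Steps 29-50: train on K-1 folds, evaluate on the held-out fold,
    # and collect the per-fold accuracy as the validation scores.
    X3d = X.reshape(len(X), 1, X.shape[1])           # (samples, timesteps=1, features)
    scores = []
    splitter = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    for train_idx, test_idx in splitter.split(X, y):
        model = build_bidlstm(X.shape[1])
        model.fit(X3d[train_idx], y[train_idx],      # fit on the K-1 training folds
                  epochs=epochs, batch_size=32, verbose=0)
        _, acc = model.evaluate(X3d[test_idx], y[test_idx], verbose=0)
        scores.append(acc)                           # accuracy on the held-out fold
    return scores

The three stages would then be chained as: ranked, _ = rank_features(X, y); optimal = forward_search(X, y, ranked); cv_scores = kfold_train(X[:, optimal], y), with a separate unseen test set held back for the final evaluation in steps 49-50.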