2025 Mar 18;25(6):1868. doi: 10.3390/s25061868
Algorithm 4. DSCNet Model with Self-Attention
1: Default configurations:
2:   num_layers ← 4
3:   num_filters ← per-layer filter counts (one entry per convolutional layer)
4:   kernel_size ← 3
5:   activation_function ← 'relu'
6:   dropout_rate ← 0.5
7: Initialize the model
8:   model ← initialize_model()
9: for i = 1 to num_layers do
10:   model.add_layer(Conv2D(num_filters[i], kernel_size, activation = activation_function))
11:   model.add_layer(MaxPooling2D(pool_size = 2))
12:   model.add_layer(Dropout(dropout_rate))
13:   model.add_layer(SelfAttention())
14: end for
15: model.add_layer(Flatten())
16: model.add_layer(Dense(128, activation = activation_function))
17: model.add_layer(Dropout(dropout_rate))
18: model.add_layer(Dense(1, activation = 'linear'))
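The SelfAttention() step inserted after each convolutional stage is the distinguishing component of the listing. As a rough illustration only (the paper does not specify its attention variant), a single-head scaled dot-product self-attention over the flattened spatial positions of a feature map can be sketched in NumPy; the projection matrices `wq`, `wk`, `wv` and the toy dimensions below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x: (n, d) array of n flattened spatial positions with d channels.
    wq, wk, wv: (d, d) hypothetical learned projection matrices.
    Returns an (n, d) array, same shape as the input feature map.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (n, n) attention logits
    return softmax(scores) @ v                # weighted sum of values

rng = np.random.default_rng(0)
n, d = 16, 8   # e.g. a flattened 4x4 feature map with 8 channels
x = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (16, 8)
```

Because the output keeps the input's shape, such a block can be dropped between the pooling/dropout stage and the next Conv2D without disturbing the rest of the pipeline, which is consistent with how the listing interleaves it inside the loop.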