Table 2.
Parameter configuration of the Transformer-PID gain regulator.
| Parameter | Symbol | Value | Description |
|---|---|---|---|
| Input feature dimension | $d_{\text{in}}$ | 20 | Length of the feature vector |
| Embedding dimension | $d_{\text{model}}$ | 64 | Dimensionality of the input embedding space |
| Number of encoder layers | $N$ | 3 | Depth of the Transformer encoder stack |
| Number of attention heads | $h$ | 4 | Number of parallel attention mechanisms |
| Key vector dimension | $d_k$ | 16 | Denominator $\sqrt{d_k}$ of the self-attention scaling factor |
| Hidden layer size of FFNN | $d_{\text{ff}}$ | 256 | Number of neurons in the feedforward neural network within each encoder layer |
| Baseline proportional gain | $K_{p0}$ | | Initial $K_p$ for the roll/pitch/yaw/thrust channels |
| Baseline integral gain | $K_{i0}$ | | Initial $K_i$ for each control channel |
| Baseline derivative gain | $K_{d0}$ | | Initial $K_d$ for each control channel |
| Gain adjustment coefficients | $\alpha_p$, $\alpha_i$, $\alpha_d$ | 0.6, 0.7, 0.4 | Scaling factors for the gain adjustment range |
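The table does not state the exact law by which the adjustment coefficients bound the gains, so the sketch below assumes a common formulation: the network output is squashed through $\tanh$ so that each gain stays within $\pm\alpha$ of its baseline, i.e. $K = K_0\,(1 + \alpha\tanh(u))$. The baseline value (`1.0`) and the network output (`u`) used in the example are hypothetical placeholders, not values from the paper.

```python
import math

# Gain-adjustment coefficients from Table 2 (alpha_p, alpha_i, alpha_d).
ALPHA = {"p": 0.6, "i": 0.7, "d": 0.4}

def adjusted_gain(baseline: float, u: float, alpha: float) -> float:
    """Map an unbounded regulator output u to a gain confined to
    [baseline * (1 - alpha), baseline * (1 + alpha)].

    Assumed formulation: K = K0 * (1 + alpha * tanh(u)); the paper's
    exact law is not given in this table.
    """
    return baseline * (1.0 + alpha * math.tanh(u))

# Example with a hypothetical baseline Kp0 = 1.0 for one channel:
kp = adjusted_gain(1.0, 0.5, ALPHA["p"])
assert 1.0 * (1 - ALPHA["p"]) <= kp <= 1.0 * (1 + ALPHA["p"])
```

With $\alpha_p = 0.6$ this confines the proportional gain to $[0.4\,K_{p0},\,1.6\,K_{p0}]$; the smaller $\alpha_d = 0.4$ gives the derivative channel the narrowest adjustment range, which is consistent with keeping derivative action conservative.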