Table 3.
Comparison of the proposed model with previous work, including the approach, optimizer, learning rate, batch size, loss function, and reported performance metric.
Ref | Approach | Optimizer | Learning Rate | Batch Size | Loss Function | Performance Metric |
---|---|---|---|---|---|---|
Chan et al. [27] | Geometry-aware 3D synthesis with a GAN | Adam | 0.0002 | 32 | Cross-entropy | 87.2% accuracy |
Xing et al. [23] | Deep Q-learning, which approximates the Q-value function (the expected cumulative reward of taking a given action in a given state) | Adam | 0.001 | 32 | Mean squared error (MSE) on the Q-learning target (sketched below) | Average reward |
Hetzel et al. [24] | RL with DDPG for virtual keyboard typing | Adam | 0.001 | 64 | DDPG actor-critic loss (MSE critic, deterministic policy-gradient actor) | Average reward |
Our Model | RL-PipTrack: AR-assisted deep-RL model | PPO | 0.0005 | 10 | Clipped surrogate objective (sketched below) | Average reward with standard deviation |
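
For reference, the Q-learning loss attributed to Xing et al. [23] can be written as a standard DQN-style MSE between the predicted Q-value and the bootstrapped target. The sketch below is illustrative only: the use of PyTorch, a frozen target network, and the discount factor `gamma = 0.99` are assumptions, since the table specifies only the optimizer, learning rate, and loss type.

```python
import torch
import torch.nn.functional as F

def dqn_mse_loss(q_net, target_net, batch, gamma=0.99):
    """Q-learning loss with MSE: squared error between Q(s, a) and the
    bootstrapped target r + gamma * max_a' Q_target(s', a').
    gamma=0.99 is an assumed default; the table does not report it."""
    states, actions, rewards, next_states, dones = batch
    # Q-values of the actions actually taken, shape (batch,).
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Target uses a frozen copy of the network; terminal states get no bootstrap.
        next_max = target_net(next_states).max(dim=1).values
        target = rewards + gamma * next_max * (1.0 - dones)
    return F.mse_loss(q_sa, target)
```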
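Likewise, the clipped surrogate objective used to train RL-PipTrack with PPO can be expressed compactly. This is a minimal sketch of the generic PPO loss, not the paper's implementation; the clipping range `clip_eps = 0.2` is a common default and an assumption here, as the table does not report it.

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective, negated so it can be minimized.
    clip_eps=0.2 is a common default and an assumption; the table does not report it."""
    # Probability ratio r_t = pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic bound: element-wise minimum of the two surrogates.
    return -torch.min(unclipped, clipped).mean()
```

Taking the element-wise minimum keeps the objective a lower bound on the unclipped surrogate, which discourages policy updates that move the probability ratio outside the clipping range.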