PeerJ Comput Sci. 2023 Nov 8;9:e1562. doi: 10.7717/peerj-cs.1562

Algorithm 1. Aojmus.

Input:
   Current frame $I_t$; the video sequence for tracking $S_{videos}$;
   Target position $p_{t-1}$ and scale $s_{t-1}$ of the previous frame;
   The target feature template $\hat{x}_{t-1}$ and the filter template $\hat{\alpha}_{t-1}$;
   The scale model $A^{scale}_{t-1}$, $B^{scale}_{t-1}$.
Output:
   Target position $p_t$ and scale $s_t$ of the current frame;
   The updated target feature template $\hat{x}_t$ and filter template $\hat{\alpha}_t$;
   The updated scale model $A^{scale}_t$, $B^{scale}_t$.
1:  for each $I_t \in S_{videos}$ do
2:    Sample the new patch $z_t$ from $I_t$ at $p_{t-1}$;
3:    Extract a scale sample $z_{scale}$ from $I_t$ at $p_t$ and $s_{t-1}$;
4:    Extract the HOG and CN features and fuse them with Eq. (7);
5:    Calculate the response $\hat{f}(z_t)$ with Eq. (5) and obtain $F_{max}$;
6:    Calculate $N$, WAPCE and RSFM with Eqs. (12)–(14), and adaptively judge whether occlusion occurs and its scope;
7:    Obtain $\xi_R$ and $\xi$ by Eqs. (16) and (17);
8:    Compute the scale correlations $y_{scale}$ using $z_{scale}$, $A^{scale}_{t-1}$ and $B^{scale}_{t-1}$ with Eq. (10);
9:    Set $s_t$ to the maximum of $y_{scale}$;
10:   Use Eq. (18) to adaptively update $\hat{x}_t$ and $\hat{\alpha}_t$ from $\hat{x}_{t-1}$ and $\hat{\alpha}_{t-1}$;
11:   Use Eq. (9) to update $A^{scale}_t$ and $B^{scale}_t$ from $A^{scale}_{t-1}$ and $B^{scale}_{t-1}$;
12:   Return $p_t$ and the updated $\hat{x}_t$, $\hat{\alpha}_t$, $A^{scale}_t$, $B^{scale}_t$.
13:  end for
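
To make steps 6 and 10 more concrete, the sketch below shows how a response-map confidence score can gate the template update in a correlation-filter tracker. It is a minimal sketch only: it uses the standard APCE measure and a plain linear-interpolation update as stand-ins, not the paper's WAPCE, RSFM or Eqs. (12)–(18), and every name and parameter (apce, update_templates, eta_base, apce_thresh) is an illustrative assumption rather than the authors' implementation.

```python
# Minimal NumPy sketch of a confidence-gated template update, assuming the
# standard APCE definition and a linear-interpolation model update. This is
# NOT the paper's exact scheme (WAPCE/RSFM and Eqs. (12)-(18) are not
# reproduced here); all names and thresholds are illustrative.
import numpy as np

def apce(response: np.ndarray) -> float:
    """Average peak-to-correlation energy of a response map."""
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / (np.mean((response - f_min) ** 2) + 1e-12)

def update_templates(x_prev, alpha_prev, x_new, alpha_new,
                     response, eta_base=0.02, apce_thresh=20.0):
    """Linearly interpolate the feature and filter templates, freezing the
    update when the response map suggests occlusion (low APCE)."""
    confidence = apce(response)
    # Freeze the update under low confidence so that an occluder
    # is not learned into the appearance model.
    eta = eta_base if confidence >= apce_thresh else 0.0
    x_t = (1.0 - eta) * x_prev + eta * x_new
    alpha_t = (1.0 - eta) * alpha_prev + eta * alpha_new
    return x_t, alpha_t

# Toy usage: a sharp unimodal response keeps the normal learning rate.
resp = np.zeros((50, 50)); resp[25, 25] = 1.0
x_t, a_t = update_templates(np.zeros((31, 31)), np.zeros((31, 31)),
                            np.ones((31, 31)), np.ones((31, 31)), resp)
```

The design point this illustrates is the one Algorithm 1 relies on: when the response map is multi-peaked or flat (a typical occlusion signature), the learning rate is suppressed so the templates $\hat{x}_t$, $\hat{\alpha}_t$ are carried over largely unchanged, which limits model drift until the target reappears.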