Author manuscript; available in PMC: 2021 Nov 2.
Published in final edited form as: Int Conf Big Data Smart Comput. 2021 Mar 10;2021:10.1109/bigcomp51126.2021.00023. doi: 10.1109/bigcomp51126.2021.00023

Algorithm 1. MP-Boost
MP-Boost(X, y, n, m, μ)
Initialization (t = 0):
 p^(1) = U[N] // observation probabilities
 q^(1) = U[M] // feature probabilities
 F^(1)(x_i) = 0, ∀i ∈ [N] // ensemble output
 G^(1)(x_i) = 0, ∀i ∈ [N] // out-of-patch output
while Stopping-Criterion(oop^(t)) not met do t ← t + 1
 1) Sample a minipatch:
  a) R^(t) = Sample(N, n, p^(t)) // select n instances
  b) C^(t) = Sample(M, m, q^(t)) // select m features
  c) (X^(t), y^(t)) = (X_{R^(t), C^(t)}, y_{R^(t)}) // minipatch
 2) Train a weak learner on the minipatch:
  a) h^(t) ∈ H: weak learner trained on (X^(t), y^(t))
 3) Update outputs:
  a) F^(t)(x_i) = F^(t−1)(x_i) + h^(t)((x_i)_{C^(t)}), ∀i ∈ [N]
 4) Update probability distributions:
  a) p_i^(t+1) = L(y_i, F^(t)(x_i)) / Σ_{k=1}^{N} L(y_k, F^(t)(x_k)), ∀i ∈ [N]
  b) q_j^(t+1) = (1 − μ) q_j^(t) + μ · r · I_j^{h^(t)}, ∀j ∈ C^(t)
   where r = Σ_{j ∈ C^(t)} q_j^(t)
 5) Out-of-Patch Accuracy:
  a) G^(t)(x_i) = G^(t−1)(x_i) + h^(t)((x_i)_{C^(t)}), ∀i ∉ R^(t)
  b) oop^(t) = (1/N) Σ_{i=1}^{N} 1{sgn(G^(t)(x_i)) = y_i}
end while
Return sgn(F^(T)), p^(T), q^(T)
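The loop above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: it assumes a one-split decision stump as the weak learner, the exponential loss L(y, F) = exp(−yF), a stump feature importance I^h placing all mass on the split feature (so I sums to 1 over C and q remains a distribution), and a fixed iteration count T in place of the Stopping-Criterion. All of these are implementation choices not fixed by the algorithm box; labels are assumed to be in {−1, +1}.

```python
import numpy as np

class Stump:
    """One-split decision stump predicting in {-1, +1} (assumed weak learner)."""
    def fit(self, X, y):
        best = np.inf
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                pred = np.where(X[:, j] <= thr, -1, 1)
                for s in (1, -1):                 # try both split orientations
                    err = np.mean(s * pred != y)
                    if err < best:
                        best, self.j, self.thr, self.s = err, j, thr, s
        return self

    def predict(self, X):
        return self.s * np.where(X[:, self.j] <= self.thr, -1, 1)

def mp_boost(X, y, n, m, mu=0.2, T=60, seed=0):
    """MP-Boost sketch; fixed T stands in for Stopping-Criterion(oop)."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    p = np.full(N, 1.0 / N)                # p^(1): observation probabilities
    q = np.full(M, 1.0 / M)                # q^(1): feature probabilities
    F = np.zeros(N)                        # ensemble output
    G = np.zeros(N)                        # out-of-patch output
    for _ in range(T):
        # 1) sample a minipatch: n instances and m features, without replacement
        R = rng.choice(N, size=n, replace=False, p=p)
        C = rng.choice(M, size=m, replace=False, p=q)
        # 2) train the weak learner on the minipatch
        h = Stump().fit(X[np.ix_(R, C)], y[R])
        # 3) update the ensemble output on all instances
        pred = h.predict(X[:, C])
        F += pred
        # 4a) reweight observations by their loss (exponential loss assumed)
        loss = np.exp(np.clip(-y * F, -500.0, 500.0))
        p = loss / loss.sum()
        # 4b) reweight the sampled features; stump importance is all on the
        #     split feature, so the patch's probability mass r is preserved
        I = np.zeros(m)
        I[h.j] = 1.0
        r = q[C].sum()
        q[C] = (1 - mu) * q[C] + mu * r * I
        # 5) accumulate predictions only on out-of-patch instances
        out = np.setdiff1d(np.arange(N), R)
        G[out] += pred[out]
        oop = np.mean(np.sign(G) == y)     # out-of-patch accuracy oop^(t)
    return np.sign(F), p, q, oop
```

On data where one feature drives the label, q should concentrate on that feature while p concentrates on the hardest observations; in practice oop would drive early stopping exactly as in the algorithm box rather than running a fixed T.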