
Algorithm 1.

Initial learning

Input:
  • Positive data set P0 (contains AU peak frames p and p ± 1);

  • Negative data set Q (contains other AUs and non-AUs);

  • Target false positive ratio Fr;

  • Maximum acceptable false positive ratio per cascade stage fr;

  • Minimum acceptable true positive ratio per cascade stage dr


Initialize:
  • Current cascade stage number t = 0;

  • Current overall cascade classifier’s true positive ratio Dt = 1.0;

  • Current overall cascade classifier’s false positive ratio Ft = 1.0;

  • S0 = {P0, Q0} is the initial working set, where Q0 ⊆ Q. The number of positive samples is Np; the number of negative samples is Nq = Np × R0, with R0 = 8 (see the sketch below);
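A minimal sketch of this setup, assuming P0 and Q are NumPy arrays of feature vectors; the function name build_working_set and the Generator-based sampling are illustrative, not from the paper:

```python
import numpy as np

def build_working_set(P0, Q, rng, R0=8):
    """Assemble S0 = {P0, Q0}: all Np positives plus a random subset Q0
    of Q with Nq = Np * R0 negatives, and uniform sample weights."""
    Np = len(P0)
    Nq = Np * R0
    Q0 = Q[rng.choice(len(Q), size=Nq, replace=False)]  # Q0 is a subset of Q
    X = np.vstack([P0, Q0])
    y = np.concatenate([np.ones(Np), -np.ones(Nq)])     # +1 positive, -1 negative
    w = np.full(len(y), 1.0 / len(y))                   # a normalized distribution
    return X, y, w
```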


While Ft > Fr
  1. t = t + 1; ft = 1.0; Normalize the weights ωt,i for each sample xi to guarantee that ωt = {ωt,i} is a distribution.

  2. While ft > fr

    1. For each feature ϕm, train a weak classifier on S0, then select the best feature ϕi (the one with the minimum weighted classification error).

    2. Add the feature ϕi to the strong classifier Ht and update the sample weights in the Gentle AdaBoost manner (see the sketch after this inner loop).

    3. Evaluate the current strong classifier Ht on S0 and adjust the rejection threshold under the constraint that the true positive ratio does not drop below dr.

    4. Decrease the threshold until dr is reached.

    5. Compute ft under this threshold.

    END While
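A compact sketch of one pass through this inner loop, assuming each feature ϕm is a column of the working-set matrix and the weak classifier is a weighted regression stump, the usual Gentle AdaBoost weak learner; all function names are illustrative:

```python
import numpy as np

def fit_stump(x, y, w):
    """Weighted least-squares stump f(x) = a*[x > theta] + b on one
    feature column (step 1's per-feature weak classifier)."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for theta in np.unique(x)[:-1]:                    # split between observed values
        m = x > theta
        lo = np.average(y[~m], weights=w[~m])          # constant below the split
        hi = np.average(y[m], weights=w[m])            # constant above the split
        err = np.sum(w * (y - np.where(m, hi, lo)) ** 2)
        if err < best[0]:
            best = (err, theta, hi - lo, lo)
    return best                                        # (error, theta, a, b)

def boost_round(X, y, w):
    """Steps 1-2: train a stump per feature, keep the best, and
    reweight the samples in the Gentle AdaBoost manner."""
    fits = [fit_stump(X[:, j], y, w) for j in range(X.shape[1])]
    j = int(np.argmin([f[0] for f in fits]))
    err, theta, a, b = fits[j]
    f = np.where(X[:, j] > theta, a + b, b)            # weak hypothesis output
    w = w * np.exp(-y * f)                             # Gentle AdaBoost update
    return (j, theta, a, b), w / w.sum()

def adjust_threshold(scores, y, dr):
    """Steps 3-4: lower the stage rejection threshold just far enough
    that at least a fraction dr of the positives still passes."""
    pos = np.sort(scores[y > 0])
    return pos[int(np.floor((1.0 - dr) * len(pos)))]

# Step 5: ft = np.mean(scores[y < 0] >= adjust_threshold(scores, y, dr))
```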

  3. Ft+1 = Ft × ft; Dt+1 = Dt × dr; keep in Q0 the negative samples that the current strong classifier Ht misclassifies (the current false positives) and record their number as Kfq.

  4. Use the detector Ht to bootstrap false positive samples randomly from the negative set Q, repeating until the negative working set again contains Nq samples (a sketch follows this loop).


END While
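The bootstrapping in step 4 can be sketched as follows, assuming stage_score applies the just-trained stage Ht to one sample and threshold is its stage rejection threshold; the names are illustrative, and a real implementation would guard against Q running out of false positives:

```python
import numpy as np

def bootstrap_negatives(Q0_fp, Q, stage_score, threshold, Nq, rng):
    """Keep the Kfq current false positives, then refill the negative
    working set with random samples from Q that the current stage Ht
    still misclassifies, until it again holds Nq samples."""
    kept = list(Q0_fp)
    while len(kept) < Nq:
        cand = Q[rng.integers(len(Q))]        # random draw from the negative pool
        if stage_score(cand) >= threshold:    # still passes the stage: a false positive
            kept.append(cand)
    return np.vstack(kept)
```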
Output:
 A t-level cascade in which each level is a strong boosted classifier with a rejection threshold for each of its weak classifiers.
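At detection time the learned cascade is applied level by level; the sketch below assumes the stump encoding from the earlier sketches and stores, per level, its weak classifiers alongside the per-weak-classifier rejection thresholds (an assumption about the data layout, not the paper's exact one):

```python
def cascade_predict(x, stages):
    """Run sample x through the t-level cascade: within each level the
    boosted score accumulates weak classifier by weak classifier and is
    checked against that weak classifier's rejection threshold."""
    for weaks, rejects in stages:             # one (stumps, thresholds) pair per level
        score = 0.0
        for (j, theta, a, b), reject_at in zip(weaks, rejects):
            score += a + b if x[j] > theta else b
            if score < reject_at:             # early rejection inside the level
                return False
    return True                               # survived every level: detected AU peak
```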