Sensors. 2020 Dec 24;21(1):52. doi: 10.3390/s21010052
Algorithm 1 CorrNet
Input: Training set with n instances in modality 1: $X_1 = \{x_{1i}\}_{i=1}^{n}$, $x_{1i} \in \mathbb{R}^{L \times C_1}$, and modality 2: $X_2 = \{x_{2i}\}_{i=1}^{n}$, $x_{2i} \in \mathbb{R}^{L \times C_2}$
Output: Fine-grained emotion labels (i.e., valence $Va = \{v_i\}_{i=1}^{n}$ and arousal $Ar = \{a_i\}_{i=1}^{n}$)
1: for j = 1 and 2 do
2:   Encoder $\phi_j\colon X_j \rightarrow \psi_j(\omega, c)$
3:   Decoder $\eta_j\colon \psi_j \rightarrow \bar{X}_j(\bar{\omega}, \bar{c})$
4: end for
5: Group instances according to video stimulus
6: for t in T = number of video stimuli do
7:   $(H_{1t}, H_{2t}) = \mathrm{CCA}(\psi_{1t}, \psi_{2t})$
8:   $F_t = [\psi_{1t} \cdot H_{1t},\ \psi_{2t} \cdot H_{2t}]$
9: end for
10: $F = \{F_t\}_{t=1}^{T}$, $F \in \mathbb{R}^{n \times k}$
11: $(a_i, v_i)_{i=1}^{n} = \mathrm{BLS}(F)$
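
For illustration, the per-modality encoder/decoder of steps 1-4 can be sketched as a 1-D convolutional autoencoder. The PyTorch code below is an assumption for exposition rather than the paper's exact architecture: the class name ModalityAutoencoder, the layer sizes, and the kernel width are illustrative choices.

import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Sketch of encoder phi_j (X_j -> psi_j) and decoder eta_j (psi_j -> Xbar_j)."""

    def __init__(self, in_channels, latent_channels=8):
        super().__init__()
        # Encoder phi_j with learnable weights (omega) and bias (c)
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, latent_channels, kernel_size=5, padding=2),
            nn.Tanh(),
        )
        # Decoder eta_j reconstructs the input from the latent representation
        self.decoder = nn.Conv1d(latent_channels, in_channels, kernel_size=5, padding=2)

    def forward(self, x):                      # x: (batch, C_j, L)
        psi = self.encoder(x)                  # latent representation psi_j
        x_hat = self.decoder(psi)              # reconstruction Xbar_j
        return psi, x_hat

# Toy training step on random data; shapes are illustrative only.
x = torch.randn(16, 3, 128)                    # 16 instances, C_j = 3 channels, length L = 128
model = ModalityAutoencoder(in_channels=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
psi, x_hat = model(x)
loss = nn.functional.mse_loss(x_hat, x)        # reconstruction loss
loss.backward()
opt.step()
# psi would typically be pooled/flattened into a per-instance feature vector
# before the per-stimulus CCA step (steps 5-10).
print(psi.shape, loss.item())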
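
The correlation-based fusion and classification of steps 5-11 can be sketched as follows, assuming the per-modality latent representations psi_1 and psi_2 are already available as (n x k_j) matrices. scikit-learn's CCA stands in for step 7, and a logistic-regression classifier is only a placeholder for the Broad Learning System (BLS) of step 11; function and variable names (cca_fuse, stimulus_ids) are illustrative.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

def cca_fuse(psi1, psi2, stimulus_ids, n_components=8):
    """Per-stimulus CCA (step 7) and concatenation of projected latents (steps 8-10)."""
    n = psi1.shape[0]
    fused = np.zeros((n, 2 * n_components))
    for t in np.unique(stimulus_ids):                      # one CCA per video stimulus
        idx = stimulus_ids == t
        cca = CCA(n_components=n_components)
        z1, z2 = cca.fit_transform(psi1[idx], psi2[idx])   # latents projected by (H_1t, H_2t)
        fused[idx] = np.hstack([z1, z2])                   # F_t = [psi_1t H_1t, psi_2t H_2t]
    return fused                                           # F in R^{n x k}

# Toy usage on random latents; shapes and label granularity are illustrative only.
rng = np.random.default_rng(0)
n, k1, k2 = 300, 32, 16
psi1, psi2 = rng.normal(size=(n, k1)), rng.normal(size=(n, k2))
stimulus_ids = rng.integers(0, 5, size=n)                  # 5 video stimuli
arousal = rng.integers(0, 3, size=n)                       # e.g. low/medium/high arousal labels

F = cca_fuse(psi1, psi2, stimulus_ids)
clf = LogisticRegression(max_iter=1000).fit(F, arousal)    # placeholder for BLS(F)
print("training accuracy:", clf.score(F, arousal))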