PLoS One. 2014 Nov 5;9(11):e109381. doi: 10.1371/journal.pone.0109381

Table 2. Quantitative and qualitative properties of selected studies on activity and intention recognition.

Reference F.latent.infty F.plan.synth F.duration F.action.sel F.probability F.struct.state F.non.monoton F.complexity Method Scenario N.states N.plan.length N.classes N.subjects M.accuracy M.conf.based
1 [1] [4] 1 BD M 70,000 20 3 23
2 [2] [56] 1 BD OM sim
3 [3] 1 BPF O 70,000 15 10 6
4 [4] [20] t BPl K 10,000 3 sim
5 [5] [20] 1 BP K 70,000 6 5 sim
6 [57] [19] 1 BD A 200,000 5 6 6
7 [12] [19] 1 BD K 70,000 40 2
8 [21] [18] 1 BD O 250,000 5 5
9 [58] [59] t NBN M 1,000 15 sim
10 [60] 1 BH K 28 6 6
11 [61] 1 BH A 300 12 15 3
12 [62] [19] 1 BRP K 96 13 2
13 [63] [19] 1 BRP O 3,500 3 3 2
14 [64] [56] t OML M 20 4 14
15 [29] [19] 1 BD AK 528 33 3
16 [65] [19] 1 NMH O 720 2 1
17 [66] [19] 1 LDL K 15 6 sim
18 [67] [19] 1 LDL AK 24 8 3
19 [7] t 2 OG M 50 2
20 [68] [19] 1 LP A 100 40 7 6
21 [8] [18] 1 BMF A 20,000 14 14 3

“▪” = feature included in study.

“□” = feature not included.

“x” = value/property x not explicitly stated in study description.

“–” = value unknown.

“◊” = property not meaningful considering target of study.

Method codes: L = logic-based (DL = description logic, P = combined with possibility theory). B = some variant of sequential Bayesian filtering (exact: H = HMM or extension, D = other DBN, Pl = transformation into a planning problem, P = partially observable Markov decision process; approximate: PF = particle filter, RP = Rao-Blackwellized particle filter, MF = marginal filter). N = non-sequential Bayesian inference (MH = Metropolis-Hastings, BN = unrolled Bayes net). O = other exact method (G = some kind of grammar, ML = Markov logic network). Scenario codes: K = kitchen task, A = other activities of daily living, O = office, M = miscellaneous other scenario.

We consider the first five studies as CSSM-like approaches.
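As a point of reference for the “B” method codes above, the following is a minimal sketch of exact sequential Bayesian filtering on a toy two-state model, i.e. the HMM forward recursion corresponding to the H code. The transition matrix, observation matrix, and observation sequence are invented for illustration and are not drawn from any of the cited studies.

```python
import numpy as np

# Illustrative HMM forward filtering (toy model, not from the cited studies).
# Recursively computes the filtering distribution p(x_t | y_1..t).

T = np.array([[0.9, 0.1],      # transition model: T[i, j] = p(x_t = j | x_{t-1} = i)
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],      # observation model: E[i, k] = p(y_t = k | x_t = i)
              [0.3, 0.7]])
belief = np.array([0.5, 0.5])  # prior p(x_0)

observations = [0, 1, 1, 0]    # toy observation sequence (symbol indices)

for y in observations:
    belief = T.T @ belief           # predict: propagate the belief through the transition model
    belief = belief * E[:, y]       # update: weight each state by the observation likelihood
    belief = belief / belief.sum()  # normalize to obtain the posterior over states
    print(belief)
```

Approximate variants such as the particle filter (PF) and Rao-Blackwellized particle filter (RP) replace the exact belief vector with a weighted sample set, which is what makes the much larger state spaces listed in the table tractable.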