Front Neurorobot. 2019 Jul 26;13:45. doi: 10.3389/fnbot.2019.00045

Algorithm 1. Intrinsic phase: one step of learning of affordances and forward models

1: (object_image, object_position) ← Scan(environment)
2: (action, motivation_signal) ← SelectActionWithHighestIM(action_list, predictors, object_image, object_position)
3: if (motivation_signal ≥ motivation_threshold) then
4:       ExecuteAction(action, object_image, object_position)
5:       (new_object_image, new_object_position) ← ScanEffect(new_environment, environment)
6:       affordance ←…
7:            Affordance(action, new_object_image, new_object_position, object_image, object_position)
8:       UpdateWeights(affordance_predictor, action, object_image, affordance)
9:       UpdateWeights(affordance_predictor, action, object_image, affordance, improve_predictor) ⊳ Only IMP
10:       if (affordance = TRUE) then
11:              UpdateWeights(effect_predictors, action, object_image, object_position,…
12:                  new_object_image, new_object_position)
13:       end if
14:       motivation_threshold ← LeakyAverage(motivation_threshold, motivation_signal) ⊳ Only IGN/IMP
15: else
16:       motivation_threshold ← LeakyAverage(motivation_threshold, 0) ⊳ Only IGN/IMP
17: end if
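The step above can be sketched in Python. This is a hypothetical illustration, not the paper's implementation: the environment interface (`env.scan`, `env.execute`, `env.scan_effect`), the predictor objects, and the per-action `intrinsic_motivation` scoring are all assumed placeholder APIs, and the affordance test (did the action change the object state?) is one plausible reading of the `Affordance(...)` call. The `LeakyAverage` threshold update is modeled as an exponential moving average, applied only in the IGN/IMP variants as the listing's comments indicate.

```python
def leaky_average(avg, value, alpha=0.1):
    """Exponential moving average; models the LeakyAverage threshold update."""
    return (1 - alpha) * avg + alpha * value

def intrinsic_phase_step(env, actions, affordance_predictor, effect_predictors,
                         motivation_threshold, alpha=0.1):
    """One intrinsic-phase step (hedged sketch of Algorithm 1).

    All objects are placeholders with assumed methods, not the paper's API.
    Returns the (possibly updated) motivation threshold.
    """
    obj_image, obj_position = env.scan()

    # Line 2: pick the action with the highest intrinsic-motivation signal.
    action, motivation = max(
        ((a, a.intrinsic_motivation(affordance_predictor, obj_image, obj_position))
         for a in actions),
        key=lambda pair: pair[1])

    if motivation >= motivation_threshold:                      # line 3
        env.execute(action, obj_image, obj_position)            # line 4
        new_image, new_position = env.scan_effect()             # line 5

        # Lines 6-8: an affordance is registered if the object state changed.
        affordance = (new_image, new_position) != (obj_image, obj_position)
        affordance_predictor.update(action, obj_image, affordance)

        if affordance:                                          # lines 10-13
            effect_predictors.update(action, obj_image, obj_position,
                                     new_image, new_position)

        # Line 14 (IGN/IMP only): track recent motivation signals.
        motivation_threshold = leaky_average(motivation_threshold,
                                             motivation, alpha)
    else:
        # Line 16 (IGN/IMP only): decay the threshold toward zero.
        motivation_threshold = leaky_average(motivation_threshold, 0.0, alpha)

    return motivation_threshold
```

Decaying the threshold toward zero when no action is motivating enough (the `else` branch) lets the agent eventually act again even after a run of unpromising situations.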