Fig. 5.
Schematic illustration of the computations used by the model. (A-D) Learning to identify the 'in-front' (top row), 'behind' (middle row), and 'containment' (bottom row) relations. Boundaries between the object and the container are marked in blue when owned by the object and in red when owned by the container. (A) Identifying containment in dynamic input. Each row represents a dynamic event, depicting an object placed 'in-front' of, 'behind', or 'inside' a stationary object (top to bottom). The model segments the objects (colored regions), detects the motion boundary between them, and detects the switch from 'blue in front of red' to 'red in front of blue'. (B) Detecting 'containment' in static images (two examples, bottom). 'Paradoxical occlusion' is detected along the common border: the object is in front of the container at the blue boundary but behind it at the rim (red boundary). (C) 'Loose' vs. 'tight' fit is measured by the fraction of the internal boundary along which occlusion is detected (solid red) relative to the boundary's full length (dotted red). (D) High-angle view: the container's region is segregated into 'front' (red) and 'back' (orange) regions, separated by the internal boundary. Detection of 'containment' is extended to include occlusion (blue boundary) confined to the 'back' region. (E) The 'cover' relation (when the internal part, i.e. the back region, is invisible) and (F) the 'support' relation are related to containment, but in the model they require additional learning and are predicted to appear at later stages.
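The two measurements the legend describes can be made concrete: the border-ownership switch that signals 'paradoxical occlusion' (panel B) and the occluded-boundary fraction that grades fit as loose or tight (panel C). The article does not supply an implementation; the Python sketch below is a minimal illustration under our own assumptions, and the function names (`fit_tightness`, `signals_containment`), the string ownership labels, and the toy boundary masks are hypothetical, not from the source.

```python
import numpy as np

def fit_tightness(detected: np.ndarray, internal: np.ndarray) -> float:
    """Panel C: fraction of the container's internal boundary (dotted red)
    along which occlusion by the object is detected (solid red).
    Both arguments are boolean masks over boundary pixels."""
    total = int(internal.sum())
    return (detected & internal).sum() / total if total else 0.0

def signals_containment(ownership: list[str]) -> bool:
    """Panel B: 'paradoxical occlusion' -- border ownership switches along
    the common border between the object ('blue') and the container ('red')."""
    return 'object' in ownership and 'container' in ownership

# Toy example (hypothetical numbers): a 20-pixel internal boundary,
# 15 pixels of which are occluded by the contained object.
internal = np.zeros(30, dtype=bool); internal[:20] = True
detected = np.zeros(30, dtype=bool); detected[:15] = True
print(fit_tightness(detected, internal))                       # 0.75 -> relatively tight fit
print(signals_containment(['object', 'object', 'container']))  # True -> containment cue
```

A fuller implementation would first recover the boundary masks and per-pixel ownership labels from the segmentation and motion-boundary stages that the legend describes; the sketch takes those as given.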
