| Algorithm A2. Trajectory synthesis procedure |
| Input: Video sequences (1-4) |
| Output: List of tracking rectangle coordinates of the most visible position from all views/frame file; frame number of the analyzed position and the movement attribute (moving, freezing). |
| 01. Choose the camera (view) set to be included in the project; generate the result lists: one list for each project camera and one list for the fused trajectory; for each project video frame a list entry is allocated, including: camera number, localization coordinates and movement attributes. |
| 02. Establish the tracking mode: parallel (two to four camera tasks) or sequential (for debugging purposes); |
| 03. for each project camera do |
| 04. set the polygon of the useful view of the entire camera scene; |
| 05. set the door areas of the scene through which the panda or a zoo technician can enter or leave the scene; |
| 06. set the areas where the panda cannot be considered asleep in case of lack of movement features, for the current camera; |
| 07. set the areas where the movement features should be erased because they are generated by the time and frame counters or by the water waves, for the views that include water ponds (mainly the fourth view); |
| 08. end for |
| 09. for each frame, denoted iFrame, of the current video sequence (start_frame to end_frame) do |
| 10. if iFrame - start_Frame < 3, compute the gray-level average (denoted gray_level_average_current) for each view; |
| 11. if at least one gray_level_average_current is too low or too high (the image is entirely black or white) |
| 12. start_Frame = iFrame; |
| 13. continue; end if |
| 14. fill the list gray_level_average[3] of each view: |
| task(i).gray_level_average[2] = task(i).gray_level_average[1] |
| task(i).gray_level_average[1] = task(i).gray_level_average[0] |
| task(i).gray_level_average[0] = task(i).gray_level_average_current; |
| 15. continue; end if |
| 16. if parallel_mode, launch the tracking task (iFrame, camera_nmb, start_frame) of all the cameras in the project; else, for each camera in the project, execute the tracking task (iFrame, camera_nmb, start_frame); end for, end if |
| 17. if at least one of the tasks returns a value related to zoo technician presence in the scene, do not consider the current frame in the fusion process; end if |
| 18. if a task returns non-valid coordinates, other than those signaling the technician presence, do not consider it in the fusion process; end if |
| 19. if only one task returns a valid position, retain its view as the most suitable view; end if |
| 20. if at least two tasks return valid positions, retain the most suitable view considering the area localization weight and the previous view in the fused-view chain; |
| 21. if the previous selection is near the border of a full-view area, neighboring another full-view area, and the movement speed is fast (many movement features), do not switch views; |
| else |
| 22. if the previous selection is near the fading-view border of a full-view area and at least one of the other available views is in a full-view area, consider that one instead of the current view; end if (lines 22, 21, 20) |
| 23. end for; |
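Lines 10-15 of the listing can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical rendering of the blank-frame guard and of the three-entry gray-level history kept per view; the thresholds `BLANK_LOW`/`BLANK_HIGH`, the `GrayLevelWindow` class and the `warm_up` helper are names introduced here for illustration and are not part of the original procedure.

```python
# A minimal sketch of the gray-level bookkeeping in lines 10-15, assuming
# NumPy frames; the names and thresholds below are hypothetical.
import numpy as np

BLANK_LOW = 10.0    # assumed threshold: image considered entirely black
BLANK_HIGH = 245.0  # assumed threshold: image considered entirely white


def gray_level_average(frame: np.ndarray) -> float:
    """Mean gray level of one view's frame (grayscale or color)."""
    if frame.ndim == 3:                      # collapse the color planes first
        frame = frame.mean(axis=2)
    return float(frame.mean())


class GrayLevelWindow:
    """Keeps the last three gray-level averages of one view (line 14)."""

    def __init__(self) -> None:
        self.values = [0.0, 0.0, 0.0]        # gray_level_average[0..2]

    def push(self, current: float) -> None:
        # task(i).gray_level_average[2] = task(i).gray_level_average[1], etc.
        self.values[2] = self.values[1]
        self.values[1] = self.values[0]
        self.values[0] = current


def warm_up(frames_per_view, i_frame, start_frame, windows):
    """Lines 10-15: during the first three frames after start_frame, skip
    the frame if any view is entirely black or white, otherwise fill the
    sliding windows and continue to the next frame.
    Returns (new_start_frame, skip_current_frame)."""
    if i_frame - start_frame >= 3:
        return start_frame, False            # warm-up finished, keep frame

    averages = [gray_level_average(f) for f in frames_per_view]
    if any(a < BLANK_LOW or a > BLANK_HIGH for a in averages):
        return i_frame, True                 # line 12: restart from here

    for window, avg in zip(windows, averages):
        window.push(avg)                     # line 14
    return start_frame, True                 # line 15: continue
```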
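Line 16 only states that the per-camera tracking tasks run either in parallel or sequentially (the latter for debugging). One possible way to express that dispatch is sketched below; the `tracking_task` callable, the thread-pool choice and the `run_tracking_tasks` signature are assumptions, since the procedure does not fix the concurrency mechanism.

```python
# A minimal sketch of line 16, assuming a caller-supplied
# tracking_task(i_frame, camera_nmb, start_frame) callable.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List


def run_tracking_tasks(
    tracking_task: Callable[[int, int, int], dict],
    cameras: List[int],
    i_frame: int,
    start_frame: int,
    parallel_mode: bool = True,
) -> Dict[int, dict]:
    """Launch the per-camera tracking tasks for one frame and collect
    their results keyed by camera number."""
    if parallel_mode:
        # Parallel mode: two to four camera tasks run concurrently.
        with ThreadPoolExecutor(max_workers=len(cameras)) as pool:
            futures = {
                cam: pool.submit(tracking_task, i_frame, cam, start_frame)
                for cam in cameras
            }
            return {cam: fut.result() for cam, fut in futures.items()}
    # Sequential mode (debugging): run each camera task in turn.
    return {cam: tracking_task(i_frame, cam, start_frame) for cam in cameras}
```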
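Lines 17-22 define the order of the fusion decisions rather than a data layout. The sketch below encodes those decisions under assumed data structures: the `TaskResult` fields, the area-weight bonus for the previous view and the border flags are hypothetical and only mirror the rules of the listing.

```python
# A minimal sketch of the fusion rules in lines 17-22; data layout assumed.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class TaskResult:
    valid: bool                   # task returned valid coordinates
    technician: bool              # coordinates signal technician presence
    area_weight: float            # localization weight of the area hit
    near_full_view_border: bool   # near the border of a full-view area
    near_fading_border: bool      # near the fading-view border
    fast_movement: bool           # many movement features in this frame


def fuse_views(results: Dict[int, TaskResult],
               previous_view: Optional[int]) -> Optional[int]:
    """Return the most suitable camera for the current frame,
    or None if the frame is dropped from the fusion process."""
    # Line 17: any technician detection drops the whole frame.
    if any(r.technician for r in results.values()):
        return None

    # Line 18: ignore tasks with non-valid coordinates.
    valid = {cam: r for cam, r in results.items() if r.valid}
    if not valid:
        return None
    # Line 19: a single valid task wins outright.
    if len(valid) == 1:
        return next(iter(valid))

    # Line 20: prefer the area weight, with an assumed small bonus
    # for staying on the previous view of the fused-view chain.
    def score(cam: int) -> float:
        bonus = 0.1 if cam == previous_view else 0.0
        return valid[cam].area_weight + bonus

    candidate = max(valid, key=score)

    if previous_view in valid:
        prev = valid[previous_view]
        # Line 21: fast movement near a full-view border -> do not switch.
        if prev.near_full_view_border and prev.fast_movement:
            return previous_view
        # Line 22: near the fading border, switch to a full-view camera.
        if prev.near_fading_border and not valid[candidate].near_fading_border:
            return candidate
    return candidate
```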