| Characteristic of biological replay | Functional benefit | Status in deep learning |
| --- | --- | --- |
| Replay includes contents from both new and old memories | Prevents forgetting | New data are interleaved with old data to overcome forgetting |
| Only a few selected experiences are replayed | Increases efficiency; experiences are weighted based on internal representations | Related to subset selection of what should be replayed |
| Replay can be partial (not the entire experience) | Improves efficiency; allows better integration of parts, generalization, and abstraction | Not explored in deep learning |
| Replay is observed in sensory and association cortex (both independent and coordinated) | Allows vertical and horizontal integration in hierarchical memory structures | Some methods use representational replay of higher-level inputs or feature maps |
| Replay is modulated by reward | Allows reward to influence replay | Similar to reward functions in reinforcement learning |
| Replay is spontaneously generated (without external inputs) | Allows all of the above features of replay without explicitly stored memories | Some methods replay samples generated from random inputs |
| Replay during NREM sleep differs from replay during REM sleep | Different states allow different types of memory manipulation | Deep learning currently focuses on NREM-like replay and ignores REM replay |
| Replay is temporally structured | Allows more memory combinations and follows the temporal structure of waking experience | Largely ignored by existing methods, which replay static, uncorrelated inputs |
| Replay can happen in reverse | Allows reward-mediated weighting of replay | Methods must first support temporal correlations before reverse replay is possible |
| Replay differs for novel versus non-novel inputs | Allows selective replay to be weighted by novelty | Replay is largely the same regardless of input novelty |
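
The sketches below illustrate how several of the table's rows map onto deep learning code. All of them are minimal illustrations under assumed interfaces, not implementations from any particular paper. The first covers the first two rows: a buffer that keeps a scalar weight per stored experience, samples replay items in proportion to those weights (so only a few selected experiences are replayed), and interleaves them with new data.

```python
# Minimal sketch of interleaved, prioritized experience replay.
# `ReplayBuffer`, `priority`, and `make_training_batch` are
# illustrative names, not from the source.
import random

class ReplayBuffer:
    """Stores past (x, y) examples with a scalar priority each."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []       # list of (x, y) pairs
        self.priorities = []  # one importance weight per stored item

    def add(self, x, y, priority=1.0):
        if len(self.items) >= self.capacity:
            # Evict the lowest-priority memory, mirroring the idea
            # that only a few selected experiences are retained.
            drop = min(range(len(self.priorities)),
                       key=self.priorities.__getitem__)
            self.items.pop(drop)
            self.priorities.pop(drop)
        self.items.append((x, y))
        self.priorities.append(priority)

    def sample(self, k):
        # Priority-weighted sampling: experiences are replayed in
        # proportion to their assigned importance.
        return random.choices(self.items, weights=self.priorities, k=k)

def make_training_batch(new_batch, buffer, n_replay):
    """Interleave new data with replayed old data (table row 1)."""
    return list(new_batch) + buffer.sample(n_replay)
```

The priority values are left open on purpose: loss magnitude, recency, or reward are all common choices for the weighting.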
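Representational replay (fourth row) can be sketched by storing activations from a frozen feature extractor and replaying them through the trainable head only, so raw inputs never need to be kept. The layer split and sizes below are arbitrary assumptions.

```python
# Sketch of representational (latent) replay with PyTorch.
import random
import torch
import torch.nn as nn

extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
for p in extractor.parameters():
    p.requires_grad_(False)  # lower layers frozen; only the head learns

head = nn.Linear(128, 10)
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

latent_buffer = []  # (feature, label) pairs instead of raw images

def store_latents(x, y):
    # Keep the higher-level representation, not the sensory input,
    # analogous to replay at association-cortex-like levels.
    with torch.no_grad():
        latent_buffer.append((extractor(x), y))

def replay_step():
    h, y = random.choice(latent_buffer)
    loss = loss_fn(head(h), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```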
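Spontaneous, internally generated replay (sixth row) is often approximated with a generative model: feed random latent vectors to a trained decoder and label the resulting pseudo-inputs with a frozen copy of the old model, so no explicit memories are stored. The decoder architecture here is a placeholder assumption.

```python
# Sketch of generative ("spontaneous") replay.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
old_model = nn.Linear(784, 10)  # frozen copy of the previous model

def generate_replay_batch(batch_size=64):
    with torch.no_grad():
        z = torch.randn(batch_size, 32)        # random internal activity
        x_fake = decoder(z)                     # spontaneously generated input
        y_soft = old_model(x_fake).softmax(-1)  # soft labels from the old model
    return x_fake, y_soft
```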
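Temporally structured and reverse replay (eighth and ninth rows) presuppose that memories are stored as sequences rather than independent samples. A minimal sketch, assuming trajectories of (state, action, reward) steps, where reward flips the replay direction so credit can propagate backwards from the outcome toward the steps that produced it:

```python
# Sketch of sequence replay with reward-triggered reversal.
import random

trajectory_buffer = []  # each entry: list of (state, action, reward) steps

def replay_sequence():
    traj = random.choice(trajectory_buffer)
    rewarded = sum(r for _, _, r in traj) > 0
    # Reverse replay is most often observed after rewarding episodes;
    # here total reward decides whether the sequence runs backwards.
    return list(reversed(traj)) if rewarded else list(traj)
```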
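Finally, novelty-modulated replay (last row) is listed as largely missing from deep learning. One plausible sketch, offered purely as an assumption rather than an established method, uses per-example loss as a surprise proxy and samples memories in proportion to it, so novel or poorly predicted items are replayed more often.

```python
# Sketch of novelty-weighted replay sampling. Memories are assumed
# to be (x, y) pairs of input tensor and long-tensor label.
import random
import torch
import torch.nn.functional as F

def novelty_scores(model, memories):
    # Per-example loss as a crude novelty/surprise proxy.
    with torch.no_grad():
        return [F.cross_entropy(model(x.unsqueeze(0)),
                                y.unsqueeze(0)).item()
                for x, y in memories]

def sample_by_novelty(model, memories, k):
    weights = novelty_scores(model, memories)
    return random.choices(memories, weights=weights, k=k)
```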