Editorial. Frontiers in Neurorobotics. 2023 Jan 6;16:1120167. doi: 10.3389/fnbot.2022.1120167

Editorial: Toward and beyond human-level AI, volume II

Witali Dunin-Barkowski 1,*, Alexander Gorban 2,3
PMCID: PMC9853958  PMID: 36687208

AI systems have surpassed human level in many computational competencies, and the number of such competencies grows almost daily. Still, modern AI systems are specialized tools aimed at solving particular problems or limited sets of problems. Universal, human-like intelligence remains out of reach for these systems. In the opinion of many professionals, a practical solution to this general problem appears very hard to obtain (Zhang et al., 2022), and it is unclear when it might be achieved (Stein-Perlman et al., 2022). The task also appears daunting to the general public (Schmidt, 2022).

In the first paper of this Research Topic (Rivera, Popek et al.), the authors propose a framework called PICO that detects the presence of novel behaviors and constructs a library of behavior primitives from unlabeled demonstrations. A gap in a trajectory is defined as a region where actions cannot be predicted with sufficiently high probability under the current behavior model; when such a gap occurs, a new behavior primitive is added. The approach is evaluated on a reach-grab-lift task with a robotic arm and shows better labeling and reconstruction accuracy than comparable approaches.

The paper by Rivera, Staley et al. is devoted to multi-agent reinforcement learning in complex environments, such as dense urban defense-related scenarios. The authors introduce the AI Arena framework, in which different agents control tanks and fight each other in a 5 vs. 5 tank combat game. Agents on the same team are shown to converge to a cooperative set of behaviors; the work thus demonstrates the emergence of cooperative behavior.

In the third paper (Limbacher and Legenstein), the authors demonstrate the emergence of clustering of temporally correlated inputs on dendritic branches in a setting that combines a generic stochastic rewiring principle with a simple synaptic plasticity rule. The mechanism is demonstrated in a computational model and supported by a thorough theoretical analysis. The authors hypothesize that such clustering might protect memories from catastrophic forgetting on an intermediate time scale.

Finally, Krauss and Maier give a brief review of theories of consciousness in neuroscience and AI. A general philosophical perspective on the problem is presented, and the main directions in the philosophy of consciousness are described. Some noteworthy experiments in the neurophysiology of consciousness are reviewed, and popular theories of consciousness, including that of Schmidhuber (1991), are discussed. Overall, many interesting experiments, theories, and ideas are covered in the paper.

Still, it is not clear how to achieve the goal of artificial general intelligence (AGI). One of the most significant obstacles on the way to general intelligence is the speed of learning. In this respect, one- or few-shot learning is of special importance. This kind of learning works in practice in both technical and biological systems, despite its apparent contradiction with classical statistical learning theory. It turns out that a general approach to the modern mathematical formulation of the problem can be found (Tyukin et al., 2021a). In some cases, one can relate the intrinsic dimension of the transformed data to the probability of learning successfully from few demonstrations (Gorban et al., 2021b; Tyukin et al., 2022).
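To make the idea concrete, the following minimal numpy sketch (an illustration of the general principle, not the exact construction of the cited papers) shows why a single example can define a useful classifier in high dimension: one labeled point x is turned into a linear threshold rule, and the rate at which independent background points are wrongly accepted by that rule drops quickly as the dimension d grows.

    # Minimal one-shot illustration: a single example x defines the rule
    # f(z) = <z, x> / <x, x> >= 1/2. For i.i.d. Gaussian background data the
    # false-positive rate of this rule shrinks rapidly with dimension d.
    import numpy as np

    rng = np.random.default_rng(0)
    n_background = 10_000

    for d in (2, 10, 50, 200, 1000):
        x = rng.standard_normal(d) / np.sqrt(d)                   # the single labeled example
        z = rng.standard_normal((n_background, d)) / np.sqrt(d)   # unrelated background points
        accepted = (z @ x) / np.dot(x, x) >= 0.5                  # one-shot linear threshold rule
        print(f"d = {d:4d}   false-positive rate = {accepted.mean():.4f}")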

The problem of learning from a small number of examples is closely related to the recently described "blessing of dimensionality" (Gorban et al., 2016), the counterpart of the well-known "curse of dimensionality" (Sutton et al., 2022). The connection between these two phenomena has a fundamental mathematical nature (Gorban and Tyukin, 2017); in particular, it is linked to Hilbert's sixth problem (Gorban and Tyukin, 2018) and to the surprising effectiveness of small neural ensembles in the high-dimensional brain (Gorban et al., 2019). These properties can be applied effectively to concrete features of the living brain, such as the hippocampus (Tyukin et al., 2019).
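A short numerical illustration (a generic demonstration, not taken from the cited works) of the geometry behind this blessing: independent random vectors in high dimension are almost orthogonal to one another, so their pairwise cosine similarities concentrate near zero as the dimension grows, which is what makes simple linear functionals so effective at separating random points.

    # Pairwise cosine similarities of independent random unit vectors
    # concentrate around zero as the dimension grows (quasi-orthogonality).
    import numpy as np

    rng = np.random.default_rng(1)
    n_points = 500

    for d in (3, 30, 300, 3000):
        v = rng.standard_normal((n_points, d))
        v /= np.linalg.norm(v, axis=1, keepdims=True)     # project onto the unit sphere
        cos = v @ v.T                                     # all pairwise cosines
        off_diag = np.abs(cos[np.triu_indices(n_points, k=1)])
        print(f"d = {d:4d}   mean |cos| = {off_diag.mean():.3f}   max |cos| = {off_diag.max():.3f}")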

Further analysis reveals a fundamental tradeoff between complexity and simplicity in high-dimensional spaces (Gorban et al., 2020) and makes effective use of the geometry of few-shot learning (Tyukin et al., 2021c). Among the tasks important for practical AI, we note the development of new methods (Akinduko et al., 2016; Mirkes et al., 2022; Zhou et al., 2022) and tools (Rybnikova et al., 2020, 2021; Bac et al., 2021) for building AI systems.
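As one concrete example of such tooling, intrinsic dimension estimators of the kind collected in scikit-dimension (Bac et al., 2021) can be sketched in a few lines. The snippet below implements a simple TwoNN-style estimator in plain numpy on a synthetic data set (a two-dimensional plane embedded in twenty ambient dimensions); it is meant only as an illustration of the idea, not as a replacement for the maintained implementations in that package.

    # A minimal TwoNN-style intrinsic dimension estimator: for each point the
    # ratio mu = r2/r1 of distances to its two nearest neighbors satisfies
    # P(mu > t) = t^(-d) on a d-dimensional manifold, so log(mu) is exponentially
    # distributed with rate d and the maximum-likelihood estimate is
    # d = N / sum(log(mu)).
    import numpy as np

    def twonn_dimension(X):
        sq = (X ** 2).sum(axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
        dists = np.sqrt(d2)
        np.fill_diagonal(dists, np.inf)                  # ignore self-distances
        dists.sort(axis=1)
        mu = dists[:, 1] / dists[:, 0]                   # second- over first-nearest-neighbor distance
        return len(X) / np.log(mu).sum()

    rng = np.random.default_rng(2)
    latent = rng.standard_normal((500, 2))               # 2-D latent coordinates
    embedding = rng.standard_normal((2, 20))             # random linear map into 20-D
    X = latent @ embedding + 0.001 * rng.standard_normal((500, 20))   # small noise
    print(f"ambient dimension = {X.shape[1]}, estimated intrinsic dimension = {twonn_dimension(X):.2f}")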

In the search for AGI, one of the most important clusters of problems is connected with medical applications. Even seemingly routine problems such as child health monitoring (Roland et al., 2021) can hide inherent obstacles. In the more complicated case of psychological profiling for the prevention of drug abuse, machine learning yields impressive results (Fehrman et al., 2015). General considerations of the adaptability of individuals and of technical systems also yield useful hints for the AGI problem (Gorban et al., 2021a). Finally, the safety of AI systems should undoubtedly be a priority (Tyukin et al., 2021b).

A major current trend in the development of AI systems is a return to attempts to understand the mechanisms of real brain function, as expressed in a recent Nature article (Mehonic and Kenyon, 2022) and in an appeal by prominent researchers in AI and computational neuroscience (Zador et al., 2022). There it is argued that much more study of natural neural intelligence is needed to obtain genuinely general intelligence in technological systems. In this respect, the role of glia in information processing in the brain has been revealed in thorough studies (Gordleeva et al., 2019). More attention is also needed for probably unduly overlooked ideas about the operating principles and functions of the cerebellum (Dunin-Barkowski and Wunsch, 2000; Shakirov, 2022). Besides motor control, the cerebellum deals with human emotions (Adamaszek et al., 2022), with the organism's rewards (Kostadinov and Häusser, 2022), and with many cognitive problems, especially those connected with vision (Vaina et al., 2001), including creative tasks (Saggar et al., 2015). Unfortunately, even the crude details of cerebellar circuit operation remain a subject of disagreement between theoreticians (Willshaw et al., 2015) and experimentalists (Streng et al., 2018).

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding Statement

This work was financially supported by the State Program of SRISA RAS No. FNEF-2022-0003.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Adamaszek M., Manto M., Schutter D. J. L. G. (2022). The Emotional Cerebellum. New York, NY: Springer.
2. Akinduko A. A., Mirkes E. M., Gorban A. N. (2016). SOM: Stochastic initialization versus principal components. Inf. Sci. 364–365, 213–221. 10.1016/j.ins.2015.10.013
3. Bac J., Mirkes E. M., Gorban A. N., Tyukin I., Zinovyev A. (2021). Scikit-dimension: a Python package for intrinsic dimension estimation. Entropy 23, 1368. 10.3390/e23101368
4. Dunin-Barkowski W. L., Wunsch D. C. (2000). Phase-based cerebellar learning of dynamic signals. Neurocomputing 32–33, 709–725. 10.1016/S0925-2312(00)00236-8
5. Fehrman E., Muhammad A. K., Mirkes E. M., Egan V., Gorban A. N. (2015). The five factor model of personality and evaluation of drug consumption risk. arXiv:1506.06297. 10.48550/arXiv.1506.06297
6. Gorban A. N., Grechuk B., Mirkes E. M., Stasenko S. V., Tyukin I. Y. (2021b). High-dimensional separability for one- and few-shot learning. Entropy 23, 1090. 10.3390/e23081090
7. Gorban A. N., Makarov V. A., Tyukin I. Y. (2019). The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys. Life Rev. 29, 55–88. 10.1016/j.plrev.2018.09.005
8. Gorban A. N., Makarov V. A., Tyukin I. Y. (2020). High-dimensional brain in a high-dimensional world: blessing of dimensionality. Entropy 22, 82. 10.3390/e22010082
9. Gorban A. N., Tyukin I. Y. (2017). Stochastic separation theorems. Neural Netw. 94, 255–259. 10.1016/j.neunet.2017.07.014
10. Gorban A. N., Tyukin I. Y. (2018). Blessing of dimensionality: mathematical foundations of the statistical physics of data. Phil. Trans. R. Soc. A 376, 20170237. 10.1098/rsta.2017.0237
11. Gorban A. N., Tyukin I. Y., Romanenko I. (2016). The blessing of dimensionality: separation theorems in the thermodynamic limit. IFAC-PapersOnLine 49-24, 64–69. 10.1016/j.ifacol.2016.10.755
12. Gorban A. N., Tyukina T. A., Pokidysheva L. I., Smirnova E. V. (2021a). Dynamic and thermodynamic models of adaptation. Phys. Life Rev. 10.1016/j.plrev.2021.03.001
13. Gordleeva S. Y., Lotareva Y. A., Krivonosov M. I., Zaikin A. A., Ivanchenko M. V., Gorban A. N. (2019). "Astrocytes organize associative memory," in Advances in Neural Computation, Machine Learning, and Cognitive Research III: Selected Papers from the XXI International Conference on Neuroinformatics, October 7–11, 2019, Dolgoprudny, Moscow Region, Russia (New York, NY: Springer), 384–391. 10.1007/978-3-030-30425-6_45
14. Kostadinov D., Häusser M. (2022). Reward signals in the cerebellum: origins, targets, and functional implications. Neuron 110, 1290–1303. 10.1016/j.neuron.2022.02.015
15. Mehonic A., Kenyon A. J. (2022). Brain-inspired computing needs a master plan. Nature 604, 255–260. 10.1038/s41586-021-04362-w
16. Mirkes E. M., Bac J., Fouché A., Stasenko S. V., Zinovyev A., Gorban A. N. (2022). Domain adaptation principal component analysis: base linear method for learning with out-of-distribution data. arXiv.
17. Roland D., Suzen N., Coats T. J., Levesley J., Gorban A. N., Mirkes E. M. (2021). What can the randomness of missing values tell you about clinical practice in large data sets of children's vital signs? Pediatr. Res. 89, 16–21. 10.1038/s41390-020-0861-2
18. Rybnikova N., Mirkes E. M., Gorban A. N. (2021). CNN-based spectral super-resolution of panchromatic night-time light imagery: city-size-associated neighborhood effects. Sensors 21, 7662. 10.3390/s21227662
19. Rybnikova N., Portnov B. A., Mirkes E. M., Zinovyev A., Brook A., Gorban A. N. (2020). Coloring panchromatic nighttime satellite images: comparing the performance of several machine learning methods. arXiv:2008.09303. 10.48550/arXiv.2008.09303
20. Saggar M., Quintin E., Kienitz E., Bott N. T., Sun Z., Hong W., et al. (2015). Pictionary-based fMRI paradigm to study the neural correlates of spontaneous improvisation and figural creativity. Sci. Rep. 5, 10894. 10.1038/srep10894
21. Schmidhuber J. (1991). "A possibility for implementing curiosity and boredom in model-building neural controllers," in Proceedings of the First International Conference on Simulation of Adaptive Behavior: From Animals to Animats (Paris: MIT Press), 222–227.
22. Schmidt H. (2022). Mit dem Roboter Optimus will Tesla wieder einmal die Welt retten [With the Optimus robot, Tesla once again wants to save the world]. Available online at: https://www.nzz.ch/mobilitaet/tesla-humanoidroboter-optimus-soll-millionenfach-verkauft-werden-ld.1705371 (accessed October 31, 2022).
23. Shakirov V. V. (2022). Advances in Neural Computation, Machine Learning, and Cognitive Research VI. Studies in Computational Intelligence. Cham: Springer.
24. Stein-Perlman Z., Weinstein-Raun B., Grace K. (2022). Expert Survey on Progress in AI. AI Impacts. Available online at: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai (accessed December 7, 2022).
25. Streng M. L., Popa L. S., Ebner T. J. (2018). Complex Spike Wars: a New Hope. Cerebellum 17, 735–746. 10.1007/s12311-018-0960-3
26. Sutton O. J., Gorban A. N., Tyukin I. Y. (2022). Towards a mathematical understanding of learning from few examples with nonlinear feature maps. arXiv:2211.03607. 10.48550/arXiv.2211.03607
27. Tyukin I. Y., Higham D. J., Woldegeorgis E., Gorban A. N. (2021b). The feasibility and inevitability of stealth attacks. arXiv:2106.13997. 10.48550/arXiv.2106.13997
28. Tyukin I. Y., Gorban A. N., Alkhudaydi M. H., Zhou Q. (2021a). "Demystification of few-shot and one-shot learning," in 2021 International Joint Conference on Neural Networks (IJCNN), 1–7. 10.1109/IJCNN52387.2021.9534395
29. Tyukin I. Y., Gorban A. N., Calvo C., Makarova J., Makarov V. A. (2019). High-dimensional brain: a tool for encoding and rapid learning of memories by single neurons. Bull. Math. Biol. 81, 4856–4888. 10.1007/s11538-018-0415-5
30. Tyukin I. Y., Gorban A. N., McEwan A. A., Meshkinfamfard S., Tang L. (2021c). Blessing of dimensionality at the edge and geometry of few-shot learning. Inf. Sci. 564, 124–143. 10.1016/j.ins.2021.01.022
31. Tyukin I. Y., Sutton O., Gorban A. N. (2022). Learning from few examples with nonlinear feature maps. arXiv:2203.16935. 10.48550/arXiv.2203.16935
32. Vaina L. M., Solomon J., Chowdhury S., Sinha P., Belliveau J. W. (2001). Functional neuroanatomy of biological motion perception in humans. Proc. Natl. Acad. Sci. U.S.A. 98, 11656–11661. 10.1073/pnas.191374198
33. Willshaw D. J., Dayan P., Morris R. G. M. (2015). Memory, modelling and Marr: a commentary on Marr (1971) 'Simple memory: a theory of archicortex'. Phil. Trans. R. Soc. B 370, 20140383. 10.1098/rstb.2014.0383
34. Zador A., Richards B., Ölveczky B., Escola S., Bengio Y., Boahen K., et al. (2022). Toward next-generation artificial intelligence: catalyzing the neuroAI revolution. arXiv:2210.08340. 10.48550/arXiv.2210.08340
35. Zhang B., Dreksler N., Anderljung M., Kahn L., Giattino C., Dafoe A., Horowitz M. C. (2022). Forecasting AI progress: evidence from a survey of machine learning researchers. arXiv:2206.04132.
36. Zhou Q., Gorban A. N., Mirkes E. M., Bac J., Zinovyev A., Tyukin I. Y. (2022). "Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation," in 2022 International Joint Conference on Neural Networks (IJCNN). 10.1109/IJCNN55064.2022.9892337
