Heliyon. 2023 Mar 10;9(3):e14443. doi: 10.1016/j.heliyon.2023.e14443

An architectural approach to modeling artificial general intelligence

Boris B Slavin 1
PMCID: PMC10010987  PMID: 36925529

Abstract

This study presents an architectural approach for building a conceptual model of artificial general intelligence (AGI). The architectural approach is generally used to model the information systems (IS) of enterprises and can also be used, as part of a system-wide approach, to describe other complex open systems. The paper proposes three layers and five levels for the AGI model. Two levels (entropy and process) are at the technological layer responsible for AI functioning, two more levels (social and linguistic) are at the relationship layer responsible for the behavior of AI, and the uppermost level (actualization) corresponds to general intelligence proper. All the components of each upper layer are connected to the components of the lower layers, forming the AGI model. The distinctive feature of the social level is determined by the requirement of subjectivity of the intellect: its ability to make decisions independently and to be responsible for them. The task of the uppermost level is the self-identification of AGI and its understanding of its place in the world. The hypothesis is put forward that a limited life cycle is an important condition for the actualization of intelligence.

Keywords: Artificial intelligence, General artificial intelligence, Artificial intelligence modeling, Architectural approach, Intelligence subjectivity

1. Introduction

The topic of artificial intelligence, which has interested many researchers since the middle of the last century [1], has seen renewed attention in the second decade of this century owing to the outstanding results of deep machine learning [2]. However, advances in recognition and predictive analytics have also demonstrated the limitations of the new artificial intelligence technologies for modeling real human intellectual activity. This became especially obvious when chatbots imitating human communication [3] came into wide practical use. It has become clear that artificial intelligence is still very far from the human mind.

As a result, technologies for predicting behavior and trends, as well as for speech and image recognition, are called weak artificial intelligence, while technologies for modeling human intelligence are called strong or general artificial intelligence [4]. The latter has not yet been created and, according to some researchers [5], will not be created soon, or will remain an addition to a human being [6]. The main arguments, since the time of Dreyfus [7], for why general artificial intelligence cannot be created in the foreseeable future involve the inability to algorithmize implicit human knowledge and the absence of numerous AI computers that could socialize it. In any case, there is general agreement that developing AGI requires more than increasing computing capacity and the number of neurons: the functionality of AI systems must be expanded as well.

The purpose of this work is to show that the architectural approach, a particular type of system-wide approach usually used to model the information systems of organizations, can be effectively used to model artificial general intelligence. The similarity between an enterprise information model and a computer model of a human (AGI) is quite significant. Both a person and an enterprise are complex systems that interact with external environments (a person with the social environment; an enterprise with the economic one). In both, one can distinguish lower, supporting levels and upper, managing ones. Data exchange at the infrastructure layer of the IS reveals nothing about the enterprise's business; yet if something at the infrastructure layer fails, part of the business support stops functioning. This is how (through destruction) we obtain information today about the connection between the functions of human intelligence and the operation of the brain. Enterprise architecture builds a complete picture of the relationship between business and technology through intermediate layers, and this approach can therefore be applied to modeling artificial intelligence (AI).

In this paper, we adhere to the definition of general artificial intelligence as a computing system that fully implements human intellectual activity, i.e., is equal in cognitive abilities to a human. There is no unambiguous definition of AGI in the literature [8]. Although the term appeared in opposition to weak or "narrow" AI, some researchers [9] believe it is wrong to define AGI as "human-level AI" or as "strong AI". In their opinion, using the term AI in the definition of AGI narrows the range of approaches, since AGI may turn out to be completely different from AI. The abbreviation ANI (Artificial Narrow Intelligence) [10] is sometimes used instead of AI to indicate the difference. However, such disputes about the definition of AGI are largely scholastic. It is clear enough that AGI, unlike AI, must implement the human level in general, to the fullest. The literature also discusses the possibility that AI may surpass human intelligence altogether [11], with the abbreviation ASI (Artificial SuperIntelligence) reserved for it, but so far this is a matter of futurology rather than practical science.

2. Literature review

That modeling artificial general intelligence should be approached through combinations of various technologies has been known for a long time. For example, the paper [12] proposes combining Global Workspace Theory with the IDA (Intelligent Distribution Agent) model to address the problem of modeling human cognition. The work [13] suggests combining two directions of AI development, neural network algorithms and application programs, within a single hybrid Tianjic chip, which would allow implementing more functional AI systems. However, according to the authors of [14], simply complicating AI functionality will not result in AGI; it is necessary to teach AI systems to simulate the interaction of living organisms. Research of this kind is underway: for example, the study [15] suggests treating the reproduction of AI as the reproduction of biological species.

Another way to create AGI is to simulate the work of the human brain as closely as possible. This approach is described in the study by Canadian scientists "A Large-Scale Model of the Functioning Brain", published in Science [16]. The authors created a neural network of 2.5 million neurons that simulated neurophysiological, psychological, and neuroanatomical functions of the human brain. However, neither the first way nor the second takes human social functions into account, although Cassio Pennachin and Ben Goertzel, in their section on modern approaches to artificial general intelligence in the book "Artificial General Intelligence" [8], argue that it is impossible to model general AI without considering them.

To model social and behavioral human functions, the IDA (Intelligent Distribution Agent) model is used. It treats human intelligence as a "black box" that can receive and transmit information in accordance with human behavior, i.e., as an autonomous agent (see the work of Joscha Bach [17]). This approach makes it possible to implement human cognitive functions with neural networks without copying the brain: the idea is to copy real human behavior and communication. An autonomous agent has its own goals and communicates with the environment. In fact, the idea of this approach goes back to Turing, who assumed that artificial intelligence would be created when communication with it does not allow a person to recognize that it is artificial. However, this approach has not yet led far along the path of AGI modeling.

Probably the most complete overview of various hybrid architectural models of artificial general intelligence is presented in Ref. [18]. Large-scale brain simulation architectures and biologically inspired cognitive architectures (BICA) are highlighted in this review, with the above-mentioned IDA model belonging to the latter. The authors note that artificial general intelligence needs a sensory system, which would allow the intelligence to receive information independently. However, the paper does not present models that consider the social nature of human intelligence.

The implementation of general artificial intelligence is impossible without understanding human intelligence. The paper [19] discusses various definitions of intelligence and tests (the Turing test among them) for identifying it. Whether by chance or by the will of the authors, all the definitions of AI are related to goal setting, and the tests are therefore aimed at the achievement of certain goals.

However, neither goal setting nor a hierarchy of needs explains the essence of human intelligence, and therefore they cannot serve as a basis for modeling general artificial intelligence. Recently, an increasing number of researchers have inclined toward the social nature of human intelligence and its connection with collective intelligence. The most striking publication in this regard is an article by three scientists [20] representing different sciences: linguistics, philosophy, and psychology. They state that human intelligence resides not only in the brain but is also distributed among other people, being connected with cooperation and with a person's outsourcing of part of their knowledge. The authors refer to work in the field of collective intelligence technologies by the Malone group at the Massachusetts Institute of Technology [21].

Collective intelligence technologies have recently become widespread due to the extensive use of network technologies [22]. Such technologies make it possible to increase the efficiency of collective intellectual activity; in effect, they realize the social essence of intelligence and can be used to model general artificial intelligence. For example, the authors of the article [23] highlight three approaches to AI: a technological one, based on existing AI technologies; a human-centered one (AI as a tool for humans); and a collective intelligence approach, and they suggest combining all three within the framework of creating hybrid collective intelligence.

3. Architectural approach and AGI level model

How can we model human intelligence if it is inherently social? The easiest way is to find a similar system for which models have already been built. Such samples are often sought among biological systems, but not all biological systems are social. A good example, strange as it may seem, is the information system of an enterprise. Like a human being, the enterprise information system supports not only the operation of the company itself but also communication with customers, partners, etc. Like the human brain, the enterprise has a complex infrastructure operating with the help of local and global computing networks, servers and network operating systems, and data management systems.

Data and computing equipment are usually similar at different enterprises and do not, in general, explain what business purposes they serve. At the same time, when equipment or software fails, some specific functions of the information system for the business disappear. This is similar to studies of the human brain: it is impossible to say exactly how the brain performs certain functions of intelligence, but when a part of the brain is removed, certain functionality disappears, allowing conclusions about which part of the brain is involved in implementing these functions. Such complex dependence suggests that in both cases (human intelligence and the enterprise information system) the systems are multilevel.

Interestingly, the similarity with the enterprise is found in works by neuropsychologists. Thus, Lisa Feldman Barrett, in her book "How Emotions Are Made: The Secret Life of the Brain" [24], writes about the work of the brain: "To manage all of this spending and replenishing, your brain must constantly predict your body's energy needs, like a budget for your body. Just as a company has a finance department that tracks deposits and withdrawals and moves money between accounts, so its overall budget stays in balance, your brain has circuitry that is largely responsible for your body budget."

In corporate information technologies, building an information system based on multilevel models is called an architectural approach. This approach was first proposed by John Zachman, then at IBM, in his 1987 article on planning business systems [25]. He used the word "architecture" as a comparison with a building to convey a tiered approach. By now, the architectural approach has become one of the most common approaches to creating an enterprise IS. One should therefore not confuse the use of the term architecture in AGI modeling, as in Ref. [18], with the architectural approach. There are many architecture frameworks for different types of enterprises (public, private). One of the most famous is the TOGAF framework [26] developed by The Open Group. There is even an open-source tool for modeling architectures in the ArchiMate language, which was used for all the figures in this article.

To model general artificial intelligence, we will use three layers and five levels (the lower two layers contain two levels each) – see Fig. 1. The lower layer is responsible for the technologies used by intelligence, both artificial and human, to support its activities. The second layer is responsible for functionality related to external relations, while the third covers the activity of the intellect itself. Thus, both technological and social tasks are considered. Dividing the lower two layers into two levels each allows building several hierarchies, from the simplest to the most complex.

Fig. 1. Layers and levels of the AGI model.

The technological layer is divided into two levels: entropy and processes. The entropy level is inherent in all living systems, even the simplest ones; the same level is an essential element of any computing system, including artificial intelligence. The main function of the tasks solved at this level is to reduce entropy by using energy. Entropy always increases in closed systems, but with an inflow of energy the second law of thermodynamics no longer applies. The paper [27] substantiates the claim that the decrease in the entropy of living organisms is associated with the minimization of free energy. However, this does not explain the very reason for minimizing free energy. If this were, as the author claims, a fundamental principle like the principle of energy minimization in mechanics, then systems with negentropy (the term was proposed in Ref. [28]) would arise randomly everywhere. In our opinion, one should take into account that living organisms never appear in a single instance. It may be more appropriate to look for the causes of negentropy in complex systems whose internal organization corresponds to the properties of the species as a whole. Then the natural selection of individuals that better resist the fluctuations of chaos can be described as minimizing internal energy.

In any case, we can say that entropy reduction requires an appropriate infrastructure. This is what all living organisms do, and the computing infrastructure does the same; it is no coincidence that the formula for information in cybernetics is, with the opposite sign, the formula for entropy. Information in cybernetics has a narrow meaning: it is data on a physical carrier, subject to physical laws. Information in the wide sense has a social nature and concerns human relationships (that is why the information level is located even above the social level).
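For reference, this correspondence can be written explicitly (a textbook identity, not a result of this paper). Shannon's information entropy of a source with outcome probabilities $p_i$ is

    H = -\sum_i p_i \log_2 p_i ,

while the Boltzmann–Gibbs thermodynamic entropy over microstate probabilities $p_i$ is

    S = -k_B \sum_i p_i \ln p_i ,

so, up to a constant factor and the choice of logarithm base, information is entropy with the opposite sign; Brillouin's negentropy is the difference $J = H_{\max} - H$ between the maximum (most disordered) entropy and the actual one.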

The next level of the technological layer is responsible for the implementation of processes, i.e., the activity of the organism. Since elementary processes related to intellectual activity are implemented at this level, tasks related to recognition and prediction belong here too, in addition to supporting processes. However, these require tasks at the lower (entropy) level, which are usually solved by neural networks. In nature, memory and the neural network are implemented together; in computing equipment, they are usually separate. The technological layer containing the two lower levels is weak AI, i.e., not intelligence, but a technological basis for intelligence.

4. Subjectivity and application layer

The social essence of intelligence is implemented at the third level [20]. This is the essence without which artificial intelligence cannot be subjective. By subjectivity, we mean the subject's awareness of its own identity [29], which is typical of all thinking beings. Only at the level of human intelligence does subjectivity become self-identification, i.e., the subject's awareness of its place in the world and its destiny. At this level, intelligence builds relationships with others of its kind: family relationships (largely determined by algorithms implemented at the process level), hierarchies, and the exchange of simple information.

The need of thinking organisms to communicate with each other imposes a significant limitation on the capabilities of intelligence. If communications served only to transfer information from one individual to another, then artificial intelligence, which can consume information from millions of sources at the same time, would be immeasurably stronger than human intelligence. However, communication is required for the exchange of opinions and emotions, while communications which are mediated symbols – for the exchange of knowledge. The two-way nature of the exchange is a very important limitation for intelligence, if a person or AI spends T time communicating with N partners during the day, then partners must spend the average T/N time communicating with them. This is a well-known fact in education. If we just need to pass information to any number of students, we can record a lecture for them or offer to read a book. But if we must teach them to think, it is necessary to spend time on each student individually or to organize joint discussions. Some could argue that a chatbot can communicate with numerous partners at once at the same time, but this communication violates the subjectivity. In fact, such chatbot is equal to many subjects of communication simply placed on one digital platform.
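The two-way constraint can be written down explicitly (our notation; the paper itself gives no formula). If mutual communication consumes the same time from both parties, a subject with a daily communication budget $T$ spread over $N$ partners gives each an average of

    \bar{t} = T / N ,

and if a genuine relationship needs at least $t_{\min}$ of shared time per day, the number of such relationships is bounded by $N \le T / t_{\min}$, no matter how productive the subject is.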

This is not an omission or a shortcoming; it is a condition of subjectivity, the limitation of the subject. Suppose a person's labor productivity is twice that of others. If we measured a person by performance alone (or, similarly, by goals, motivations, etc.), they would be indistinguishable from two people. Yet this is not so: a person can be as productive as they like, but their time for communication is limited. That is why the number of participants in family relationships is small; it is impossible to establish close relationships without a time resource for them.

When developing artificial intelligence, it is important to consider that communication with other intelligent organisms (living or artificial) should occupy the full attention of the intelligence. The intellect should, as it were, live time together with other intelligences; only in this way does it become social and acquire subjectivity, distinct from other subjects. Even in solitude, the intellect communicates with others, depicting them in its consciousness.

The three lower levels completely simulate the behavior of living beings. For higher animals, the task of learning is realized at the third level, when through communication the intellect receives new knowledge acquired by another subject. Fig. 2 shows an architectural model of the intelligence of higher animals leading a social life. It consists of three levels: entropy, process, and social. The arrows show how the elements of the lower layers support the operation of elements of the upper layers. This is the essence of the architectural approach: the lower levels provide for the upper ones. This does not mean there is no feedback; it is implicitly present. Since the lower layers are providing layers, they must implement the tasks to be solved at the upper levels. This is exactly what happens in the human brain. The paper [30] demonstrates that although neurons located in the ventromedial prefrontal cortex are responsible for human decision-making, the speed of decision-making can vary according to human social needs, thanks to a system designed to use previous knowledge in the form of rapid somatic signaling. This again parallels the enterprise information system: if the performance of the existing infrastructure is insufficient for the tasks of the enterprise, capacities are borrowed from other directions or new ones are added. The lower layers thus provide for the upper ones in accordance with the goals of the upper levels; the goals are the feedback.

Fig. 2. An architectural model of the intelligence of higher animals.
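The provision relationships in Fig. 2 can be made concrete with a short sketch. The Python fragment below is our illustration of the architectural principle, not an implementation from the paper; the component names loosely follow Figs. 2 and 3. It records components with their levels and their "provides" links, and checks that every link points upward, since downward links are the anomalies discussed in the conclusions.

    # A minimal sketch of the layered AGI model as a dependency graph.
    # Levels are ordered bottom-up; a "provides" edge must go from a
    # lower level to a higher one, per the architectural approach.

    LEVELS = ["entropy", "process", "social", "information", "actualization"]

    # Components and their levels (names follow Figs. 2-3; illustrative only).
    components = {
        "memory": "entropy", "neural_network": "entropy",
        "energy": "entropy", "in_out": "entropy",
        "algorithms": "process", "recognition": "process", "prediction": "process",
        "communication": "social", "family": "social",
        "hierarchy": "social", "training": "social",
    }

    # "provides" edges: (lower component, upper component it supports).
    provides = [
        ("memory", "algorithms"), ("energy", "algorithms"), ("in_out", "algorithms"),
        ("neural_network", "recognition"), ("in_out", "recognition"),
        ("neural_network", "prediction"),
        ("algorithms", "family"), ("recognition", "communication"),
        ("memory", "training"), ("neural_network", "training"), ("prediction", "training"),
    ]

    def check_upward(edges, comps):
        """Return edges that point downward -- anomalies in the architecture."""
        rank = {name: LEVELS.index(level) for name, level in comps.items()}
        return [(a, b) for a, b in edges if rank[a] > rank[b]]

    assert check_upward(provides, components) == []  # all support flows upward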

Various processes among animals (for example, creating a family) are implemented at the genetic level; in the model they are described by algorithms. Patricia S. Churchland, in her book "Conscience: The Origins of Moral Intuition" [31], describes how family creation and maternal care are programmed in animals and supported by hormones (for example, oxytocin). At the same time, the algorithms are not executed rigidly and carry a small element of uncertainty. In the book [32], such uncertainty is called noise, and it is stated that such noise is positive in nature, allowing better solutions to be found than those laid down by the algorithm. The implementation of algorithms, in turn, requires memory, sensory organs (In-Out), and energy. Recognition and prediction tasks require a neural network as well as sensory organs. It is these tools that allow the animal to build hierarchies, to teach, and to learn.

Since this paper proposes a conceptual approach, we do not aim at an accurate description of all the relationships; let us illustrate just some of them. In Fig. 3, the training function of AGI is considered separately. We deliberately call it training, not learning, to avoid association with machine learning. Note that the training function is at the social level, as it requires communication with other intelligent beings who transmit their experience. Through it, so-called collective memory [33] and, to some extent, collective intelligence are implemented. This is how it works in nature (see, for example, the connectionist approach [34]), and this is how it should be implemented in AGI under an architectural approach. The neural network, memory, and the ability to predict provide training opportunities and simultaneously develop with training. More links with communication, family, and hierarchy could be specified, but these functions are at the same level, so those links are omitted in the model.

Fig. 3. Training function.

What is the difference between the training presented here and supervised learning in machine learning? First, supervised learning does not involve social mutual relations. Looking at how training takes place in social animals, we see that at the first stage, when only one of the parents is engaged in training, it resembles supervised learning. However, as the juvenile learns, it rises in the social hierarchy, expands its circle of communication, and becomes part of the social system, choosing supervisors for itself and eventually becoming a supervisor itself. Generative AI projects have become popular recently (for example, the GPT project from OpenAI [35]), in which numerous users act as supervisors, while stochasticity in the choice of answers simulates unpredictability. However, such generative AI is not embedded in a social environment and has no subjectivity; its level of intelligence is therefore lower than that of social animals, although outwardly it mimics the level of educated people.
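For illustration, here is a minimal sketch of what stochasticity in the choice of answers means: temperature-scaled sampling over candidate answers. This is a generic technique used by generative models, not code from the GPT project; higher temperature makes the choice less predictable.

    import math
    import random

    def sample_with_temperature(scores, temperature=0.8):
        """Pick one candidate index; higher temperature -> less predictable."""
        # Softmax over scores scaled by temperature.
        scaled = [s / temperature for s in scores]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(scores)), weights=probs, k=1)[0]

    # Example: three candidate continuations with model scores.
    candidates = ["answer A", "answer B", "answer C"]
    idx = sample_with_temperature([2.0, 1.5, 0.3])
    print(candidates[idx])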

In the case of human intelligence, another level appears at the relationship layer, an informational one (see Fig. 4), where the linguistic capabilities of intelligence are implemented. By information, we mean not cybernetic information related to entropy, but information related to communication between people. Such information has characteristics such as relevance, timeliness, and accuracy.

Fig. 4. Level 3 "Information" on the Relations layer.

Language is a tool of interaction and communication. Nevertheless, one should not forget that language itself is the result of communication and of agreements between subjects. Vygotsky, analyzing the relation of speech and thinking [36], says that the meaning of a word corresponds to a concept and a generalization. A language consisting of words is therefore a system of concepts and generalizations developed as a result of the joint activity of subjects.

Modern chatbots can talk like humans, but this is just an imitation of the use of language. For the intellect to really use language for communication, it must be in an active relationship with other subjects, not just talking but discussing concrete actions. Moreover, as already mentioned, subjectivity requires that the subject communicate mainly with one subject at a time.

The symbols that make up words, or replace them, represent an even greater abstraction. They, too, must be agreed upon between the subjects of intellectual activity, as must concepts and definitions. A glossary of words, terms, definitions, etc., should not just be stored in the memory of the intellect; it should be verified all the time through activity and interaction with other subjects.

5. Actualization and subjectivity of artificial intelligence

Joint activity with other subjects is the most important component of intelligence. The relationship layer is actually a layer of applications for external communications and external activities. The three layers of the general artificial intelligence model can be considered a dialectical triad: the technological layer is the internal state of intelligence, the relationship layer is external communications, and the upper layer of actualization is the internal state realized through joint activity.

The similarity with the enterprise information system remains. The IT infrastructure allows the enterprise to maintain the internal content of the IS. The application layer allows company employees to communicate with each other and with customers and partners. And the business layer implements the strategy and the specific features of the enterprise's development.

In the case of AGI, intelligence implements its strategic objectives and forms the knowledge it needs for self-realization at the upper layer of actualization. Since the tasks at this layer draw on the relationship layer, generic concepts and ethics are formed here. It is also at this layer that the intellect carries out self-identification, i.e., understands its place in the world.

Recently, many researchers have been talking about the subjectivity of AI and about its ethics and morality [37]. If we are talking about weak AI, i.e., only about technologies, questions of ethics and morality cannot be addressed to a computer, since it has no subjectivity. But if we talk about a general AI embedded in social relations, questions of ethics and morality become relevant, and AI acquires subjectivity.

Fig. 5 shows the AGI model with its components and the connections between them. The entropy level supports tasks at the process level and at the social level as well. The process level provides the social activity of the intellect, which in turn provides the information level and the level of actualization.

Fig. 5. General AGI model with connections.

The subjectivity of intelligence is implemented at the social level as a requirement to allocate most of one's time to face-to-face communication. Actualization, the subject's understanding of its generic purpose, is implemented at the upper level. Let us take a closer look at another branch of the architectural model, the one that provides the "Knowledge" function located at the top level along with the "Strategic goal setting", "Ethics and morality", and "Self-identification" functions (Fig. 6). Work with knowledge is supported from the "Information" level through the "Symbols" and "Definitions" functions. These information functions allow AGI to store and transmit information. However, unlike weak AI, where functions and definitions are embedded in software, in strong AI they must be supported by communication and training from the social level. In the process of communication and learning, people can (and AGI must) change symbols and, even more often, correct definitions. It is the changes and refinements arising from communication that create knowledge.

Fig. 6. Knowledge function.
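The claim that definitions become knowledge only through social verification can be sketched in code. The following Python fragment is our own simplified illustration of the knowledge function in Fig. 6 (all names are hypothetical): a stored definition counts as knowledge only after it has been confirmed or corrected in communication with another subject.

    from dataclasses import dataclass, field

    @dataclass
    class Definition:
        term: str
        meaning: str
        confirmations: int = 0          # times verified in joint activity
        revisions: list = field(default_factory=list)

    class Glossary:
        """Stored definitions become knowledge only via social verification."""
        def __init__(self):
            self._defs = {}

        def store(self, term, meaning):
            self._defs[term] = Definition(term, meaning)

        def communicate(self, term, partner_meaning):
            """Exchange with another subject: confirm or correct a definition."""
            d = self._defs[term]
            if partner_meaning == d.meaning:
                d.confirmations += 1
            else:
                d.revisions.append(d.meaning)
                d.meaning = partner_meaning    # corrected through discussion
                d.confirmations = 1

        def knowledge(self):
            """Only socially verified definitions count as knowledge."""
            return {t: d.meaning for t, d in self._defs.items() if d.confirmations > 0}

    glossary = Glossary()
    glossary.store("genus", "a group of related species")
    print(glossary.knowledge())                              # {} -- stored, not yet knowledge
    glossary.communicate("genus", "a group of related species")  # confirmed by a partner
    print(glossary.knowledge())                              # now counts as knowledge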

Many organizations have computer databases that store e-books or magazines, but if these electronic materials are not used in the company's activities, they cannot be knowledge. Similarly, in the case of artificial intelligence, the presence of huge amounts of information recorded in memory does not mean that the intelligence possesses this knowledge. We shall illustrate this with the example of the knowledge that a human is mortal. Children learn in infancy that a person lives only for a limited time. However, this becomes knowledge only when a person actually faces the death of people close or familiar to them. That is why teenagers are often fearless: they do not really know what death is. Of course, not all knowledge must be mediated by experience, but all knowledge should be obtained through social communication, not by uploading it to memory. To some extent, this is like how companies work, turning implicit knowledge (the experience and intuition of their employees) into explicit knowledge [38].

The example of the awareness of human mortality is much more than an illustration of the knowledge function. Awareness of their mortality makes a person understand their generic essence. The life of a mortal person can be perceived as a tool for overcoming death, not individually but for the genus as a whole. Apparently, if humans were immortal, there would be no concept of a genus. The finiteness of a person's life makes that life meaningful and an integral part of actualization.

A similar situation applies to enterprises, which have a limited life cycle within a market that is infinite by the standards of their lifetime. The limited time and the risk of dying in competition make the business tasks set for enterprises meaningful.

If the hypothesis about the importance of mortal human nature is correct, then the AGI model should have a limited life cycle. That is, general artificial intelligence as a whole is not a process but a project, with a lifetime and a set of strategic tasks that the intelligence finds and develops. At least, this is a consequence of the architectural approach.
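As a toy illustration of the project framing (ours, not a mechanism proposed in the paper): with a finite horizon, only strategic tasks that fit within the remaining lifetime are worth planning for, and it is this forced choice that makes the strategy meaningful.

    def feasible_goals(goals, remaining_lifetime):
        """Keep only the strategic tasks that can still be completed
        within the remaining lifetime; a finite horizon forces choice."""
        return [name for name, duration in goals if duration <= remaining_lifetime]

    # Hypothetical strategic tasks with estimated durations (in years).
    goals = [("master a language", 2.0), ("train successors", 10.0), ("map all knowledge", 50.0)]
    print(feasible_goals(goals, remaining_lifetime=12.0))
    # -> ['master a language', 'train successors']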

Of course, the finiteness of life is not the only motivation, although it is a strong one. In the work [39], the motivation of AI is justified at all levels, from the physical (entropic) to the cognitive. That justification is also based on the idea of minimizing free energy; however, as mentioned above, this idea does not explain the reasons for motivation, so applying it to self-identification is controversial. In a sense, Maslow's pyramid of motivations [40] corresponds to the architectural approach, with the highest needs related to the social nature of humans and their self-actualization. The reasons for such motivation should be sought in the social nature of intelligence. For example, effective managers can shape people's motivations, which has been called nudging [41]. All this suggests that the motivation of AGI should also have a social essence.

6. Conclusions

Of course, the similarity between intelligence and the information system of an enterprise cannot by itself confirm the right choice of a general artificial intelligence model. However, if the architectural approach is understood as a system-wide, multilevel approach applied to the construction of models where cause-and-effect relationships are not direct, then its use is quite justified.

The paper proposes distinguishing five levels located on three layers. Three layers are the minimum possible separation for a multilevel model of general intelligence, where internal infrastructure is implemented at the first layer, external relations at the middle layer, and strategic goals at the upper layer, taking into account internal capabilities and external constraints.

The number of levels has been chosen with regard to the different types of tasks (for example, a separate level for language tasks) and bottom-up connectivity. However, as in any classification, the choice of the number of levels and their placement is somewhat arbitrary. There may be relationships within one level and even connections directed downward, but such anomalies should be rare. If there are too many of them, the levels are most likely arranged incorrectly, or a function should be split so that one part sits on one level and the rest on another. In enterprise practice, it likewise happens that some service starts dictating conditions to those it serves, which indicates problems in the enterprise.

As part of the AGI model construction based on the architectural approach, two important requirements have been identified, one of which is presented in the form of a hypothesis and requires further research.

The first requirement is related to subjectivity and assumes that relationships of artificial intelligence with its own kind, or with people, should be the focus of the intelligence's activity. If a computing complex supports a large number of communications, this most likely means that the same number of possible subjects has been created, and each should be socialized separately.

The second requirement is related to the self-identification of general artificial intelligence as part of an intellectual society. The hypothesis is that general artificial intelligence should have a limited lifespan, which will become a strong motivation to develop a strategy for the life of such an intelligence and bring it closer to human intelligence.

The editors of a collection of articles devoted to various aspects of artificial general intelligence [42] say in their preface that cross-disciplinarity is a distinctive feature of research in the field of AGI. The architectural approach, combining different disciplines into a single whole, can become a convenient tool for such unification. This is the answer to the question of what AGI should be like. The question of how to create AGI is yet to be answered, and that shall be the subject of the next study.

Author contribution statement

Boris Slavin: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data availability statement

No data was used for the research described in the article.

Declaration of interests statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Russell S.J., Norvig P. Artificial Intelligence: A Modern Approach. Alan Apt; New Jersey: 1995. p. 932.
2. LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. May 2015;521:436–444. doi: 10.1038/nature14539.
3. Grudin J., Jacques R. Chatbots, humbots, and the quest for artificial general intelligence. In: CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.
4. Advances in artificial general intelligence: concepts, architectures and algorithms. In: Proceedings of the AGI Workshop 2006. IOS Press; 2007. p. 295.
5. Fjelland R. Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications. 2020;7(10):1–9.
6. Korteling J., Boer-Visschedijk G., Blankendaal R., Boonekamp R., Eikelboom A. Human- versus artificial intelligence. Frontiers in Artificial Intelligence. March 2021;4(622364):1–13. doi: 10.3389/frai.2021.622364.
7. Dreyfus H.L. What Computers Can't Do: A Critique of Artificial Reason. Harper; New York: 1972. p. 260.
8. Goertzel B., Pennachin C., editors. Artificial General Intelligence. Springer; Rockville: 2007. p. 509.
9. Wang P. On defining artificial intelligence. Journal of Artificial General Intelligence. 2019;10(2):1–37.
10. Kuusi O., Heinonen S. Scenarios from artificial narrow intelligence to artificial general intelligence — reviewing the results of the international work/technology 2050 study. World Futures Review. May 2022;14(11):65–79.
11. Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press; Oxford: 2014.
12. Baars B.J., Franklin S. An architectural model of conscious and unconscious brain functions: global workspace theory and IDA. Neural Networks. 2007;20:955–961. doi: 10.1016/j.neunet.2007.09.013.
13. Pei J., Deng L., Song S., et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature. 2019;572:106–111. doi: 10.1038/s41586-019-1424-8.
14. Shevlin H., Vold K., Crosby M., Halina M. The limits of machine intelligence. EMBO Reports. 2019;20:1–5. doi: 10.15252/embr.201949177.
15. Alexander S.A. AGI and the Knight-Darwin law: why idealized AGI reproduction requires collaboration. In: Proceedings Artificial General Intelligence: 13th International Conference, AGI 2020. St. Petersburg, Russia: 2020.
16. Eliasmith C., Stewart T.C., Choo X., Bekolay T., DeWolf T., Tang Y., Rasmussen D. A large-scale model of the functioning brain. Science. 2012;338(6111):1202–1205. doi: 10.1126/science.1225266.
17. Bach J. A motivational system for cognitive AI. In: AGI 2011. 2011.
18. Nyalapelli V.K., Gandhi M., Bhargava S., Dhanare R., Bothe S. Review of progress in artificial general intelligence and human brain inspired cognitive architecture. In: International Conference on Computer Communication and Informatics (ICCCI). 2021.
19. Legg S., Hutter M. Universal intelligence: a definition of machine intelligence. Minds and Machines. August 2007;17:391–444.
20. Sloman S., Patterson R., Barbey A. Cognitive neuroscience meets the community of knowledge. Frontiers in Systems Neuroscience. October 2021;15(675127):1–13. doi: 10.3389/fnsys.2021.675127.
21. Woolley A., Chabris C., Pentland A., Hashmi N., Malone T. Evidence for a collective intelligence factor in the performance of human groups. Science. October 2010;330:686–688. doi: 10.1126/science.1193147.
22. Slavin B. Collective intelligence technology (in Russian). Control Sciences. 2016;5:2–9.
23. Peeters M.M.M.P., van Diggelen J., et al. Hybrid collective intelligence in a human–AI society. AI & Society. 2021;36:217–238.
24. Barrett L. How Emotions Are Made: The Secret Life of the Brain. Macmillan; Boston, New York: 2017. p. 424.
25. Zachman J. A framework for information systems architecture. IBM Systems Journal. 1987;26(3):276–292.
26. The Open Group. The TOGAF® Standard. 10th ed. 2022. Available: https://publications.opengroup.org/standards/togaf/specifications/c220 [Accessed 26 June 2022].
27. Friston K. A free energy principle for biological systems. Entropy. 2012;14(11):2100–2121. doi: 10.3390/e14112100.
28. Brillouin L. Scientific Uncertainty, and Information. Academic Press; New York, London: 1964. p. 164.
29. Hall D.E. Subjectivity. Routledge; New York: 2004. p. 144.
30. Adolphs R., Tranel D., Bechara A., Damasio H., Damasio A. Neuropsychological approaches to reasoning and decision-making. In: Damasio A., Damasio H., Christen Y., editors. Neurobiology of Decision-Making. Springer; 1996. pp. 157–180.
31. Churchland P.S. Conscience: The Origins of Moral Intuition. W. W. Norton & Company; 2019. p. 212.
32. Kahneman D., Sibony O., Sunstein C.R. Noise: A Flaw in Human Judgment. Little, Brown Spark; 2021. p. 454.
33. Wertsch J.V., Roediger H.L. Collective memory: conceptual foundations and theoretical approaches. Memory. 2008;16(3):318–326.
34. Dumas G. Towards a two-body neuroscience. Communicative & Integrative Biology. May 2011;4(3):349–352. doi: 10.4161/cib.4.3.15110.
35. Floridi L., Chiriatti M. GPT-3: its nature, scope, limits, and consequences. Minds and Machines. November 2020;30(4):681–694.
36. Vygotsky L. Thinking and Speech (in Russian). Moscow: Labirint; 1996. p. 416.
37. Dubrovsky D., Efimov A., Lepskiy V., Slavin B. The fetish of artificial intelligence. Russian Journal of Philosophical Sciences. 2022;65(1):44–71.
38. Nonaka I. The knowledge-creating company. In: The Economic Impact of Knowledge. Routledge; 2009. pp. 175–187.
39. Linson A., Clark A., Ramamoorthy S., Friston K. The active inference approach to ecological perception: general information dynamics for natural and artificial embodied cognition. Frontiers in Robotics and AI. March 2018;5(21). doi: 10.3389/frobt.2018.00021.
40. Maslow A.H. Motivation and Personality. 3rd ed. Addison-Wesley; Boston: 1987. p. 369.
41. Thaler R.H., Sunstein C.R. Nudge: The Final Edition. Penguin Publishing Group; 2021. p. 384.
42. Artificial General Intelligence (Preface). In: Proceedings of the 5th International Conference, AGI 2012, Berlin. 2012.
