Abstract
This study explores the concept of neural reshaping and the mechanisms through which both human and artificial intelligence adapt and learn. Objectives: To investigate the parallels and distinctions between human brain plasticity and artificial neural network plasticity, with a focus on their learning processes. Methods: A comparative analysis was conducted using literature reviews and machine learning experiments, specifically employing a multi-layer perceptron neural network to examine regression and classification problems. Results: Experimental findings demonstrate that machine learning models, similar to human neuroplasticity, enhance performance through iterative learning and optimization, drawing parallels in strengthening and adjusting connections. Conclusions: Understanding the shared principles and limitations of neural and artificial plasticity can drive advancements in AI design and cognitive neuroscience, paving the way for future interdisciplinary innovations.
Keywords: Neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping
Introduction
The growing brain has always been one of the most fascinating topics for research and thought. The identification and characterization of the incredibly dynamic processes by which the brain develops and matures across time have received a great deal of attention [1,2]. Nevertheless, despite this extensive body of research, we are still unsure of how the developing brain manages to overcome a wide range of difficulties throughout life to become a fully developed mature brain [3]. The development and maturation of the human brain are particularly distinctive due to several factors. Firstly, the extended period of brain maturation, which spans from the prenatal stage to the third decade of life, allows for an unparalleled level of complexity and specialization [4]. Unlike most species, humans exhibit a protracted phase of synaptic overproduction during early development, followed by extensive pruning, which optimizes neural circuits for efficiency and adaptability [5]. Secondly, the human brain’s extraordinary plasticity enables the acquisition of diverse skills and knowledge, supporting cultural evolution and individual learning. This includes the dynamic formation of new synapses, dendritic arborization, and myelination, processes that vary across different regions and are influenced by both genetic and environmental factors [6]. Lastly, the prefrontal cortex, responsible for higher-order cognitive functions such as decision-making, planning, and social behavior, undergoes prolonged development, which is rare among species and critical for human adaptability and complex behaviors [7]. These unique features highlight the intricate interplay between biology and experience in shaping the human brain.
The capacity of the brain to change its connections or rewire itself is known as neuroplasticity or brain plasticity [8-10]. Any brain, not only the human brain, would be unable to mature from infancy to adulthood or recover from brain damage without plasticity [9]. The brain is unique in that it simultaneously processes sensory and motor inputs [10-12]. It has numerous neuronal routes that can mimic one another’s functionality, making it simple to fix slight developmental mistakes or momentary loss of function due to damage by rerouting impulses via a different pathway [13]. But might artificial intelligence benefit from this plasticity as well? The purpose of this study is to provide an answer to that query.
The capacity for coherent thought is one of the defining characteristics of the brain. Machines need something akin to thought in order to function in the human environment; hence the term “machine brain”. The machine brain represents artificial intelligence (AI) at its highest level, often known as strong AI [12-14]. Training the machine brain is akin to training intelligent algorithms because, at this point, robots still lack true intelligence. This process is crucial to machine learning. Understanding a machine brain’s structure is crucial for investigating whether or not plasticity can be implemented in it. To comprehend the anatomy of a machine brain, we first need a thorough understanding of the physiological makeup and functional zoning of the human brain [15-18]. A human brain is composed of roughly 80% water and 20% biological components [19-21]. Within the skull, it comprises the brain stem (the subconscious layer), the cerebellum (the layer that controls balance), and the cerebrum (the layer that controls thought). The cerebrum can be thought of as the outer layer of the human brain, with the cerebral cortex and the neocortex of the forebrain making up the majority of this layer [22]. The brain stem, situated just above the neck, regulates the majority of unconscious behaviours. The cerebellum is another component of the hindbrain; it is located above the brain stem and controls bodily equilibrium, nerve reflexes, and muscle coordination [23]. The temporal lobe, which is found on both sides of the brain and flush with the ears, regulates our hearing and short-term memory [24,25]. While the left hemisphere is in charge of speech, writing, language, and computing, the right hemisphere is responsible for our creativity, spatial thinking, music, and intuitive feeling. The frontal lobe determines personality, emotion, and planned conduct. The parietal lobe regulates touch, limb movements, and, where it adjoins the occipital lobe, speech and language understanding. The occipital lobe is connected to our vision [26].
The contribution of this study is a comparison of machine learning plasticity and human brain plasticity, analysing the similarities and differences in their learning processes. This is done by reviewing the literature, conducting machine learning experiments, and comparing the results with the various ways in which the human brain can change and adapt. Additionally, the paper examines the limitations and strengths of each type of plasticity and draws conclusions about how they might complement each other in various applications. These include areas such as artificial intelligence, cognitive psychology, neuroscience, and psychiatry, and the comparison could lead to new insights into how machine learning can be improved and made more effective in various domains.
Varieties and characteristics of plasticity
Three types of plasticity can be distinguished in the developing brain: experience-independent plasticity, experience-expectant plasticity, and experience-dependent plasticity [27]. Because it is difficult for the genome to specify every connection in the brain, it creates a rough approximation of connectivity that is then modified by internal and external events, both during pregnancy and in the early postnatal period [28]. This results in experience-independent plasticity. Experience-expectant plasticity largely takes place in the first few months after birth. When an expected input is not experienced, the brain loses its ability to discern fine distinctions and instead becomes expert at distinguishing the stimuli it does receive [29]. Experience-dependent plasticity, which modifies the connections between groups of neurons as a result of experience, starts in the early postnatal period and lasts for the rest of one’s life [30].
Plasticity refers to the brain’s ability to adapt and reorganize itself by forming new neural connections in response to learning, experience, or injury. This dynamic property enables both development and recovery, playing a critical role in cognitive, sensory, and motor functions. The three types of plasticity are:
1. Experience-Independent Plasticity: This type of plasticity occurs without the influence of external stimuli and is primarily driven by genetic and molecular programs. It is most prominent during prenatal development and the early postnatal period when basic neural structures and connectivity are established. Examples include the formation of synapses and neural circuits that govern basic reflexes [28].
2. Experience-Expectant Plasticity: This form of plasticity relies on specific external stimuli that the developing brain anticipates during critical periods. For example, the visual cortex requires exposure to visual stimuli for proper development; without such input, the brain’s ability to process visual information is impaired [29]. This type of plasticity emphasizes the brain’s ability to fine-tune itself based on expected environmental interactions.
3. Experience-Dependent Plasticity: This type extends throughout life and involves the formation and strengthening of synaptic connections as a direct result of individual experiences. It underpins skills such as language learning, musical training, and memory formation. Unlike experience-expectant plasticity, which is universal to all humans, experience-dependent plasticity is unique to each individual’s lived experiences [31].
Several potential mechanisms are considered in order to understand plasticity. The most likely candidates include neurogenesis and gliogenesis; the formation of connections, either by axon extension or synapse formation; pruning; growth of dendrites and thus synapses; epigenetic changes; and changes in the excitatory-inhibitory balance. Although neurogenesis in the brain is largely complete at birth, it can be induced postnatally under specific conditions [32,33]. Glia can also continue to develop in addition to neurons [34]. In the human brain, glial cells make up around 50% of the cells, with astrocytes making up the majority of them. There is little information available on any unique elements that might affect astrocyte proliferation in the brain. Along with astrocyte growth, myelin production rises, which helps to speed up conduction along axons. According to functional MRI studies, increased myelin production improves the effectiveness of communication between brain areas. However, myelination might also serve additional purposes. Myelin first undergoes modifications as a result of learning [35,36]. Axons grow and new synapses are formed as myelin continues to develop through the learning process [37,38]. There is strong evidence to suggest that learning is a period when connectivity is actively changing [39]. Utilizing resting-state functional interactions and networks is a powerful tool for analysing connectivity alterations. Using this method, we can examine how interactions among brain regions and the activity of specific regions change across ages and as a result of learning. After reviewing such investigations, research identified two general properties [40]. The first is that regional interactions shift over development into interactions spanning greater cortical distances. The second is that these developmental shifts separate local regions and integrate them into diverse subnetworks. Learning also alters the cortical connections of the amygdala, striatum, and hippocampus [41]. A key principle is that changes in connectivity must be precise enough for the altered circuit to process information differently and perform the changed or new function.
Learning processes in the human brain
Artificial learning techniques like neural network systems have aided the development of the machine brain by taking advantage of significant discoveries in neuroscience, cognitive science, and other disciplines [42,43]. Future development will focus on understanding how the human brain learns, which means that efficient brain learning mechanisms can serve as an inspiration. On the molecular, cellular, and neural-circuit levels, we can relate brain activity to perception [44]. The brain’s learning processes differ significantly from those of machines. It is crucial to fully understand these differences in order to advance machine capabilities and overcome the divide between artificial intelligence and brain science; doing so will pave a new route for building the machine brain. The development of the machine brain can be aided by interdisciplinary study in the fields of cognitive science, neurology, psychiatry, and computational science [16]. In order to comprehend the neurological principles underlying the human brain’s internal cognitive processes, we must first grasp that it possesses a higher intelligence layer than the machine intelligence layers [45,46].
Bridging the gap between AI and brain science requires a multifaceted approach that integrates advancements from both fields. Firstly, fostering interdisciplinary research that combines neuroscience, cognitive science, and computational modeling is essential. By studying the biological mechanisms of learning and plasticity, AI systems can be designed with architectures inspired by the human brain, such as spiking neural networks that emulate event-based communication [31]. Secondly, neural-inspired models, such as those mimicking hierarchical organization and modular processing observed in the brain, can enable AI to achieve greater adaptability and generalization capabilities. Techniques like neuromorphic computing aim to replicate neural structures, bridging the divide between biological and artificial systems. Lastly, employing neurofeedback technologies provides a promising avenue for real-time interaction between AI systems and brain activity. Machine learning algorithms can analyse patterns in neural data to guide brain plasticity enhancement, creating a synergistic relationship between the two domains. Collaborative efforts between neuroscientists and computer scientists are critical to designing models that align computational efficiency with biological plausibility.
The human brain contains billions of neurons, which are cells bearing numerous protrusions. The nucleus, ribosomes, protoplasmic network structures, and other components make up the cell body, where the energy for neural functions is supplied and numerous biochemical processes are carried out. The axon is long and has few branches (see Figure 1), whereas the dendrites are short and have numerous branches. The axon is the conduit through which neurons transfer the signals they generate to other neurons [11]. The synapse, which consists of the presynaptic membrane, synaptic space, and postsynaptic membrane, is the structure that links one neuron to another (see Figure 2). Synapses allow for the unidirectional passage of information between neurons without any attenuation.
Figure 1.

Building blocks of the brain: Exploring the intricate components of a simple neuron - from dendrites to myelin, each piece plays a crucial role in transmitting signals.
Figure 2.

Connecting neurons, sparking ideas: the intricate structure of the Synapse.
The synaptic terminal contains vesicles that store and release neurotransmitters. After passing through the synaptic gap, these neurotransmitters diffuse to the postsynaptic membrane of other neurons and promptly bind to protein receptors, altering the postsynaptic membrane’s permeability to ions. The membrane potential then changes following the change in the ion concentration difference between the inside and outside of the membrane. The excitatory or inhibitory alterations in the postsynaptic membrane, which are caused by abrupt rising pulses when the membrane potential increases beyond a fixed value, are directly related to the learning mechanisms in the human brain. The brain undergoes significant changes while a person is learning, including the development of new connections between neurons. We refer to this phenomenon as neuroplasticity: the capacity of the human brain to modify itself, i.e., to build, strengthen, weaken, or destroy connections between neurons. These connections get stronger with repetition, and as they strengthen, messages (nerve impulses) are transferred faster, boosting their effectiveness.
The human brain’s simulation process for learning
A hard problem in computational neuroscience is how the plasticity dynamics of multilayer Biological Neural Networks (BNNs) are set up for effective data-driven learning [47]. Artificial neural network algorithms are typically unrivalled in their ability to perform a wide range of data-driven tasks, which raises the question of whether the factors that contribute to their success are shared by their biological counterparts, specifically Spiking Neural Networks (SNNs). However, the continuous-time dynamics, localization of operations, and spike (event)-based communication of biological neural networks set them apart from Artificial Neural Networks (ANNs) [31]. Training on nonstationary data is the central concern of continual learning [48]. In a practical setting, an agent interacts with only one task at a time, as tasks are presented in succession. There are several requirements for a continual learning algorithm to be effective. 1) Unless capacity is a problem or contrary information is presented, agents should not forget what they have already learned. 2) To accelerate learning, an algorithm should be able to take advantage of structural similarities between tasks. 3) Backward transfer should be possible whenever new information aids the generalisation of previously learned tasks. 4) Learning now should not interfere with performance on future tasks, because good continual learning depends on a persistent capacity to learn new things [49]. Failures to learn can take subtly different forms. A neural network may lose the ability to minimise the training loss for a new task. Negative forward transfer, a common effect in regularization-based continual learning systems, can make learning less data efficient; in this case, we might still achieve full performance on the new task and reduce the training error to zero, but learning would be much slower. The main objective is to bring the training error to zero while preserving the ability to learn efficiently.
Artificial neural network as an ML algorithm
Any machine learning algorithm aims to identify the best function that maps a set of inputs to the desired output. A multi-layered neural network is one example of a machine learning algorithm. A multi-layered neural network is trained via backpropagation so that it learns the internal representations needed to realise any arbitrary input-to-output mapping [50-52]. To understand the mathematical derivation of the multi-layered neural network method, it helps to first gain some intuition about the relationship between a neuron’s correct output and its actual output. Consider a simple neural network with two input units, one output unit, and no hidden units, in which each neuron uses a linear output that is the weighted sum of its inputs. Prior to training, the weights are initially assigned at random. The neuron then learns from training examples, in this instance a sequence of tuples (x1, x2, yt), in which x1 and x2 are the network’s inputs and yt is the desired output.
Given x1 and x2, the initial network will most likely produce an output ypred. The difference between the computed output ypred and the intended output yt is measured using a loss function, L(yt, ypred) (See Figure 3). The squared error can be utilised as a loss function for regression analysis problems and the categorical cross-entropy can be used for classification problems.
Figure 3.

A straightforward neural network comprising two input units, each carrying a single input, and a single output unit.
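As an illustrative, hedged sketch of this setup, the snippet below implements the two-input linear neuron and the two loss functions mentioned above in Python; the input values, random seed, and function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)                  # randomly initialised weights w1, w2

def predict(x1, x2):
    """Linear output: the weighted sum of the two inputs (no hidden units)."""
    return w[0] * x1 + w[1] * x2

def squared_error(yt, ypred):
    """Loss function suitable for regression problems."""
    return (yt - ypred) ** 2

def categorical_cross_entropy(y_true, p_pred):
    """Loss for classification; y_true is one-hot, p_pred are class probabilities."""
    return -np.sum(y_true * np.log(p_pred + 1e-12))

# One training tuple (x1, x2, yt) as described in the text.
x1, x2, yt = 0.5, -1.2, 0.3
ypred = predict(x1, x2)
print(squared_error(yt, ypred))         # the quantity that training will reduce
```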
Therefore, the loss function would be:

E = L(yt, ypred) = (yt − ypred)²
where E stands for the error or discrepancy. The output of a neuron, however, is determined by the weighted sum of all its inputs:

ypred = w1x1 + w2x2
Here, w1 and w2 are the connection weights between the input and output units. Since the inbound weights to the neuron also affect error, it is these weights that must be altered in the network in order to support learning.
Figure 4 depicts a perceptron network with just one hidden layer; more layers and neurons can be added as needed to solve a given problem. Here, X1, X2,..., Xn are the inputs and W1, W2,..., Wn are the synapses’ transmission efficiencies (weights). When the accumulated input of a neuron exceeds the threshold, the activation function, φ, determines the neuron’s output. The weighted sum over the previous layer is used to determine the output’s final value:

y = W1X1 + W2X2 + ... + WnXn
Figure 4.
Multi-layer perceptron.
Suppose each neuron’s output is calculated in the following manner:

y = φ(w·x + b)
where φ is the activation function (so that the output of each neuron is not merely a linear function of its inputs), b is a bias (so that the neuron’s output does not have to pass through the origin), and w is the weight vector of each layer:

w = [w1, w2, ..., wn]
And the input vector is defined as follows:

x = [x1, x2, ..., xn]
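A minimal NumPy sketch of this layer-by-layer computation is given below; the layer sizes, the choice of ReLU for φ, and the variable names are illustrative assumptions rather than the exact network of Figure 4.

```python
import numpy as np

def relu(z):
    """One common choice for the activation function φ."""
    return np.maximum(z, 0.0)

def mlp_forward(x, weights, biases):
    """Compute y = φ(w·x + b) layer by layer; the final layer is left linear here."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

rng = np.random.default_rng(0)
# Illustrative network: 3 inputs -> 5 hidden neurons -> 2 outputs.
weights = [rng.normal(size=(5, 3)), rng.normal(size=(2, 5))]
biases = [np.zeros(5), np.zeros(2)]

x = np.array([0.2, -0.7, 1.5])          # input vector x = [x1, ..., xn]
print(mlp_forward(x, weights, biases))
```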
After calculating the output, the difference between the predicted value and the target is computed using a loss function appropriate to the problem. The loss function used to quantify errors during learning is a crucial contributor to updating the synaptic weights in order to reduce those errors [42,43]. The algorithm updates the synaptic weights to reduce the loss. The main mechanism of learning is the following: to improve the learning system, each neuron’s synaptic weights are assessed for their contribution to the error and are subsequently adjusted [44,45].
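For the simple two-input neuron introduced above, this weight update can be written out explicitly; the learning rate η is introduced here only for illustration and its value is not specified by the derivation:

∂E/∂wi = −2(yt − ypred)·xi,  wi ← wi − η·∂E/∂wi,  for i = 1, 2.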
Backpropagation artificial neural network as an example
A symbolic backpropagation network is shown in Figure 5. After the neuron, or on a larger scale the neural network, receives the input data, the output is compared with the desired value to determine the error. The optimizer, a bridge between the error and the artificial neuron, is then used to act on the error. In practice, we differentiate the loss function with respect to the network parameters using the gradient descent algorithm and, as a consequence, adjust the weights and biases to lower the error. This procedure is repeated until the minimum of the function is reached.
Figure 5.
Backpropagation.
Backpropagation is a supervised learning approach used to train multi-layer perceptrons (artificial neural networks). When creating a neural network, we initialise the weights with random values or any other suitable values. There is no guarantee that the weight values we choose will be accurate or will best fit our model. We start out by choosing some weight values, and the difference between our model output and the real output - i.e., the error value - is typically rather large. In essence, we need a way to adjust the parameters (weights) so that the error is minimised. Backpropagation is a technique for training our model in exactly this way. Take a look at Figure 5.
Here is a list of the steps: Calculate the error: how much the model’s output differs from the real output; Minimum error check: check whether the error has been minimised; Update the parameters: if the error is still significant, update the parameters (weights), then check the error again, and continue until the error is at its lowest point; Model is ready: once the error is at its lowest possible level, the model can be given inputs and will produce the output.
The backpropagation algorithm employs a method known as the Delta Rule, or gradient descent, to find the minimum of the error function in weight space. The weights that minimise the error function are then considered the solution to the learning problem [53-55]. We are attempting to determine the weight values at which the error is minimised. In essence, we must decide whether changing a weight value makes the error better or worse. Once we know that, we keep updating the weight in that direction until the error is at its lowest. Eventually a point is reached where updating the weight further causes the error to grow; we should stop at that point, and that weight value is the final one.
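To make these steps concrete, the following is a minimal, self-contained sketch of backpropagation with the Delta Rule for a tiny multi-layer perceptron; the layer sizes, learning rate, toy target function, and variable names are illustrative assumptions rather than the configuration used later in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: 2 inputs -> 4 hidden units (ReLU) -> 1 linear output.
W1, b1 = rng.normal(scale=0.5, size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(1, 4)), np.zeros(1)
lr = 0.05  # illustrative learning rate

# Toy regression data: learn y = x1 - x2 (an arbitrary illustrative target).
X = rng.normal(size=(200, 2))
Y = X[:, 0] - X[:, 1]

for epoch in range(200):
    total = 0.0
    for x, yt in zip(X, Y):
        # Forward pass.
        z1 = W1 @ x + b1
        h = np.maximum(z1, 0.0)            # ReLU activation
        y = W2 @ h + b2                    # predicted output
        err = y - yt                       # dE/dy for E = 0.5 * (y - yt)^2
        total += 0.5 * float(err[0] ** 2)
        # Backward pass: propagate the error from the output to the hidden layer.
        dW2, db2 = np.outer(err, h), err
        dz1 = (W2.T @ err) * (z1 > 0)      # ReLU derivative gates the error signal
        dW1, db1 = np.outer(dz1, x), dz1
        # Delta rule / gradient-descent update of all weights and biases.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    if epoch % 50 == 0:
        print(epoch, total / len(X))       # the mean error shrinks toward a minimum
```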
Experiments
In order to explore both the classification and the regression approach, two separate datasets were tested. In all experiments, an MLP neural network with several hidden layers was employed. The objective of a regression problem is to estimate the relationship between input and output on a continuous scale; as a result, the number of neurons in the output layer equals the number of variables to be predicted. In a classification problem, the number of neurons in the output layer equals the number of classes, because the aim is to estimate the relationship between input and output over discrete categories.
First experiment (regression)
The Housing dataset was used in this experiment. It is one of the datasets commonly used to assess machine learning algorithms in the regression domain. This dataset is used to forecast house prices and consists of 20,640 data samples with 8 variables, such as the average age of the property, the number of rooms, the number of bedrooms, area, etc.
In order to investigate plasticity behaviour in neural networks, we used a simple multi-layer perceptron neural network. This 3-layer network has two hidden layers of 64 and 32 neurons, respectively, and a single neuron in the output layer, because the goal in this problem is to estimate the house price from the measured features.
In this experiment, after cleaning, the dataset was divided into two groups, training and testing, with a training split of 0.7, and the data was then pre-processed with the standardization method. For the statistical analysis, we used linear regression with gradient descent optimization to minimize the mean squared error (MSE). Statistical significance of the model was assessed using the coefficient of determination (R2) and p-values for individual predictors, ensuring robustness in parameter selection and model validity. The batch size during training is 128 and during testing is 256, and the optimizer used in this experiment is one of the most widely used optimization algorithms, gradient descent. The learning rate is 0.001 and the number of epochs is 300. The activation function of the hidden layers is ReLU (Rectified Linear Unit), and the loss function is MSELoss, as appropriate for a regression problem. As can be seen in Figure 6, as training progresses over successive epochs, the error in the network gradually decreases.
Figure 6.

Results of first experiment.
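For reproducibility, a hedged PyTorch sketch of this first experiment is shown below. The layer sizes (64 and 32), ReLU activations, SGD (gradient descent) optimizer, learning rate of 0.001, batch size of 128, 300 epochs, MSELoss, 0.7 training split, and standardization follow the description above; the assumption that the data are the scikit-learn California Housing set (which matches the 20,640 samples and 8 features) and the variable names are ours.

```python
import torch
from torch import nn
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load and standardise the housing data (20,640 samples, 8 features).
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# MLP with two hidden layers (64 and 32 neurons) and one output neuron.
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)   # gradient descent
loss_fn = nn.MSELoss()

X_t = torch.tensor(X_train, dtype=torch.float32)
y_t = torch.tensor(y_train, dtype=torch.float32).unsqueeze(1)

for epoch in range(300):
    for i in range(0, len(X_t), 128):          # training batch size of 128
        xb, yb = X_t[i:i + 128], y_t[i:i + 128]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

Recording the per-epoch training loss from such a loop would produce a decreasing error curve of the kind shown in Figure 6.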
Second experiment (classification)
The mobile price dataset was used in this experiment. This dataset contains 2000 training samples and 1000 test samples. For statistical analysis, we employed multinomial logistic regression to evaluate classification performance. We computed accuracy, precision, recall, and F1-score metrics, along with a confusion matrix, to assess model performance. These metrics were calculated using cross-validation to ensure generalizability of the results. Statistical significance was evaluated through p-values for the model coefficients to confirm their contribution to the predictions. The samples have 20 features, including clock speed, Wi-Fi capability, battery power, etc. In this problem, it is not necessary to predict the actual price of the device; instead, each device is assigned to one of 4 categories based on its price range.
A multilayer perceptron neural network was used for this experiment. This network consists of 3 hidden layers with sizes 64, 32, and 16. The number of neurons in the input layer is equal to the number of features of each sample, i.e. 20, and the number of neurons in the output layer is equal to the number of dataset classes, i.e. 4.
In this experiment, the training data is divided into two parts, training and testing, with a training split of 0.7, in order to train and evaluate the model. The batch size was set to 64, the optimizer was gradient descent, the number of epochs was 60, and the learning rate was set at 0.004. For the hidden layers, the Rectified Linear Unit (ReLU) activation function was utilised, while softmax was used for the final layer. Cross-entropy is the chosen loss function. As seen in Figure 7, the error in the network gradually declines as training progresses over successive epochs.
Figure 7.

Results of second experiment.
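As with the first experiment, a hedged PyTorch sketch of this classification setup is given below. The three hidden layers (64, 32, 16), ReLU activations, softmax-with-cross-entropy output, SGD optimizer, learning rate of 0.004, batch size of 64, 60 epochs, and 0.7 training split follow the description above; the CSV file path, the label column name, and the standardization step are assumptions.

```python
import pandas as pd
import torch
from torch import nn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical path and label column for the mobile price training data
# (2000 samples, 20 features, 4 price-range classes).
df = pd.read_csv("mobile_price_train.csv")
X = df.drop(columns=["price_range"]).values
y = df["price_range"].values

X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.7, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_val = scaler.transform(X_train), scaler.transform(X_val)

# MLP with three hidden layers (64, 32, 16) and 4 output classes.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 4),               # CrossEntropyLoss applies the softmax internally
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.004)
loss_fn = nn.CrossEntropyLoss()

X_t = torch.tensor(X_train, dtype=torch.float32)
y_t = torch.tensor(y_train, dtype=torch.long)

for epoch in range(60):
    for i in range(0, len(X_t), 64):        # batch size of 64
        xb, yb = X_t[i:i + 64], y_t[i:i + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```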
Exploring experiments
The results of both machine-learning experiments show the optimization of processing pathways to improve the efficiency of information transmission and processing. It was stated earlier that, according to functional MRI studies, increased myelin production improves the effectiveness of communication between brain areas. This might be compared to how machine learning algorithms work, in that both entail the improvement of processing pathways to increase the efficiency of information transmission and processing. In the case of the brain, increased myelin production can lead to faster and more efficient transmission of signals between brain regions, allowing for more rapid processing of information. Similarly, in machine learning, algorithms are trained to identify patterns and make predictions based on input data. Over time, as the algorithm is exposed to more data and receives feedback on its performance, it can optimize its processing pathways to improve its accuracy and efficiency. In both cases, the end goal is to improve the effectiveness of information processing, either through faster and more efficient signal transmission in the brain or through improved accuracy and efficiency in machine learning algorithms.
The results clearly show that machine learning algorithms receive and process input data, adjusting their parameters to improve performance. Over time, the algorithm becomes more effective at recognizing patterns and making predictions, much like how the human brain becomes more efficient through learning experiences. It was stated earlier that learning also alters the cortical connections of the amygdala, striatum, and hippocampus. Both learning in the human brain and machine learning algorithms involve strengthening and adjusting connections between different elements. In the human brain, the connections among the amygdala, striatum, and hippocampus change as a result of learning experiences. This allows for the formation of new neural pathways and the strengthening of existing ones, leading to increased efficiency in processing information. Similarly, machine learning algorithms work by strengthening connections between various elements within the system. In both cases, strengthening connections leads to improved performance and more accurate results.
In the review of the varieties and characteristics of plasticity, it was stated that several potential mechanisms are considered in order to understand plasticity. The most likely candidates include neurogenesis, gliogenesis, the formation of connections (either by axon extension or synapse formation), pruning, growth of dendrites and thus synapses, epigenetic changes, and changes in the excitatory-inhibitory balance. The results clearly show that machine learning algorithms also use a combination of analogous mechanisms to improve their performance. In the same way that neurogenesis and gliogenesis contribute to the formation of new connections in the brain, machine learning algorithms use optimization techniques to update the connections between neurons and improve the accuracy of predictions. Similarly, the formation of new connections in the brain through axon extension or synapse formation resembles the process of weight updates in machine learning algorithms. Additionally, pruning in the brain is similar to regularization techniques in machine learning that aim to reduce overfitting by removing redundant connections (see the sketch below). The excitatory-inhibitory balance in the brain is also similar to the balance between positive and negative weights in machine learning algorithms that determine the final prediction. Lastly, epigenetic changes in the brain can be compared to the changes in model parameters that are made over multiple training iterations.
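As a concrete, hedged illustration of the pruning-regularization analogy, the snippet below adds L2 weight decay and dropout to a small PyTorch model; the layer sizes and coefficients are illustrative and were not part of the experiments reported above.

```python
import torch
from torch import nn

# A small illustrative classifier; dropout randomly silences units during training,
# loosely analogous to removing redundant connections.
model = nn.Sequential(
    nn.Linear(20, 32), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(32, 4),
)

# weight_decay adds an L2 penalty that shrinks weak weights toward zero,
# which plays a role comparable to synaptic pruning in reducing overfitting.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```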
According to the generality of the results, it can be inferred that the concern of training on nonstationary data in continuous learning is similar to the operation of machine learning algorithms in that they both require the ability to adapt and change in response to new and changing data. Machine learning algorithms are designed to continually learn and improve their performance as they receive new data, allowing them to make more accurate predictions and decisions. Similarly, in continuous learning, the model must be able to continually update and adapt to changing data in order to remain accurate and relevant. This requires the model to be able to recognize patterns and changes in the data and make updates accordingly. In both cases, the ability to adapt and change in response to new data is essential for success.
Discussion
Machine learning algorithms have come a long way in recent years, allowing computers to learn and perform tasks like image classification, speech recognition, and even playing complex games. However, the methods used by machine learning algorithms are very different from the way the human brain processes information. In this paper, we examined the concept of plasticity in both the human brain and machine learning algorithms and compared the two.
The human brain is an incredibly complex organ that is capable of processing large amounts of information and adapting to new stimuli. This ability to change and adapt is known as plasticity, and it is one of the key features that sets the human brain apart from computers. In contrast, traditional computer programs are rigid and can only perform the tasks that they were specifically programmed to do. However, with the advent of machine learning algorithms, computers are now able to learn and adapt to new data.
The perspective of plasticity in machine learning algorithms refers to the ability of algorithms to change and adapt to new data and information, in a manner that is similar to how the human brain adapts and changes. It is considered to be an important aspect of machine learning algorithms, as it allows them to improve their performance and accuracy over time.
In supervised learning, for example, plasticity allows the algorithm to learn the relationship between the input and output variables, and to adjust its parameters in order to minimize the prediction error. In reinforcement learning, plasticity allows the algorithm to learn from its actions and to adjust its policy in order to maximize reward.
Plasticity can be achieved through various methods, such as gradient descent, genetic algorithms, or other optimization methods. The specific implementation of plasticity depends on the algorithm being used and the goals of the learning process. Plasticity is a crucial property that enables machine learning algorithms to learn and improve over time, allowing them to perform better on new and unseen data.
As shown in the experiments section, one of the most commonly used machine learning algorithms is the Multilayer Perceptron (MLP). The MLP is a class of fully connected feedforward artificial neural networks (ANNs). ANNs are inspired by the structure and function of the human brain, and they consist of interconnected nodes or neurons that process information. Each neuron receives input from other neurons, processes that input, and produces an output that is used by other neurons. In this way, ANNs are able to model complex relationships and patterns in the data.
As it is shown in the experiments, plasticity in an MLP refers to the ability of the model to adjust its weights and biases in response to new input data. This enables the model to learn and improve its performance over time. The plasticity of an MLP is determined by the optimization algorithm used to train the model, such as stochastic gradient descent, which adjusts the weights and biases in response to the error between the predicted output and actual target.
While both the plasticity of an MLP and the human brain share the basic concept of being able to change and adapt, there are important differences between the two. One of the main differences is that the plasticity of an MLP is limited to adjusting the weights and biases of its connections, whereas the plasticity of the human brain involves a much more complex set of processes, including the growth of new neurons and synapses, the rearrangement of existing connections, and the release of neurotransmitters that modulate the strength of connections.
Additionally, the plasticity of an MLP is largely determined by the algorithms and parameters used during training, whereas the plasticity of the human brain is influenced by a wide range of factors, including genetics, experience, and environmental factors. While both the plasticity of an MLP and the human brain share some similarities, they are also significantly different in terms of their underlying mechanisms, complexity, and scope of influence.
The human brain is capable of adapting and changing in response to new experiences and information. This process is known as neuroplasticity, and it is the key to the brain’s ability to learn and form new connections between neurons. When we experience new stimuli, our brain forms new connections between neurons and strengthens existing connections. This process allows the brain to store new information and learn new skills.
Neuroplasticity is not only important for learning but also for recovery from injury. The brain is able to reorganize itself and form new connections to compensate for lost or damaged areas, which can help patients recover from strokes and other injuries. While both the human brain and machine learning algorithms are capable of learning and adapting to new data, there are several key differences between the two. Firstly, the human brain is capable of forming new connections between neurons, whereas machine learning algorithms can only adjust the weights of their existing connections. Secondly, the human brain is capable of processing information in a parallel manner, whereas machine learning algorithms typically process information in a sequential manner. Finally, the human brain is capable of learning from a wide variety of stimuli, whereas machine learning algorithms are typically designed to learn from specific types of data.
Plasticity is a key feature of both the human brain and machine learning algorithms. While the methods used by each are different, both are capable of adapting to new data and learning from experience. The human brain remains unrivalled in its ability to process information in a parallel manner and learn from a wide variety of stimuli, but machine learning algorithms are rapidly improving and offer a promising alternative for solving complex problems. The combination of the two may offer new insights and solutions to the challenges facing us in the future.
At present, the plasticity of machine-learning algorithms is limited compared to that of the human brain. The human brain is capable of changing and adapting to new information in a much more complex and nuanced way than machine learning algorithms. This is because the human brain has a much larger number of neurons and connections, as well as a more complex network of feedback mechanisms that allow it to change and adapt over time.
However, it is not impossible that we will reach the plasticity of the human brain in machine-learning algorithms in the future. With advances in technology and a better understanding of the human brain and its functions, it is possible that machine-learning algorithms could be developed to have a level of plasticity that is similar to the human brain.
Machine learning algorithms have made remarkable advancements in recent years, with the ability to learn and improve with experience becoming a key characteristic of these algorithms. This plasticity of machine learning algorithms has opened up new avenues for the development of new technologies, which could significantly impact various industries in the future.
One of the areas where the plasticity of machine learning algorithms could have a significant impact is in the field of robotics. With the ability to learn and improve with experience, robots equipped with machine learning algorithms could be trained to perform complex tasks in a more efficient and autonomous manner. This could result in significant advancements in the field of industrial robotics, where robots could be trained to perform tasks with precision and accuracy, leading to increased productivity and reduced costs.
Another area where the plasticity of machine learning algorithms could have a significant impact is in the field of autonomous vehicles. With the ability to learn and adapt to new situations, machine learning algorithms could play a key role in the development of autonomous vehicles, which could make driving safer and more efficient. Autonomous vehicles could learn from their experiences on the road, and make decisions based on real-time data, leading to a safer and more efficient driving experience for passengers.
Additionally, the plasticity of machine learning algorithms could have implications in the field of medicine. With the ability to learn and improve with experience, machine learning algorithms could be used to develop personalized medicine, where medical treatments could be tailored to individual patients based on their unique medical history and other factors. This could lead to more effective treatments and better outcomes for patients.
The plasticity of the human brain and of artificial intelligence in the learning process is an exciting area of research, with the potential to revolutionize the treatment of brain disorders such as autism, learning disorders, memory disorders, or developmental disorders that are related to a low rate of brain plasticity. In future studies, research could be directed towards understanding how the plasticity of artificial intelligence algorithms can be utilized to enhance the plasticity of the human brain.
One potential direction is to investigate how the use of personalized machine learning algorithms can improve learning outcomes in individuals with brain disorders. This could involve creating customized training programs that adapt to the unique needs and abilities of each individual, taking into account their specific strengths and weaknesses.
Another direction could be to explore the potential of neurofeedback techniques, which use machine learning algorithms to provide real-time feedback on brain activity, to enhance brain plasticity. This could involve developing algorithms that can identify patterns of brain activity associated with successful learning and using this information to guide training and rehabilitation programs. By understanding how these two systems can work together, we can potentially unlock new avenues for improving learning outcomes and enhancing the quality of life for individuals with brain disorders.
The comparison between human brain plasticity and artificial intelligence plasticity presents not only a scientific inquiry but also a conceptual framework for innovation. We propose that the interplay between these systems can significantly influence advancements in education, healthcare, and adaptive technologies. For example, leveraging insights from neuroplasticity to design more adaptive AI systems could revolutionize personalized education and rehabilitation. Conversely, applying AI models to analyse neural connectivity patterns may unlock new methods for enhancing cognitive functions and recovery in neurological disorders.
Furthermore, the concept of continuous learning in AI - modelled after human brain adaptability - highlights the potential for machines to operate in dynamic, nonstationary environments. This has profound implications for autonomous systems, where rapid adaptation is critical. However, we emphasize that ethical considerations, particularly regarding the autonomy of AI systems and their integration into human-centric domains, must remain central to this discourse.
Lastly, we posit that understanding the limitations of current AI in replicating human neuroplasticity underscores the necessity of interdisciplinary research. The synergy between neuroscience, computational sciences, and cognitive psychology is not merely beneficial but essential for addressing the complex challenges of replicating adaptive intelligence in machines.
In conclusion, the plasticity of machine learning algorithms could have significant implications for the development of new technologies in the future. With the ability to learn and improve with experience, machine learning algorithms could play a key role in the development of robotics, autonomous vehicles, and personalized medicine, among other fields. As these technologies continue to evolve, the future implications of the plasticity of machine learning algorithms are sure to be far-reaching and impactful.
Disclosure of conflict of interest
None.
References
- 1. Sadegh-Zadeh SA, Fakhri E, Bahrami M, Bagheri E, Khamsehashari R, Noroozian M, Hajiyavand AM. An approach toward artificial intelligence Alzheimer’s disease diagnosis using brain signals. Diagnostics (Basel). 2023;13:477. doi: 10.3390/diagnostics13030477.
- 2. Peng S, Wuu J, Mufson EJ, Fahnestock M. Precursor form of brain-derived neurotrophic factor and mature brain-derived neurotrophic factor are decreased in the pre-clinical stages of Alzheimer’s disease. J Neurochem. 2005;93:1412–1421. doi: 10.1111/j.1471-4159.2005.03135.x.
- 3. Malekpour M. Effects of attachment on early and later development. The British Journal of Development Disabilities. 2007;105:81–95.
- 4. Lenroot RK, Giedd JN. Brain development in children and adolescents: insights from anatomical magnetic resonance imaging. Neurosci Biobehav Rev. 2006;30:718–729. doi: 10.1016/j.neubiorev.2006.06.001.
- 5. Huttenlocher PR, Dabholkar AS. Regional differences in synaptogenesis in human cerebral cortex. J Comp Neurol. 1997;387:167–178. doi: 10.1002/(sici)1096-9861(19971020)387:2<167::aid-cne1>3.0.co;2-z.
- 6. Tau GZ, Peterson BS. Normal development of brain circuits. Neuropsychopharmacology. 2010;35:147–168. doi: 10.1038/npp.2009.115.
- 7. Casey BJ, Giedd JN, Thomas KM. Structural and functional brain development and its relation to cognitive development. Biol Psychol. 2000;54:241–257. doi: 10.1016/s0301-0511(00)00058-2.
- 8. Johansson BB. Brain plasticity in health and disease. Keio J Med. 2004;53:231–246. doi: 10.2302/kjm.53.231.
- 9. Kolb B, Gibb R, Robinson TE. Brain plasticity and behavior. Curr Dir Psychol Sci. 2003;12:1–5.
- 10. Moreno S, Bidelman GM. Examining neural plasticity and cognitive benefit through the unique lens of musical training. Hear Res. 2014;308:84–97. doi: 10.1016/j.heares.2013.09.012.
- 11. Sadegh-Zadeh SA, Kambhampati C, Davis DN. Ionic imbalances and coupling in synchronization of responses in neurons. J (Basel). 2009;2:17–40.
- 12. Wang WF, Chen X, Yao T. Structure of a machine brain. Five-Layer Intelligence of the Machine Brain. Springer; 2022. pp. 1–15.
- 13. Gigerenzer G. Strong AI and the problem of ‘second-order’ algorithms. Behav Brain Sci. 1990;13:663–664.
- 14. Monte-Serrat DM, Cattani C. The natural language for artificial intelligence. Academic Press; 2021.
- 15. Müller F, O’Rahilly R. The development of the human brain, including the longitudinal zoning in the diencephalon at stage 15. Anat Embryol (Berl). 1988;179:55–71. doi: 10.1007/BF00305100.
- 16. Wang W, Cai H, Deng X, Lu C, Zhang L. Interdisciplinary evolution of the machine brain. Interdisciplinary Evolution of the Machine Brain. Springer; 2021. pp. 119–145.
- 17. Midgley G. The brain in the machine, or the machine in the brain? Springer; 1994.
- 18. Sadegh Zadeh SA, Kambhampati C. All-or-none principle and weakness of Hodgkin-Huxley mathematical model. Int J Math Comput Sci. 2017;11:453.
- 19. Nazari MJ, Shalbafan M, Eissazade N, Khalilian E, Vahabi Z, Masjedi N, Ghidary SS, Saadat M, Sadegh-Zadeh SA. A machine learning approach for differentiating bipolar disorder type II and borderline personality disorder using electroencephalography and cognitive abnormalities. PLoS One. 2024;19:e0303699. doi: 10.1371/journal.pone.0303699.
- 20. Sadegh-Zadeh SA, Nazari MJ, Aljamaeen M, Yazdani FS, Mousavi SY, Vahabi Z. Predictive models for Alzheimer’s disease diagnosis and MCI identification: the use of cognitive scores and artificial intelligence algorithms. NPG Neurologie-Psychiatrie-Gériatrie. 2024.
- 21. Sadegh-Zadeh SA, Sadeghzadeh N, Soleimani O, Shiry Ghidary S, Movahedi S, Mousavi SY. Comparative analysis of dimensionality reduction techniques for EEG-based emotional state classification. Am J Neurodegener Dis. 2024;13:23–33. doi: 10.62347/ZWRY8401.
- 22. Ganiev AG, Abdunazarova ZS. Biophysics of brain activity. Brain activity in the development of “creative thinking” “mind map”. Turkish Journal of Computer and Mathematics Education (TURCOMAT). 2021;12:1–6.
- 23. Pujol J, Blanco-Hinojo L, Ortiz H, Gallart L, Moltó L, Martínez-Vilavella G, Vilà E, Pacreu S, Adalid I, Deus J, Pérez-Sola V, Fernández-Candil J. Mapping the neural systems driving breathing at the transition to unconsciousness. Neuroimage. 2022;246:118779. doi: 10.1016/j.neuroimage.2021.118779.
- 24. Zadeh SAS, Kambhampati C. A computational investigation of the role of ion gradients in signal generation in neurons. Intelligent Computing: Proceedings of the 2018 Computing Conference, Volume 1. Springer; 2019. pp. 291–304.
- 25. Camí J, Martínez LM. The illusionist brain: the neuroscience of magic. Princeton University Press; 2022.
- 26. de Villiers CR. The human brain-cortex, lobes, neural networks and problem solved! The Handbook of Creativity & Innovation in Business. Springer; 2022. pp. 25–49.
- 27. Wang W, Cai H, Deng X, Lu C, Zhang L. Interdisciplinary evolution of the machine brain. Vision, Touch & Mind. Springer Nature; 2021.
- 28. Kolb B, Harker A, Gibb R. Principles of plasticity in the developing brain. Dev Med Child Neurol. 2017;59:1218–1223. doi: 10.1111/dmcn.13546.
- 29. Fahrbach SE, Moore D, Capaldi EA, Farris SM, Robinson GE. Experience-expectant plasticity in the mushroom bodies of the honeybee. Learn Mem. 1998;5:115–123.
- 30. Nithianantharajah J, Hannan AJ. Enriched environments, experience-dependent plasticity and disorders of the nervous system. Nat Rev Neurosci. 2006;7:697–709. doi: 10.1038/nrn1970.
- 31. Kaiser J, Mostafa H, Neftci E. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Front Neurosci. 2020;14:424. doi: 10.3389/fnins.2020.00424.
- 32. Ryu JR, Hong CJ, Kim JY, Kim EK, Sun W, Yu SW. Control of adult neurogenesis by programmed cell death in the mammalian brain. Mol Brain. 2016;9:43. doi: 10.1186/s13041-016-0224-4.
- 33. Monfils MH, Driscoll I, Kamitakahara H, Wilson B, Flynn C, Teskey GC, Kleim JA, Kolb B. FGF-2-induced cell proliferation stimulates anatomical, neurophysiological and functional recovery from neonatal motor cortex injury. Eur J Neurosci. 2006;24:739–49. doi: 10.1111/j.1460-9568.2006.04939.x.
- 34. Rice ME, Russo-Menna I. Differential compartmentalization of brain ascorbate and glutathione between neurons and glia. Neuroscience. 1998;82:1213–1223. doi: 10.1016/s0306-4522(97)00347-3.
- 35. Xin W, Chan JR. Motor learning revamps the myelin landscape. Nat Neurosci. 2022;25:1251–1252. doi: 10.1038/s41593-022-01156-9.
- 36. Yoo Y, Tang LYW, Brosch T, Li DKB, Kolind S, Vavasour I, Rauscher A, MacKay AL, Traboulsee A, Tam RC. Deep learning of joint myelin and T1w MRI features in normal-appearing brain tissue to distinguish between multiple sclerosis patients and healthy controls. Neuroimage Clin. 2017;17:169–178. doi: 10.1016/j.nicl.2017.10.015.
- 37. Caroni P, Donato F, Muller D. Structural plasticity upon learning: regulation and functions. Nat Rev Neurosci. 2012;13:478–490. doi: 10.1038/nrn3258.
- 38. Fields RD. White matter in learning, cognition and psychiatric disorders. Trends Neurosci. 2008;31:361–370. doi: 10.1016/j.tins.2008.04.001.
- 39. Fletcher PC, Zafiris O, Frith CD, Honey RA, Corlett PR, Zilles K, Fink GR. On the benefits of not trying: brain activity and connectivity reflecting the interactions of explicit and implicit sequence learning. Cereb Cortex. 2005;15:1002–1015. doi: 10.1093/cercor/bhh201.
- 40. Vogel AC, Power JD, Petersen SE, Schlaggar BL. Development of the brain’s functional network architecture. Neuropsychol Rev. 2010;20:362–375. doi: 10.1007/s11065-010-9145-7.
- 41. Papale AE, Hooks BM. Circuit changes in motor cortex during motor skill learning. Neuroscience. 2018;368:283–297. doi: 10.1016/j.neuroscience.2017.09.010.
- 42. Arel I, Rose DC, Karnowski TP. Deep machine learning - a new frontier in artificial intelligence research [research frontier]. IEEE Comput Intell Mag. 2010;5:13–18.
- 43. Shankar K, Perumal E, Tiwari P, Shorfuzzaman M, Gupta D. Deep learning and evolutionary intelligence with fusion-based feature extraction for detection of COVID-19 from chest X-ray images. Multimed Syst. 2022;28:1175–1187. doi: 10.1007/s00530-021-00800-x.
- 44. Pourtois G, de Gelder B, Bol A, Crommelinck M. Perception of facial expressions and voices and of their combination in the human brain. Cortex. 2005;41:49–59. doi: 10.1016/s0010-9452(08)70177-1.
- 45. Gray JR, Chabris CF, Braver TS. Neural mechanisms of general fluid intelligence. Nat Neurosci. 2003;6:316–322. doi: 10.1038/nn1014.
- 46. Kuperberg GR. Neural mechanisms of language comprehension: challenges to syntax. Brain Res. 2017;1146:23–49. doi: 10.1016/j.brainres.2006.12.063.
- 47. Zenke F, Ganguli S. Superspike: supervised learning in multilayer spiking neural networks. Neural Comput. 2018;30:1514–1541. doi: 10.1162/neco_a_01086.
- 48. Pratama M, Pedrycz W, Webb GI. An incremental construction of deep neuro fuzzy system for continual learning of nonstationary data streams. IEEE Trans Fuzzy Syst. 2019;28:1315–1328.
- 49. Berariu T, Czarnecki W, De S, Bornschein J, Smith S, Pascanu R, Clopath C. A study on the plasticity of neural networks. arXiv preprint arXiv:2106.00042, 2021.
- 50. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323:533–536.
- 51. Sadegh-Zadeh SA, Rahmani Qeranqayeh A, Benkhalifa E, Dyke D, Taylor L, Bagheri M. Dental caries risk assessment in children 5 years old and under via machine learning. Dent J (Basel). 2022;10:164. doi: 10.3390/dj10090164.
- 52. Jourahmad Z, Habibabadi JM, Moein H, Basiratnia R, Geranqayeh AR, Ghidary SS, Sadegh-Zadeh SA. Machine learning techniques for predicting the short-term outcome of resective surgery in lesional-drug resistance epilepsy. arXiv preprint arXiv:2302.10901, 2023.
- 53. Sadegh-Zadeh SA, Soleimani Mamalo A, Kavianpour K, Atashbar H, Heidari E, Hajizadeh R, Roshani AS, Habibzadeh S, Saadat S, Behmanesh M, Saadat M, Gargari SS. Artificial intelligence approaches for tinnitus diagnosis: leveraging high-frequency audiometry data for enhanced clinical predictions. Front Artif Intell. 2024;7:1381455. doi: 10.3389/frai.2024.1381455.
- 54. Sohrabi MA, Zare-Mirakabad F, Ghidary SS, Saadat M, Sadegh-Zadeh SA. A novel data augmentation approach for influenza A subtype prediction based on HA proteins. Comput Biol Med. 2024;172:108316. doi: 10.1016/j.compbiomed.2024.108316.
- 55. Sadegh-Zadeh SA, Khezerlouy-aghdam N, Sakha H, Toufan M, Behravan M, Vahedi A, Rahimi M, Hosseini H, Khanjani S, Bayat B, Ali SA, Hajizadeh R, Eshraghi A, Ghidary SS, Saadat M. Precision diagnostics in cardiac tumours: integrating echocardiography and pathology with advanced machine learning on limited data. Inform Med Unlocked. 2024;49:101544.


