Proceedings of the National Academy of Sciences of the United States of America. 2026 Mar 12;123(11):e2521560123. doi: 10.1073/pnas.2521560123

Online supervised learning of temporal patterns in biological neural networks under feedback control

Yuki Sono a,b, Hideaki Yamamoto a,b,c,1, Yusei Nishi a,b, Takuma Sumi c,d, Yuya Sato a,e,f, Ayumi Hirano-Iwata a,b,c,e, Yuichi Katori g,h, Shigeo Sato a,b
PMCID: PMC12994192  PMID: 41818149

Significance

Reservoir computing is a machine learning paradigm that exploits the transient dynamics of high-dimensional nonlinear systems. Although it was originally inspired by the mammalian brain and has been widely explored in physical systems, its implementation in biological neural networks (BNNs) has been limited by their excessive connectivity and global synchrony in vitro. Here, we use microfluidic devices to construct modular, nonrandomly connected BNNs and integrate them with microelectrode arrays in a closed-loop reservoir computing environment. We show that the system can be trained to autonomously output various temporal signals and that modular connectivity is essential for this learning. In vitro BNNs provide unique alternatives for physical reservoirs with dynamic adaptability.

Keywords: biocomputing, cell engineering, in vitro neural network, microelectrode array, reservoir computing

Abstract

In vitro biological neural networks (BNNs) provide well-defined model systems for constructively investigating how living cells interact with their environments to shape high-dimensional dynamics that can be used to generate coherent temporal outputs, such as those required for motor control. Here, we develop a real-time closed-loop BNN system that is capable of generating periodic and chaotic temporal signals by integrating cultured cortical neurons with microfluidic devices and high-density microelectrode arrays. We show that training a simple linear decoder with fixed feedback weights enables the system to learn and autonomously generate diverse temporal patterns. When feedback is switched on, the irregular activity in the BNNs is transformed into low-dimensional, structured dynamics, producing coherent trajectories that are characterized by stable transitions between different neural states. BNNs can be trained to sustain oscillations at distinct target periods ranging from 4 to 30 s, demonstrating their adaptability. Importantly, top–down control of the self-organized network formation with microfluidic devices is the key to suppressing excessive synchronization and increasing dynamic complexity in BNNs, facilitating the training process and the generation of robust outputs. This work offers a biologically inspired platform for understanding the physical basis of cortical computations and for advancing energy-efficient neuromorphic computing paradigms.


Precise motor action control is essential for animal survival. For this purpose, an animal uses the intrinsic chaotic neural activity that arises from the recurrent neural network (RNN) in its brain as a substrate to generate coherent muscle commands (1, 2). Importantly, the brain does not operate as a simple open-loop controller; instead, muscles continuously provide sensory feedback to the brain, which stabilizes motor actions and enables adaptive behaviors in dynamic environments (3). Computational studies have previously demonstrated that such computations can be modeled with RNNs by incorporating feedback from the output layer and that stable learning can emerge by tuning these output-feedback systems (4–6). Nevertheless, the biological plausibility of these mechanisms and whether such principles can be realized in biological neural networks (BNNs) consisting of living neurons remain largely unexplored.

Recently, in vitro BNNs integrated with reservoir computing models (7, 8) have emerged as unique platforms for constructively investigating how living neural systems can perform complex computations (9–12). This approach is also compelling in the context of physical reservoir computing, which is a form of neuromorphic computing that harnesses the transient, high-dimensional dynamics of nonlinear physical systems for computation (13–15), since individual neurons (as computational units) exhibit complex, nonlinear input–output transformations (16). Furthermore, BNNs distinguish themselves from other physical reservoirs in several key ways. For example, unlike conventional reservoir computing systems that require stable internal dynamics, BNNs can exhibit plastic changes in their responses over time (17–20). Counterintuitively, this property can be leveraged to enhance the classification and prediction performance of temporal signals (12). Beyond the applications of BNNs in reservoir computing, their plasticity may enable them to be tuned under the framework of free-energy principles (21) or reinforcement learning (22) to optimally interact with the environment.

Another hallmark of BNNs is their ability to exhibit spontaneous activity, which is persistent background activity that occurs even without external inputs (23–25). In the mammalian cortex, such spontaneous activity plays critical roles in network formation (26), memory consolidation (27), and prediction (28). This activity is also preserved even when cortical neurons are cultured in vitro (29). Theoretical studies have shown that feedback signals can stabilize chaotic dynamics in neural networks, enabling the generation of coherent time series signals from networks exhibiting irregular spontaneous activity (4, 30). However, testing the biological plausibility of such mechanisms in the context of BNN-based physical reservoir computing requires the integration of several demanding technologies, such as the preparation of in vitro BNNs with high-dimensional dynamics and the development of closed-loop controllers with sufficient spatiotemporal resolutions.

Here, we integrated microfluidics-based cell patterning with CMOS-based high-density microelectrode array (HD-MEA) recording to realize closed-loop reservoir computing with BNNs that autonomously generate coherent temporal signals. Microfluidic devices were used to constrain the growth of cultured rat cortical neurons in a nonrandom architecture resembling modular connectivity in the mammalian cortex and to suppress excessive synchronization across the network. The HD-MEA was then used to record the neuronal activity at a high spatiotemporal resolution, which was streamed to a desktop computer for real-time processing and used to generate feedback stimuli for the BNN. We show that micropatterned BNNs can be trained online to generate a variety of periodic and chaotic time series signals. Importantly, this was possible only when BNNs were patterned to bear modular connectivity, which prevented the formation of excessively dense connections. Our results offer a framework for understanding how animals may transform high-dimensional neural dynamics into coherent motor commands while also highlighting the potential of BNNs as computational resources.

Results

Closed-Loop Control Using HD-MEAs.

A closed-loop controller for BNNs was constructed using an HD-MEA system (Fig. 1A). The system recorded spike trains from the electrodes, filtered the spike trains with a double exponential kernel to convert them into a continuous signal (a reservoir state x(t)), decoded the output signals via a linear readout defined by y(t) = W(t − Δt) x(t), and sent feedback stimulation to the BNN. Here, W(t − Δt) is the output matrix from the previous time step, which was trained via first-order reduced and controlled error (FORCE) learning (4) to minimize the error between y(t) and the target signal. Feedback stimulation with an amplitude of A(t) = g(y(t)) was delivered to selected electrodes, where g is a mapping function (described in Materials and Methods).
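The decoding path of a single cycle can be condensed into a few lines of code. The following Python fragment is a minimal sketch under our own simplifying assumptions (array sizes, a forward-Euler discretization of Eqs. 5 and 6 in Materials and Methods, and Poisson stand-in spike counts); the actual implementation distributes these steps across the C++/Python modules described below.

```python
import numpy as np

# Minimal sketch of one cycle's decoding path; constants and stand-in data
# are illustrative assumptions, not the authors' implementation.
DT = 0.2                   # spike accumulation window (s)
TAU_R, TAU_D = 0.5, 2.0    # kernel rise and decay time constants (s)

def filter_step(x, h, counts):
    """One forward-Euler step of the double exponential filter (Eqs. 5 and 6)."""
    h = h + DT * (-h / TAU_R) + counts / (TAU_R * TAU_D)
    x = x + DT * (-x / TAU_D + h)
    return x, h

x = np.zeros(1024)               # reservoir state, one entry per electrode
h = np.zeros(1024)               # auxiliary filter variable
W = np.zeros((1, 1024))          # readout weights, trained online by FORCE

counts = np.random.poisson(0.1, size=1024)  # stand-in for one cycle of spikes
x, h = filter_step(x, h, counts)
y = W @ x                        # linear readout y(t) = W(t - dt) x(t)
```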

Fig. 1.


Overview of the experimental setup. (A) Signal generation task. Voltage traces recorded from the BNN were converted into spike trains, which were then passed through a double exponential filter to generate a continuous-time signal, referred to as the reservoir state. The readout layer transformed the reservoir state into an output signal, which was then used to generate feedback input for the BNN. (B) Data flow in the closed-loop setup. The detected spikes were passed on from the hardware to the Python application programming interface (API), and they were then used to perform the reservoir computing task and compute the feedback stimulation in a custom Python/C++ script. The obtained stimulation signal was then transferred to the hardware via the API. (C) Structures of the lattice-modular (lattice) and hierarchically modular (hierarchical) networks. Both patterns contained the same number of wells in which neuronal cell bodies attached but differed in the designs of their interconnecting microchannels, which shaped the network topology. (D) Recording area of the HD-MEA with the PDMS-microfluidic device attached (Top). Primary cortical neurons at DIV 29 bearing a hierarchical pattern (stained with the neuronal marker NeuO) are shown in the Middle and Bottom panels.

Technically, the entire system was realized by controlling the HD-MEA in real time with custom-written C++/Python scripts (Fig. 1B). A C++ script received instantaneous spike counts from the hardware and sent the summed spike counts for each electrode to a Python closed-loop controller. The Python controller then routed the data to a second C++ script for filtering, decoding, and output training. The decoded output was passed back to the Python controller, which created a stimulation sequence that was subsequently sent to the BNN through the HD-MEA. The closed-loop control cycle had a mean duration of 332.5±1.5 ms (mean ± SD; n=138 recordings from 46 cultures) and an SD of 3.7±0.7 ms. Each cycle consisted of a 120 ms pause before spike counting began, during which the waveform was processed through an FIR filter and stimulation artifacts were removed, a spike accumulation period of 200 ms, and software/hardware latencies. Further implementation details are provided in Materials and Methods.

The BNN was prepared by culturing primary rat cortical neurons on HD-MEA chips. The HD-MEA contained 26,400 electrodes arranged at a 17.5-µm pitch over a 3.85 × 2.10-mm² recording area. Unlike the sparsely coupled networks used as reservoir layers in reservoir computing models, cultured neurons tend to form dense interconnections when grown on a homogeneous substrate (31, 32). To restrict excessive neuronal connections and impose modular connectivity, a homemade polydimethylsiloxane (PDMS) microfluidic film was placed on the electrodes to confine neuronal somata to 100 × 100 µm² wells and to provide controlled interwell coupling through microchannels (20, 33, 34). Here, “modular” refers to clusters of neurons connected by microchannels, not to cortical layers or anatomical modules.

Two microfluidic designs were fabricated: “lattice modular” and “hierarchically modular” (Fig. 1C). For the sake of brevity, we hereafter refer to the two designs as the “lattice” and “hierarchical” networks, respectively. Both designs consisted of 128 square wells arrayed across the entire recording area of the HD-MEA, with each well containing approximately 14.6±3.8 neurons (n=30 wells from two cultures) that formed aggregates (Fig. 1D). Because neurites readily span ∼100 µm in 2D cultures, connectivity within each well is expected to be substantially denser than interwell connectivity. The two configurations differed in their interwell connectivity: In the lattice design, each well was connected to its four nearest neighbors by microchannels (width ∼5 µm, height 1 to 5 µm), forming a square-lattice topology with relatively uniform interwell coupling, whereas in the hierarchical design, the interwell connections were sparser and grouped into four units at different scales to produce more heterogeneous connectivity. BNNs grown without the microfluidic film, denoted as “homogeneous”, were used as a control.

Spontaneous Activity of BNNs.

First, the intrinsic properties of the BNNs were assessed by recording the spontaneous activities derived from homogeneous, lattice, and hierarchical BNNs (Fig. 2A). The homogeneous networks exhibited bursting patterns with high degrees of synchronization across the entire chip, suggesting that the neurons formed a highly interconnected network without a distinct spatial organization. In contrast, the spontaneous activity of the lattice and hierarchical networks was characterized by spatially segregated bursting patterns, reflecting the constrained connectivity imposed by the microfluidic devices.

Fig. 2.


Spontaneous activity of homogeneous and engineered BNNs. (A) Live-cell electrical images of each network, which were generated by scanning across the electrode array and color mapping the mean amplitude of detected presumptive action potentials over a 30-s recording window (Top). For the lattice and hierarchical networks, only the electrodes within the well regions were scanned. The scans were performed immediately prior to the neural activity recordings, shown in the Middle and Bottom rows, and were used to select the electrodes for recording. Representative raster plots (Middle) and corresponding population firing rates (Bottom) are also shown for each network topology. The data were obtained from cultures at DIV 14. (B) Pairwise correlation matrices computed from the same networks shown in (A). (C) Network-averaged mean correlations for the three topologies. Each plot represents a single BNN realization, and the horizontal bars are their medians. (D) Distribution of the variance explained by each principal component. The shaded error bars represent the SDs. (E) Comparison among the mean firing rates for the three topologies. Each plot represents a single BNN realization, and the horizontal bars are their medians. Sample sizes: n=24 (homogeneous), n=27 (lattice), and n=24 (hierarchical).

An analysis of the network statistics revealed that the mean pairwise correlations were significantly lower in both the lattice and hierarchical networks than in the homogeneous culture (mean ± SEM): 0.45 ± 0.04 (homogeneous, n=24), 0.11 ± 0.01 (lattice, n=27), and 0.12 ± 0.03 (hierarchical, n=24) (Fig. 2 B and C). This reduction in synchrony increased the dimensionality of the network dynamics, which were characterized by broader variance distributions across the principal components (35) (Fig. 2D).

The most prominent difference between the lattice and hierarchical networks was a nearly twofold increase in the firing rate in the lattice configuration relative to the hierarchical configuration (P<0.05, one-sided t test), reflecting the influence of abundant intermodular connections: 0.63 ± 0.10 Hz (homogeneous, n=24), 0.56 ± 0.09 Hz (lattice, n=27), and 0.32 ± 0.08 Hz (hierarchical, n=24) (Fig. 2E). Throughout the development process [14 to 31 d in vitro (DIV)], there was a weak trend for the mean firing rate to increase and the mean correlation to decrease, yet the overall effect of microfluidic patterning remained stable (SI Appendix, Fig. S1). Taken together, these observations underscore the ability of engineered BNNs, especially lattice networks, to support flexible dynamics with high activity levels, providing a preliminary indication of their potential to function as effective reservoirs.

Periodic Wave Generation Task.

Next, as a basic example of online training in a BNN-based reservoir computing task, the neural activity was modulated by the feedback controller (Fig. 1A), whose readout weights were trained in real time via the FORCE algorithm. Sinusoidal waves with periods of 4, 10, and 30 s were used as the target waves, and the performances of lattice, hierarchical, and homogeneous BNNs were compared.

Fig. 3A illustrates how the BNN activity can be modified so that it autonomously produces a periodic, sinusoidal output. Initially, without any external drive, the neurons in the BNN exhibited spatiotemporally complex spontaneous activity, as detailed in the previous section. When a feedback input was added and FORCE learning started, the readout weights began to fluctuate rapidly, which immediately caused the neurons in the BNN to activate periodically and forced the output to match the target sine wave. Over time, the fluctuations exhibited by the readout weights decreased, and a quasistatic weight matrix that generated the target function was obtained (SI Appendix, Fig. S2). At this point, the learning could be turned off, and the BNN with the feedback drive then sustained the oscillatory activity and the output. The learned oscillatory dynamics were not always robustly retained in the postlearning phase, and the mean squared error (MSE) increased in the postlearning phase relative to the learning phase in 99% of the trials (n=126 recordings obtained from 22 MEA chips at DIV 14 to 30, with no more than five recordings performed on the same chip per day; all network topologies pooled; target signal: sine wave with T=30 s). Nonetheless, the same approach could be used to train BNNs to generate other periodic signals that contain multiple frequency components and nondifferentiable points, such as triangle or square waves (Fig. 3B). Hence, FORCE learning, which has been shown to be successful for training rate-based and spiking neural networks (SNNs) (4, 36), can be used to train BNNs to generate various temporal signals.

Fig. 3.


FORCE learning in BNNs. (A) FORCE training sequence in a hierarchical network at DIV 23, presented as an homage to Fig. 2 A–C of Sussillo and Abbott (4). The network output is shown in red, whereas the target signal, a sinusoidal wave with a cycle period of 30 s, is shown in dotted gray. The black traces are the reservoir states (filtered spike trains) derived from the ten electrodes with the largest readout weight magnitudes. The yellow trace is the time derivative of the magnitude of the readout weight vector. After learning, the BNN was able to sustain the trained oscillation even after the weight update was disabled. (B) Examples of FORCE learning applied to other periodic waveforms: a triangle wave (Left) and a square wave (Right), both with cycle periods of 30 s. (C) Comparison of MSE during the learning phase across different sine-wave periods and network configurations. Bars represent means, and plots represent single BNN realizations. The “no cells” condition corresponds to recordings in medium alone, and the resulting x(t) consists solely of background noise passed through the same filtering process used for the neural data.

The generation of such temporal signals required a BNN to exhibit complex, asynchronous responses to feedback stimulation (Fig. 3C) and was critically dependent on both excitatory synaptic transmission within the BNN and the interwell connectivity realized by the microchannels (SI Appendix, Figs. S3 and S4). For a target signal with faster oscillations (a sine wave with a 4-s cycle), the MSE during the learning phase of the homogeneous networks was indistinguishable from that of a random output generated with no neurons (two-sided t test, P=0.53). The MSEs of the homogeneous networks decreased when the period of the target signal was increased but were consistently higher than the values obtained from the lattice and hierarchical networks. Comparing the two types of micropatterned networks, the lattice networks consistently presented lower MSEs than the hierarchical networks (P<0.01 for all cycle periods; two-sided t test on n=42 recordings on seven MEAs (lattice) and 45 recordings on eight MEAs (hierarchical) at DIV 14 to 30), presumably reflecting their greater network activity, which facilitates regression by the linear decoder. The finding that the regression performance improves for target sine waves with longer cycle periods is consistent with a previous report showing that a homogeneous BNN can be trained via FORCE learning to generate a constant, time-invariant output (10).

Network Dynamics and Weight Matrices.

To better understand the mechanism behind the increased regression performance achieved by the engineered BNNs, we analyzed the network dynamics during the learning phase. As with the spontaneous activity (Fig. 2A), the homogeneous networks exhibited irregular bursting activity that spread across the entire network even under closed-loop control. This led to consistent failures of the FORCE training for all target sine waves in the homogeneous networks (SI Appendix, Fig. S5A). Although the decoded output y(t) fluctuated in response to the network activity, these fluctuations occurred at random time points, and the output lacked the smooth, periodic profile of a sine wave.

This behavior contrasted with that observed in engineered networks (SI Appendix, Fig. S5 B and C). For example, in the lattice networks, the reservoir state was modulated by a feedback controller to exhibit periodic changes in sync with the target signal. These dynamics enabled the regression of output signals that closely tracked the target waveform, demonstrating the ability of BNNs to be used for generating coherent time series signals. Importantly, the same culture preparation could be trained to sustain oscillations with arbitrary cycle periods after terminating the learning process (Fig. 4 A–C).

Fig. 4.


Versatility of BNN reservoirs and their supporting network dynamics. (A–C) Reservoir states derived from all electrodes (Top) and the corresponding output signals (Bottom), obtained from an identical lattice network at DIV 24. The cycle periods of the target sinusoidal wave were set to 4 s (A), 10 s (B), and 30 s (C). Note the difference in the frequency of the output oscillations after the weight update was halted, reflecting the ability of the system to autonomously sustain temporal dynamics across different timescales. (D) Spatial distribution of the learned output weight vector from the same network shown in (C). (E) Trajectory of the reservoir states projected in the principal component space. The gray trace represents the spontaneous activity prior to learning, whereas the colored trace represents the trajectory during the process of learning a sinusoidal wave with a 30-s period. The trace color encodes the amplitude of the computed reservoir output, illustrating the correspondence between the neural state space and the output signal. (F and G) Effect of network dynamics on the MSE during the learning phase for a 30-s sinusoidal wave. The mean firing rate, mean correlation, and effective rank were evaluated during spontaneous activity. Each plot represents a single recording. Sample sizes: n=39 recordings from 7 cultures at DIV 14 to 28 (homogeneous), n=42 recordings from 7 cultures at DIV 14 to 30 (lattice), and n=45 recordings from 8 cultures at DIV 14 to 30 (hierarchical).

An analysis of the readout weight distributions further revealed that the output signal y(t) was reconstructed through contributions from the entire network, which was consistent with the spatially complex dynamics observed in the engineered BNNs (Fig. 4D). Moreover, a trajectory analysis revealed that when feedback control was activated, the network dynamics rapidly transitioned from a highly variable state (gray) to a more structured state (colored), in which the system transitioned between two attractors that corresponded to the positive and negative peaks of the sine wave (Fig. 4E).

Fig. 4F summarizes how the regression performance depends on the firing rate and neural correlations within the BNN. Regardless of their network morphologies, BNNs with higher mean firing rates in their spontaneous activity exhibited lower MSEs during the learning phase. A high mean correlation was associated with poor performance. However, a low mean correlation did not necessarily guarantee high performance, as the MSEs in this range were highly variable. Overall, the best regression performance was thus observed in BNNs with high firing rates and low correlations.

A clearer inverse relationship was observed between the effective rank of the network activity and the MSE (Fig. 4G). The effective rank increased when the eigenvalues obtained via principal component analysis (PCA) were distributed more broadly into higher PCs (Fig. 2D) (37). Since a broader eigenvalue distribution implies that the corresponding network exhibits more complex and higher-dimensional dynamics (35), the result suggests that networks with richer internal dynamics tend to support a more accurate regression.

Collectively, these results underscore the importance of the dynamical complexity enabled by microfluidic patterning in the use of BNNs for functional computation.

Chaotic Wave Generation Task.

Finally, we tested the applicability of the FORCE-trained BNN to learn to generate a Lorenz attractor, which is a nonlinear chaotic time series (Fig. 5A). The input layer was extended to handle three dimensions, corresponding to the x, y, and z components, and the values of each component were encoded as input pulses via pulse amplitude modulation (38). A simple linear decoder, whose weights were trained by the FORCE algorithm, was used for decoding.

Fig. 5.


Learning a nonlinear chaotic equation. (A) The trajectory and governing equation of the Lorenz attractor. The actual target signal was obtained by reshaping the waveform, as detailed in Materials and Methods. (B) Time series of the x, y, and z components during learning, using a lattice network at DIV 24 as the reservoir layer. (C) Point-wise comparison between the target and predicted signals over time; each dot represents a sample point in time. (D) Lorenz map generated from the target signal (open circles) and the actual network output (filled circles). The axes indicate the peak amplitudes for the z-component at the tth and (t+1)th time steps. (E) Output of the same network as that shown in (B) after the FORCE learning was halted.

Similar to the periodic wave generation task, the engineered BNN, but not the homogeneous BNN, could be successfully trained to follow the target trajectory during the learning phase (Fig. 5B). When the predicted and target signals were compared over time, their pairwise correlations exceeded 0.8 in all dimensions, indicating that the system can capture the main structure of the Lorenz trajectory (Fig. 5C). However, the accuracy was not uniform across the state space: Further analysis revealed that the system reproduced the target more faithfully in the low-amplitude range, whereas the generation of high-amplitude peaks was less accurate (Fig. 5D). This limitation is likely due to a combination of the closed-loop latency and the current setting of the α parameter in FORCE learning. Although lowering α could yield improved performance, it would increase the risk of signal divergence. Finally, after the online weight update was halted, the output trajectory began to deviate from the target signal (Fig. 5E). Nonetheless, the system preserved its intrinsic dynamical structure, continuing to generate a chaotic oscillatory output. We therefore interpret this experiment as a proof-of-concept demonstration that engineered BNNs under closed-loop feedback can be trained to approximate chaotic dynamics beyond simple periodic waveforms rather than as their quantitative reconstruction. A more rigorous characterization of the chaotic wave generation task in low-latency feedback systems is left for future work.

Discussion

This study demonstrates that BNNs can be trained in real time via FORCE learning to autonomously generate structured temporal patterns, bridging in vitro neurodynamics with machine learning. While biological neurons inherently possess the capacity to self-organize and form functional networks even in vitro, the resulting networks tend to exhibit excessively synchronized bursting behaviors. Bioengineering technologies involving microfluidic devices offer an effective approach for bridging the gap between in vitro and in vivo BNNs (20, 39). In this context, we demonstrated that rat cortical neurons cultured in lattice and hierarchical configurations developed into networks that exhibited more diverse and structured activity patterns that could be used as the reservoir layer for learning various periodic waveforms, as well as more complex chaotic dynamics.

FORCE training was initially introduced as a framework to extract diverse temporal patterns from artificial neural networks composed of rate neurons (4, 6) and was later shown to be applicable to SNNs (36, 38). Its application to BNNs was first demonstrated by Yada et al. (10), who reported that the spontaneous activity in homogeneous BNNs can be controlled using FORCE learning to produce a constant output signal via a linear decoder. Our work extends this framework to enable the generation of temporally varying signals, which was accomplished by enriching the internal dynamics and stimulus responses of the in vitro BNNs by micropatterning.

Prior studies on BNN-based reservoir computing have largely relied on open-loop stimulation, demonstrating pattern classification and chaotic time series prediction in rat cortical cultures (9, 40) and human cortical organoids (12). Consistent with our observations, micropatterned BNN reservoirs have been shown to outperform homogeneous BNNs in pattern classification tasks; however, these results have been demonstrated only in open-loop configurations (11). Although they are less explored, closed-loop implementations of BNNs enable the system to self-sustain desired outputs and interact dynamically with an environment (10, 41). Moreover, online training of the readout weights, as opposed to the offline training used in many prior studies, is critical for real-time adaptation with reduced memory resources. Our platform integrates microfluidic cell engineering, HD-MEA-based feedback, and real-time FORCE, enabling in vitro BNNs to produce high-dimensional dynamics for learning and sustaining multisecond temporal patterns. This broadens the scope of BNN-based reservoir computing for, e.g., control and robotics-related applications.

One current limitation of this work is the decreased performance observed during the postlearning phase, when the weight updates were halted and the system ran autonomously. This could have arisen from the nature of FORCE learning, which tends to be fragile to reservoir perturbations. For example, in a previous study using SNNs, removing as few as 10% of the reservoir neurons resulted in a catastrophic decrease in task performance (36). In theory, the α parameter in Eq. 11 (Materials and Methods) scales the regularization term in the FORCE learning, which is based on recursive least squares (42); thus, increasing the value of this parameter would suppress overfitting. However, increasing α also slows the learning procedure and hampers the generation of high-amplitude signals. Consequently, its value was heuristically set to 1,000 in the present work to balance these trade-offs. Recent extensions of FORCE learning, including transfer-FORCE and composite-FORCE, may also improve the performance of BNN-based reservoir computing by increasing the convergence speed to compensate for the limited timescales of biological preparations and by stabilizing the weight updates under noisy conditions (43, 44). After the postlearning performance is stabilized, a systematic analysis of performance degradation with respect to the elapsed time and cycle counts will also be required to fully characterize the BNN-based system.

A second limitation relates to the latency (approximately 330 ms) of the closed-loop cycle. While experiments demonstrated that sinusoidal waves with fundamental periods of 4 s fall within the learnable regime, the performance increased as the period was prolonged (Fig. 3C). Moreover, the inability of the system to reproduce the nondifferentiable transitions of triangular and square waveforms further indicates that high-frequency components are difficult for the current system to track. As suggested by our recent SNN simulation study, the feedback delay critically affects the performance of a time series generation task (38). Because a large fraction of the feedback delay in the present system originates in the signal filtering step prior to performing spike detection, the delay could be reduced by using specialized hardware (45) or changing the filtering algorithm (46), which could further improve the overall task performance and increase the computational capacity of the closed-loop controlled BNN reservoir. Reducing the spike accumulation windows can further shorten the cycle period; however, this comes at the cost of increased fluctuations in spike counts and thus must be carefully considered (SI Appendix, Fig. S6). Alternatively, the incorporation of predictive compensation may further broaden the range of learnable targets under residual delays.

A promising future direction is related to the application of brain–machine interfaces and neuroprosthetic devices (47). In particular, the ability to modulate and decode neural dynamics in real time via closed-loop interactions may offer new strategies for augmenting or restoring motor and cognitive functions. Moreover, in vitro cell cultures, especially those derived from human-induced pluripotent stem cells, are increasingly gaining attention as alternatives to animal testing (48). As such, closed-loop BNN platforms could serve as versatile platforms for modeling and investigating the responses of living neurons under complex, task-driven conditions, extending beyond spontaneous activity paradigms. Finally, BNNs could also serve as novel computational resources, replacing the current power-demanding hardware that is used for machine learning (49). While many challenges remain (see the Discussion section in ref. 12), BNNs could offer multiple advantages, such as adaptability (12, 21), scalability, energy efficiency, and the opening of possibilities for developing “wetware” computing devices (50).

Materials and Methods

Neuronal Culture.

Primary cortical neurons were isolated from the cerebral cortices of embryonic day 18 (E18) rats. The cerebral cortices dissected from the rat embryos were placed in a 60-mm dish containing 4.5 mL of Hanks' balanced salt solution (HBSS; Gibco 14175-095) supplemented with 10 mM HEPES (Gibco 15030-015) and 1% penicillin/streptomycin (Sigma P-4333). The tissues were minced into approximately 1-mm³ pieces and transferred to a 15-mL centrifuge tube together with the HBSS. To the tube, 0.5 mL of 2.5% trypsin (Gibco 15090-046; final concentration: 0.25%) and 0.2 mL of 10 mg/mL DNase (Roche 10104159001; final concentration: 0.4 mg/mL) were added. The mixture was then incubated at 37 °C for 15 min.

Following the incubation step, the solution was completely aspirated, and the tissue was rinsed three times with fresh HBSS. The tissue was then mechanically dissociated by trituration with fire-polished glass pipettes to obtain a single-cell suspension.

Before cell plating, the electrode surface was coated with polyethyleneimine (PEI) and laminin to promote cell adhesion. First, a 1% Terg-a-zyme solution (Alconox 1304-1) was applied to the surface and left at room temperature for 2 h. After this, the chip was thoroughly rinsed three times with deionized water. The chip was then submerged in a beaker filled with 70% ethanol and left at room temperature for 30 min. Subsequently, the chip was thoroughly rinsed with deionized water and dried. A 0.07% PEI/borate buffer solution was then applied to cover the entire electrode surface and incubated for 1 h. Next, the surface was rinsed three times with deionized water, dried, and coated with a 50 to 100 µg/mL laminin/DPBS solution in the same manner as that used for the PEI solution. Finally, the surface was rinsed three times with deionized water, dried, and prepared for further steps.

After the coating, the microfluidic device was attached to the HD-MEA chip by placing it over the electrode area with forceps. After allowing it to attach overnight, the chip well was filled with neuronal plating medium consisting of minimum essential medium (Gibco 11095-080) supplemented with 5% fetal bovine serum and 0.55% glucose (Sigma G-8769). The chip was then degassed in a vacuum chamber to remove the air bubbles entrapped in the microfluidic device. The medium was then exchanged with fresh neuronal plating medium, and the chip was incubated in a CO2 incubator (37 °C, 5% CO2) for at least one night before cell seeding.

Cortical neurons were seeded onto the HD-MEA chip at a density of 700 cells/mm2. One hour after plating, the medium was completely replaced with Neurobasal medium (Gibco 21103-049) supplemented with 2% B-27 (Gibco 17504-044) and 1% GlutaMAX-I (Gibco 35050-061). Half of the medium was subsequently replaced with fresh Neurobasal medium twice per week. Recordings were performed using neurons at 14 to 31 DIV.

The neuronal growth observed on the HD-MEA chips was assessed by labeling the cells with the fluorescent chemical probe NeuO (51). First, half of the medium (500 µL) was removed from the well and mixed with 1.5 mL of fresh Neurobasal medium. The resulting mixture was split into two 1-mL aliquots. To one of the tubes, 2 µL of 100 µM NeuO solution was added, and the mixture was thoroughly stirred (STEMCELL Technologies 01801; final concentration: 2 µM). Subsequently, the remaining medium in the well was completely aspirated and replaced with the NeuO-containing medium. After a 30-min incubation period, the medium was completely replaced with NeuO-free medium, which had been prewarmed at 37 °C. Images were acquired with a stereomicroscope (Nikon SMZ18) equipped with an sCMOS camera (Andor Zyla 4.2P).

Microfluidic Devices.

The microfluidic devices for patterning primary neurons were fabricated as described previously (20, 39). Briefly, a master mold was fabricated by patterning an SU-8 photoresist on a silicon wafer, and the PDMS gel was poured onto the mold and thermally cured in a 70 °C oven for 2 h. The hardened PDMS microfluidic device was then peeled off the mold and cut with a clean razor blade. The device was then cleaned via sonication in ethanol for 5 min, rinsed three times in ddH2O, sterilized under UV light for 30 min, and finally attached to the recording area of the MaxOne HD-MEA chip as described in the previous section.

Each device consisted of 128 square wells, each with a side length of 100 µm, separated by 100 µm. The wells were connected by microchannels in either a lattice or hierarchical configuration (Fig. 1C). The microchannels were fabricated in two configurations that varied in height: one with a width of 5.1±0.5 µm and a height of 5.4±0.2 µm (mean ± SD, n=28 channels) and another with a width of 5.4±0.7 µm and a height of 1.2±0.04 µm (n=8 channels). Both devices were employed in experiments without systematic selection. The overall size of the microfluidic film, including the boundary regions, was approximately 3.3 mm by 1.7 mm.

HD-MEA Recording.

Neuronal recording and stimulation were performed via the MaxOne HD-MEA system (MaxWell Biosystems). The MaxOne chip features 26,400 Pt electrodes (without a Pt-black coating) arranged at a 17.5-µm pitch within a recording area of 3.85 × 2.10 mm². Extracellular potentials were sampled at 20 kHz, providing high temporal resolution. Signals whose amplitudes exceeded five times the SD of the preceding signal were detected as spikes. Furthermore, electrical pulse stimulation could be delivered via a subset of up to 32 electrodes. All recording and stimulation processes were implemented via Python-based custom scripts.

To identify “active electrodes” that were capable of capturing reliable neural activity, the spontaneous activity level was measured for 30 s across all electrodes within the square-well regions. Thresholds were then applied to the mean firing rate and mean firing amplitude of each electrode to select the active electrodes (fewer than 1,024), which were used for recording. From the selected recording electrodes, the top 10% of electrodes (on the basis of their mean spike amplitudes) were identified. Of these, 30 electrodes were randomly selected to deliver feedback stimulation during closed-loop control. On average, the 30 stimulation electrodes were distributed across 24.0±3.3 wells (mean ± SD; n=261 recordings). These stimulation electrodes were further divided into two groups based on the polarity of the output values produced for the periodic signal learning task. The stimulation electrodes were excluded from the set of recording electrodes, unless otherwise noted.
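For illustration, the selection procedure might be sketched as follows in Python; the stand-in amplitude data, array sizes, and random seed are assumptions, not recorded values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative electrode selection: keep the top 10% of active electrodes
# by mean spike amplitude, then draw 30 of them at random for stimulation.
mean_amp = rng.gamma(2.0, 20.0, size=1024)          # hypothetical amplitudes (uV)
n_top = int(np.ceil(0.1 * mean_amp.size))
top = np.argsort(mean_amp)[-n_top:]                 # indices of the top 10%
stim = rng.choice(top, size=30, replace=False)      # 30 stimulation electrodes
record = np.setdiff1d(np.arange(mean_amp.size), stim)  # remaining electrodes record
```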

Closed-Loop Control.

A closed-loop system was realized by controlling the HD-MEA system through a Python/C++ API. The closed-loop control scheme operated with a mean cycle duration of 332.5±1.5 ms (mean ± SD; n=138 recordings from 46 cultures) and an SD of 3.7±0.7 ms. Each cycle consisted of a 120 ms pause before spike counting began, during which the waveform was processed through an FIR filter and stimulation artifacts were removed, a spike accumulation period of 200 ms, and software/hardware latencies.

Two tasks were considered in this study, i.e., a sine wave generation task and a Lorenz attractor generation task. For the former task, the target signal d was set to

d(t) = sin(2πt/T), [1]

where T (= 4, 10, and 30 s) is the cycle period. For the latter task, we employed the classic Lorenz attractor given by

Ḋ_1 = 10 (D_2 − D_1), [2]
Ḋ_2 = D_1 (28 − D_3) − D_2, [3]
Ḋ_3 = D_1 D_2 − (8/3) D_3, [4]

and its affine transformation d(t) = [D_1(0.1t)/20, D_2(0.1t)/20, D_3(0.1t)/20 − 1] was then used as the target signal. The system exhibited deterministic chaos, and the phase space revealed the strange attractor illustrated in Fig. 5A. The value of the target waveform was sampled at each cycle; thus, the experiment was susceptible to fluctuations in the intervals of feedback cycles.
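For reference, the target trajectory can be generated with a short script. The following Python sketch integrates Eqs. 2–4 with a simple Euler scheme and applies the affine transformation above; the integrator, step size, and initial condition are our assumptions, as the text does not specify them.

```python
import numpy as np

# Sketch of target generation for the Lorenz task (Eqs. 2-4 plus the
# affine transform in the text). Euler integration is an assumption.
def lorenz_target(duration_s, dt=1e-3):
    n = int(duration_s / dt)
    D = np.array([1.0, 1.0, 1.0])          # arbitrary initial condition
    traj = np.empty((n, 3))
    for k in range(n):
        dD = np.array([
            10.0 * (D[1] - D[0]),              # Eq. 2
            D[0] * (28.0 - D[2]) - D[1],       # Eq. 3
            D[0] * D[1] - (8.0 / 3.0) * D[2],  # Eq. 4
        ])
        D = D + dt * dD
        traj[k] = D
    t = np.arange(n) * dt
    # d(t) uses the state at time 0.1*t (10x slowdown), scaled by 1/20,
    # with the third component shifted down by 1.
    idx = np.minimum((0.1 * t / dt).astype(int), n - 1)
    d = traj[idx] / 20.0
    d[:, 2] -= 1.0
    return t, d

t, d = lorenz_target(60.0)   # 60 s of target signal
```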

A schematic representation of the system is summarized in Fig. 1A. The system consisted of four modules, including the HD-MEA, each with the following functions:

  • Spike Counting (C++): Received spike information concerning the specified electrodes from the hardware in real time.

  • Data Routing (Python): Accumulated the spike counts of each electrode over a specified duration (set to 200 ms) and transferred the data to the subsequent modules. The dependence of task performance on the accumulation duration is summarized in SI Appendix, Fig. S6A.

  • Reservoir Computing (C++): Filtered the spike trains with a double exponential filter to generate the reservoir state vector x = [x_i] (36):
    ẋ_i = −x_i/τ_d + h_i, [5]
    ḣ_i = −h_i/τ_r + (1/(τ_r τ_d)) Σ_{t_ik<t} δ(t − t_ik), [6]
    where i is an index for neurons (electrodes), h_i is an auxiliary variable, and τ_r (= 0.5 s) and τ_d (= 2 s) are the rise and decay time constants, respectively. The dependence of task performance on τ_d is summarized in SI Appendix, Fig. S6B. t_ik is the kth spike fired by the ith neuron, and δ is the Dirac delta function with δ(0) = 1. The module then computed the output vector as follows:
    y(t) = W(t − Δt) x(t), [7]
    and optimized the output weight matrix W for use in the next time step. W was initialized as zero and optimized online via the FORCE learning algorithm (4); a minimal sketch of this update appears after this list:
    W(t) = W(t − Δt) − e(t) (P(t) x(t))ᵀ, [8]
    where e and P are the error vector and the inverse correlation matrix, respectively, which are calculated as follows:
    e(t) = W(t − Δt) x(t) − d(t), [9]
    P(t) = P(t − Δt) − [P(t − Δt) x(t) x(t)ᵀ P(t − Δt)] / [1 + x(t)ᵀ P(t − Δt) x(t)], [10]
    where d is the target signal vector. The initial value of P was set to
    P(0) = I/α, [11]
    where α is the “learning rate”, set to 1,000, and I is an identity matrix.
  • Stimulation Sequencer (Python): Produced commands for delivering feedback stimulation on the basis of the output and sent them back to the MEA hardware (MaxOne Hub). Biphasic voltage pulses, each with a duration of 200 µs per phase, were applied with amplitudes determined according to the reservoir output y(t). Stimulation electrodes were selected via the procedure described in HD-MEA Recording.

    For the sine wave generation task, the amplitude of the applied biphasic pulse A(t) was determined as follows:
    A(t) = A_max (2/(1 + exp(−k|y(t)|)) − 1), [12]
    where A_max (= 876 mV) is the maximum amplitude, y is the reservoir output, and k (= 4) is the gain for adjusting the nonlinear relationship between y and A (Fig. 3A). The parameters were set on the basis of pilot experiments examining the relationship between the pulse amplitude and evoked activity (Fig. 3B). The stimulation electrodes were divided into two groups. For the first group, the stimulation amplitude was set to A(t) when y(t) ≥ 0 and to 0 when y(t) < 0. For the second group, the amplitude was set to A(t) when y(t) ≤ 0 and to 0 when y(t) > 0.
    For the Lorenz attractor generation task, the stimulation electrodes were divided into three groups, each corresponding to one of the three components of the signal. The output vector y = (y_1, y_2, y_3) was then mapped onto the amplitude of the feedback stimulation as follows:
    A_i(t) = A_max / (1 + exp(−k y_i(t))), [13]
    where i is the index of the signal component, A_max (= 876 mV) is the maximum amplitude, and k (= 4) is the gain.
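As noted above, the FORCE update (Eqs. 8–11) and the sine-task amplitude mapping (Eq. 12) can be condensed into a short numpy sketch. The single-output shapes and calling pattern here are illustrative assumptions; the actual implementation runs within the C++/Python modules listed above.

```python
import numpy as np

ALPHA = 1000.0           # regularization parameter alpha (Eq. 11)
K, A_MAX = 4.0, 876.0    # gain and maximum amplitude in mV (Eq. 12)

def force_step(W, P, x, d):
    """One recursive least-squares (FORCE) update (Eqs. 8-10).

    W: (n_out, n_elec) readout, P: (n_elec, n_elec) inverse correlation
    matrix, x: (n_elec,) reservoir state, d: (n_out,) target sample.
    """
    e = W @ x - d                                  # error vector (Eq. 9)
    Px = P @ x
    P = P - np.outer(Px, Px) / (1.0 + x @ Px)      # update P (Eq. 10)
    W = W - np.outer(e, P @ x)                     # update W (Eq. 8)
    return W, P

def sine_amplitude(y):
    """Map the scalar output to a stimulation amplitude (Eq. 12)."""
    return A_MAX * (2.0 / (1.0 + np.exp(-K * np.abs(y))) - 1.0)

n_elec = 1024
P = np.eye(n_elec) / ALPHA      # initialization (Eq. 11)
W = np.zeros((1, n_elec))
```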

Data Analysis.

Spontaneous activity.

The extracellular potentials recorded from each electrode were filtered (1 to 3,000 Hz) and recorded in an HDF5 file, together with the spikes that were detected in real time by thresholding the extracellular signals at five times the SD. The spikes were then binned at 1 ms and filtered with the same double exponential filter described above to obtain a (virtual) firing rate, or a “reservoir state” in the case of the closed-loop experiments (see the next section).

The Pearson correlation coefficients r_ij between electrodes i and j were subsequently computed via the following equation:

r_ij = Σ_t (x_i(t) − x̄_i)(x_j(t) − x̄_j) / [√(Σ_t (x_i(t) − x̄_i)²) √(Σ_t (x_j(t) − x̄_j)²)], [14]

where x_i(t) and x_j(t) are the firing rates of electrodes i and j at time t, respectively, and x̄_i and x̄_j denote the mean firing rates of electrodes i and j, respectively.
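In practice, the full matrix of coefficients in Eq. 14 reduces to a single call to np.corrcoef on the firing rate matrix. The following Python equivalent (the analyses themselves were performed in MATLAB) uses stand-in data, and the averaging over the upper triangle is our assumption for how the network-averaged correlation in Fig. 2C could be obtained:

```python
import numpy as np

# Pairwise Pearson correlations (Eq. 14) between filtered firing rates.
rates = np.random.rand(64, 3000)    # stand-in (d electrodes x T time points)
R = np.corrcoef(rates)              # R[i, j] corresponds to r_ij in Eq. 14
# network-averaged correlation over distinct electrode pairs
mean_corr = R[np.triu_indices_from(R, k=1)].mean()
```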

PCA was used to extract the underlying, low-dimensional dynamics that govern the activity of a large population of neurons. For a recording with a firing rate matrix X ∈ ℝ^(T×d), where T and d are the numbers of time points and electrode channels, respectively, X was first centered on its mean as follows:

X̄_ij = X_ij − (1/T) Σ_{i=1}^{T} X_ij, [15]

with X_ij denoting the elements of X. Denoting the centered firing rate matrix X̄, the factor loadings and the variance explained by the PCs were then obtained via an eigendecomposition of the covariance matrix Σ = (1/(T−1)) X̄ᵀX̄:

Σ = VΛVᵀ = [v_1 ⋯ v_d] diag(λ_1, …, λ_d) [v_1 ⋯ v_d]ᵀ, [16]

where V is the eigenvector matrix and Λ is the diagonal matrix of the eigenvalues λ_i. The eigenvectors v_i correspond to the factor loadings of the ith PC, and each normalized eigenvalue p_i = λ_i / Σ_i λ_i is the variance explained by the corresponding PC.

To quantify the effective dimensionality of the network dynamics, the effective rank of X was calculated as follows (37):

erank(X) = exp(H(p_1, …, p_d)), [17]

where H is the Shannon entropy given by

H(p_1, …, p_d) = −Σ_{i=1}^{d} p_i log_e p_i. [18]
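The variance spectrum and effective rank (Eqs. 15–18) can be computed in a few lines; the stand-in firing rate matrix below is an assumption for illustration:

```python
import numpy as np

# PCA variance spectrum and effective rank (Eqs. 15-18) for a stand-in
# firing rate matrix X of shape (T time points x d electrodes).
X = np.random.rand(3000, 64)
Xc = X - X.mean(axis=0)                      # column-wise centering (Eq. 15)
Sigma = Xc.T @ Xc / (Xc.shape[0] - 1)        # covariance matrix
lam = np.linalg.eigvalsh(Sigma)[::-1]        # eigenvalues, descending order
p = lam / lam.sum()                          # variance explained per PC
H = -np.sum(p * np.log(np.clip(p, 1e-300, None)))  # Shannon entropy (Eq. 18)
erank = np.exp(H)                            # effective rank (Eq. 17)
```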

Closed-loop experiments.

The network dynamics produced under closed-loop control were analyzed following the same procedure used for spontaneous activity, except that the stimulation artifacts needed to be removed. Specifically, the artifacts were removed by excluding all spikes occurring within 10 ms before and after the stimulation.
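A minimal sketch of this artifact-exclusion step, with an illustrative function signature and toy spike times, is as follows:

```python
import numpy as np

# Discard spikes falling within 10 ms before or after any stimulation
# pulse. Times are in seconds; the signature is illustrative.
def remove_artifacts(spike_times, stim_times, win=0.010):
    spike_times = np.asarray(spike_times, dtype=float)
    keep = np.ones(spike_times.size, dtype=bool)
    for ts in np.atleast_1d(stim_times):
        keep &= np.abs(spike_times - ts) > win
    return spike_times[keep]

clean = remove_artifacts([0.005, 0.120, 0.305], stim_times=[0.300])
# 0.305 s falls within 10 ms of the pulse at 0.300 s and is dropped
```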

To characterize the trajectory of the neural dynamics, the activity observed under closed-loop control was projected into a PC space derived from the spontaneous activity. Let X̄ ∈ ℝ^(T×d) denote the centered reservoir state matrix during a closed-loop experiment, and let V′ = [v_1 ⋯ v_k] denote a submatrix of the eigenvector matrix derived from the spontaneous activity. The neural activity trajectory projected onto the k-dimensional PC space, X′ ∈ ℝ^(T×k), was then calculated as follows:

X′ = X̄V′. [19]
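The projection in Eq. 19 amounts to a single matrix product; in the sketch below, both matrices are random stand-ins (in practice, V comes from the eigendecomposition in Eq. 16):

```python
import numpy as np

# Projection of closed-loop reservoir states onto the spontaneous-activity
# PC basis (Eq. 19). Stand-in matrices for illustration only.
T, d, k = 3000, 64, 3
X_closed = np.random.rand(T, d)               # closed-loop reservoir states
V = np.linalg.qr(np.random.randn(d, d))[0]    # stand-in orthonormal PC basis
X_bar = X_closed - X_closed.mean(axis=0)      # center (Eq. 15)
X_proj = X_bar @ V[:, :k]                     # Eq. 19: (T x k) trajectory
```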

The analyses were performed in MATLAB (MathWorks).

Supplementary Material

Appendix 01 (PDF)

Acknowledgments

We thank Prof. Wilten Nicola at the University of Calgary for discussion on the inherent properties of the FORCE learning algorithm. We also thank Mr. Iori Morita at Tohoku University for the fabrication of the photomasks. This work was partly supported by the Ministry of Education, Culture, Sports, Science and Technology, Japan Grant-in-Aid for Transformative Research Areas (A) “Multicellular Neurobiocomputing” (24H02330, 24H02332, 24H02334), the Japan Society for the Promotion of Science KAKENHI (22H03657, 22K19821, 22KK0177, 23H00251, 23H02805, 23H03489, 25H00447), the Japan Science and Technology Agency (JST) Core Research for Evolutionary Science and Technology (JPMJCR19K3), JST Advanced Technologies for Carbon Neutrality (JPMJAN23F3), the Doctoral Program for World-leading Innovative & Smart Education for AI Electronics of Tohoku University, and the Cooperative Research Project Program of the Research Institute of Electrical Communication (RIEC), Tohoku University. This research was carried out at the Laboratory for Nanoelectronics and Spintronics, RIEC, Tohoku University.

Author contributions

Y. Sono and H.Y. designed research; Y. Sono and Y.N. performed research; T.S., Y. Sato, A.H.-I., Y.K., and S.S. contributed new reagents/analytic tools; Y. Sono, H.Y., and Y.N. analyzed data; A.H.-I. and S.S. cosupervised research, acquired funding; and Y. Sono and H.Y. wrote the paper.

Competing interests

The authors declare no competing interest.

Footnotes

This article is a PNAS Direct Submission.

Data, Materials, and Software Availability

Spike train data of spontaneous activity and the output signals during the reservoir computing tasks are archived in Zenodo at DOI: https://doi.org/10.5281/zenodo.18654317 (52).


References

1. Vyas S., Golub M. D., Sussillo D., Shenoy K. V., Computation through neural population dynamics. Annu. Rev. Neurosci. 43, 249–275 (2020).
2. Buonomano D. V., Maass W., State-dependent computations: Spatiotemporal processing in cortical networks. Nat. Rev. Neurosci. 10, 113–125 (2009).
3. Li J. S., Sarma A. A., Sejnowski T. J., Doyle J. C., Internal feedback in the cortical perception-action loop enables fast and accurate behavior. Proc. Natl. Acad. Sci. U.S.A. 120, e2300445120 (2023).
4. Sussillo D., Abbott L., Generating coherent patterns of activity from chaotic neural networks. Neuron 63, 544–557 (2009).
5. Pemberton J., Chadderton P., Costa R. P., Cerebellar-driven cortical dynamics can enable task acquisition, switching and consolidation. Nat. Commun. 15, 10913 (2024).
6. Yonemura Y., Katori Y., Dynamical predictive coding with reservoir computing performs noise-robust multi-sensory speech recognition. Front. Comput. Neurosci. 18, 1464603 (2024).
7. Lukoševičius M., Jaeger H., Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3, 127–149 (2009).
8. Maass W., Natschläger T., Markram H., Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002).
9. Dockendorf K. P., Park I., He P., Príncipe J. C., DeMarse T. B., Liquid state machines and cultured cortical networks: The separation property. Biosystems 95, 90–97 (2009).
10. Yada Y., Yasuda S., Takahashi H., Physical reservoir computing with FORCE learning in a living neuronal culture. Appl. Phys. Lett. 119, 173701 (2021).
11. Sumi T., et al., Biological neurons act as generalization filters in reservoir computing. Proc. Natl. Acad. Sci. U.S.A. 120, e2217008120 (2023).
12. Cai H., et al., Brain organoid reservoir computing for artificial intelligence. Nat. Electron. 6, 1032–1039 (2023).
13. Tanaka G., et al., Recent advances in physical reservoir computing: A review. Neural Netw. 115, 100–123 (2019).
14. Cucchi M., Abreu S., Ciccone G., Brunner D., Kleemann H., Hands-on reservoir computing: A tutorial for practical implementation. Neuromorphic Comput. Eng. 2, 032002 (2022).
15. Liang X., et al., Physical reservoir computing with emerging electronics. Nat. Electron. 7, 193–206 (2024).
16. Beniaguev D., Segev I., London M., Single cortical neurons as deep artificial neural networks. Neuron 109, 2727–2739.e3 (2021).
17. Shahaf G., Marom S., Learning in networks of cortical neurons. J. Neurosci. 21, 8782–8788 (2001).
18. Chiappalone M., Massobrio P., Martinoia S., Network plasticity in cortical assemblies. Eur. J. Neurosci. 28, 221–237 (2008).
19. Osaki T., et al., Complex activity and short-term plasticity of human cerebral organoids reciprocally connected with axons. Nat. Commun. 15, 2945 (2024).
20. Murota H., Yamamoto H., Monma N., Sato S., Hirano-Iwata A., Precision microfluidic control of neuronal ensembles in cultured cortical networks. Adv. Mater. Technol. 10, 2400894 (2025).
21. Kagan B. J., et al., In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 110, 3952–3969.e8 (2022).
22. Robbins A., et al., Goal-directed learning in cortical organoids. Cell Rep. 45, 116984 (2026).
23. Ikegaya Y., et al., Synfire chains and cortical songs: Temporal modules of cortical activity. Science 304, 559–564 (2004).
24. Stringer C., et al., Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893 (2019).
25. Luczak A., Barthó P., Harris K. D., Spontaneous events outline the realm of possible sensory responses in neocortical populations. Neuron 62, 413–425 (2009).
26. Murakami T., Matsui T., Uemura M., Ohki K., Modular strategy for development of the hierarchical visual network in mice. Nature 608, 578–585 (2022).
27. Deuker L., et al., Memory consolidation by replay of stimulus-specific neural activity. J. Neurosci. 33, 19373–19383 (2013).
28. Dimakou A., Pezzulo G., Zangrossi A., Corbetta M., The predictive nature of spontaneous brain activity across scales and species. Neuron 113, 1310–1332 (2025).
29. Soriano J., Martínez M. R., Tlusty T., Moses E., Development of input connections in neural cultures. Proc. Natl. Acad. Sci. U.S.A. 105, 13758–13763 (2008).
30. Laje R., Buonomano D. V., Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16, 925–933 (2013).
31. Soriano J., Neuronal cultures: Exploring biophysics, complex systems, and medicine in a dish. Biophysica 3, 181–202 (2023).
32. Yamamoto H., et al., Modular architecture facilitates noise-driven control of synchrony in neuronal networks. Sci. Adv. 9, eade1755 (2023).
33. Sato Y., et al., Microfluidic cell engineering on high-density microelectrode arrays for assessing structure-function relationships in living neuronal networks. Front. Neurosci. 16, 943310 (2023).
34. Monma N., et al., Directional intermodular coupling enriches functional complexity in biological neuronal networks. Neural Netw. 184, 106967 (2025).
35. Dahmen D., Grün S., Diesmann M., Helias M., Second type of criticality in the brain uncovers rich multiple-neuron dynamics. Proc. Natl. Acad. Sci. U.S.A. 116, 13051–13060 (2019).
36. Nicola W., Clopath C., Supervised learning in spiking neural networks with FORCE training. Nat. Commun. 8, 2208 (2017).
37. Roy O., Vetterli M., “The effective rank: A measure of effective dimensionality” in 2007 15th European Signal Processing Conference (EURASIP, 2007), pp. 606–610.
38. Sato Y., et al., In silico modeling of reservoir-based predictive coding in biological neuronal networks on microelectrode arrays. Jpn. J. Appl. Phys. 63, 108001 (2024).
39. Takemuro T., Yamamoto H., Sato S., Hirano-Iwata A., Polydimethylsiloxane microfluidic films for in vitro engineering of small-scale neuronal networks. Jpn. J. Appl. Phys. 59, 117001 (2020).
40. Lindell T. A. E., Ramstad O. H., Sandvig I., Sandvig A., Nichele S., “Chaotic time series prediction in biological neural network reservoirs on microelectrode arrays” in 2024 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2024), pp. 1–10.
41. Deng Y., et al., Optogenetically enhanced physical reservoir computing with in vitro neural networks for obstacle avoidance. J. Biomed. Opt. 30, 105004 (2025).
42. Haykin S., Adaptive Filter Theory (Pearson Education, Essex, UK, ed. 5, 2014).
43. Tamura H., Tanaka G., Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models. Neural Netw. 143, 550–563 (2021).
44. Li Y., Hu K., Nakajima K., Pan Y., “Composite FORCE learning of chaotic echo state networks for time-series prediction” in 2022 41st Chinese Control Conference (CCC) (IEEE, 2022), pp. 7355–7360.
45. Müller J., Bakkum D. J., Hierlemann A., Sub-millisecond closed-loop feedback stimulation between arbitrary sets of individual neurons. Front. Neural Circuits 6, 121 (2013).
46. de Cheveigné A., Nelken I., Filters: When, why, and how (not) to use them. Neuron 102, 280–293 (2019).
47. Beaubois R., et al., Biœmus: A new tool for neurological disorders studies through real-time emulation and hybridization using biomimetic spiking neural network. Nat. Commun. 15, 5142 (2024).
48. Ingber D. E., Human organs-on-chips for disease modelling, drug development and personalized medicine. Nat. Rev. Genet. 23, 467–491 (2022).
49. Mehonic A., Kenyon A. J., Brain-inspired computing needs a master plan. Nature 604, 255–260 (2022).
50. Smirnova L., et al., Organoid intelligence (OI): The new frontier in biocomputing and intelligence-in-a-dish. Front. Sci. 1, 1017235 (2023).
51. Er J. C., et al., NeuO: A fluorescent chemical probe for live neuron labeling. Angew. Chem. Int. Ed. 54, 2442–2446 (2015).
52. Sono Y., et al., Dataset for: Online supervised learning of temporal patterns in biological neural networks under feedback control. Zenodo. https://doi.org/10.5281/zenodo.18654317. Deposited 16 February 2026.


