Science Advances. 2025 Jan 10;11(2):eadr5262. doi: 10.1126/sciadv.adr5262

Harnessing spatiotemporal transformation in magnetic domains for nonvolatile physical reservoir computing

Jing Zhou 1,*, Jikang Xu 2, Lisen Huang 1, Sherry Lee Koon Yap 1, Shaohai Chen 1, Xiaobing Yan 2,*, Sze Ter Lim 1,*
PMCID: PMC11721694  PMID: 39792678

Abstract

Combining physics with computational models is increasingly recognized as a route to enhancing the performance and energy efficiency of neural networks. Physical reservoir computing uses the material dynamics of physical substrates for temporal data processing. Despite the ease of training, building an efficient reservoir remains challenging. Here, we explore beyond the conventional delay-based reservoirs by exploiting the spatiotemporal transformation in all-electric spintronic devices. Our nonvolatile spintronic reservoir effectively transforms the history dependence of reservoir states into the path dependence of magnetic domains. We configure devices triggered by different pulse widths as neurons, creating a reservoir characterized by strong nonlinearity and rich interconnections. Using a small reservoir of merely 14 physical nodes, we achieved a high recognition rate of 0.903 in written digit recognition and a low error rate of 0.076 in Mackey-Glass time series prediction on a proof-of-concept printed circuit board. This work presents a promising route to nonvolatile physical reservoir computing, which is adaptable to the larger memristor family and broader physical neural networks.


Spintronic memristors store information in time as variations in space, forming a reservoir for temporal data processing.

INTRODUCTION

Recent advances in neural networks attempt to incorporate physical principles into computational models, owing to the limitations of data-driven approaches on multiscale problems and the isomorphism between mathematical operations and hardware physics (1–6). For instance, the physics-informed neural network enforces physical laws as constraints or prior knowledge during the in silico learning process (2, 5). The deep physical neural network (PNN) explores beyond in silico learning by training physical transformations with backpropagation (3). A closely related yet distinct approach is physical reservoir computing (PRC), which harnesses the rich dynamics of physical processes for machine learning without training the physical process itself (1, 4, 6). The algorithm of RC was originally derived from echo state networks and liquid state machines (7–9). It is an improved computing paradigm for temporal data processing, since the conventional recurrent neural network (RNN) suffers from exploding (and vanishing) gradient problems (9). The architecture of RC features an input-driven reservoir of complex but fixed interconnectivity and a memoryless readout layer. Although RC benefits from clearly defined mathematical preconditions and a streamlined training process, translating these advantages into practicable reservoir-building strategies remains a longstanding challenge (10). This is where the physical implementation of RC, or PRC, was introduced. A wealth of physical substrates, including photonic (11–16), mechanical (17–21), electronic (22–26), quantum (27–31), and even biochemical systems (32–36), prove to have the intrinsic memory and nonlinear dynamics required by the black box–like reservoir. These physical substrates result in a hybrid in situ–in silico RC architecture that is more resource efficient for information processing.

Despite the versatility of PRC, the construction of nearly all physical reservoirs relies on a delay-based architecture. This means that the effect of an input on the physical system is not immediate but occurs after a time delay. For instance, using explicitly implemented delay lines in mechanical (37), optical (38), and electrical (23) circuits, the current output is fed back to the system to modify subsequent inputs. These circuitry-based reservoirs suffer from the less efficient serial operation enforced by the arbitrary delay line and from limited interconnectivity. More recent advances achieved delays by leveraging intrinsic relaxation processes of materials, such as magnetic damping of a spin-torque nano-oscillator (11, 26), filament rupture in metal oxide (22, 39), ferroelectric polarization back-switching (25), and reagent consumption in redox reactions (33). These relaxation processes naturally propagate the effects of an input to future time steps. The relaxation-based approaches demonstrate high fidelity to the black box–like nature of the reservoir, thanks to the hidden and extensive interconnection associated with nonlinear material dynamics. However, such material- or device-based reservoirs are restricted by the characteristic time constants of the relaxation processes, limiting their effectiveness to signals of similar timescales only. As expected, the delay-based conventional physical reservoirs are essentially volatile: The reservoir states are lost upon the removal of input. This compromises the application of PRC in many real-life situations where data are generated continuously over a prolonged period, such as near-sensor or in-sensor computing (1).

In this work, we address these challenges by demonstrating nonvolatile PRC using all-electric spintronic memristors. Instead of the physical sense of time, we interpret the history dependence of reservoir states as the path dependence of ferromagnetic domains. This transforms a volatile reservoir in the temporal domain to a nonvolatile reservoir in the spatial domain, removing the limit on input timescale. The electrical readouts of our reservoir states are finely manipulated by current-induced domain wall motion, where the combination of spin-orbit torque (SOT) and thermal effects induces strong nonlinearity. Stacking multiple dynamic responses of the ferromagnetic domains generates a high-dimensional reservoir characterized by strong interconnection, without the need for nonlinear preprocessing masks or time multiplexing. Our reservoir shows competitive performance in both written digit classification and Mackey-Glass time series prediction. The potential of on-chip learning is also demonstrated using a customized printed circuit board (PCB).

RESULTS

Device and reservoir building

All-electric spintronic memristors are used as the hardware building blocks of our reservoir (see Materials and Methods for details). SOT usually requires an external magnetic field for deterministic switching, introducing additional constraints on circuits while complicating the device operation. Our devices internalize this field, allowing them to be manipulated purely by electrical signals: The out-of-plane magnetization is modified by input voltage (Vp) pulses and gauged using Hall resistance (RH). This field-free feature is indispensable for circuit-level application toward large-scale integration with conventional chip technologies. Our devices are essentially nonvolatile as their magnetization states are retained without any input. More device properties were reported in our previous work (40).

Our reservoir-building strategies tap into the nonlinear path dependence of ferromagnetic domains. Figure 1A illustrates the path dependence in typical RH-Vp responses. Paths 1 and 2 form the full magnetic hysteresis loop, where the middle states are labeled A and B and the saturation states C and D. Despite sharing the same macroscopic readout of RH = 0.5, states A and B have different prior states (path 1 for A and path 2 for B). As a result, state A will take path 1, but state B will take path 3 toward state C. Similarly, states A and B take paths 4 and 2 toward state D, respectively. Paths 3 and 4 are the widely observed minor hysteresis loops in SOT-driven magnetization switching (41–43). Microscopically, the minor loops arise from the different domain structures of states A and B, which respond differently to input stimuli, forming different future trajectories. This translates conveniently into the basic dynamics of an echo state network: The reservoir state x(t) at time step t depends on both the current input u(t) and past states [x(t−n), n ∈ ℕ].
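The history dependence described above maps onto the standard echo state update. The following minimal sketch is illustrative only: the matrix sizes, random weights, and leak rate are assumptions for demonstration, not parameters from this work.

```python
import numpy as np

def esn_update(x_prev, u, W, W_in, leak=0.3):
    """One echo state network step: the new state x(t) mixes the
    previous state x(t-1) (history) with the nonlinearly transformed
    input u(t), mirroring the path dependence of the domains."""
    return (1 - leak) * x_prev + leak * np.tanh(W @ x_prev + W_in @ u)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5, 5))   # fixed internal connectivity
W_in = rng.normal(size=(5, 2))           # fixed input weights
x = np.zeros(5)
for u in ([1.0, 0.0], [0.0, 1.0]):       # two successive inputs
    x = esn_update(x, np.array(u), W, W_in)
```

In this picture, states A and B of Fig. 1A correspond to two different x(t−1) that share the same macroscopic readout yet evolve differently under the same u(t).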

Fig. 1. Reservoir-building strategies.

Fig. 1.

(A) Path dependence in electric switching of magnetization, with voltage pulse (Vp) as the input and Hall resistance (RH) as the output. Solid (dotted) lines indicate full (partial) switching loops. Filled and hollow circles label magnetization states and switching paths, respectively. (B) Nonlinear dependence of the number of voltage pulses (Np) for full switching on pulse width (τp). Inset illustrates the creation of virtual nodes (VNs) by projecting input at different pulse width intervals defined as the dynamic ranges (DRs). Δτp refers to the difference between the maximum and minimum τp. (C) A black box–like high-dimensional reservoir built by stacking multiple VNs. a.u., arbitrary units.

The electrical switching of perpendicular magnetization is highly sensitive to the input pulse width (τp). Switching triggered by small τp (<1 ns) is dominated by SOT, whereas thermal effects, such as Joule heating–induced lowering of the switching barrier, dominate for large τp (>1 μs). As a result, switching depends nonlinearly on τp, which is usually observed as a disproportionate increase in Vp or in the number of pulses (Np) required for full switching as τp decreases (44–46). Moreover, the switching behavior becomes more complex for intermediate pulse widths (1 ns < τp < 1 μs), since the contributions of SOT and thermal effects are comparable. Subsequently, tuning the working τp interval—defined as the dynamic range (DR)—of the spintronic device creates multiple virtual nodes (VNs) (37), each producing a different dynamic response to the same input (Fig. 1B). Owing to the strong nonlinearity in τp, the input-output characteristics of a VN depend not only on the width (Δτp) but also on the exact position of the DR. For instance, DR 2 and DR 4 in Fig. 1B, although having the same Δτp, will produce two distinct VNs. This allows a diverse combination of VNs, or neurons, to be stacked, collectively forming a reservoir of high-dimensional data space (Fig. 1C). Our VNs share only the neural function of VNs in photonic PRCs; they are generated by a fundamentally different approach because they do not rely on time-domain multiplexing enabled by delay lines (47). Our reservoir merges two advantages of conventional approaches on top of being nonvolatile. First, we use arrays of spintronic devices (14 in this work) of nominally the same physical characteristics (fig. S1) for the simultaneous generation of VNs, enabling the more preferred parallel computing for analog neural networks (1, 48). The very small device-to-device variations also enrich the reservoir dynamics by further diversifying the neuron responses (note S1).
Next, as we will show later, the same overarching τp dependence helps to establish extensive and complex interconnections among neurons, similar to the role of a long delay line in an optical circuit (14, 38).
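The DR-based construction of VNs can be sketched as follows. The linear mapping of a normalized input onto a pulse-width interval follows the text; the power-law device response standing in for the measured nonlinearity is purely hypothetical.

```python
import numpy as np

def pulse_width(u, dr):
    """Map a normalized input u in [0, 1] linearly onto a dynamic
    range dr = (tau_min, tau_max) of pulse widths (in ns)."""
    tau_min, tau_max = dr
    return tau_min + u * (tau_max - tau_min)

def vn_response(u, dr, alpha=2.0):
    """Illustrative nonlinear device response: switching per pulse
    grows superlinearly with pulse width (a hypothetical power law
    standing in for the measured dL_RD/dN_p curve)."""
    return pulse_width(u, dr) ** alpha

# Two VNs with equal DR width but different positions (cf. DR 2 vs. DR 4)
u = np.linspace(0.0, 1.0, 5)
vn_a = vn_response(u, (60.0, 100.0))
vn_b = vn_response(u, (160.0, 200.0))
# Same inputs, same DR width, yet distinct responses -> distinct neurons
```

Because the response is nonlinear in τp, shifting the DR along the τp axis produces a genuinely different neuron even when Δτp is unchanged.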

Nonlinearity and fading memory

We first demonstrate the path dependence using Hall measurements (Fig. 2A). The current-induced SOT from a longitudinal Vp partially reverses the magnetization, which is monitored by the Hall resistance RH = VH/IL, where VH is the transverse Hall voltage induced by a longitudinal current IL. By applying a train of Vp segments of opposite polarities but diminishing amplitudes, we observe minor switching loops that gradually spiral inward, as shown in Fig. 2B. The vertical dotted line highlights that the same Vp triggers different changes in RH depending on the present state. The horizontal dotted line, on the other hand, implies that the same RH state requires a different Vp to change, depending on the history of the state. Collectively, these observations establish that RH depends simultaneously on the input Vp and on its own history.

Fig. 2. Switching dynamics in spintronic memristor.

Fig. 2.

(A) Schematic setup of Hall and MOKE measurements. Vp, Vpre, and Hpre denote the amplitudes of the switching pulse, preset pulse, and preset field, respectively. LRD refers to the length of the remaining domain. (B) Variation of Hall resistance (RH) with decreasing Vp. τp = 200 ns. (C) Variation in LRD with pulse number (Np) for Vp = −4.1 V and τp = 80 to 200 ns. (D) MOKE images for τp = 80, 140, and 200 ns when 20, 50, and 70% of LRD remains. (E) Extracted domain wall motion per pulse (dLRD/dNp) from line fits in (C). Inset shows the one-dimensional (1D) and 2D domain wall motion at low and high τp. (F and G) RH induced by six pulse trains where only Vp at Np = 3 is different. Np = 0 is a preset pulse. The maximum absolute value of the voltage pulse is 3 V in (F) and 3.2 V in (G).

Next, using polar magneto-optic Kerr effect (MOKE) microscopy, we demonstrate that SOT-driven domain wall motion (49) is nonlinear in τp, which provides the practical tool for inducing nonlinear reservoir dynamics. Referring to Fig. 2A, input Vp switches the positive out-of-plane magnetization (+Mz, gray color) to the negative one (−Mz, black color), leading to the leftward movement of domain walls near the right terminal of the microstrip. We define a parameter LRD in Fig. 2A to quantify domain wall motion. In Fig. 2C, LRD is roughly linear in the number of pulses Np for τp = 80 to 200 ns, indicating the stable effect of Vp on domain walls. We verify that the step-like features near LRD ≈ 7 μm for τp = 120 ns and τp = 80 ns are unlikely to be caused by local pinning (note S2 and fig. S2). On closer examination, we find that the domain wall motion appears to be one-dimensional (1D) when τp = 80 ns, as shown by the domain snapshots in Fig. 2D. However, as τp increases to 200 ns, the +Mz domains shrink in both x and y directions, indicating 2D domain wall motion (movies S1 to S3). This is consistent with the extreme case of Vpre, where τp = 5 μs, such that one pulse is sufficient to switch most areas of the +Mz domains (Fig. 2A). Using the linear fits in Fig. 2C, we summarize in Fig. 2E the extracted domain movement per pulse (dLRD/dNp), which is unambiguously nonlinear in τp. The relation in Fig. 2E implies that if an input is encoded as the τp of an electric pulse, any linear change in the input will lead to a nonlinear change in LRD and thus a nonlinear change in the electric output (i.e., RH). Furthermore, since RH is proportional to the area (i.e., 2D) of ±Mz whereas LRD is 1D, we expect the nonlinearity of RH in τp to be even stronger than that of LRD in τp. Therefore, modulating τp is an effective means of high-dimensional nonlinear transformation of input data.
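The slope extraction behind Fig. 2E amounts to a linear fit of LRD against Np. A minimal sketch with synthetic data (the numbers below are arbitrary, not measured values):

```python
import numpy as np

def dw_speed(n_pulses, l_rd):
    """Extract domain wall displacement per pulse (dL_RD/dN_p)
    as the slope of a linear fit of L_RD versus pulse number."""
    slope, _intercept = np.polyfit(n_pulses, l_rd, 1)
    return slope

n = np.arange(10)
l = 12.0 - 0.8 * n  # synthetic shrinking-domain data (arbitrary units)
```

Repeating this fit for each τp yields the dLRD/dNp-versus-τp curve whose nonlinearity underpins the reservoir dynamics.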

The mathematical preconditions of a working reservoir (11, 22)—point-wise separation property and fading memory—are readily achieved in our spintronic neuron. The separation property requires the reservoir states x(t) to distinguish any differences in the paired input u(t) and output y(t) at time t. Fading memory means that the reservoir state x(t) depends on both present and past inputs [u(t−n), n ∈ ℕ], and this dependence weakens as n increases. To show these, we apply six trains of seven pulses to the same spintronic memristor with the same initial state, where only the third pulse is different in each train. The resulting RH states are summarized in Fig. 2F. The state trajectories begin to branch out at Np = 3, implying the separation property, since the memristor recognizes small differences in the input. Moreover, the differences in RH diminish for Np ≥ 4, showing that the effects of inputs at Np = 3 extend into the future and eventually fade out. In addition, our devices have highly repeatable readout and excellent endurance, which are crucial for the stable operation of the network. In the single-domain limit, the estimated thermal stability is high, consistent with the demonstrated long data retention time (>10 days) (note S3 and figs. S3 to S5). However, caution must be taken in exploiting this fading memory. In a control experiment, we increase the maximum magnitude of Vp from 3 V in Fig. 2F to 3.2 V in Fig. 2G and assign alternating pulse polarities. The resulting RH trajectories only differ at Np = 3, showing no fading memory at all, because the values of Vp in Fig. 2G are strong enough to preset the RH state such that prior history is washed out. Therefore, a prerequisite for establishing fading memory is to keep the input energy below the preset level. This echoes the delay-based PRCs (11, 23, 26), where the time delay between successive inputs must be less than the characteristic time constant to retain the fading memory property.
However, our approach differs from the delay-based PRCs in two critical aspects. First, our architecture has temporal information encoded within individual inputs, lifting the need for pulse train–like input of certain frequencies. The resulting task performance has a more flexible timescale, since the reservoir no longer requires specific rates of data input. Second, our method of encoding temporal information is only possible with a nonvolatile memristor, without which the reservoir would require continuous input streams to retain its state and would become delay based.
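The branching-then-reconverging behavior of Fig. 2F can be reproduced with a toy saturating state update. The model below is an illustrative stand-in, not the device physics: a pulse nudges a normalized readout toward saturation, with diminishing effect near the bounds.

```python
def memristor_step(r, v, gain=0.3):
    """Toy nonvolatile state update: pulse v nudges the normalized
    readout r toward +/-1, with a diminishing effect near saturation
    (hypothetical model, not the measured device equation)."""
    r_new = r + gain * v * (1 - abs(r))
    return max(-1.0, min(1.0, r_new))

def run_train(third_pulse):
    """Apply a seven-pulse train that differs only in its third pulse."""
    r, history = 0.0, []
    for v in [0.5, 0.5, third_pulse, 0.5, 0.5, 0.5, 0.5]:
        r = memristor_step(r, v)
        history.append(r)
    return history

h_lo = run_train(0.1)
h_hi = run_train(0.9)
diffs = [abs(a - b) for a, b in zip(h_lo, h_hi)]
# trajectories agree before step 3, branch at step 3, then the gap shrinks
```

Separation property: the two trajectories differ at step 3. Fading memory: the gap between them shrinks at every subsequent step as both states approach saturation.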

Written digit recognition

In this section, with reference to the benchmark task of written digit recognition, we illustrate the network structure and training protocol of our PRC. Referring to Fig. 3A, an input (un) of the digit “5” from the Modified National Institute of Standards and Technology (MNIST) database (50) is mapped to pulse trains of fixed amplitudes (−3.0 and 2.8 V for negative and positive inputs, respectively), where the pulse width (50 ns < τp < 200 ns) varies linearly with the magnitude of the input (see table S1 for examples). Figure 3B shows the reconstructed responses of four different VNs to the same input digits. All VNs successfully capture the outlines of the digits with varying details, as if the VNs “perceive” the same input in slightly different manners. Notably, horizontal streaks are observed in the VN responses. They arise from the large number of zeros in the pixels of the input images. The corresponding voltage pulse has minimal τp that barely alters the magnetization and thus RH. We consider this an advantage of a nonvolatile reservoir, which can tolerate a large number of low-energy inputs while memorizing the state. In other words, our spintronic reservoir does not have to be continuously excited. Furthermore, unlike prior works (22, 25), we did not binarize the image pixels, making our task more challenging. The RH trains produced by all VNs are concatenated to form the reservoir state xn of one digit, and all input digits’ reservoir states collectively form the final reservoir X.
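The pixel-to-pulse encoding can be sketched as follows. The amplitudes and τp range follow the text; the 8-bit normalization constant is an assumption for MNIST-style pixel values.

```python
def encode_pixel(u, u_max=255.0, tau_min=50.0, tau_max=200.0,
                 v_neg=-3.0, v_pos=2.8):
    """Encode one (non-binarized) pixel value as a voltage pulse:
    fixed amplitude set by the sign of u, and pulse width (in ns)
    varying linearly with |u|. Normalization by u_max is an
    assumption for 8-bit pixel values."""
    amp = v_neg if u < 0 else v_pos
    tau = tau_min + (abs(u) / u_max) * (tau_max - tau_min)
    return amp, tau
```

A zero pixel maps to the minimal pulse width of 50 ns, which barely alters the magnetization, while a full-scale pixel (255) maps to the maximal 200 ns; this is the origin of the horizontal streaks described above.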

Fig. 3. Network structure of reservoir computing.

Fig. 3.

(A) Comparison of the structures of a classic echo state network (red arrows) and our physical RC (blue arrows). (B) The same spintronic device responds to the same input differently when different DRs are chosen, resulting in different VNs. RH,max and RH,min refer to the maximum and minimum RH readouts, respectively.

In Fig. 3A, we also illustrate the software-based reservoir (red arrows) in a classic echo state network (see Materials and Methods for details), which differs from our hardware-based reservoir (blue arrows) in three aspects. First, our reservoir requires no dimension-raising preprocessing mask Win, which is indispensable in prior approaches (14, 23, 26, 30, 37, 38). In some cases of time multiplexing, the preprocessing mask relies on trigonometric functions to introduce nonlinearity while explicitly adding the dimension of time, which inevitably undermines the role of the reservoir. Next, our reservoir dynamics is not determined by an arbitrarily designed internal state matrix W, substantially lowering energy consumption since matrix manipulation is computationally expensive. Last, our reservoir has the fading memory and the nonlinearity internalized in the spintronic memristor’s switching dynamics, instead of relying on the leak rate and activation function. Overall, our physical reservoir has all the essential characteristics of a software-based reservoir while using far fewer predetermined hyperparameters and computation steps. The training procedure in our physical RC is similar to prior approaches—we use the combination of pseudoinverse and ridge regression to compute the output weight WOut (see Materials and Methods for details).
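The readout training reduces to ridge regression with a closed-form solution. A minimal sketch (matrix shapes and the ridge value are illustrative):

```python
import numpy as np

def train_readout(X, Y, ridge=1e-6):
    """Compute the linear readout by ridge regression:
        W_out = Y X^T (X X^T + ridge * I)^(-1)
    X: (n_features, n_samples) reservoir states;
    Y: (n_outputs, n_samples) training targets."""
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(n))

# Toy check: recover a known linear map from noiseless reservoir states
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 200))
W_true = rng.normal(size=(3, 6))
Y = W_true @ X
W_out = train_readout(X, Y)
```

Only W_out is trained; the reservoir itself (here, the states X produced by the hardware) is left untouched, which is the defining feature of RC.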

To deliver practically meaningful task performance, we develop a proof-of-principle PCB (Fig. 4A) while exercising 10-fold cross-validation for training (fig. S6). The PCB elevates the technical sophistication of our approach from the measurement of discrete devices to circuit-level integration. It ensures the parallel connection of 14 devices to the field-programmable gate array (FPGA) during the course of computation, mimicking the environment of on-chip learning toward embedded applications. Cross-validation ensures statistical validity—a requirement frequently overlooked in prior works—where up to 20,000 samples are taken as the data pool and grouped randomly into 10 equally populated subsets. Each subset serves as the testing data, while the rest serve as the training data in a single trial, which repeats 10 times to produce an average performance. Figure 4B shows some single-trial confusion matrices for data pools of size Ns. When Ns = 2000, the training accuracy is unity (ATrain = 1.00), but the testing accuracy is very low (ATest = 0.633). This implies that our reservoir can easily interpret a small dataset, but these samples are too few to generalize to untrained data. Despite the low ATest score, our reservoir makes human-like mistakes, such as confusing “9” with “4” and “8” with “3” (Fig. 4B, middle). Expectedly, increasing Ns decreases ATrain and increases ATest, both of which eventually saturate (Fig. 4C).
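The 10-fold protocol can be sketched as a random partition into 10 equally populated subsets; the fold count follows the text, while the pool size and seed below are illustrative.

```python
import numpy as np

def ten_fold_indices(n_samples, seed=0):
    """Randomly partition sample indices into 10 equally populated folds;
    each fold serves once as the test set, the rest as training data."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, 10)
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        yield train, test

splits = list(ten_fold_indices(2000))  # one (train, test) pair per trial
```

Averaging the per-fold accuracies (and their standard deviation, as in the error bars of Fig. 4, C and D) then gives the reported performance.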

Fig. 4. Written digit recognition with spintronic physical RC.

Fig. 4.

(A) Illustration of hardware circuit. Top: A photo of the customized PCB and a chip carrier. Bottom: Schematic components of the circuit. The spintronic devices and their supporting components are labeled as devices under test (DUT). (B) Confusion matrices for training (left) and testing (middle and right). (C) Dependence of the training (ATrain) and testing (ATest) accuracy rates on the number of trained samples (Ns). (D) Dependence of ATrain and ATest on the number of neurons (NVN). Error bars in (C) and (D) indicate 1 SD from 10-fold cross-validation. DAC, digital-to-analog converter; MUX, multiplexer.

By progressively increasing the number of neurons, or VNs (NVN), we demonstrate the robustness of our reservoir building strategy (Fig. 4D). In the control experiment with NVN = 0, the reservoir includes raw inputs only, producing the lowest accuracy rates, therefore implying the crucial role of hardware dynamics. The largest performance enhancement comes from the first neuron, where both ATrain and ATest increase from less than 0.7 for NVN = 0 to roughly 0.8 for NVN = 1. Additional neurons further increase the task performance, which appears to saturate at NVN = 6. The highest average accuracy rates are ATrain = 0.916 ± 0.001 and ATest = 0.880 ± 0.001 with Ns = 20,000 and NVN = 6. These results demonstrate that stacking different neurons derived from the same spintronic dynamics effectively enhances reservoir dynamics and task performance. However, the observed saturation and the small number of neurons in reservoir remain intriguing, which we will investigate further in the next section.

Mackey-Glass time series prediction

While written digit classification showcases our physical RC’s broader capabilities in machine learning tasks, time series prediction is a more specific benchmark task for RC (or RNNs in general) (1, 6). We use a dataset (51) of the Mackey-Glass equation (52), which describes a chaotic time series widely applied in physical and biological systems. The same linear mapping process as in Fig. 3A is used to generate the input to devices, where pulses have fixed amplitudes (−2.8 and 2.2 V) and varying pulse widths (50 ns < τp < 200 ns). After flushing the first 100 data points in the series to remove the initial condition, we train on 1000 data points to find WOut for autonomously predicting the next 400 data points. We prepared 156 neurons from 14 spintronic memristors, corresponding to 156 different DRs drawn from 14 values of Δτp. Inspired by the strategy of “titration of chaos” (53), we add the neurons one at a time to the reservoir and compute the normalized root mean square error (NRMSE) of testing. A neuron is kept in the reservoir only if it decreases the NRMSE. This process repeats for all 156 neurons to obtain the final reservoir and minimized NRMSE of each trial (Fig. 5A). The 156 neurons are randomized into 100 distinct trial sequences to study the effects of neuron interaction on reservoir dynamics. Figure 5B shows the prediction during a particular trial. As more neurons are added, the reservoir produces better predictions for the untrained data. The final reservoir has 57 neurons, and its prediction almost superimposes on the target, showing the robust performance of our physical reservoir on the Mackey-Glass time series.
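The titration-style neuron selection reduces to a greedy forward pass over candidate neurons. A minimal sketch; the evaluation function in the toy check at the end is hypothetical and stands in for training and testing a reservoir built from the given subset.

```python
import numpy as np

def nrmse(pred, target):
    """Normalized root mean square error of a prediction."""
    return np.sqrt(np.mean((pred - target) ** 2)) / np.std(target)

def greedy_select(candidates, evaluate):
    """Add candidate neurons one at a time; keep a neuron only if it
    lowers the testing error (cf. the 'titration' strategy).
    evaluate(subset) returns the error of a reservoir built from subset."""
    kept, best = [], np.inf
    for c in candidates:
        err = evaluate(kept + [c])
        if err < best:
            kept.append(c)
            best = err
    return kept, best

# Toy check with a hypothetical evaluation: candidates sum toward a
# target of 10; neurons that would overshoot the target are rejected.
kept, best = greedy_select([4, 9, 3, 2], lambda s: abs(10 - sum(s)))
```

Because acceptance depends on the neurons already kept, the same neuron can be helpful in one trial sequence and detrimental in another, which is exactly the order dependence probed by the 100 randomized trials.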

Fig. 5. Autonomous prediction of Mackey-Glass time series with spintronic physical RC.

Fig. 5.

(A) Schematics of experimental design for studying neuron interaction. (B) Improvement of prediction with increasing number of neurons (NVN). (C) Dependence of NRMSE on NVN. Inset shows the initial condition. (D) The selection frequency of each neuron (Ni) in 100 trials.

A quantitative analysis on the 100 trials reveals the strong and complex interconnection among neurons. The reduction of NRMSE with NVN for five representative trials is plotted in Fig. 5C in reddish colors. Regardless of the different trends and final NVN values, the five trials have saturation NRMSEs close to 0.08, which also applies to all other trials (fig. S7). This shows the enhancing effect of more neurons on the reservoir dynamics, leading to better predictions on time series, consistent with the observation in Fig. 4D. Moreover, we find that a specific neuron is included in the final reservoir of some trials but not others. This is strong evidence of the complex interaction among neurons because the same neuron can be either helpful or detrimental depending on the existing neurons in the reservoir. It also explains the saturated performance in Figs. 4D and 5C as the additional neuron does not interact with the existing ones well.

We summarize the occurrence frequency of neurons in the final reservoirs of all trials to further investigate neuron interaction. Referring to Fig. 5D, neurons with wider DRs (Δτp ≥ 100 ns) are picked more frequently by the reservoir than those with narrower DRs (Δτp ≤ 20 ns). We attribute this to the fact that wider DRs correspond to stronger nonlinearity. Unexpectedly, some neurons with Δτp ≤ 20 ns, where the underlying device response is approximately linear in τp, also have a fairly high selection frequency (~20). This is further strong evidence of the hidden interactions among neurons beyond their individual contributions to the reservoir dynamics. Therefore, the results in Fig. 5 (C and D) validate our hypothesis (Fig. 1B) that the overall τp dependence of the hardware facilitates the interaction among VNs.

The hidden interaction among neurons of different Δτp is also demonstrated using a control experiment. Instead of multiple Δτp, we set up 50 different neurons of a single Δτp (= 100 ns) and repeat the reservoir construction process in Fig. 5A. Five representative NRMSE trends are plotted in Fig. 5C in bluish colors. The saturated NRMSE is close to 0.15—roughly 80% higher than that of multiple Δτp—indicating a performance deterioration with a single Δτp. Moreover, for most trials with a single Δτp, the reduction in NRMSE is slower than in those with multiple Δτp, implying that an additional neuron of the same Δτp is less efficient in enriching the reservoir dynamics. In addition, we point out that the differences in the trends and saturation of NRMSE between single and multiple Δτp are unlikely to stem from initial conditions, as highlighted by the inset of Fig. 5C. For some trials with multiple Δτp, although the reservoir begins with underperforming neurons (NRMSE > 0.7), its performance rapidly improves after adding the first several neurons. These results suggest that the quantity and diversity of neurons are both critical for constructing a reservoir with rich internal dynamics for time series prediction.

DISCUSSION

In summary, we have demonstrated a spintronic implementation of nonvolatile PRC, where the spintronic hardware can be controlled purely by electrical signals. We establish a strong isomorphism between the path dependence in the electrical manipulation of ferromagnetic domains and the history dependence of reservoir states. Such spatiotemporal transformation removes the constraints of input timescale on the efficient responses of physical reservoir, therefore substantially boosting the application scope of PRC. In principle, the time delay between successive inputs to our spintronic reservoir can extend beyond several years since nonvolatile spintronic devices are known to have long data retention time (>10 years) (54). With the help of a customized PCB, we use a parallel computing framework by generating a large number of VNs from multiple spintronic devices. We show that the shared τp dependence of multidimensional domain wall motion mediates the interaction of VNs, contributing to the rich dynamics of the spintronic reservoir and its excellent performance on benchmark tasks.

Using our spintronic reservoir, the highest single-trial ATest is 0.903 in written digit classification, and the lowest NRMSE is 0.076 in Mackey-Glass time series prediction, both of which are among the best performances of PRC (table S2). Furthermore, the robust performance of our spintronic RC is enhanced by an ultralow reading energy (10.1 fJ) and a fairly low writing energy (0.92 nJ). The present writing energy can be further reduced to 37 pJ using nanoscale magnetic tunnel junctions (MTJs). Our circuit has a low power budget of 16.64 mW and a high processing speed of 100 kHz, both of which are promising when compared with prior works. The total times per trial for the written digit recognition and Mackey-Glass prediction tasks are 18 s and 90 ms, respectively (note S4 and tables S3 to S5). In the digit classification task, the energy consumption of our PRC network is estimated to be 99.6 and 94.8% lower than the software implementations of the state-of-the-art echo state network (7) and feedforward neural network (41), respectively (note S4 and table S6).

While the present work only offers a proof-of-concept solution to nonvolatile PRC, we envisage marked improvements in its functionality by incorporating more sophisticated materials, devices, and circuit components. First, strictly speaking, our architecture is partially parallel. Using a larger hardware array and a one-VN-one-memristor structure, a completely parallel architecture can be readily engineered. Second, replacing the Hall geometry with MTJs will enhance the signal-to-noise ratio, energy efficiency, and scalability (55). Notably, smaller MTJs also retain the nonlinear τp dependence (fig. S8) (56) and the nonvolatility required for PRC. For an MTJ array on the same current channel (57), the collective resistance readout depends on the switching probability, which is finely controlled by Vp and τp. With the help of an MTJ crossbar array as the hardware readout layer (58), an all-spintronic PNN can be built using our approach. Next, the domain wall motion in our device is chiral (40), which provides additional tuning knobs on the reservoir. A related technology is the magnetic skyrmion (59, 60), whose chirality and quasiparticle nature are likely to enrich reservoir dynamics. Last, our nonvolatile approach applies to both physical computation and physical transformation. Therefore, it can extend beyond the relatively rigid training paradigm of RC to the more hierarchical learning in deep neural networks or the so-called physics-aware training (3). We also expect our approach to be applicable to other nonvolatile memristors, such as ferroelectrics (61).

MATERIALS AND METHODS

Sample fabrication

The spintronic memristor has a stack of PtMn (15)/CoFeB (3)/W (1.5)/CoFeB (0.9)/MgO (1.8), where the numbers in parentheses are nominal thicknesses in nanometers. All layers were deposited at room temperature by magnetron sputtering on a 200-mm thermally oxidized silicon wafer, with a base chamber pressure of 2 × 10−8 torr. The field-free feature was enabled by an in-plane exchange bias field, which was engineered by annealing the device at 250°C under an in-plane field of 2 T for 2 hours, followed by field cooling to room temperature. The microdevices were patterned by photolithography and ion beam etching. The Hall bars for electrical measurement have a width of 5 μm for both the current channel and the Hall leads. The microstrips for MOKE microscopy are 5 μm by 25 μm.

Single-device measurement

A Tektronix AFG3252 function generator was used to apply voltage pulses to the Hall bars and microstrips. The Hall voltage was measured with an ac modulation technique using a Zurich MFLI lock-in amplifier, with a probing current of 50 μA and a modulation frequency of 317.3 Hz. Polar MOKE images were obtained using a MagVision Kerr system. To ensure the same original state before pulsing, a two-step initialization procedure was used, comprising a preset out-of-plane field (HPre) and a preset voltage pulse (VPre). For Hall bars, μ0HPre = 0.05 T and VPre = 2.5 V at τp = 200 ns. For microstrips, μ0HPre = −0.05 T and VPre = 3 V at τp = 5 μs.

Circuit and software

The PCB for the hardware network consists of two chip carriers and peripheral circuits. Two dies, each providing seven Hall bar devices, from the same 200-mm wafer were installed on the chip carriers, and electrical connections were made by wire bonding. An FPGA was configured with a digital-to-analog converter for sourcing voltage pulses. Twenty-eight 1:2 analog demultiplexers were used to tune the pulse widths for writing the 14 devices in parallel. The measurement circuit has four 8:1 multiplexers, a two-stage amplification module, and an analog-to-digital converter (ADC). The FPGA read, through the ADC, the Hall voltages induced by a probing current of 50 μA. The PCB was interfaced with a computer via USB. Python (version 3.7.11) was used to communicate with the FPGA for data processing and to run the training algorithms of the two benchmark tasks. The FPGA was programmed using Quartus II (version 13.1).

Tasks and training

The training algorithms in this work are adapted from the classic echo state network. The reservoir state is updated from time step t to t + 1 as

$$x_{t+1} = (1-\alpha)\,x_t + \alpha\tanh\left(W_{\text{in}}\,u_{t+1} + W x_t\right) \tag{1}$$

Here, u_t is the input; W_in is the weight matrix connecting the input to the reservoir states; W is the internal weight matrix for connections among the reservoir states; α is the leak rate; and the hyperbolic tangent is the activation function. Stacking x_t for all time steps generates the reservoir state matrix X, which relates to the real output y (or the predicted output ŷ) as

$$W_{\text{out}}\,X = y \tag{2}$$

where W_out is the weight matrix connecting the reservoir states and the output. To find W_out, we first rewrite Eq. 2 as $X^{T} W_{\text{out}}^{T} = y^{T}$, where $(\cdot)^{T}$ denotes the matrix transpose. Then

$$W_{\text{out}}^{T} = (XX^{T})^{-1} X\, y^{T} \tag{3}$$

where $(XX^{T})^{-1}X$ is the Moore-Penrose pseudoinverse of $X^{T}$. In practice, we use y from the training dataset and apply the numpy.linalg.solve() function from Python's numpy library to

$$(XX^{T} + \text{Reg}\cdot I)\, W_{\text{out}}^{T} = X\, y^{T} \tag{4}$$

to solve for W_out, where Reg (= 10^−8) is the regularization coefficient of the ridge regression and I is the identity matrix. The computed W_out and Eq. 2 are then used to predict ŷ for the testing dataset.
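As an illustration, the training procedure of Eqs. 1 to 4 can be sketched in a few lines of numpy. The network sizes, weight scales, and toy input series below are assumptions for demonstration only, not the parameters used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions, not the paper's values)
n_in, n_res, n_steps = 1, 50, 300
alpha, reg = 0.3, 1e-8          # leak rate; ridge coefficient Reg of Eq. 4

Win = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input weight matrix W_in
W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # internal weight matrix W
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # keep spectral radius below 1

u = rng.uniform(-1, 1, (n_steps, n_in))       # toy input series u_t
y = np.roll(u[:, 0], -1)                      # toy target: next-step input

# Eq. 1: leaky-integrator state update with tanh activation
x = np.zeros(n_res)
X = np.empty((n_res, n_steps))                # reservoir state matrix X
for t in range(n_steps):
    x = (1 - alpha) * x + alpha * np.tanh(Win @ u[t] + W @ x)
    X[:, t] = x

# Eq. 4: solve (X X^T + Reg*I) Wout^T = X y^T for the readout weights
Wout = np.linalg.solve(X @ X.T + reg * np.eye(n_res), X @ y)

# Eq. 2: linear readout gives the predicted output
y_pred = Wout @ X
```

Only the readout weights W_out are trained; Win and W stay fixed, which is the defining simplification of RC relative to a fully trained RNN.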

The MNIST written digit dataset has 60,000 training samples and 10,000 testing samples, all of which are labeled and have a resolution of 28 × 28 pixels. Since we exercise cross-validation, the distinction between training and testing samples in the original dataset is immaterial; we therefore use the first 20,000 samples of the training dataset. Note that Eq. 1 is not involved in building the physical reservoir for either task. For the written digit task, X was constructed using the process described in Fig. 3A. Ideally, X is three dimensional with a size of N_VN × N_time × N_s, where N_time is the number of time steps (equal to the number of pixels per image). However, to reduce computation cost and time, we downsized X to two dimensions by concatenating the N_VN and N_time axes, so that X becomes (N_VN × N_time) × N_s. For the same reasons, we downgraded the image resolution from 28 × 28 to 14 × 14 (N_time = 196). Since there are 10 distinct digits, W_out is 10 × (N_VN × N_time). We used a winner-take-all strategy (similar to the softmax function) to identify the predicted digit. Last, A_Train and A_Test were computed as N_R/N_s, where N_R is the number of correctly recognized digits.
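The reshaping and winner-take-all readout described above can be sketched as follows. Random numbers stand in for the hardware reservoir states of Fig. 3A, and all sizes (including the choice N_VN = 14 and the small sample count N_s) are illustrative assumptions rather than this work's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random stand-in for the hardware reservoir states (assumption);
# sizes: N_VN nodes x N_time time steps (14 x 14 pixels = 196) x N_s samples
N_VN, N_time, N_s, N_digit = 14, 196, 100, 10
X3d = rng.normal(size=(N_VN, N_time, N_s))
labels = rng.integers(0, N_digit, N_s)        # stand-in digit labels

# Concatenate the N_VN and N_time axes: X becomes (N_VN * N_time) x N_s
X = X3d.reshape(N_VN * N_time, N_s)

# One-hot targets (10 x N_s) and ridge solve as in Eq. 4
y = np.eye(N_digit)[labels].T
reg = 1e-8
Wout = np.linalg.solve(X @ X.T + reg * np.eye(N_VN * N_time), X @ y.T).T

# Winner-take-all: the predicted digit is the readout row with the largest value
pred = np.argmax(Wout @ X, axis=0)
accuracy = np.mean(pred == labels)            # A = N_R / N_s
```

The winner-take-all step replaces an explicit softmax: because argmax is monotone, both pick the same class while skipping the exponentials.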

The Mackey-Glass dataset has 10,000 data points generated with a delay of 17, for which the series is chaotic. X was constructed as depicted in Fig. 5A. Since N_s = 1 for this task, X has a size of N_VN × N_time for training, and W_out has a size of 1 × N_VN. In the autonomous prediction mode, the output ŷ_t predicted by W_out is fed back into the series as the input of the next time step, u_{t+1}. The NRMSE of the prediction is computed using

$$\text{NRMSE} = \sqrt{\dfrac{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}} \tag{5}$$

where $\bar{y}$ is the average of the real output in the testing dataset.
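A software sketch of the autonomous prediction loop and the NRMSE of Eq. 5 is given below. A small in-silico reservoir and a toy sine series stand in for the hardware states and the Mackey-Glass data; all sizes and parameters are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)

def nrmse(y_true, y_pred):
    """Eq. 5: RMS prediction error normalized by the spread of the real output."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    num = np.mean((y_true - y_pred) ** 2)
    den = np.mean((y_true - np.mean(y_true)) ** 2)
    return float(np.sqrt(num / den))

# Toy sine series standing in for Mackey-Glass (assumption for demonstration)
t = np.arange(2200) * 0.03
full = np.sin(t)
train, test = full[:2000], full[2000:]

# Small software reservoir (assumed sizes; in this work X comes from hardware)
n_res, alpha, reg = 100, 0.5, 1e-8
Win = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius below 1

# Drive the reservoir with the training series (Eq. 1, scalar input)
x = np.zeros(n_res)
X = np.empty((n_res, len(train) - 1))
for i in range(len(train) - 1):
    x = (1 - alpha) * x + alpha * np.tanh(Win * train[i] + W @ x)
    X[:, i] = x

# One-step-ahead readout via ridge regression (Eq. 4)
Wout = np.linalg.solve(X @ X.T + reg * np.eye(n_res), X @ train[1:])

# Autonomous mode: the prediction y_t is fed back as the next input u_{t+1}
u, preds = train[-1], []
for _ in range(len(test)):
    x = (1 - alpha) * x + alpha * np.tanh(Win * u + W @ x)
    u = float(Wout @ x)
    preds.append(u)

err = nrmse(test, preds)
```

Because each prediction becomes the next input, errors compound over the horizon, which is why autonomous-mode NRMSE is the stringent figure of merit for chaotic series.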

Acknowledgments

We wish to thank W. Zizhe for insightful discussion on reservoir dynamics and training methodology.

Funding: This study is supported by the Career Development Fund (A*STAR grant no. C222812013), Key Projects Supported by the Regional Innovation and Development Joint Fund (grant no. U23A20365), National Key R&D Plan "Nano Frontier" Key Special Project (grant nos. 2024YFA1208400 and 2021YFA1200502), Cultivation Projects of National Major R&D Project (grant no. 92164109), Disruptive Technology Innovation Project of the National Key R&D Program (grant no. DT01202402075), National Natural Science Foundation of China (grant no. 61874158), Special Project of Strategic Leading Science and Technology of Chinese Academy of Sciences (grant no. XDB44000000-7), Yanzhao Young Scientist Project of Hebei Province (grant no. F2023201076), Support Program for the Top Young Talents of Hebei Province (grant no. 70280011807), Interdisciplinary Research Program of Natural Science of Hebei University (grant no. DXK202101), Natural Science Foundation of Hebei Province (grant no. F2021201045), Baoding Science and Technology Plan Project (grant nos. 2172P011 and 2272P014), Outstanding Young Scientific Research and Innovation Team of Hebei University (grant no. 605020521001), Special Support Funds for National High Level Talents (grant no. 041500120001), Hebei Province High-Level Talent Funding Project (grant no. B20231003), and Science and Technology Project of Hebei Education Department (grant nos. QN2020178 and QN2021026).

Author contributions: J.Z. conceptualized this work. J.Z., X.Y., and S.T.L. supplied resources and supervised this work. J.Z. performed the electrical and MOKE measurements, composed Python codes for training, and wrote the original manuscript. J.X. designed and fabricated the PCB circuits. L.H. fabricated devices and performed MOKE measurement. S.L.K.Y. deposited films and fabricated devices. S.C. performed electrical measurement and supported algorithm development.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.

Supplementary Materials

The PDF file includes:

Supplementary Notes S1 to S4

Figs. S1 to S8

Tables S1 to S6

Legends for movies S1 to S3

References

sciadv.adr5262_sm.pdf (1.8MB, pdf)

Other Supplementary Material for this manuscript includes the following:

Movies S1 to S3

REFERENCES AND NOTES

1. Liang X., Tang J., Zhong Y., Gao B., Qian H., Wu H., Physical reservoir computing with emerging electronics. Nat. Electron. 7, 193–206 (2024).
2. Karniadakis G. E., Kevrekidis I. G., Lu L., Perdikaris P., Wang S., Yang L., Physics-informed machine learning. Nat. Rev. Phys. 3, 422–440 (2021).
3. Wright L. G., Onodera T., Stein M. M., Wang T., Schachter D. T., Hu Z., McMahon P. L., Deep physical neural networks trained with backpropagation. Nature 601, 549–555 (2022).
4. Jaeger H., Noheda B., van der Wiel W. G., Toward a formal theory for computing machines made out of whatever physics offers. Nat. Commun. 14, 4911 (2023).
5. Raissi M., Perdikaris P., Karniadakis G. E., Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
6. Tanaka G., Yamane T., Héroux J. B., Nakane R., Kanazawa N., Takeda S., Numata H., Nakano D., Hirose A., Recent advances in physical reservoir computing: A review. Neural Netw. 115, 100–123 (2019).
7. Jaeger H., The "echo state" approach to analysing and training recurrent neural networks-with an erratum note. German National Research Center for Information Technology 148, 13 (2001).
8. Maass W., Natschläger T., Markram H., Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002).
9. Lukoševičius M., Jaeger H., Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3, 127–149 (2009).
10. Yan M., Huang C., Bienstman P., Tino P., Lin W., Sun J., Emerging opportunities and challenges for the future of reservoir computing. Nat. Commun. 15, 2056 (2024).
11. K. Nakajima, I. Fischer, Reservoir Computing Theory, Physical Implementations, and Applications (Springer Singapore, 2021); 10.1007/978-981-13-1687-6.
12. Vandoorne K., Dierckx W., Schrauwen B., Verstraeten D., Baets R., Bienstman P., Campenhout J. V., Toward optical signal processing using photonic reservoir computing. Opt. Express 16, 11182–11192 (2008).
13. D. Brunner, M. C. Soriano, G. V. D. Sande, Photonic Reservoir Computing: Optical Recurrent Neural Networks (De Gruyter, 2019).
14. Paquot Y., Duport F., Smerieri A., Dambre J., Schrauwen B., Haelterman M., Massar S., Optoelectronic reservoir computing. Sci. Rep. 2, 287 (2012).
15. Larger L., Soriano M. C., Brunner D., Appeltant L., Gutierrez J. M., Pesquera L., Mirasso C. R., Fischer I., Photonic information processing beyond Turing: An optoelectronic implementation of reservoir computing. Opt. Express 20, 3241–3249 (2012).
16. Duport F., Schneider B., Smerieri A., Haelterman M., Massar S., All-optical reservoir computing. Opt. Express 20, 22783–22795 (2012).
17. Tanaka K., Tokudome Y., Minami Y., Honda S., Nakajima T., Takei K., Nakajima K., Self-organization of remote reservoirs: Transferring computation to spatially distant locations. Adv. Intell. Syst. 4, 2100166 (2022).
18. Bhovad P., Li S., Physical reservoir computing with origami and its application to robotic crawling. Sci. Rep. 11, 13002 (2021).
19. Nakajima K., Hauser H., Li T., Pfeifer R., Information processing via physical soft body. Sci. Rep. 5, 10487 (2015).
20. Caluwaerts K., Despraz J., Işçen A., Sabelhaus A. P., Bruce J., Schrauwen B., SunSpiral V., Design and control of compliant tensegrity robots through simulation and hardware validation. J. R. Soc. Interface 11, 20140520 (2014).
21. Nakajima K., Hauser H., Kang R., Guglielmino E., Caldwell D., Pfeifer R., A soft body as a reservoir: Case studies in a dynamic model of octopus-inspired soft robotic arm. Front. Comput. Neurosci. 7, 91 (2013).
22. Du C., Cai F., Zidan M. A., Ma W., Lee S. H., Lu W. D., Reservoir computing using dynamic memristors for temporal information processing. Nat. Commun. 8, 2204 (2017).
23. Appeltant L., Soriano M. C., Van der Sande G., Danckaert J., Massar S., Dambre J., Schrauwen B., Mirasso C. R., Fischer I., Information processing using a single dynamical node as complex system. Nat. Commun. 2, 468 (2011).
24. Zhong Y., Tang J., Li X., Liang X., Liu Z., Li Y., Xi Y., Yao P., Hao Z., Gao B., Qian H., Wu H., A memristor-based analogue reservoir computing system for real-time and power-efficient signal processing. Nat. Electron. 5, 672–681 (2022).
25. Chen Z., Li W., Fan Z., Dong S., Chen Y., Qin M., Zeng M., Lu X., Zhou G., Gao X., Liu J.-M., All-ferroelectric implementation of reservoir computing. Nat. Commun. 14, 3585 (2023).
26. Torrejon J., Riou M., Araujo F. A., Tsunegi S., Khalsa G., Querlioz D., Bortolotti P., Cros V., Yakushiji K., Fukushima A., Kubota H., Yuasa S., Stiles M. D., Grollier J., Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428–431 (2017).
27. Tran Q. H., Nakajima K., Learning temporal quantum tomography. Phys. Rev. Lett. 127, 260401 (2021).
28. Martínez-Peña R., Giorgi G. L., Nokkala J., Soriano M. C., Zambrini R., Dynamical phase transitions in quantum reservoir computing. Phys. Rev. Lett. 127, 100502 (2021).
29. Ghosh S., Paterek T., Liew T. C. H., Quantum neuromorphic platform for quantum state preparation. Phys. Rev. Lett. 123, 260404 (2019).
30. Fujii K., Nakajima K., Harnessing disordered-ensemble quantum dynamics for machine learning. Phys. Rev. Applied 8, 024030 (2017).
31. Ghosh S., Nakajima K., Krisnanda T., Fujii K., Liew T. C. H., Quantum neuromorphic computing with reservoir computing networks. Adv. Quantum Technol. 4, 2100053 (2021).
32. A. Goudarzi, M. R. Lakin, D. Stefanovic, "DNA reservoir computing: A novel molecular computing approach" in DNA Computing and Molecular Programming (Springer, 2013), pp. 76–89; 10.1007/978-3-319-01928-4_6.
33. Obst O., Trinchi A., Hardin S. G., Chadwick M., Cole I., Muster T. H., Hoschke N., Ostry D., Price D., Pham K. N., Wark T., Nano-scale reservoir computing. Nano Commun. Netw. 4, 189–196 (2013).
34. Liu X., Parhi K. K., Reservoir computing using DNA oscillators. ACS Synth. Biol. 11, 780–787 (2022).
35. Matsuo T., Sato D., Koh S.-G., Shima H., Naitoh Y., Akinaga H., Itoh T., Nokami T., Kobayashi M., Kinoshita K., Dynamic nonlinear behavior of ionic liquid-based reservoir computing devices. ACS Appl. Mater. Interfaces 14, 36890–36901 (2022).
36. B. Jones, D. Stekel, J. Rowe, C. Fernando, "Is there a liquid state machine in the bacterium Escherichia coli?" in 2007 IEEE Symposium on Artificial Life (IEEE, 2007), pp. 187–191.
37. Dion G., Mejaouri S., Sylvestre J., Reservoir computing with a single delay-coupled non-linear mechanical oscillator. J. Appl. Phys. 124, 152132 (2018).
38. Brunner D., Soriano M. C., Mirasso C. R., Fischer I., Parallel photonic information processing at gigabyte per second data rates using transient states. Nat. Commun. 4, 1364 (2013).
39. Moon J., Ma W., Shin J. H., Cai F., Du C., Lee S. H., Lu W. D., Temporal data classification and forecasting using a memristor-based reservoir computing system. Nat. Electron. 2, 480–487 (2019).
40. Zhou J., Huang L., Chung H. J., Huang J., Suraj T. S., Lin D. J. X., Qiu J., Chen S., Yap S. L. K., Toh Y. T., Ng S. K., Tan H. K., Soumyanarayanan A., Lim S. T., Chiral interlayer exchange coupling for asymmetric domain wall propagation in field-free magnetization switching. ACS Nano 17, 9049–9058 (2023).
41. Zhou J., Zhao T., Shu X., Liu L., Lin W., Chen S., Shi S., Yan X., Liu X., Chen J., Spin–orbit torque-induced domain nucleation for neuromorphic computing. Adv. Mater. 33, 2103672 (2021).
42. Fukami S., Zhang C., DuttaGupta S., Kurenkov A., Ohno H., Magnetization switching by spin–orbit torque in an antiferromagnet–ferromagnet bilayer system. Nat. Mater. 15, 535–541 (2016).
43. Chen S., Mishra R., Chen H., Yang H., Qiu X., Mimicking synaptic plasticity with a wedged Pt/Co/Pt spin–orbit torque device. J. Phys. D Appl. Phys. 55, 095001 (2021).
44. Garello K., Avci C. O., Miron I. M., Baumgartner M., Ghosh A., Auffret S., Boulle O., Gaudin G., Gambardella P., Ultrafast magnetization switching by spin-orbit torques. Appl. Phys. Lett. 105, 212402 (2014).
45. W.-B. Liao, T.-Y. Chen, Y.-C. Hsiao, C.-F. Pai, Pulse-width and temperature dependence of memristive spin–orbit torque switching. arXiv:2012.05531 (2020).
46. Sato N., Xue F., White R. M., Bi C., Wang S. X., Two-terminal spin–orbit torque magnetoresistive random access memory. Nat. Electron. 1, 508–511 (2018).
47. Van der Sande G., Brunner D., Soriano M. C., Advances in photonic reservoir computing. Nanophotonics 6, 561–576 (2017).
48. Kendall J. D., Kumar S., The building blocks of a brain-inspired computer. Appl. Phys. Rev. 7, 011305 (2020).
49. Lee O. J., Liu L. Q., Pai C. F., Li Y., Tseng H. W., Gowtham P. G., Park J. P., Ralph D. C., Buhrman R. A., Central role of domain wall depinning for perpendicular magnetization switching driven by spin torque from the spin Hall effect. Phys. Rev. B 89, 024418 (2014).
50. Y. LeCun, C. Cortes, C. J. C. Burges, The MNIST database of handwritten digits (1998); https://yann.lecun.com/exdb/mnist/.
51. M. Lukoševičius, Simple Echo State Network implementations (2016); https://mantas.info/code/simple_esn/.
52. Glass L., Mackey M., Mackey-Glass equation. Scholarpedia 5, 6908 (2010).
53. Devolder T., Rontani D., Petit-Watelot S., Bouzehouane K., Andrieu S., Létang J., Yoo M.-W., Adam J.-P., Chappert C., Girod S., Cros V., Sciamanna M., Kim J.-V., Chaos in magnetic nanocontact vortex oscillators. Phys. Rev. Lett. 123, 147701 (2019).
54. Krizakova V., Perumkunnil M., Couet S., Gambardella P., Garello K., Spin-orbit torque switching of magnetic tunnel junctions for memory applications. J. Magn. Magn. Mater. 562, 169692 (2022).
55. Parkin S. S. P., Kaiser C., Panchula A., Rice P. M., Hughes B., Samant M., Yang S.-H., Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers. Nat. Mater. 3, 862–867 (2004).
56. Zhou J., Huang L., Yap S. L. K., Lin D. J. X., Chen B., Chen S., Wong S. K., Qiu J., Lourembam J., Soumyanarayanan A., Lim S. T., Synergizing intrinsic symmetry breaking with spin–orbit torques for field-free perpendicular magnetic tunnel junction. APL Mater. 12, 081105 (2024).
57. Kumar A., Lin D. J. X., Das D., Huang L., Yap S. L. K., Tan H. R., Tan H. K., Lim R. J. J., Toh Y. T., Chen S., Lim S. T., Fong X., Ho P., Multistate compound magnetic tunnel junction synapses for digital recognition. ACS Appl. Mater. Interfaces 16, 10335–10343 (2024).
58. Liu L., Wang D., Wang D., Sun Y., Lin H., Gong X., Zhang Y., Tang R., Mai Z., Hou Z., Yang Y., Li P., Wang L., Luo Q., Li L., Xing G., Liu M., Domain wall magnetic tunnel junction-based artificial synapses and neurons for all-spin neuromorphic hardware. Nat. Commun. 15, 4534 (2024).
59. Chen S., Lourembam J., Ho P., Toh A. K. J., Huang J., Chen X., Tan H. K., Yap S. L. K., Lim R. J. J., Tan H. R., Suraj T. S., Sim M. I., Toh Y. T., Lim I., Lim N. C. B., Zhou J., Chung H. J., Lim S. T., Soumyanarayanan A., All-electrical skyrmionic magnetic tunnel junction. Nature 627, 522–527 (2024).
60. Pinna D., Abreu Araujo F., Kim J. V., Cros V., Querlioz D., Bessiere P., Droulez J., Grollier J., Skyrmion gas manipulation for probabilistic computing. Phys. Rev. Applied 9, 064018 (2018).
61. Everschor-Sitte K., Majumdar A., Wolk K., Meier D., Topological magnetic and ferroelectric systems for reservoir computing. Nat. Rev. Phys. 6, 455–462 (2024).
62. Alomar M. L., Skibinsky-Gitlin E. S., Frasser C. F., Canals V., Isern E., Roca M., Rosselló J. L., Efficient parallel implementation of reservoir computing systems. Neural Comput. Appl. 32, 2299–2313 (2020).
63. Alomar M. L., Soriano M. C., Escalona-Morán M., Canals V., Fischer I., Mirasso C. R., Rosselló J. L., Digital implementation of a single dynamical node reservoir computer. IEEE Trans. Circuits Syst. II Express Briefs 62, 977–981 (2015).
64. Alomar M. L., Canals V., Perez-Mora N., Martínez-Moll V., Rosselló J. L., FPGA-based stochastic echo state networks for time-series forecasting. Comput. Intell. Neurosci. 2016, 3917892 (2016).
65. Dai Z., Xiang F., He C., Wang Z., Zhang W., Li Y., Yue J., Shang D., A scalable small-footprint time-space-pipelined architecture for reservoir computing. IEEE Trans. Circuits Syst. II Express Briefs 70, 3069–3073 (2023).
66. Liang X., Zhong Y., Tang J., Liu Z., Yao P., Sun K., Zhang Q., Gao B., Heidari H., Qian H., Wu H., Rotating neurons for all-analog implementation of cyclic reservoir computing. Nat. Commun. 13, 1549 (2022).
67. Milano G., Pedretti G., Montano K., Ricci S., Hashemkhani S., Boarino L., Ielmini D., Ricciardi C., In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks. Nat. Mater. 21, 195–202 (2022).
68. Gartside J. C., Stenning K. D., Vanstone A., Holder H. H., Arroo D. M., Dion T., Caravelli F., Kurebayashi H., Branford W. R., Reconfigurable training and reservoir computing in an artificial spin-vortex ice via spin-wave fingerprinting. Nat. Nanotechnol. 17, 460–469 (2022).
69. W. Ma, T. Hennen, M. Lueker-Boden, R. Galbraith, J. Goode, W. H. Choi, P. F. Chiu, J. A. J. Rupp, D. J. Wouters, R. Waser, D. Bedau, "A Mott insulator-based oscillator circuit for reservoir computing" in 2020 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2020), pp. 1–5.
70. Zhong W. M., Luo C. L., Tang X. G., Lu X. B., Dai J. Y., Dynamic FET-based memristor with relaxor antiferroelectric HfO2 gate dielectric for fast reservoir computing. Mater. Today Nano 23, 100357 (2023).
71. Wang S., Li Y., Wang D., Zhang W., Chen X., Dong D., Wang S., Zhang X., Lin P., Gallicchio C., Xu X., Liu Q., Cheng K.-T., Wang Z., Shang D., Liu M., Echo state graph neural networks with analogue random resistive memory arrays. Nat. Mach. Intell. 5, 104–113 (2023).
72. Y. Wang, Q. Wang, S. Shi, X. He, Z. Tang, K. Zhao, X. Chu, "Benchmarking the performance and energy efficiency of AI accelerators for AI training" in 2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID) (IEEE, 2020), pp. 744–751.

