Advanced Science
2025 Apr 26;12(22):2417735. doi: 10.1002/advs.202417735

All‐Electrical Control of Spin Synapses for Neuromorphic Computing: Bridging Multi‐State Memory with Quantization for Efficient Neural Networks

Tzu‐Chuan Hsin 1, Chun‐Yi Lin 1, Po‐Chuan Wang 1, Chun Yang 1, Chi‐Feng Pai 1
PMCID: PMC12165024  PMID: 40285600

Abstract

The development of energy‐efficient, brain‐inspired neuromorphic computing demands advanced memory devices capable of mimicking synaptic behavior to achieve high accuracy and adaptability. In this study, three types of all‐electrically controlled, field‐free spin synapse devices are presented, each designed with a unique spintronic structure: the Néel orange‐peel effect, interlayer Dzyaloshinskii‐Moriya interaction (i‐DMI), and tilted anisotropy. To systematically evaluate their neuromorphic potential, a benchmarking framework is introduced that characterizes cycle‐to‐cycle (CTC) variation, a critical factor for reliable synaptic weight updates. Among these designs, the tilted anisotropy device achieves an 11‐state memory with minimal CTC variation (2%), making it particularly suited for complex synaptic emulation. Following this benchmarking, the multi‐state device is implemented in convolutional neural networks (CNNs) using post‐training quantization. Results indicate that per‐channel quantization, particularly with the min‐max and mean squared error (MSE) observers, enhances classification accuracy on the CIFAR‐10 dataset, achieving up to 81.51% and 81.12% in ResNet‐18, values that closely approach the baseline accuracy. This evaluation underscores the potential of field‐free spintronic synapses in neuromorphic architectures, offering an area‐efficient solution that integrates multi‐state functionality with robust switching performance. The findings highlight the promise of these devices in advancing neuromorphic computing, contributing to energy‐efficient, high‐performance systems inspired by neural processes.

Keywords: field‐free switching, neuromorphic computing, neural network, perpendicular magnetic anisotropy, spin‐orbit torques


This study develops three all‐electrically controlled, field‐free spintronic synapse devices for neuromorphic computing. The tilted anisotropy device achieves an 11‐state memory with minimal cycle‐to‐cycle variation (2%), enabling high‐accuracy neural network quantization (81.51% in ResNet‐18). These findings position spintronic synapses as a promising solution for energy‐efficient AI hardware.


1. Introduction

The rapid rise of artificial intelligence (AI) has spurred demand for energy‐efficient and innovative computing solutions beyond traditional von Neumann architectures.[ 1 , 2 , 3 , 4 ] Neuromorphic computing, an unconventional computing paradigm that implements neural network algorithms on next‐generation hardware platforms, has gained significant attention.[ 5 , 6 , 7 , 8 ] Among various emerging non‐volatile memories, while resistive random‐access memory (RRAM) and phase‐change memory (PCM) offer high ON/OFF ratios and multilevel resistance states, their reliance on stochastic resistance tuning mechanisms, such as ion migration and phase transitions,[ 9 , 10 , 11 ] causes material degradation and variability, limiting long‐term reliability.[ 12 , 13 , 14 , 15 , 16 ] In contrast, spintronic synapses achieve deterministic multi‐state switching through domain wall motion, ensuring superior endurance, long retention times, and seamless CMOS compatibility.[ 17 , 18 ] By leveraging spin‐orbit torque (SOT), spintronics enables gradual domain nucleation or domain wall propagation, allowing multiple intermediate magnetization states that emulate synaptic weight updates and neural behavior.[ 2 , 19 , 20 , 21 , 22 ] However, achieving precise electric control of domain wall motion remains a key challenge for their practical implementation. Additionally, SOT‐based memory devices often require an external magnetic field for deterministic switching due to symmetry constraints,[ 23 , 24 ] as seen in W/CoFeB/MgO‐based devices with perpendicular magnetic anisotropy (PMA)[ 17 , 25 ] and antiferromagnetic systems such as Pt/Co/NiO[ 26 ] and Pt/Co/IrMn.[ 27 ]

Several studies have explored field‐free solutions to enhance the suitability of SOT devices for neuromorphic computing. For instance, the PtMn/[Co/Ni]n system has demonstrated field‐free SOT switching via exchange bias, as shown by Kurenkov et al.[ 8 , 20 ] However, its current‐induced switching ratio is significantly lower than that of field‐driven switching, leading to substantial inefficiencies. Building on inversion symmetry characteristics, CoPt systems have exhibited analog‐like behavior under current pulses of varying amplitudes.[ 19 , 28 ] However, modulating memristive behavior through pulse amplitude rather than pulse number is impractical for hardware implementation, as it complicates circuit design and integration with peripheral systems.[ 29 , 30 ] Recently, FePt systems have shown promise for neuromorphic computing by incorporating field‐free mechanisms such as interlayer exchange coupling.[ 31 , 32 ] However, significant cycle‐to‐cycle (CTC) variations have been observed during memristive switching, particularly in the absence of an applied in‐plane field, compromising reliability and scalability for neuromorphic applications.[ 33 ] Among existing field‐free methods, the CoPt system with tilted anisotropy stands out as a promising candidate. It not only enables pure current‐induced switching but also exhibits remarkably low CTC variation,[ 34 , 35 ] making it well‐suited for high‐precision synaptic weight updates. This necessitates a systematic study to elucidate the intricate relationship between field‐free SOT switching and stable analog‐like behavior.

In the realm of hardware implementation of neuromorphic computing using memristive devices, two key challenges have been widely recognized. First, while numerous studies have explored the use of discrete resistance states as weights in artificial neural networks (ANNs) with the MNIST dataset,[ 19 , 26 , 31 , 32 , 36 ] research on their performance with more complex datasets, such as CIFAR‐10, and in more advanced architectures, such as convolutional neural networks (CNNs), remains limited.[ 37 , 38 , 39 , 40 ] Second, the quantization process, which is essential for bridging the gap between resistance states and software models, is often presented in a simplified manner or lacks comprehensive discussion. The most common approach relies on using the minimum and maximum values of the weight matrix during quantization,[ 36 , 41 , 42 , 43 , 44 ] raising the question of whether this is the most effective method. Alternative techniques have been proposed in software‐based quantization range selection to mitigate potential errors,[ 44 ] highlighting the need for further exploration of optimal quantization strategies.

Our study builds on this foundation by systematically addressing these gaps. In this work, we employ three types of optimized all‐electrical, field‐free spin synapse devices[ 45 , 46 ] to establish a benchmarking framework that spans different material designs and device structures, with the goal of achieving field‐free, ideal memristive behavior for neural network computation. Through pulse‐number experiments, each device is configured to enable distinct intermediate resistance states corresponding to multi‐level memory. This study introduces two key novelties. First, our systematic benchmarking approach allows for a direct comparison of synaptic performance across diverse device configurations. Second, we demonstrate an 11‐distinguishable‐state behavior in the tilted anisotropy device, identifying its low CTC variation and ideal characteristics for neural network computation. These resistance states are then integrated into convolutional neural networks (CNNs) through post‐training quantization (PTQ) with various observers, serving as a critical step in simulating hardware‐based inference. Specifically, PTQ approximates the mapping of floating‐point weights in neural network models to discrete resistance states in the devices, laying the groundwork for efficient hardware implementation. By bridging memristive spintronic devices with neural networks, this study provides a robust framework for synapse emulation in neuromorphic computing systems, advancing the potential for energy‐efficient and scalable artificial intelligence applications.

2. Results and Discussion

2.1. Field‐Free Current‐Induced Magnetization Switching

Three distinct stacked structures are prepared with different field‐free solutions to explore all‐electrical control of spin synapses. Figure 1a–c shows schematics of the different switching mechanisms. Figure 1d–f shows the corresponding current‐induced magnetization switching results, obtained by measuring the anomalous Hall resistance RH with a current pulse width of t pulse = 50 ms.

Figure 1.


Three all‐electrical control devices with different field‐free solutions. Schematics of the samples with a) Sample I: the Néel orange‐peel effect, b) Sample II: the i‐DMI effect, and c) Sample III: tilted anisotropy. d–f) Representative current‐induced magnetization switching loops corresponding to the three field‐free solutions.

Sample I: CoFeB(4)/W(1.4)/CoFeB(1.6)/MgO(1.1)/Ta(2), in which the field‐free current‐induced magnetization switching mechanism can be explained by the Néel orange‐peel effect[ 46 , 47 , 48 , 49 , 50 , 51 ] in Figure 1a,d. This effect originates from roughness‐induced magnetostatic coupling at the interfaces between the in‐plane and out‐of‐plane magnetic layers. Correlated surface roughness creates localized magnetic poles that give rise to a weak but stable in‐plane effective field, which serves as an intrinsic symmetry‐breaking field even in the absence of external magnetic bias.[ 47 , 50 ] A CoFeB(4)/W(1.4)/CoFeB(1.6) T‐type structure is utilized in this scenario, where in‐plane magnetic anisotropy (IMA) and PMA layers coexist. The red arrows represent the magnetization directions. By aligning the magnetization M of the in‐plane CoFeB layer along the current‐channel direction ( M CoFeB // ± x), the surface roughness of the CoFeB/W/CoFeB stack generates a significant Néel orange‐peel effective field within the structure, acting as a built‐in bias field (H x), which reorients the domain wall moments in the PMA layer to align parallel to M CoFeB. This alignment facilitates domain wall propagation under the influence of the current‐induced SOT effective field.[ 24 , 52 ] However, the thickness of the W spacer layer is limited to 1.4 nm,[ 46 , 53 ] and a marked decline in performance is observed when the thickness increases to 2 nm, as indicated by Kao et al.[ 46 ] Hence, SOT switching cannot be fully optimized, given that the 1.4 nm W layer is still thinner than its spin diffusion length (≈3 nm).[ 54 , 55 ] The corresponding zero‐field SOT effective fields and pulse width dependence of current‐induced magnetization switching are documented in Figure S1 (Supporting Information).

Sample II: Ta(0.5)/CoFeB(1.2)/Pt(2.5)/Co(0.6)/Pt(0.6)/Ta(2), which possesses the strong i‐DMI effect[ 45 , 56 , 57 , 58 , 59 , 60 ] as shown in Figure 1b,e. The i‐DMI originates from spin‐orbit coupling in systems with broken inversion symmetry, particularly in ferromagnet/heavy metal/ferromagnet heterostructures.[ 56 , 58 ] Herein, CoFeB(1.2) has IMA while Co(0.6) sandwiched between two Pt layers exhibits PMA, building a T‐type structure. However, as indicated by the red arrows, the magnetization of the in‐plane CoFeB layer aligns perpendicularly with the current channel ( M // ±y). The coherent switching between the in‐plane CoFeB and the PMA Co layers makes the current‐induced switching possible via the i‐DMI energy term −D·( M Co × M CoFeB)[ 45 , 58 , 60 ] with the oblique deposition of the spacer Pt. The wave function interference at both Pt/Co interfaces leads to oscillatory interlayer exchange coupling, where the strength of the i‐DMI depends sensitively on the capping thickness. The optimized asymmetric Pt design enhances the i‐DMI strength through constructive interference of the electron wave functions.[ 60 ] This strong coupling leads to efficient current‐induced magnetization switching with a large current‐induced effective field H z eff/I DC of 9.5 Oe mA−1 and 100% switching ratio (current‐scan vs field‐scan), as detailed in Figure S2 (Supporting Information). Please note that the orientation of the D ‐vector should align along the x‐axis.

Sample III: Ta(0.5)/Pt(6.1)/Co(0.7)/Pt(1), which is the tilted anisotropy sample[ 24 , 45 , 61 , 62 , 63 ] in Figure 1c,f. As implied by the name, the anisotropy axis in this device is deliberately tilted away from the z‐axis. This is achieved by obliquely depositing the bottom Pt layer. The tilted anisotropy originates from the structural modifications in the Pt(6.1) layer, where the (111) texture tilts due to the deposition angle. This tilt affects the neighboring Co layer, leading to M Co (the red arrow) tilting from the z‐axis toward the yz plane.[ 63 ] One hallmark of the tilted anisotropy device is the nonlinear trend of the current‐induced effective field, which has been verified experimentally.[ 45 , 63 ] This increasing H z eff with the applied current pulse also leads to a 100% field‐free current‐induced SOT switching, as shown in Figure S3 (Supporting Information). The objective of this study is to compare and identify the most effective field‐free switching mechanism for neuromorphic computing applications, as detailed in the subsequent experimental section.

2.2. Nonvolatile 11‐State Memory

Extensive electrical measurements are conducted to evaluate the memristor's ability to store multiple states and emulate synaptic behavior. Figure 2a illustrates the transport measurement setup and the detection of current‐induced magnetization switching via changes in RH , with t pulse set to 50 µs in the tilted anisotropy device (Sample III). This duration represents the minimum time required for complete switching in the micron‐sized sample, as shown in Figure S3 (Supporting Information). Furthermore, the selection of a 50 µs pulse width and a pulse amplitude of 13 mA is determined through systematic optimization to maximize the number of distinguishable resistance states while minimizing the CTC variation (details provided in Section S4, Supporting Information). The sequence of current pulses is illustrated in Figure 2b. The process begins by aligning the magnetization downward (−z‐direction) using a large reset current (I reset) of −23.5 mA. Subsequently, 19 consecutive current pulses (I pulse), each with t pulse = 50 µs and a consistent magnitude of 13 mA, are applied. Read current pulses are then applied after each I reset and/or I pulse, with a magnitude of 0.1 mA and a pulse width of 50 ms. The resulting RH readout, shown in Figure 2c, exhibits a gradual increase characteristic of long‐term potentiation (LTP), eventually reaching a saturated state with nonlinear behavior.[ 64 ] Moreover, Figure 2c reveals that the Hall resistance difference ∆RH , when switched by pulses of uniform magnitude, is ≈68% of the ∆RH observed in Figure 2a, where pulses of varying amplitude are used. Notably, each cycle exhibits remarkable reproducibility. Following the same procedure, the process can be conducted in reverse, as shown in Figure 2d, where I reset = 23.5 mA and I pulse = −13 mA. The resulting RH readout, shown in Figure 2e, exhibits a gradual decrease, indicative of long‐term depression (LTD). Hence, the full range of magnetization states can be accessed by applying two opposing trains of current pulses.
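
As a rough illustration of the saturating LTP response described above, pulse‐number data of this kind are often fitted with an exponential model. The sketch below is our assumption, not the authors' fit; the function name `ltp_curve` and the nonlinearity parameter `nu` are illustrative, and the resistance window is normalized rather than taken from the measured RH.

```python
import numpy as np

def ltp_curve(n_pulses, r_min, r_max, nu):
    """Saturating-exponential model commonly used to fit pulse-number LTP
    data. r_min/r_max span the resistance window; nu sets the nonlinearity
    (roughly the number of pulses to reach ~63% of the window).
    Illustrative only -- the paper reports measured R_H, not a fitted model.
    """
    n = np.arange(n_pulses + 1)
    return r_min + (r_max - r_min) * (1.0 - np.exp(-n / nu))

# 19 potentiating pulses after a reset, mirroring the pulse train of Figure 2b,c
r = ltp_curve(19, r_min=0.0, r_max=1.0, nu=5.0)
```

Reversing the sign of the update (LTD) corresponds to running the same curve from `r_max` back toward `r_min`.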

Figure 2.


Memristive behaviors in the tilted anisotropy device (Sample III) with sequential 50 µs current pulses. a) The upper panel shows the Hall bar device and coordinate system for electrical measurements. The lower panel demonstrates current‐induced magnetization switching under a current pulse of 50 µs. b) The first applied train of current pulses and c) the corresponding response of RH , where 19 equal positive current pulses of 13 mA are applied after a large reset current of −23.5 mA. d) The second applied train of current pulses and e) the corresponding response of RH , where 19 equal negative current pulses of −13 mA are applied after a large reset current of 23.5 mA.

A further set of measurements is conducted to determine the number of stable states and assess the reliability and range of resistance variation. As shown in Figure  3a, a new pulse sequence is applied using the same I reset and I pulse values. Read currents are introduced after a set number of pulses at specified intervals (n pulse = 1, 2, 3, 5, 10, 19), with these intervals increasing as RH approaches saturation to ensure that the multilevel resistance states are evenly distributed. The read current is applied only after I reset and at these six specified pulse intervals. Figure 3b reveals seven distinct resistance states that emerge in response to repeated sequences of these seven pulse numbers. To quantify the variability of each state, 25 data points are collected for each of the seven distinct magnetic states by repeating the sequence accordingly, as shown in Figure 3c. Similarly, by reversing the pulse sequence (I reset = 23.5 mA and I pulse = −13 mA), the opposite resistance modulation can be achieved, as demonstrated in Figure 3d,e. Another set of seven distinct states is collected, as illustrated in Figure 3f.

Figure 3.


Demonstration of 11 distinguishable states in the tilted anisotropy device (Sample III) with sequential 50 µs current pulses. a,d) The applied train of six specific current pulse numbers (coined as sequences), and b,e) the corresponding response of RH . Seven distinguishable states were obtained by c) the first applied chain (down‐to‐up) and f) the second applied chain (up‐to‐down). g) The cumulative distribution functions (CDFs) versus RH for 11 distinguishable states.

With these two pulse sequences, the full range of RH values is expressed through a total of 11 discrete states, eliminating overlapping states. To quantify the CTC variability, the standard deviation (σ) of the 25 data points per state is computed, and cumulative distribution functions (CDFs) are constructed against the measured RH ​ to visualize this variation, as shown in Figure 3g.[ 33 ] To provide a quantitative and comparable benchmark for evaluating material and device design, the CTC variation is defined as the ratio σ/RHtotal, where RHtotal represents the total Hall resistance difference switched by current pulses. The highest CTC variation among the 11 states is 2.8%, while the rest do not exceed 2%, confirming the robustness and stability of the 11 discrete resistance states in the tilted anisotropy device. Furthermore, the same multilevel switching behavior is reproduced across three devices fabricated under identical conditions, each exhibiting at least 9 distinguishable states with CTC variations below 2.7% (Figure S10, Supporting Information), further validating the consistency and robustness of our approach.
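
The CTC benchmark described above (the standard deviation of 25 readouts per state, normalized by the total resistance window, plus the cumulative distribution functions of Figure 3g) can be sketched in a few lines. The readout values below are synthetic stand‐ins for the measured RH, not experimental data.

```python
import numpy as np

def ctc_variation(state_readouts, r_total):
    """Cycle-to-cycle variation per state, defined as sigma / R_H_total:
    sigma is the standard deviation over repeated readouts of one state,
    r_total the full Hall-resistance window switched by current pulses."""
    sigma = np.std(state_readouts, axis=1, ddof=1)
    return sigma / r_total

def empirical_cdf(samples):
    """Sorted readouts and cumulative probabilities, as used for a
    CDF-versus-R_H plot of the distinguishable states."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

# Synthetic stand-in: 11 states x 25 repeats with a small Gaussian spread.
rng = np.random.default_rng(0)
states = np.linspace(0.0, 1.0, 11)
readouts = states[:, None] + rng.normal(0.0, 0.01, size=(11, 25))
variation = ctc_variation(readouts, r_total=1.0)
```

With this definition, well‐separated states require the per‐state variation to stay well below the spacing between adjacent states.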

In addition to demonstrating the capability of achieving 11 distinguishable resistance states, it is crucial to highlight the advantage of the tilted anisotropy device in minimizing CTC variation compared to other emerging memory technologies. The average CTC variation of 2% surpasses that of RRAM and PCM, which typically exhibit variations ranging from 10% to 100%.[ 12 , 13 , 14 , 15 , 33 ] This discrepancy can be attributed to the stochastic resistance tuning mechanisms in RRAM and PCM, which stem from ion migration and phase transition effects,[ 9 , 10 , 11 ] introducing uncertainty in neuromorphic computing. In contrast, the stable multilevel states in our device underscore the feasibility of spintronic synapses as a robust and energy‐efficient alternative.

In this study, we utilize RH ​ from the anomalous Hall effect (AHE) as the readout parameter to verify the physical feasibility of achieving and controlling 11 discrete resistance states, which forms the foundation for our subsequent simulation and application discussions. Our results demonstrate that through precise domain wall motion control, stable intermediate states can be achieved without external magnetic fields, with a CTC variation of only ≈2%, confirming the robustness and reproducibility of this multi‐state behavior. This represents a significant advancement compared to previous works,[ 8 , 17 , 19 , 20 , 25 , 26 , 27 , 28 ] where CTC variations were rarely quantified, leading to unreliable resistance states for neuromorphic computing applications. While RH measurements effectively validate multi‐state switching, practical large‐scale implementations may require alternative readout mechanisms with higher signal contrast and lower variability. One promising approach is tunnel magnetoresistance (TMR) in magnetic tunnel junctions (MTJs), which can provide a significantly higher ON/OFF ratio, typically exceeding several hundred percent.[ 65 ] Such an enhancement would improve state distinguishability and enable more precise weight updates in neuromorphic architectures. Although RH is sufficient for fundamental verification, integrating an MTJ‐based readout could further enhance robustness, making spintronic synapses more viable for large‐scale applications.[ 29 , 30 ]

2.3. The Benchmark of Three Field‐Free Solutions for Neuromorphic Computing

In analyzing the suitability of three distinct field‐free spin synapse devices for neuromorphic computing applications, their fundamental switching characteristics and memristive behaviors are examined. Identical pulse number measurements are performed on the Néel orange‐peel effect and i‐DMI samples to determine σ/RHtotal and the available stable states (see Figures S7 and S8 in the Supporting Information) with Table  1 summarizing the comparative benchmarks.

Table 1.

The benchmark of three field‐free solutions for neuromorphic computing.

| Metric | Sample I, Néel orange‐peel effect | Sample II, i‐DMI | Sample III, tilted anisotropy |
| --- | --- | --- | --- |
| Switching ratio, current‐scan vs field‐scan (%) | 71 | 100 | 100 |
| Switching ratio, pulse number vs field‐scan (%) | 34 | 100 | 68 |
| Effective field H z eff/I DC (Oe mA−1) | 3.4 | 9.7 | 3.5 |
| J pulse (1010 A m−2) | 9 @ 50 ms | 17 @ 10 µs | 32 @ 50 µs |
| CTC variation (σ/RHtotal, %) | 7.2 | 4.5 | 2.8 |
| Available state number | 3 | 4 | 11 |
| Thermal stability factor ∆ | 56.5 | 20.8 | 30 |

First, the switching performance, which sets the memristive window, is evaluated by comparing the switching ratios obtained from field scans, current scans, and pulse‐number experiments. For example, in the Néel orange‐peel effect sample, current‐induced switching yields a switching ratio of 71% relative to field scans, whereas the pulse‐number experiment shows a reduced ratio of 34%. This reduction can be attributed to the relatively modest current‐induced effective field (Hzeff/I DC = 3.4 Oe mA−1), which is likely a consequence of the thin 1.4 nm W layer. This layer represents a trade‐off between enhancing the Néel orange‐peel effect and maintaining optimal SOT performance.[ 46 ] In contrast, the i‐DMI sample maintains a 100% switching ratio across different tests, thanks to its robust current‐induced effective field of Hzeff/I DC = 9.7 Oe mA−1. This large value is generated by the coherent switching of the magnetic layers within the strong i‐DMI coupling, which is induced by the asymmetric Pt configuration on both sides of the Co layer and the resulting electron wave function interference.[ 45 , 60 ] For the tilted anisotropy sample, the current‐induced effective field increases with the applied current, enabling a 100% switching ratio in the current‐scan mode. This characteristic can be attributed to a spin‐transfer‐torque‐like behavior, and the nonlinear trend in the current‐induced effective field, as shown in Figure S3c (Supporting Information), has also been reported in previous works.[ 45 , 63 ] Nevertheless, in the pulse‐number experiment only a non‐saturated ratio of 68% is achieved, because pulses of a single fixed amplitude are applied to generate the effective field. The pulse width selection follows the minimum duration required for full switching in each sample.[ 20 ] Notably, the Néel orange‐peel effect sample requires a pulse width of 50 ms due to the high resistivity of its W/CoFeB‐based heterostructure. This requirement highlights the advantages of Pt‐based devices, which offer lower resistivity and higher energy efficiency.

Beyond switching performance, the number of intermediate states available in each device varies significantly. The Néel orange‐peel effect sample exhibits only three intermediate states, as indicated by its large σ/RHtotal​ value of 7.2% and a relatively low switching ratio. The i‐DMI sample, despite its moderate σ/RHtotal​ value of 4.5% and 100% switching ratio, also supports a limited number of four intermediate states. The strong current‐induced effective field in this sample favors a binary switching mechanism, as shown in Figure S8b (Supporting Information).

Thermal stability is another crucial parameter for evaluating device performance, particularly in field‐free spintronic applications. The Néel orange‐peel effect sample exhibits a high thermal stability factor of ∆ = 56.5, making it suitable for storage memory applications that require long retention times exceeding ten years. The i‐DMI sample, in contrast, offers robust current‐induced switching with low power consumption, characterized by a zero thermal critical switching current density of J c0 ≈ 2.05  ×  1011 A m−2 and distinct binary behavior. These characteristics position it as an ideal candidate for last‐level cache memory applications, where a lower thermal stability factor of 20.8 is permissible due to the frequent refreshing of data.[ 66 ] Ultimately, the tilted anisotropy sample is the most promising candidate for neuromorphic computing applications due to its moderate current‐induced effective field, a thermal stability factor of ∆ ≈ 30, and the presence of 11 intermediate states, which enable complex synaptic function emulation. Note that in this benchmark, device‐to‐device variation is carefully evaluated. As discussed in Section 2.2 and illustrated in Figures S9 and S10 (Supporting Information), the tilted anisotropy devices consistently demonstrated robust multilevel switching behavior across samples, confirming minimal impact from fabrication‐induced variations.

2.4. Implementation of 11 Distinguishable States to the Convolutional Neural Network

2.4.1. Artificial Weights Quantization

To demonstrate the capability of our multi‐state device in neuromorphic computing, the 11 distinguishable states from the tilted anisotropy sample are applied as artificial weights to a representative convolutional neural network, ResNet‐18. Implemented in the PyTorch framework, the ResNet‐18 architecture, shown in Figure  4a, includes pooling layers, batch normalization (BN), ReLU activation, fully connected (FC) layers, and convolutional layers. The 18 weight layers in ResNet‐18 comprise 17 convolutional layers and one FC layer, where convolutional layers store weight tensors of size C in × K × K × C out , with C in and C out representing the number of input and output channels, respectively, and K denoting the kernel size. Leveraging a residual learning framework, ResNet‐18 excels in image recognition, achieving training and testing accuracies of 88.07% and 81.78%, respectively.

Figure 4.


Application of artificial weights in the CNN model using 11 distinguishable states. a) Architecture of ResNet‐18, including max‐pooling layers, batch normalization (BN), ReLU activation, fully connected (FC) layers, and convolution layers. The blocks with a yellow‐green gradient represent modules for weight quantization. b) Illustration of the PTQ process, showing the conversion of pristine Hall resistance states ( R H ) into differential resistance states ( R diff ), the extraction of characteristic resistance states ( R char ), and R char after range adjustment (R′ char). c) Dependence of R char and R diff on the state index for 4, 6, and 11 pristine states. d,e) Classification accuracy of the per‐tensor (d) and per‐channel (e) quantized ResNet‐18 model with 4, 6, and 11 pristine states, with and without observers. The dashed line represents the baseline accuracy of 81.78% for ResNet‐18.

The simulated on‐chip inference of ResNet‐18 is conducted through a PTQ process, where software‐based weights are mapped to quantized resistance states of the devices, as illustrated in Figure 4b. The pristine anomalous Hall resistance states ( R H ) are first converted into differential resistance states ( R diff ) by calculating the resistance difference between pairs of memristors. This step is critical for generating negative resistance (or conductance), which cannot be directly represented by a single memristor.[ 41 , 67 , 68 ] The resulting R diff values, spanning the full range from negative to positive (q 1 to qn ), are thus generated. However, since R H ​ in our experiment is not evenly spaced, the intervals between resistance states may be too narrow to differentiate effectively, particularly when accounting for the device's inherent variability, such as nonlinearity and asymmetric resistance modulation.[ 64 , 69 ] To address these non‐ideal characteristics, several measures are implemented. Representative resistance values are chosen by ensuring the difference between any two R diff values meets a minimum threshold, defined as a certain percentage of the total resistance range. In this study, a pre‐examined 2% threshold is applied:

qk − qk−1 ≥ 0.02 (qn − q1) (1)

This process produces characteristic resistance states ( R char ), resulting in 36 weight levels (q 1 to qk ) derived from 11 pristine states. Similarly, R char is calculated for 4 and 6 out of the 11 pristine states, yielding 6 and 22 weight levels, respectively (details are provided in Section S8, Supporting Information). The dependence of R char and R diff on the state index for 4, 6, and 11 pristine states is shown in Figure 4c. Note that the distribution of weight levels becomes increasingly uneven as the number of pristine states decreases.
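
A minimal sketch of this state‐construction step follows, assuming a greedy sweep to enforce the 2% threshold of Eq. (1). Both the selection strategy and the pristine resistance values below are our assumptions for illustration; the paper specifies only the threshold condition itself.

```python
import itertools
import numpy as np

def differential_states(r_h):
    """All pairwise differences R_i - R_j of the pristine states. A
    differential pair of memristors is what yields negative weights."""
    r_h = np.asarray(r_h, dtype=float)
    return np.sort(np.unique([a - b for a, b in itertools.product(r_h, r_h)]))

def characteristic_states(r_diff, threshold=0.02):
    """Greedy sweep keeping only levels separated by at least
    threshold * (q_n - q_1), enforcing Eq. (1)."""
    r_diff = np.sort(np.asarray(r_diff, dtype=float))
    min_gap = threshold * (r_diff[-1] - r_diff[0])
    kept = [r_diff[0]]
    for q in r_diff[1:]:
        if q - kept[-1] >= min_gap:
            kept.append(q)
    return np.array(kept)

# Hypothetical, unevenly spaced pristine states standing in for the 11 R_H levels.
pristine = np.array([0.0, 0.08, 0.18, 0.30, 0.41, 0.50,
                     0.61, 0.72, 0.84, 0.93, 1.0])
r_char = characteristic_states(differential_states(pristine))
```

Because the pairwise differences are symmetric about zero, the resulting characteristic states span negative and positive weights, as required for the mapping in Figure 4b.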

Once R char is determined, the software weights can be preliminarily mapped. However, the quantization accuracy (≈10–45%) often deviates significantly from the baseline, particularly in cases involving lower weight levels. This discrepancy arises because R char lacks values near zero, leading to substantial quantization errors for values that are primarily distributed around the mean value of zero in the ResNet‐18 model, as illustrated in Figure S10 (Supporting Information).

To address this issue and bridge the gap between the software and hardware weights, a refined set of characteristic states, denoted as R′ char, can be obtained by applying a quantization range adjustment to R char using a linear transformation:[ 44 , 70 ]

R′char = S × Rchar + Z (2)

where the scaling (S) and offset (Z) coefficients are determined as follows:

S = (qmax − qmin)/(qk − q1) and Z = qmax − (qk × S) (3)

The minimum (qmin ) and maximum (qmax ) weights can be derived using various observers, such as min‐max, mean square error, and batch normalization, which will be discussed in detail.
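
The range adjustment of Eqs. (2) and (3) amounts to a single affine map that sends q1 to qmin and qk to qmax. A minimal sketch (the function name and example values are illustrative):

```python
import numpy as np

def range_adjust(r_char, q_min, q_max):
    """Map the characteristic states onto the observer-chosen range
    [q_min, q_max] via Eqs. (2)-(3): R'_char = S * R_char + Z."""
    r_char = np.asarray(r_char, dtype=float)
    q1, qk = r_char.min(), r_char.max()
    s = (q_max - q_min) / (qk - q1)
    z = q_max - qk * s
    return s * r_char + z

# E.g. 36 hardware levels spanning [2.0, 5.0] mapped onto weights in [-0.3, 0.3]
levels = range_adjust(np.linspace(2.0, 5.0, 36), q_min=-0.3, q_max=0.3)
```

The quality of the final quantization then hinges entirely on how qmin and qmax are chosen, which is the role of the observers discussed next.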

First, in the min‐max observer, for any given weight matrix W , the qmin and qmax values are determined as follows:

qmin = min( W ) (4)
qmax = max( W ) (5)

This approach ensures that the full dynamic range of the original 32‐bit floating‐point (FP32) weight matrix is preserved during quantization. Additional details on the quantization of specific weight layers after applying the min‐max observer are presented in Figure S10 (Supporting Information).

Despite the widespread use of the min‐max observer for defining qmin and qmax , this method has a significant drawback, which is its sensitivity to large outliers in the weight matrix. Given the limited number of states in our device, ensuring that no clipping errors occur may result in excessive rounding errors. Therefore, it is essential to explore alternative quantization observers and assess their performance across different quantization scenarios. The mean squared error (MSE) method addresses this issue by determining qmin and qmax as follows:[ 44 ]

arg min_(q_min, q_max) ‖W − Ŵ(q_min, q_max)‖_F² (6)

where Ŵ(q_min, q_max) represents the quantized version of W, and ‖·‖_F denotes the Frobenius norm. Minimizing the MSE between W and Ŵ provides an effective approach for introducing clipping errors in a controlled manner.
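One way to realize Equation (6) is a search over candidate clipping ranges. The paper does not specify the search strategy, so the grid search over symmetric shrinkages of the min‐max range below is only one plausible sketch; the helper names and example data are assumptions.

```python
import numpy as np

def quantize(w, q_min, q_max, n_states=11):
    """Clip W to [q_min, q_max], then round to the nearest of n_states
    evenly spaced levels (illustrative spacing assumption)."""
    levels = np.linspace(q_min, q_max, n_states)
    w_c = np.clip(w, q_min, q_max)
    return levels[np.abs(w_c[..., None] - levels).argmin(axis=-1)]

def mse_observer(w, n_states=11, n_grid=50):
    """MSE observer (Eq. 6): keep the clipping range minimizing
    ||W - W_hat||_F^2, searched over shrinkages of the min-max range."""
    lo, hi = w.min(), w.max()
    best, best_err = (lo, hi), np.inf
    for frac in np.linspace(0.5, 1.0, n_grid):
        q_min, q_max = lo * frac, hi * frac
        err = np.sum((w - quantize(w, q_min, q_max, n_states)) ** 2)
        if err < best_err:
            best, best_err = (q_min, q_max), err
    return best

rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(0.0, 0.1, 500), [2.0]])  # one large outlier
q_min, q_max = mse_observer(w)
```

Because the full min‐max range is included among the candidates, the selected range never quantizes worse (in the Frobenius sense) than the plain min‐max observer on the same data.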

The third observer employed in this study, the batch normalization (BN) observer, determines qmin and qmax as follows:

q_min = μ − N × σ (7)
q_max = μ + N × σ (8)

where N ∈ ℝ⁺, and μ and σ are the mean and standard deviation of the weight matrix W, respectively. By selectively defining q_min and q_max for specific weight tensors using an optimized N value, the BN observer effectively mitigates the impact of large outliers. Further details on this method can be found in Section S9 (Supporting Information).
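The BN observer of Equations (7) and (8) reduces to a few lines. Here N = 2.5 is an arbitrary illustrative value, not the optimized N from the paper, and the example data are synthetic.

```python
import numpy as np

def bn_observer(w, n=2.5):
    """BN observer (Eqs. 7-8): clipping range mu +/- N*sigma.
    N = 2.5 is an arbitrary illustrative value, not the paper's optimized N."""
    mu, sigma = w.mean(), w.std()
    return mu - n * sigma, mu + n * sigma

rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(0.0, 0.1, 1000), [5.0]])  # one large outlier
q_min, q_max = bn_observer(w)
# the outlier at 5.0 falls outside [q_min, q_max] and would be clipped
```

Unlike min‐max, the range here is set by the bulk statistics of W, so a single extreme weight cannot stretch the quantization grid.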

In our study, we focus on applying discrete states to neural network weights. While prior work has explored simulating activation functions via memristive switching loops,[ 19 , 26 , 32 ] experimental validation of their role in vector‐matrix multiplication (VMM) or multiply‐accumulate (MAC) operations remains lacking.[ 42 , 67 , 71 ] Recent pioneering studies have investigated prototypes of neuron circuit designs, but further research is needed to establish their feasibility.[ 72 ] Moreover, software and hardware weights can often be considered interchangeable, as the original values can be recovered by applying the inverse of the scaling factor.[ 41 , 64 ] This makes PTQ a viable approach for hardware weight mapping. In our work, we implement a memristor‐based CNN by employing a linear transformation to align hardware weights with the corresponding software weight range. This method is highly adaptable and can be extended to both simpler and more complex neural network architectures. For instance, a demonstration of the PTQ process in a multilayer perceptron (MLP) model is provided in Section S10 (Supporting Information).
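As a minimal illustration of this interchangeability, the linear map of Equation (2) and its inverse round‐trip exactly. The coefficients and device states below are arbitrary illustrative values, not measured ones.

```python
import numpy as np

# Illustrative coefficients; real values come from Eq. (3) and an observer.
s, z = 0.2, 3.0
r_char = np.linspace(10.0, 20.0, 11)   # hypothetical device states
w_hw = s * r_char - z                  # forward map, Eq. (2)
r_back = (w_hw + z) / s                # inverse map recovers R_char
```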

2.4.2. Per‐Tensor Quantization

Given the complexity of 4‐D tensors, convolutional weights can be quantized using either per‐tensor or per‐channel schemes. In the per‐tensor quantization approach, quantization parameters are applied to the entire weight tensor of a layer, treating it as a single unit. This method is similar to the quantization strategy used for fully connected layers. Detailed weight statistics before and after quantization are presented in Figure S13 (Supporting Information). After applying these quantization methods, the overall quantization accuracy generally improves compared to cases without an observer, as shown in Figure 4d. Among the methods, the min‐max and MSE observers are particularly effective for CNN per‐tensor quantization, achieving accuracies close to the baseline at 79.22% and 79.12%, respectively. Furthermore, the dependence of quantization accuracy on the number of states is analyzed, demonstrating that only when using artificial weights with 11 pristine states does the accuracy approach the baseline. This finding highlights the importance of using a higher number of artificial weights for achieving optimal performance in hardware‐implemented neural networks.
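The per‐tensor scheme can be sketched as follows, with a single quantization range shared by the whole 4‐D weight tensor. The min‐max observer, even level spacing, tensor shape, and function name are illustrative choices, not the authors' code.

```python
import numpy as np

def quantize_per_tensor(w, n_states=11):
    """Per-tensor scheme: one (q_min, q_max) pair for the entire 4-D
    convolutional weight tensor, as for fully connected layers."""
    levels = np.linspace(w.min(), w.max(), n_states)
    return levels[np.abs(w[..., None] - levels).argmin(axis=-1)]

# e.g. an (out_channels, in_channels, kH, kW) tensor from a conv layer
w = np.random.default_rng(1).normal(0.0, 0.05, (64, 3, 7, 7))
w_q = quantize_per_tensor(w)
```

Every weight in the layer, regardless of channel, is snapped to the same 11 levels, which is what makes the scheme sensitive to inter‐channel range differences.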

2.4.3. Per‐Channel Quantization

While per‐tensor quantization is suitable for many cases, its accuracy is relatively low when the number of pristine states is as low as 4 or 6, and even with 11 pristine states, there is still room for improvement. To overcome this limitation, per‐channel quantization is employed, where quantization parameters are defined independently for each segment of output channels within a layer.[ 44 , 73 ] A detailed analysis of weight transfer error distributions for the first convolutional layer across all output channels is presented in Figure S14 (Supporting Information). Per‐channel quantization consistently demonstrates higher accuracy compared to per‐tensor quantization across all weight levels and for all three observers in the ResNet‐18 model, as shown in Figure 4e. Notably, the BN observer achieves improved accuracy with only six states, while the highest quantization accuracy is obtained with 11 states—reaching 81.51% and 81.12% for the min‐max and MSE observers, respectively—which is close to the baseline performance. A detailed comparison of classification accuracies for various quantization schemes in ResNet‐18 is provided in Table S1 of the Supporting Information.
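The contrast with the per‐tensor case above can be sketched by giving each output channel its own quantization range. As before, the per‐channel min‐max observer and even level spacing are illustrative assumptions.

```python
import numpy as np

def quantize_per_channel(w, n_states=11):
    """Per-channel scheme: an independent (q_min, q_max) pair per output
    channel (axis 0), here via a min-max observer on each channel."""
    w_q = np.empty_like(w)
    for c in range(w.shape[0]):
        ch = w[c]
        levels = np.linspace(ch.min(), ch.max(), n_states)
        w_q[c] = levels[np.abs(ch[..., None] - levels).argmin(axis=-1)]
    return w_q

# e.g. an (out_channels, in_channels, kH, kW) convolutional weight tensor
w = np.random.default_rng(2).normal(0.0, 0.05, (8, 3, 3, 3))
w_q = quantize_per_channel(w)
```

Each channel still uses only 11 states, but the states are matched to that channel's own range, capturing the finer weight variations noted above.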

3. Conclusion

This study demonstrates the potential of all‐electrical, field‐free spin synapse devices for neuromorphic computing applications. Through a systematic exploration of three distinct spintronic structures, including the Néel orange‐peel effect, i‐DMI, and tilted anisotropy, we identify these configurations as promising candidates for achieving stable and efficient multi‐state magnetization switching. Our findings emphasize that the tilted anisotropy device, with its robust 11‐state memory capability, moderate thermal stability (∆ ≈ 30), and an average CTC variation of ≈2%, excels in simulating complex synaptic functions, making it particularly suitable for advanced neuromorphic computing systems. Meanwhile, the Néel orange‐peel and i‐DMI structures offer advantageous trade‐offs in endurance and switching efficiency, providing versatility for different hierarchical memory levels within neuromorphic architectures.

Furthermore, we validate the application of these multistate devices in ResNet‐18 through PTQ processes, where adjusting the quantization range via a linear transformation is crucial to managing the nonlinear LTP and LTD resistances. By mapping device resistance states to quantized weight levels, we achieve reliable weight transformation for CNN models, demonstrating high classification accuracy with minimal quantization error. The BN observer exhibits strong quantization performance even with fewer weight levels, highlighting its potential for efficient quantization in constrained settings. Notably, per‐channel quantization with the min‐max (MSE) observer achieves an impressive accuracy of 81.51% (81.12%), surpassing per‐tensor quantization and demonstrating its effectiveness in capturing finer weight variations in CNNs. This result underscores the flexibility of these spintronic devices in supporting diverse quantization strategies for real‐world AI applications. Future work includes investigating on‐chip learning algorithms[ 39 , 40 , 74 , 75 ] and linearly discretized resistance modulation in spintronic devices through tilted anisotropy, with potential applications in spiking neural networks[ 35 , 76 , 77 ] and image edge detection.[ 28 ]

4. Experimental Section

Sample Growth

Three distinct stacked structures are prepared by magnetron sputtering. Sample I: CoFeB(4)/W(1.4)/CoFeB(1.6)/MgO(1.1)/Ta(2), in which the current‐induced magnetization switching can be explained by the Néel orange‐peel effect. Sample II: Ta(0.5)/CoFeB(1.2)/Pt(2.5)/Co(0.6)/Pt(0.6)/Ta(2), which possesses a strong i‐DMI effect. Sample III: Ta(0.5)/Pt(6.1)/Co(0.7)/Pt(1), the tilted anisotropy sample. The numbers in parentheses are the thicknesses of each layer in nanometers. Sample I is sputtered with the sample holder rotating at 10 revolutions per minute (RPM). For Samples II and III, the bottom Pt layer (2.5 and 6.1 nm, respectively) is obliquely deposited without rotating the sample holder, while the remaining layers are prepared at a rotation speed of 10 RPM. All three samples are deposited onto SiO2 substrates at room temperature in an ultra‐high vacuum magnetron sputtering system with a base pressure of 5 × 10−8 Torr. The metallic (oxide) layers are deposited by DC (RF) sputtering under an Ar growth pressure of 3 mTorr. The CoFeB composition is Co:Fe:B = 20:60:20. The bottom Ta serves as a seed layer to ensure the uniformity of the devices, and Ta(2) layers cap the stacks for protection.

Device Fabrication

The samples are patterned into Hall bar devices with a current channel width of 5 µm and a voltage arm width of 3 µm through photolithography and a lift‐off process.

Electrical and Magnetic Measurement

The Hall resistance (R_H) is measured with a homemade probe station equipped with a projected vector field magnet capable of simultaneously applying in‐plane and out‐of‐plane magnetic fields. Electrical measurements are performed with a DC source (Keithley 2400) and a voltmeter (Keithley 2000). For the pulse‐number measurements, write current pulses shorter than 10 ms are generated by an Agilent 81110A pulse generator.

Dataset and Training Conditions

The CNN simulation utilizes the CIFAR‐10 dataset, consisting of 40 000 training images, 10 000 validation images, and 10 000 testing images, each with dimensions of 32 × 32 × 3. ResNet‐18 is initialized with pre‐trained weights and trained with a mini‐batch size of 32 and a learning rate of 0.001 for five epochs.

Conflict of Interest

The authors declare no competing interests.

Supporting information

Supporting Information

Acknowledgements

T.‐C.H. and C.‐Y.L. contributed equally to this work. This work was supported by the National Science and Technology Council (NSTC) under grant No. NSTC 113‐2112‐M002‐015 and by the Center of Atomic Initiative for New Materials (AI‐Mat) under grant No. NTU‐113L9008, National Taiwan University. This work was also supported by the Semiconductor Fabrication Lab of the Consortia of Key Technologies, and the Nano‐Electro‐Mechanical‐System Research Center, at National Taiwan University.

Hsin T.‐C., Lin C.‐Y., Wang P.‐C., Yang C., Pai C.‐F., All‐Electrical Control of Spin Synapses for Neuromorphic Computing: Bridging Multi‐State Memory with Quantization for Efficient Neural Networks. Adv. Sci. 2025, 12, 2417735. 10.1002/advs.202417735

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • 1. Von Neumann J., IEEE Ann. History Comput. 1993, 15, 27. [Google Scholar]
  • 2. Gu K., Guan Y., Hazra B. K., Deniz H., Migliorini A., Zhang W., Parkin S. S., Nat. Nanotech 2022, 17, 1065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Zou X., Xu S., Chen X., Yan L., Han Y., Sci. China Inf. Sci. 2021, 64, 160404. [Google Scholar]
  • 4. Aly M. M. S., Wu T. F., Bartolo A., Malviya Y. H., Hwang W., Hills G., Markov I., Wootters M., Shulaker M. M., Wong H.‐S. P., Proc. IEEE 2018, 107, 19. [Google Scholar]
  • 5. Grollier J., Querlioz D., Camsari K., Everschor‐Sitte K., Fukami S., Stiles M. D., Nat. Electron 2020, 3, 360. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Christensen D. V., Dittmann R., Linares‐Barranco B., Sebastian A., Gallo M. L., Redaelli A., Slesazeck S., Mikolajick T., Spiga S., Menzel S., Neuromorph. Comput. Eng. 2022, 2, 022501. [Google Scholar]
  • 7. Zhang X., Cai W., Wang M., Pan B., Cao K., Guo M., Zhang T., Cheng H., Li S., Zhu D., Adv. Sci. 2021, 8, 2004645. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Kurenkov A., Fukami S., Ohno H., J. Appl. Phys. 2020, 128, 010902. [Google Scholar]
  • 9. Govoreanu B., Kar G. S., Chen Y., Paraschiv V., Kubicek S., Fantini A., Radu I., Goux L., Clima S., Degraeve R., presented at 2011 Int. Electron Devices Meeting, IEEE, NewYork: 2011. [Google Scholar]
  • 10. Wong H.‐S. P., Salahuddin S., Nat. Nanotech 2015, 10, 191. [DOI] [PubMed] [Google Scholar]
  • 11. Burr G. W., Shelby R. M., Sebastian A., Kim S., Kim S., Sidler S., Virwani K., Ishii M., Narayanan P., Fumarola A., Adv. Phys.: X 2017, 2, 89. [Google Scholar]
  • 12. Aziza H., Coulié K., Rahajandraibe W., presented at 2021 IEEE 22nd Latin American Test Symposium (LATS), IEEE, New York: 2021. [Google Scholar]
  • 13. Aziza H., Postel‐Pellerin J., Fieback M., Hamdioui S., Xun H., Taouil M., Coulié K., Rahajandraibe W., presented at 2024 IEEE 25th Latin American Test Symposium (LATS), IEEE, New York: 2024. [Google Scholar]
  • 14. Nandakumar S., Gallo M. L., Boybat I., Rajendran B., Sebastian A., Eleftheriou E., J. Appl. Phys. 2018, 124, 152135. [Google Scholar]
  • 15. Khan A. I., Yu H., Zhang H., Goggin J. R., Kwon H., Wu X., Perez C., Neilson K. M., Asheghi M., Goodson K. E., Adv. Mater. 2023, 35, 2300107. [DOI] [PubMed] [Google Scholar]
  • 16. Chakraborty I., Ali M., Ankit A., Jain S., Roy S., Sridharan S., Agrawal A., Raghunathan A., Roy K., Proc. IEEE 2020, 108, 2276. [Google Scholar]
  • 17. Zhang S., Luo S., Xu N., Zou Q., Song M., Yun J., Luo Q., Guo Z., Li R., Tian W., Adv. Electron. Mater. 2019, 5, 1800782. [Google Scholar]
  • 18. Garello K., Avci C. O., Miron I. M., Baumgartner M., Ghosh A., Auffret S., Boulle O., Gaudin G., Gambardella P., Appl. Phys. Lett. 2014, 105, 212402. [Google Scholar]
  • 19. Zhou J., Zhao T., Shu X., Liu L., Lin W., Chen S., Shi S., Yan X., Liu X., Chen J., Adv. Mater. 2021, 33, 2103672. [DOI] [PubMed] [Google Scholar]
  • 20. Kurenkov A., DuttaGupta S., Zhang C., Fukami S., Horio Y., Ohno H., Adv. Mater. 2019, 31, 1900636. [DOI] [PubMed] [Google Scholar]
  • 21. Fukami S., Zhang C., DuttaGupta S., Kurenkov A., Ohno H., Nat. Mater. 2016, 15, 535. [DOI] [PubMed] [Google Scholar]
  • 22. Al Misba W., Kaisar T., Bhattacharya D., Atulasimha J., IEEE Trans. Electron Devices 2021, 69, 1658. [Google Scholar]
  • 23. Fukami S., Anekawa T., Zhang C., Ohno H., Nat. Nanotech 2016, 11, 621. [DOI] [PubMed] [Google Scholar]
  • 24. Pai C.‐F., Mann M., Tan A. J., Beach G. S., Phys. Rev. B 2016, 93, 144409. [Google Scholar]
  • 25. Yang S., Shin J., Kim T., Moon K.‐W., Kim J., Jang G., Hyeon D. S., Yang J., Hwang C., Jeong Y., NPG Asia Mater 2021, 13, 11. [Google Scholar]
  • 26. Ojha D. K., Huang Y.‐H., Lin Y.‐L., Chatterjee R., Chang W.‐Y., Tseng Y.‐C., Nano Lett. 2024, 24, 7706. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Wang S. Y., Chen S. H., Chang H. K., Li Y. T., Tseng C. H., Chen P. C., Yang C. Y., Lai C. H., Adv. Electron. Mater. 2023, 9, 2300472. [Google Scholar]
  • 28. Yang L., Li W., Zuo C., Tao Y., Jin F., Li H., Tang R., Dong K., Adv. Electron. Mater. 2024, 10, 2300885. [Google Scholar]
  • 29. Chen P.‐Y., Peng X., Yu S., IEEE Trans. on CAD 2018, 37, 3067. [Google Scholar]
  • 30. Chen P.‐Y., Peng X., Yu S., presented at 2017 IEEE International Electron Devices Meeting (IEDM), IEEE, NewYork: 2017. [Google Scholar]
  • 31. Tao Y., Sun C., Li W., Wang C., Jin F., Zhang Y., Guo Z., Zheng Y., Wang X., Dong K., ACS Appl. Nano Mater 2023, 6, 875. [Google Scholar]
  • 32. Dong K., Guo Z., Jiao Y., Li R., Sun C., Tao Y., Zhang S., Hong J., You L., Phys. Rev. Appl. 2023, 19, 024034. [Google Scholar]
  • 33. Pérez E., Pérez‐Ávila A. J., Romero‐Zaliz R., Mahadevaiah M. K., Pérez‐Bosch Quesada E., Roldán J. B., Jiménez‐Molinos F., Wenger C., Electronics 2021, 10, 1084. [Google Scholar]
  • 34. Xie R., Liu S., Yang T., Zhu M., Huang Q., Cao Q., Yan S., J. Magn. Magn. Mater. 2025, 614, 172726. [Google Scholar]
  • 35. Han X., Wang Z., Wang Y., Wang D., Zheng L., Zhao L., Huang Q., Cao Q., Chen Y., Bai L., Adv. Funct. Mater. 2024, 34, 2404679. [Google Scholar]
  • 36. Yadav R. S., Gupta P., Holla A., Ali Khan K. I., Muduli P. K., Bhowmik D., ACS Appl. Electron. Mater. 2023, 5, 484. [Google Scholar]
  • 37. Jeong J., Jang Y., Kang M. G., Hwang S., Park J., Park B. G., Adv. Electron. Mater. 2024, 10, 2300889. [Google Scholar]
  • 38. Verma G., Soni S., Nisar A., Dhull S., Kaushik B. K., IEEE Trans. Electron Devices 2025, 72, 1772. [Google Scholar]
  • 39. Dhull S., Al Misba W., Nisar A., Atulasimha J., Kaushik B. K., IEEE Trans. Neural Networks Learn. Syst. 2024, 36, 4996. [DOI] [PubMed] [Google Scholar]
  • 40. Desai V. B., Kaushik D., Sharda J., Bhowmik D., Neur. Comp. Eng. 2022, 2, 024006. [Google Scholar]
  • 41. Li C., Hu M., Li Y., Jiang H., Ge N., Montgomery E., Zhang J., Song W., Dávila N., Graves C. E., Nat. Electron 2018, 1, 52. [Google Scholar]
  • 42. Aguirre F., Sebastian A., Gallo M. Le, Song W., Wang T., Yang J. J., Lu W., Chang M.‐F., Ielmini D., Yang Y., Nat. Commun. 2024, 15, 1974. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Zhang Y., Wang J., Lian C., Bai Y., Wang G., Zhang Z., Zheng Z., Chen L., Zhang K., Sirakoulis G., IEEE Trans. Circ. Syst. I 2021, 68, 1193. [Google Scholar]
  • 44. Nagel M., Fournarakis M., Amjad R. A., Bondarenko Y., Van Baalen M., Blankevoort T., arXiv 2021, 210608295. [Google Scholar]
  • 45. Lin C.‐Y., Wang P.‐C., Huang Y.‐H., Liao W.‐B., Song M.‐Y., Bao X., Pai C.‐F., ACS Mater. Lett. 2023, 6, 400. [Google Scholar]
  • 46. Kao S.‐C., Lin C.‐Y., Liao W.‐B., Wang P.‐C., Hu C.‐Y., Huang Y.‐H., Liu Y.‐T., Pai C.‐F., APL Mater. 2023, 11, 111104. [Google Scholar]
  • 47. Schrag B., Anguelouch A., Ingvarsson S., Xiao G., Lu Y., Trouilloud P., Gupta A., Wanner R., Gallagher W., Rice P., Appl. Phys. Lett. 2000, 77, 2373. [Google Scholar]
  • 48. Kools J., Kula W., Mauri D., Lin T., J. Appl. Phys. 1999, 85, 4466. [Google Scholar]
  • 49. Yang W., Yan Z., Xing Y., Cheng C., Guo C., Luo X., Zhao M., Yu G., Wan C., Stebliy M., Appl. Phys. Lett. 2022, 120, 122402. [Google Scholar]
  • 50. Murray N., Liao W.‐B., Wang T.‐C., Chang L.‐J., Tsai L.‐Z., Tsai T.‐Y., Lee S.‐F., Pai C.‐F., Phys. Rev. B 2019, 100, 104441. [Google Scholar]
  • 51. Chen W., Qian L., Xiao G., Sci. Rep. 2018, 8, 8144. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52. Emori S., Martinez E., Lee K.‐J., Lee H.‐W., Bauer U., Ahn S.‐M., Agrawal P., Bono D. C., Beach G. S., Phys. Rev. B 2014, 90, 184427. [Google Scholar]
  • 53. Zhou J., Huang L., Yap S. L. K., Lin D. J. X., Chen B., Chen S., Wong S. K., Qiu J., Lourembam J., Soumyanarayanan A., APL Mater. 2024, 12, 081105. [Google Scholar]
  • 54. Hao Q., Chen W., Xiao G., Appl. Phys. Lett. 2015, 106, 182403. [Google Scholar]
  • 55. Lee J.‐S., Cho J., You C.‐Y., J. Vac. Sci. Technol. 2016, 34, 021502. [Google Scholar]
  • 56. Fernández‐Pacheco A., Vedmedenko E., Ummelen F., Mansell R., Petit D., Cowburn R. P., Nat. Mater. 2019, 18, 679. [DOI] [PubMed] [Google Scholar]
  • 57. Han D.‐S., Lee K., Hanke J.‐P., Mokrousov Y., Kim K.‐W., Yoo W., Van Hees Y. L., Kim T.‐W., Lavrijsen R., You C.‐Y., Nat. Mater. 2019, 18, 703. [DOI] [PubMed] [Google Scholar]
  • 58. Huang Y.‐H., Huang C.‐C., Liao W.‐B., Chen T.‐Y., Pai C.‐F., Phys. Rev. Appl. 2022, 18, 034046. [Google Scholar]
  • 59. Avci C. O., Lambert C.‐H., Sala G., Gambardella P., Phys. Rev. Lett. 2021, 127, 167202. [DOI] [PubMed] [Google Scholar]
  • 60. Lin C.‐Y., Hsieh J.‐Y., Wang P.‐C., Tsai C.‐C., Pai C.‐F., APL Mach. Learn. 2024, 2, 046110. [Google Scholar]
  • 61. Mohanan V., Ganesh K., Kumar P. A., Phys. Rev. B 2017, 96, 104412. [Google Scholar]
  • 62. Torrejon J., Garcia‐Sanchez F., Taniguchi T., Sinha J., Mitani S., Kim J.‐V., Hayashi M., Phys. Rev. B 2015, 91, 214434. [Google Scholar]
  • 63. Hu C.‐Y., Chen W.‐D., Liu Y.‐T., Huang C.‐C., Pai C.‐F., NPG Asia Mater 2024, 16, 1. [Google Scholar]
  • 64. Sun X., Yu S., IEEE J. Emerg. Selected Topics Circ. Syst. 2019, 9, 570. [Google Scholar]
  • 65. Scheike T., Wen Z., Sukegawa H., Mitani S., Appl. Phys. Lett. 2023, 122, 112404. [Google Scholar]
  • 66. Dieny B., Goldfarb R. B., Lee K.‐J., Introduction to Magnetic Random‐Access Memory, John Wiley & Sons, Hoboken, NJ: 2016. [Google Scholar]
  • 67. Yao P., Wu H., Gao B., Tang J., Zhang Q., Zhang W., Yang J. J., Qian H., Nature 2020, 577, 641. [DOI] [PubMed] [Google Scholar]
  • 68. Park J., Kim S., Song M. S., Youn S., Kim K., Kim T.‐H., Kim H., ACS Appl. Mater. Interfaces 2024, 16, 1054. [DOI] [PubMed] [Google Scholar]
  • 69. Cao Z., Zhang S., Hou J., Duan W., You L., IEEE Trans. Electron Devices 2023, 70, 6336. [Google Scholar]
  • 70. Hu M., Strachan J. P., Li Z., Grafals E. M., Davila N., Graves C., Lam S., Ge N., Yang J. J., Williams R. S., presented at Proceedings of the 53rd Annual Design Automation Conference, Association for Computing Machinery, New York: 2016. [Google Scholar]
  • 71. Jung S., Lee H., Myung S., Kim H., Yoon S. K., Kwon S.‐W., Ju Y., Kim M., Yi W., Han S., Nature 2022, 601, 211. [DOI] [PubMed] [Google Scholar]
  • 72. Liu L., Wang D., Wang D., Sun Y., Lin H., Gong X., Zhang Y., Tang R., Mai Z., Hou Z., Nat. Commun. 2024, 15, 4534. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73. Nagel M., Baalen M. v., Blankevoort T., Welling M., presented at Proceedings of the IEEE/CVF International Conference on Computer Vision , IEEE, New York: 2019. [Google Scholar]
  • 74. Al Misba W., Lozano M., Querlioz D., Atulasimha J., IEEE Access 2022, 10, 84946. [Google Scholar]
  • 75. Kaushik D., Sharda J., Bhowmik D., Nanotechnology 2020, 31, 364004. [DOI] [PubMed] [Google Scholar]
  • 76. Wang D., Tang R., Lin H., Liu L., Xu N., Sun Y., Zhao X., Wang Z., Wang D., Mai Z., Nat. Commun. 2023, 14, 1068. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Wang D., Wang Z., Xu N., Liu L., Lin H., Zhao X., Jiang S., Lin W., Gao N., Liu M., Adv. Sci. 2022, 9, 2203006. [DOI] [PMC free article] [PubMed] [Google Scholar]
