Abstract
Recent advances in neural network‐based computing have enabled human‐like information processing in areas such as image classification and voice recognition. However, many neural networks run on conventional computers that operate at GHz clock frequency and consume considerable power compared to biological neural networks, such as human brains, which work with a much slower spiking rate. Although many electronic devices aiming to emulate the energy efficiency of biological neural networks have been explored, achieving long timescales while maintaining scalability remains an important challenge. In this study, a field‐effect transistor based on the oxide semiconductor strontium titanate (SrTiO3) achieves leaky integration on a long timescale by leveraging the drift–diffusion of oxygen vacancies in this material. Experimental analysis and finite‐element model simulations reveal the mechanism behind the leaky integration of the SrTiO3 transistor. With a timescale in the order of one second, which is close to that of biological neuron activity, this transistor is a promising component for biomimicking neuromorphic computing.
Keywords: drift–diffusion, leaky integration, neural network, oxygen vacancy, reservoir computing
To mimic the energy efficiency of biological neural networks, an artificial neural device that works on long timescales with minimal energy consumption is needed. By leveraging the drift–diffusion dynamics of oxygen vacancies, a field‐effect transistor made of an oxide semiconductor achieves a timescale of the order of one second, comparable to that of biological neurons, with negligible power loss.
1. Introduction
In the realm of neural computation, achieving efficient information processing analogous to that realized by biological systems remains a paramount challenge. Particularly in applications such as speech recognition and motion detection with temporal dynamics of the order of milliseconds to seconds, biological neural networks outperform modern computers, even though the processing timescale of the former is much longer than that of the latter.[ 1 , 2 , 3 ] Despite advances in deep‐learning frameworks, which typically run on von Neumann computer architectures, the energy costs remain prohibitively high for real‐time performance on portable edge devices, necessitating the exploration of novel architectures and devices to realize energy‐efficient practical neural computing.
Neuromorphic computing, inspired by the efficiency of biological neural networks exchanging spikes between neurons (Figure 1a), seeks to replicate this capability using interconnected artificial neurons.[ 4 ] Biological neural networks operate on timescales ranging from ≈1 ms up to ≈10 s.[ 5 ] Focusing on the slow dynamics of ≈0.1 to 1 s, it is understood that neural networks adjust their dynamics to align with the timescales of incoming signals, enabling efficient real‐time information processing with low energy consumption. The energy efficiency of neural networks comes from the fact that most energy is consumed only when there is a spike, unlike conventional complementary metal–oxide–semiconductor (CMOS) integrated circuits, which consume a considerable amount of energy even when no signal is being processed. However, many studies on artificial neural networks currently focus on networks and devices that work at timescales much faster than that of biological neural networks.[ 6 , 7 , 8 ] When the timescale of a neural network is much faster than the incoming signals, the signals must first be stored in an external memory and then processed serially by taking them out of the memory. In this case, additional energy is needed for memory access, as in conventional computers, resulting in increased energy consumption. However, by designing artificial neurons to operate on timescales comparable to those of biological neural networks (i.e., up to 10 s), the need for frequent memory access is alleviated. Instead of continuously fetching and updating information from a memory, a neural network with a suitable timescale can retain and process sensory inputs over extended periods with minimal energy consumption.
Figure 1. Biological neuron operation and SrTiO3‐based leaky‐integration FET. a) Schematic illustration of a neuron cell. Inset: schematic diagram of leaky integration. b) Schematic illustration (side view) of a SrTiO3‐based leaky‐integration FET. Application of gate voltage V G controls the drift–diffusion dynamics of oxygen vacancies (V O), which modulates the induced drain current I D within the SrTiO3 crystal (bottom panel). The gate dielectric of the transistor consists of Parylene C and amorphous HfO x layers (top panels). c) Schematic diagram of the top view of the SrTiO3‐based FET. The right panel is a magnified view of the dashed rectangle in the left panel. d) Scanning electron microscope image of the FET.
The leaky‐integration behavior of neurons is a critical characteristic that determines the timescales of neural networks. An input pulse at the dendrites increases or decreases the electrical potential of the soma, which is called the membrane potential (V m). The change of V m accumulates, or integrates, when subsequent pulses arrive at the soma. Conversely, V m gradually decays — that is, leaks — to its resting value without subsequent pulses. When V m reaches a threshold value of V c, a pulse is generated from the axon and the information is transferred to connecting post‐synaptic neurons. Various electronic devices and technologies, such as CMOS circuits,[ 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 ] resistive random‐access memories,[ 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 ] ferroelectric devices,[ 26 , 27 , 28 , 29 , 30 , 31 ] spintronic devices,[ 32 , 33 ] and phase change memories,[ 34 , 35 , 36 , 37 ] have been explored to mimic this behavior. Furthermore, two‐terminal memristive leaky‐integration devices with high reliability and uniform characteristics demonstrating various functionalities have been reported recently.[ 38 ] However, it is still challenging to achieve long timescales while maintaining energy efficiency and scalability.[ 39 ] Taking CMOS as an example, a timescale of 1 s corresponds to a capacitance of 1 µF, which takes up considerable space on a die. Furthermore, the energy to drive such devices is substantial because the amount of energy generally increases as the timescale lengthens. Although many devices focus on nonvolatile operation,[ 40 , 41 , 42 , 43 ] this nonvolatility makes the implementation of leaky integration difficult because an external mechanism for leakage is needed.
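As a point of reference for the device results that follow, the leaky‐integration behavior described above can be summarized by the standard leaky integrate‐and‐fire model, in which the membrane potential integrates input pulses and relaxes back toward rest with a time constant τ n. The following sketch is purely illustrative (the time constant, threshold, and pulse amplitude are assumed values, not taken from this work); it simply shows that pulse trains arriving faster than the leak accumulate to threshold, whereas slower trains decay away between pulses.

```python
import numpy as np

def lif_spike_count(pulse_period_s, t_total=5.0, dt=1e-3, tau_n=1.0,
                    v_th=1.0, pulse_amp=60.0):
    """Leaky integrate-and-fire membrane: dV/dt = -V/tau_n + I(t) (forward Euler).
    All parameter values are illustrative, not taken from the device measurements."""
    n_steps = int(round(t_total / dt))
    drive = np.zeros(n_steps)
    drive[::round(pulse_period_s / dt)] = pulse_amp   # brief input pulses
    v, spikes = 0.0, 0
    for current in drive:
        v += dt * (-v / tau_n + current)   # leak toward rest and integrate the input
        if v >= v_th:                      # threshold crossing: emit a spike and reset
            spikes += 1
            v = 0.0
    return spikes

# Pulses arriving faster than the leak accumulate to threshold; slow pulses decay away.
print(lif_spike_count(pulse_period_s=0.05))  # 20 Hz input -> fires
print(lif_spike_count(pulse_period_s=1.0))   # 1 Hz input -> stays subthreshold
```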
The drift–diffusion of ions is advantageous for generating long timescales because of the slower motion of ions than that of electrons. Drift–diffusion of ions is a general phenomenon used in various applications, and these devices typically work on long timescales. Charged ions can be controlled by applied electric fields, but their material‐specific diffusion coefficient generally limits their migration speed, and their motion is dynamic and reversible. This means that achieving a long timescale is possible without compromising energy efficiency, device size, or volatility. We report the development of an artificial neuron device with leaky integration, as shown in Figure 1b,c, based on the drift–diffusion of oxygen vacancies () in SrTiO3, a well‐known oxide semiconductor.[ 44 , 45 ] are defects where oxygen atoms are missing in the crystal lattice. exert a profound influence on the electrical and ionic transport properties of materials. Electrostatically controlling the drift–diffusion dynamics of in SrTiO3 is attractive because of its large relative permittivity of 300 and the relatively small electron densities required to induce electrical conductivity. By harnessing the drift–diffusion dynamics of , we demonstrate the feasibility of achieving leaky integration over timescales relevant to real‐world information processing tasks, such as speech and motion analysis. Furthermore, we present a comprehensive investigation of the operation principles of our proposed device, combining experimental results with finite element simulations to elucidate the underlying mechanisms.
2. Results and Discussion
2.1. Handwriting Anomaly Detection and Effect of Timescales
The effect of timescale on the performance of a neural network is exemplified by the task of handwriting anomaly detection shown in Figure 2a. The task was to distinguish symbols (e.g., triangles) drawn by different people. The neural network received time‐varying trajectories of a pen and determined which person drew each triangle[ 46 ] (see the Experimental Section and Section S4, Supporting Information). The time‐dependent trajectory (i.e., the x, y coordinates shown in Figure 2b) of a pen was processed in the framework of reservoir computing, depicted schematically in Figure 2a.[ 47 , 48 , 49 ] The randomly interconnected neural network, called a reservoir, received the temporal inputs and encoded the shape characteristics of the trajectories, which reflect each person's drawing habits, into the nonlinear dynamics of its constituent neurons. Figure 2c shows the anomaly score S A (i.e., the Mahalanobis distance of the reservoir state; see Section S4, Supporting Information) that quantifies the difference between the trajectories of the training and testing phases. Here, the neural network was trained only with person A's trajectory and tested against different trajectories of person A and person B. When person B's trajectory was provided, S A increased by orders of magnitude; handwriting anomaly detection was therefore successful when the timescale of the neurons was τ n = 1.07 s. This contrasts sharply with the case of fast neurons (τ n = 10 µs, e.g., CMOS neurons; see Section S3, Supporting Information), for which handwriting anomaly detection was unsuccessful, as shown in Figure 2d (see Section S10, Supporting Information for a discussion of the effect of neuron timescale on information processing performance). These results clearly indicate that the timescale of the neuron is an important factor determining the performance of real‐time neuromorphic computing for signals with long timescales (e.g., voice and motion).
Figure 2. Effect of neuron timescale on brain‐inspired computing. a) Schematic illustration of reservoir computing‐based anomaly detection. b) Handwriting trajectory data used for evaluation. c,d) Anomaly scores calculated from the reservoir activity when handwriting trajectories were input. The timescales of the neural elements are (c) τ n = 1.07 s and (d) τ n = 10 µs.
2.2. Long‐Timescale Leaky‐Integration FET
The SrTiO3 field‐effect transistor (FET) was fabricated on a single‐crystal SrTiO3 substrate using standard photolithography (Figure 1b–d).[ 46 , 50 , 51 ] Details of device fabrication are provided in the Experimental Section. Figure 3a depicts the time evolution of the source–drain current (I D) of the transistor under 5‐V gate pulses with a fixed source–drain voltage (V D) of 0.5 V. Notably, I D gradually increased after the FET was turned on, despite the fixed 5‐V gate voltage (V G) during the pulses. The transient drop (rise) of I D on the rising (falling) edge of the V G pulse originated from the displacement currents caused by the charging (discharging) of the drain‐to‐gate and drain‐to‐source capacitance.
Figure 3. Leaky integration in the SrTiO3 FET. a) The drain current I D (blue) started to flow when gate pulse V G (red) was applied to the SrTiO3 FET. I D gradually increased during V G application, demonstrating leaky integration. b,c) Leaky integration of trains of pulses using the SrTiO3 FET. The amplitude of I D increased after every pulse at (b) f = 80 Hz, but it did not increase at (c) f = 1 Hz. d) Frequency modulation of the drain current. The amplitude of I D (blue) increased or decreased in response to the frequency change (green) of the gate pulses (red). e,f,g) Leaky‐integration rate I D(N = 100)/I D(N = 1) as a function of the inverse frequency of gate pulses when the (e) pulse width t p, (f) pulse voltage V p, and (g) source–drain voltage V D were varied.
To examine the transistor's response across different timescales, Figure 3b,c display the time evolution of I D during a series of gate pulses at different pulse frequencies (f). At f = 80 Hz, the overall amplitude of I D gradually increased, whereas at f = 1 Hz, the amplitude remained relatively constant after repeated gate pulses. This frequency‐dependent behavior is further illustrated in Figure 3d, in which the pulse frequency was varied continuously over time. Here, the amplitude of I D followed the frequency modulation: higher frequencies led to quicker amplitude increases, whereas lower frequencies led to faster amplitude decay.
To further quantify the rate of I D increase during gate pulses, Figure 3e,f,g illustrate the ratio of I D during the 100th pulse to I D during the first pulse across the parameter space defined by frequency and pulse width (t p), pulse voltage (V p), and V D, respectively. Notably, for a given t p, the ratio increased monotonically with frequency. Similarly, the ratio increased monotonically with t p at a fixed frequency. Thus, the on‐state drain current is influenced not only by the duration of the gate pulse but also by the interval between gate pulses.
These observations provide two important insights. First, the SrTiO3 FET possesses an internal parameter that governs I D. Second, this internal parameter integrates upon application of gate voltage and leaks upon its release, thereby exemplifying a leaky‐integration function. Notably, the leaky‐integration rate remains nearly constant with integration timescales consistently between 10 ms and 1 s, even when varying V p and V D. The insensitivity of the leaky‐integration rate to these parameters arises from the mechanism of leaky integration, as discussed below.
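The frequency dependence in Figure 3e–g can be rationalized with a single internal state that charges toward a ceiling while the gate pulse is on and relaxes toward zero while it is off. The sketch below is a toy model with an assumed single time constant; it is not a fit to the device and, being linear with one timescale, it captures the growth of the ratio with pulse frequency but not the additional pulse‐width dependence, which requires more than one timescale (see Section 2.5).

```python
import numpy as np

def leaky_integration_ratio(freq_hz, t_p=5e-3, tau=1.0, n_pulses=100):
    """Toy internal state x: charges toward 1 while the gate pulse (width t_p) is on
    and relaxes toward 0 between pulses, both with an assumed time constant tau.
    Returns x at the end of the 100th pulse divided by x at the end of the 1st pulse,
    a stand-in for the leaky-integration rate I_D(N = 100)/I_D(N = 1)."""
    t_off = max(1.0 / freq_hz - t_p, 0.0)         # off time between successive pulses
    x, end_of_pulse = 0.0, []
    for _ in range(n_pulses):
        x = 1.0 + (x - 1.0) * np.exp(-t_p / tau)  # integrate during the pulse
        end_of_pulse.append(x)
        x *= np.exp(-t_off / tau)                 # leak between pulses
    return end_of_pulse[-1] / end_of_pulse[0]

for f in (1.0, 10.0, 80.0):
    print(f"f = {f:4.0f} Hz, t_p = 5 ms -> ratio {leaky_integration_ratio(f):5.1f}")
```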
2.3. Long‐Timescale Dynamics of Oxygen Vacancies in the Transistor
The long timescale of the FET, on the order of one second, implies the presence of dynamics within the system that are much slower than electron dynamics. As we explain below, the leaky‐integration behavior originates from the drift–diffusion of V O across the SrTiO3 channel. Upon fabrication of the FET, SrTiO3 accommodates a nominal V O concentration on the order of ≈1015 cm−3 dispersed throughout its structure, primarily originating from crystal defects and impurities (Figure 4a).[ 52 ] These V O, which can be considered positively charged ions, migrate in response to applied gate voltage and behave as donors to the semiconducting SrTiO3, injecting up to two electrons each into the conduction band. Nevertheless, this concentration is insufficient to induce conductivity in the SrTiO3 channel.[ 52 , 53 ]
Figure 4. Oxygen vacancy drift–diffusion in the leaky‐integration SrTiO3 FET. a,b,c) Schematic illustration of the drift–diffusion of V O. The gate, source, and drain electrodes are labeled G, S, and D, respectively. d) Schematic diagram of the V G ramp used in the analysis. e) Snapshots of the V O distribution at different time steps during drift–diffusion. f) Snapshots of the mobile electron distribution at different time steps during drift–diffusion. The scale bars in e and f are 1 µm. g) Time evolution of drain currents measured for the SrTiO3 FET. h) Time evolution of drain currents calculated for different V O diffusion coefficients in SrTiO3. Drain current continued to increase even after V G stabilized at t = 1.0 ms.
Upon application of a voltage to the gate electrode, electrons accumulated beneath the gate insulator via the standard electrostatic‐gating effect, establishing a conductive FET channel (Figure 4b).[ 54 ] Simultaneously, V O carrying up to +2q charge gradually drifted away from the gate‐insulator/SrTiO3 interface, resulting in a region with elevated V O concentration below the channel region. Consequently, the V O concentration decreased just below the SrTiO3/insulator interface above the channel region, prompting the generation of additional V O at the interface to counterbalance their decreased concentration. This build‐up of V O was accompanied by a progressive accumulation of the net electron concentration within the FET channel, thereby facilitating increased drain current.
Upon release of the gate voltage, electron depletion from the FET channel occurred promptly, leading to a decrease in drain current, typically below measurable levels (Figure 4c). Simultaneously, thermal diffusion drove V O migration toward the SrTiO3/insulator interface, gradually restoring their initial depth profile. Thus, V O drifted during voltage application and then diffused upon removal of the applied voltage. In principle, the accumulation of V O in the absence of electrostatic gating (i.e., without applied gate voltage) can induce a conductive FET channel if their concentration is raised above a threshold, making SrTiO3 metallic.
2.4. Finite Element Analysis of Oxygen Vacancy Drift–Diffusion
To further elucidate the dynamics of V O in the FET, we conducted a finite element analysis of oxygen vacancy drift–diffusion in SrTiO3 (detailed methodology is provided in the Experimental Section). In our simulations, the generation of V O at the SrTiO3 surface was assumed to be much faster than the diffusion of V O, characterized by the diffusion coefficient D vo, within the SrTiO3 crystal. We assess the validity of this assumption in the subsequent discussion. The profiles of the calculated oxygen vacancy concentration (N vo) and electron density distribution (n) for D vo = 1 × 10−11 m2 s−1 are presented in Figure 4e,f, respectively, and the gate voltage (V G) ramping scheme is outlined in Figure 4d.
At t = 0.5 ms, corresponding to V G = 0, only a low density of V O was present throughout the FET body (Figure 4e‐I). Upon ramping V G to 5 V at t = 1.0 ms, the positive voltage caused V O to accumulate predominantly around the channel region, as shown in Figure 4e‐II. Notably, even after V G settled at 5 V by t = 1.0 ms, N vo continued to rise within the channel region and extended deeper into the FET body (Figure 4e‐III). This increase is a consequence of drift–diffusion, coupled with the continuous generation of V O, which accumulated at the crystal surface. Although the gate voltage was applied between the laterally placed gate and source electrodes, electric fields that pointed downward and extended deep into the crystal were generated because of the large dielectric constant of SrTiO3, propelling the V O into the crystal (see Section S12, Supporting Information for the electric field distribution).
The electron distribution profile closely mirrors that of V O. At t = 0.5 ms, electrons were completely depleted from the FET channel because of the band bending at the SrTiO3/insulator/gate electrode interface (Figure 4f‐I). Application of V G caused the electron density to increase around the channel region through the standard field‐effect gating mechanism (Figure 4f‐II). However, unlike in a conventional FET, the electron density continued to rise beyond t = 1.0 ms, spreading to a wider area around the channel region. This is because the V O that migrated into this region via drift–diffusion doped electrons into the conduction band of SrTiO3, increasing the carrier density and making the channel more conductive (Figure 4f‐III).
Figure 4h highlights the role of the diffusion coefficient of V O in determining the rate of I D growth. Here, the time evolution of I D was calculated for D vo = 10−9, 10−10, 10−11, and 10−12 m2 s−1 (Figure 4h). I D displayed gradual, sustained growth after t = 1.0 ms, in line with the Drude model, i.e., I D ∝ σ ∝ e µ e n, where σ, e, and µ e are the conductivity, elementary charge, and electron mobility, respectively. Our calculated I D aligns well with the experimentally measured time evolution of I D (Figure 4g) when D vo = 10−11 m2 s−1. Thus, the drift–diffusion model effectively explains the leaky‐integration behavior of the SrTiO3 FET. Given that the diffusion coefficient predominantly governs the leaky‐integration behavior, variations of V p and V D exerted only minor effects on the diffusion dynamics (see Section S6, Supporting Information for a discussion of the effect of V D on drift–diffusion).
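The reported simulations used a 2D finite element model; a much simpler 1D explicit finite‐difference analogue already illustrates the key behavior. In the sketch below, the surface concentration is pinned at an elevated value while the gate is on (the fast‐generation limit) and vacancies drift and diffuse into the crystal, so the concentration at depth keeps rising even though the boundary values are fixed. The grid, drift velocity, and surface concentration are assumed values chosen only for illustration; only the diffusion coefficient is of the order of the best‐fit D vo.

```python
import numpy as np

# 1D explicit finite-difference analogue of oxygen-vacancy drift-diffusion below
# the channel: dN/dt = D d2N/dz2 - v dN/dz, with the surface value pinned while
# the gate is on (fast-generation limit). Grid, drift velocity, and concentrations
# are assumed illustrative values; only D is of the order of the best-fit D_vo.
D = 1e-11            # diffusion coefficient, m^2 s^-1
v_drift = 1e-5       # assumed field-driven drift velocity into the crystal, m s^-1
dz, nz = 1e-7, 100   # 0.1 um grid spacing over a 10 um depth
dt = 1e-4            # s; satisfies the explicit stability limit dt < dz^2 / (2 D)
n_bg, n_surf = 1e15, 1e18   # background and gate-on surface concentrations, cm^-3

N = np.full(nz, n_bg)
for step in range(1, 501):                  # 50 ms with the gate voltage applied
    N[0], N[-1] = n_surf, n_bg              # pinned surface and deep-bulk boundaries
    lap = (N[2:] - 2.0 * N[1:-1] + N[:-2]) / dz**2
    grad = (N[1:-1] - N[:-2]) / dz          # upwind gradient for drift toward +z
    N[1:-1] += dt * (D * lap - v_drift * grad)
    if step % 100 == 0:                     # concentration 1 um below the surface
        print(f"t = {step * dt * 1e3:3.0f} ms: N_vo(1 um) = {N[10]:.2e} cm^-3")
```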
2.5. Functional Modeling of the Leaky‐Integration FET
So far, we have demonstrated the importance of long timescales in human‐interactive neuromorphic computing, highlighting the potential of leaky‐integration SrTiO3 FETs as components for constructing artificial neural networks with long timescales. To facilitate their practical application, it is crucial to develop a simple yet functional model for SrTiO3 FETs, because finite element modeling poses considerable computational challenges.
To construct a quantitative model that characterizes the leaky‐integration behavior of SrTiO3 FETs, we first reexamined the leaky‐integration data in Figure 3b based on the idea that the drift–diffusion of V O donors alters the overall characteristics of SrTiO3 FETs. Figure 5a plots pulse‐by‐pulse ΔI D − V G curves obtained by splitting the data in Figure 3b. Here, we defined ΔI D = I D − C dV G/dt to exclude the displacement current contribution from I D associated with the drain‐to‐gate and drain‐to‐source capacitance C. These data revealed a gradual shift of the ΔI D − V G curves to lower voltage after each pulse, thus resulting in an increase of ΔI D.
Figure 5. Functional modeling of leaky‐integration SrTiO3 FETs. a) Pulse‐dependent ΔI D versus V G curves of a SrTiO3 FET. Each curve corresponds to the data measured during different pulse cycles with f = 80 Hz, t p = 5 ms, V p = 5 V, and V D = 0.5 V. Only the data for the first pulse (N = 1) and multiples of ten pulses are shown for clarity. The black pentagons indicate V G when ΔI D exceeds 10 nA. b) Double‐exponential shift of the threshold voltage as a function of the number of pulses. The data at f = 80 Hz (red circles) were fitted to a double‐exponential function (black curve). For comparison, the fitting result using a single‐exponential function (green dashed curve) and the data at f = 1 Hz (blue diamonds) are also shown. The sample size was 100. The probability value, statistical test, and significance symbol are not applicable. c,d,e,f) Time evolution of the drain current during leaky integration fitted to the phenomenological drift–diffusion model, Equations (1) and (2), for (c) t p = 5 ms and (e) t p = 10 ms. (d) and (f) are magnified views of (c) and (e), respectively.
Next, we plotted the time evolution of the threshold voltage V th, defined as the gate voltage at which ΔI D exceeded 10 nA (depicted as black pentagons in Figure 5a), as shown in Figure 5b. As the number of input pulses increased, V th decreased following a double‐exponential decay. This exponential dependence of the V th shift stems directly from the drift–diffusion phenomena discussed in Sections 2.3 and 2.4. Notably, the observed dependence of V th on the number of input pulses cannot be adequately described by a single‐exponential decay, indicating the presence of multiple timescales within the physical system. The second timescale likely relates to that of oxygen vacancy generation at the interface between the SrTiO3 channel and the gate insulator; its origin is discussed in Section 2.6. Practically, the existence of multiple timescales in neural networks such as reservoirs is useful because it allows signals with different rates to be handled within a single network. We also considered the time evolution of V th for data collected at f = 1 Hz. Unlike the case when f = 80 Hz, V th remained almost constant and exhibited minimal decay with increasing pulse number when f = 1 Hz. This frequency dependence suggests that V th is influenced by the history of the applied gate voltage, including the timing and width of the gate pulses. This frequency‐dependent behavior originates from the balance between the electric field‐driven drift dynamics and the thermal diffusion dynamics of V O in the SrTiO3 crystal. When the frequency is high, the V O accumulate around the FET channel, driven by the applied gate‐voltage pulses, before relaxing to their thermodynamic distribution. This results in a shift of V th to smaller voltage because the accumulated V O behave as electron donors to semiconducting SrTiO3. In contrast, when the frequency is low, V th does not shift because the V O relax to their thermodynamic distribution between voltage pulses. Therefore, the V th shift shown in Figure 5b is a direct consequence of the drift–diffusion dynamics explained in Sections 2.3 and 2.4. Given that the shift of V th primarily governs the leaky‐integration behavior of the FET, the weak dependence on V p is desirable.
Considering that the shift of V th depends on the history of the gate voltage V G(t), we redefined the threshold voltage as follows:
| (1) |
where τ 1 and τ 2 are time constants, the main parameters of leaky integration, A 1 and A 2 are the corresponding contributions of these timescales to V th(t), and V 0 is the initial threshold voltage before applying the gate voltage. Subsequently, the drain current can be expressed in terms of V th(t) for the subthreshold and linear regions of FET operation as follows[ 55 ]:
| (2) |
where C is the combined gate and parasitic capacitance and I 0, V ss, and z are phenomenological constants.
Equations (1) and (2) were used to describe the experimental data; the results are presented in Figure 5c–f. Our phenomenological model successfully captured the time evolution of I D for the data obtained at both t p = 5 ms and t p = 10 ms using a single set of parameters: τ 1 = 1.07 s, τ 2 = 5.0 ms, A 1 = 0.44 s−1, A 2 = 15.7 s−1, V 0 = 5.5 V, I 0 = 108 nA, V ss = 0.58 V, z = 1.9 V−1, and C = 0.88 pF. This model enables quantification of the long‐timescale leaky integration of SrTiO3 on the order of one second and lays the groundwork for simulating SrTiO3‐based neural networks.
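For readers who wish to reuse the functional model, the sketch below evaluates a double‐exponential leaky‐integration kernel driven by the gate‐voltage history, using the fitted values of τ 1, τ 2, A 1, A 2, and V 0. The specific convolution form used here is an assumption consistent with the description of Equation (1) rather than a verbatim transcription of it, and the drain‐current expression of Equation (2) is omitted; the sketch only tracks how the threshold voltage shifts pulse by pulse, as in Figure 5b.

```python
import numpy as np

# Sketch of the leaky-integration threshold model with the reported fit values.
# Assumption: Equation (1) is treated here as a double-exponential kernel acting on
# the gate-voltage history, i.e., two leaky integrators s1 and s2 driven by V_G(t);
# Equation (2) (the drain-current expression) is not reproduced.
tau1, tau2 = 1.07, 5.0e-3      # s, reported time constants
A1, A2 = 0.44, 15.7            # s^-1, reported kernel amplitudes
V0 = 5.5                       # V, reported threshold before gating

dt = 1e-4                      # s, integration step
f, t_p, V_p = 80.0, 5e-3, 5.0  # 80 Hz train of 5 ms, 5 V gate pulses (as in Figure 3b)
t = np.arange(0.0, 100.0 / f, dt)
V_G = np.where(t % (1.0 / f) < t_p, V_p, 0.0)

s1 = s2 = 0.0
V_th = np.empty_like(t)
for i, vg in enumerate(V_G):
    s1 += dt * (-s1 / tau1 + A1 * vg)   # slow leaky integrator
    s2 += dt * (-s2 / tau2 + A2 * vg)   # fast leaky integrator
    V_th[i] = V0 - s1 - s2              # threshold shifts down as V_O accumulate

for n in (1, 10, 100):                  # sample near the end of the n-th gate pulse
    idx = min(int(((n - 1) / f + t_p) / dt) - 1, t.size - 1)
    print(f"pulse {n:3d}: V_th = {V_th[idx]:.2f} V")
```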
2.6. Oxygen Vacancy Generation at the SrTiO3 Crystal Surface
In general, thermodynamics dictates that the equilibrium distribution of V O depends on the electrostatic potential V and is given by N vo(V) = N vo,∞ exp(−2qV/k B T), where N vo,∞ is the N vo value in a region far away from the active region of the device, k B is the Boltzmann constant, and T is temperature.[ 56 , 57 ] Therefore, it might seem counterintuitive to observe an increase of N vo with positive gate voltage, as shown in Figure 4e. However, under dynamic or steady‐state conditions (i.e., finite V O generation), N vo deviates from the equilibrium distribution and depends on the concentration of V O at the SrTiO3 crystal surface. The drift–diffusion dynamics are governed by Fick's law ∂N vo/∂t = ∂(D vo∂N vo/∂z)/∂z with the boundary condition at the surface given by D vo∂N vo/∂z = k vo(N vo,s − N vo), where N vo,s is the N vo value at the external surface of SrTiO3, and k vo is the oxygen‐exchange kinetic coefficient at the SrTiO3 surface.[ 56 ] In the limit of fast V O generation at the SrTiO3 surface (k vo ≫ D vo), the boundary condition simplifies to N vo = N vo,s. Under this condition, the increase of N vo at a positive gate voltage can be attributed to the increased V O generation rate at the crystal surface, leading to an accumulation of V O in the region near the SrTiO3 surface.
In the case of SrTiO3 FETs, V O are likely generated near the source and drain electrodes. These electrodes were fabricated by thermally depositing Ti metal directly onto SrTiO3, a process known to generate a relatively high concentration of V O near the surface through oxygen gettering and to render SrTiO3 electrically conductive. This region effectively serves as a reservoir for V O, meaning that V O are readily available near the channel region, thus making the condition k vo ≫ D vo relevant here. Additionally, electrical water splitting can occur at the SrTiO3 surface, which promotes V O generation and, consequently, nonvolatile switching of SrTiO3‐based FETs.[ 58 ] Furthermore, recent studies suggest that electric field‐dependent proton transport can occur through oxides and polymers.[ 59 , 60 , 61 , 62 , 63 ] Protons can induce V O formation when injected onto the surface of SrTiO3 through the gate insulator. While water and protons were not intentionally included in the FET structure, they may be present after lithography or in the environment. Although the value of k vo in this device geometry is not precisely known, k vo can be relatively large (resulting in shorter timescales) compared to D vo, especially when an external bias voltage is applied. Indeed, the time evolution of the threshold voltage V th followed a double‐exponential function, indicating multiple timescales in the system. The observation of multiple timescales possibly reflects the dynamics of V O generation combined with the slower drift–diffusion processes. If a single neuron element can exhibit both short and long timescales simultaneously, it holds the potential to enable the construction of neural networks with richer representations; forthcoming investigations in this direction are therefore anticipated.[ 64 ]
3. Conclusion
We demonstrated that a FET made of the oxide semiconductor SrTiO3 can exhibit leaky‐integration behavior with long timescales. The operating mechanism of the SrTiO3 FET was underpinned by the drift–diffusion of V O, which acted as mobile donors with a long timescale under the influence of an externally applied electric field. Unlike conventional CMOS devices with leaky integration, this device exhibited conductance changes that evolve over long timescales, typically on the order of one second. The FET can operate with much lower energy consumption than conventional CMOS leaky‐integration devices because the leaky integration was induced solely through electrostatic means, without net electrical currents or Joule heating (see Sections S9 and S14, Supporting Information for a discussion of energy consumption). Moreover, we found that the timescales of the leaky integration were insensitive to the channel size of the transistor. This invariance of the leaky‐integration timescale arises from the drift–diffusion mechanism and will aid device miniaturization, contributing to further decreasing energy consumption without affecting the timescale (see Section S9, Supporting Information for a discussion of scalability). The timescale may be adjusted by, for example, engineering the electrostatic profile through back‐gating. These general concepts can be applied to other oxides to realize long timescales in miniaturized devices.[ 65 ] We also highlighted the critical role of timescales in devices used in artificial neural networks, particularly in the context of energy‐efficient neuromorphic computing. A long timescale is indispensable for tasks that process human‐interactive signals. Our findings not only contribute to advancing the field of neuromorphic computing but also offer insights into harnessing emergent phenomena in oxide semiconductors to realize novel computing paradigms.
4. Experimental Section
Fabrication of the SrTiO3 FET
The SrTiO3 FET was fabricated using standard photolithography.[ 50 , 51 ] First, a 2.7 nm‐thick Parylene C film deposited on a commercial SrTiO3 substrate (Shinkosha, Co.) by chemical vapor deposition was partially etched by UV‐ozone irradiation to provide contact areas for the source and drain electrodes, which were subsequently formed by thermal deposition of Ti. A 2.8 nm‐thick Parylene C film was deposited on the entire substrate by chemical vapor deposition to form the gate dielectric followed by a 20 nm‐thick HfO x layer formed by atomic layer deposition. Next, the double‐layer Parylene C film was removed from the bonding pad regions by UV‐ozone irradiation. A thick SiO x layer was deposited on the gate bonding pad regions by radio‐frequency sputtering to ensure electrical isolation. Deposition of thick Ti/Au bonding pads and Ti/Au gate electrodes on the channel regions completed the FET fabrication process. The channel length and width were 2 and 8 µm, respectively.
Characterization of the SrTiO3 FET
The SrTiO3 FET was characterized by applying a time‐dependent voltage to the drain and gate electrodes using a function generator (WF1948, NF Circuit Design, Co.) and measuring the current flowing through the drain electrode using a device current waveform analyzer (CX3324A, Keysight Technologies, Inc.).
Finite Element Analysis of Drift–Diffusion Dynamics
Distributions of electrons and V O were modeled by the finite element method using COMSOL software (COMSOL, Inc.) with the semiconductor and electrochemistry modules, taking the quasi‐Fermi energy, electric potential, and oxygen vacancy concentration as model variables. For simplicity, the calculation was performed in a 2D geometry of 10 × 10 µm and all quantities were assumed to be uniform in the depth direction. The model assumed contributions only from the conduction band of SrTiO3 and that the oxygen vacancy donors were fully activated. Electron transport was calculated based on the standard Drude model with a constant relaxation time, and other effects, such as carrier trapping, were not included. The background concentration of V O was set to 1015 cm−3. The boundary condition for the oxygen vacancy concentration was N vo = 1015 cm−3 both at the boundaries inside SrTiO3 and at the crystal surface, assuming that the oxygen‐vacancy generation rate at the surface was much faster than their diffusion rate.
Handwriting Anomaly Detection
Handwriting anomaly detection was demonstrated by numerically simulating reservoir dynamics using MATLAB software (MathWorks, Inc.) on a personal computer. The trajectory data were first stored in a data file and later fed into the simulation program. The position of a pen was recorded at a time interval of 10 ms as each research participant moved the pen on the input device. The positional coordinates in Figure 2b are in units of pixels scaled by a factor of 0.01. The reservoir was constructed as a randomly connected spiking neural network. Leaky‐integrate‐and‐fire neurons were used as the neuron model. The number of neurons used in the reservoir was 256. The detection was performed by first feeding the handwriting trajectory of person A as training data and recording all the states of the constituent neurons. Next, the handwriting trajectory of a different person B (testing data, anomalous input) was input and the resulting neuron states were recorded. This process was repeated for a different trajectory of person A (testing data, standard input). To quantify the degree of anomaly, the Mahalanobis distance S A of the reservoir states from the training‐phase neuron states was calculated using a time step of 1 ms.[ 49 ] The value of S A became large only when the neuron states differed considerably from the states during the training phase. Two kinds of reservoirs were simulated: one with long time constants (τ n = 1.07 s) and the other with short time constants (τ n = 10 µs).
The dynamics of neural networks were simulated step by step with time intervals of 0.1 ms and 0.1 µs for the long‐timescale (τ n = 1.07 s) and short‐timescale (τ n = 10 µs) reservoirs, respectively. To match the time intervals of the trajectory data with the simulation, the same values were fed into the neural network 100 times (τ n = 1.07 s) and 100 000 times (τ n = 10 µs) per data point before proceeding to the next data point. It took ≈3.8 and 3800 s for the long‐timescale and short‐timescale reservoirs, respectively, to simulate the 1‐s equivalent of the neural network dynamics. The long simulation time for the short‐timescale reservoir was caused by the large number of simulation steps required to minimize the time discretization error. Details of the simulation are outlined in Section S4 (Supporting Information). See Section S11 (Supporting Information) for further discussion of the readout functions of the reservoir.
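The anomaly score S A described above can be computed from recorded reservoir states with a few lines of linear algebra: estimate the mean and covariance of the training‐phase states and evaluate the Mahalanobis distance of each test‐phase state. The sketch below uses random stand‐in state matrices and a reduced neuron count purely to keep the example self‐contained; the actual states came from the 256‐neuron spiking reservoir driven by the pen trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly_score(train_states, test_states, eps=1e-6):
    """Mahalanobis distance of each test-phase reservoir state vector from the
    distribution (mean and covariance) of the training-phase states."""
    mu = train_states.mean(axis=0)
    cov = np.cov(train_states, rowvar=False) + eps * np.eye(train_states.shape[1])
    cov_inv = np.linalg.inv(cov)
    diff = test_states - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Random stand-in state matrices (time steps x neurons); the actual states came
# from the 256-neuron spiking reservoir driven by the pen trajectories.
n_neurons = 16
train = rng.normal(0.0, 1.0, size=(500, n_neurons))       # training-phase statistics
test_standard = rng.normal(0.0, 1.0, size=(200, n_neurons))
test_anomalous = rng.normal(1.5, 2.0, size=(200, n_neurons))

print("standard input:  mean S_A =", round(anomaly_score(train, test_standard).mean(), 2))
print("anomalous input: mean S_A =", round(anomaly_score(train, test_anomalous).mean(), 2))
```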
The handwriting trajectory data in Figure 2b were acquired using a tablet device after informed written consent from all participants. This research was conducted with the approval of the ethics board, Life Science Experiment Management Office, National Institute of Advanced Industrial Science and Technology, Approval No. 20221237C.
Statistical Analysis
In Figure 5b, the data fitting was performed by the Levenberg–Marquardt least‐squares method using Igor Pro software (WaveMetrics, Inc.). The sample size was 100 and the data points were obtained from the I D − V G curves shown in Figure 5a. No further pre‐processing of the data was conducted. Single‐ and double‐exponential functions of the form y 0 + Aexp (− γ x) and y 0 + A 1exp (− γ 1 x) + A 2exp (− γ 2 x), respectively, were used as the fitting functions. The fittings yielded the coefficients y 0 = 3.26 ± 0.017 V, A = 0.80 ± 0.014 V, γ = 0.021 ± 0.0011 for the single‐exponential function and y 0 = 3.09 ± 0.024 V, A 1 = 0.87 ± 0.018 V, γ 1 = 0.0127 ± 0.00067, A 2 = 0.247 ± 0.0098 V, γ 2 = 0.30 ± 0.025 for the double‐exponential function. No statistical tests were performed in this study.
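An equivalent fit can be performed with any Levenberg–Marquardt implementation. The sketch below regenerates stand‐in V th(N) data from the reported double‐exponential coefficients (with small added noise, since the measured values are not reproduced here) and compares single‐ and double‐exponential fits; the double‐exponential model should yield the markedly smaller residual, as in Figure 5b.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(n, y0, a, g):
    return y0 + a * np.exp(-g * n)

def double_exp(n, y0, a1, g1, a2, g2):
    return y0 + a1 * np.exp(-g1 * n) + a2 * np.exp(-g2 * n)

# Stand-in V_th(N) data generated from the reported double-exponential coefficients
# plus small noise; the measured threshold voltages themselves are not reproduced.
rng = np.random.default_rng(1)
n = np.arange(1, 101, dtype=float)
v_th = double_exp(n, 3.09, 0.87, 0.0127, 0.247, 0.30) + rng.normal(0.0, 0.01, n.size)

# curve_fit defaults to the Levenberg-Marquardt algorithm for unconstrained problems.
p_single, _ = curve_fit(single_exp, n, v_th, p0=(3.0, 1.0, 0.02))
p_double, _ = curve_fit(double_exp, n, v_th, p0=(3.0, 1.0, 0.01, 0.3, 0.3))

for label, model, p in (("single", single_exp, p_single), ("double", double_exp, p_double)):
    rss = float(np.sum((v_th - model(n, *p)) ** 2))
    print(f"{label}-exponential fit: residual sum of squares = {rss:.4f}")
```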
Conflict of Interest
The authors declare no conflict of interest.
Supporting information
Acknowledgements
This work was funded by the Japan Science and Technology Agency (JST) CREST Grant No. JPMJCR19K2. The authors thank Natasha Lundin, PhD, from Edanz (https://jp.edanz.com/ac) for editing a draft of this manuscript.
Inoue H., Tamura H., Kitoh A., Chen X., Byambadorj Z., Yajima T., Hotta Y., Iizuka T., Tanaka G., Inoue I. H., Taming Prolonged Ionic Drift–Diffusion Dynamics for Brain‐Inspired Computation. Adv. Mater. 2024, 37, 2407326. 10.1002/adma.202407326
Contributor Information
Hisashi Inoue, Email: hisashi.inoue@aist.go.jp.
Isao H. Inoue, Email: i.inoue@aist.go.jp.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author on reasonable request.
References
1. Wang B., Ke W., Guang J., Chen G., Yin L., Deng S., He Q., Liu Y., He T., Zheng R., Jiang Y., Zhang X., Li T., Luan G., Lu H. D., Zhang M., Zhang X., Shu Y., Front. Cell. Neurosci. 2016, 10, 239.
2. O'Connor D. H., Peron S. P., Huber D., Svoboda K., Neuron 2010, 67, 1048.
3. Shafi M., Zhou Y., Quintana J., Chow C., Fuster J., Bodner M., Neuroscience 2007, 146, 1082.
4. Markovic D., Mizrahi A., Querlioz D., Grollier J., Nat. Rev. Phys. 2020, 2, 499.
5. Buzsáki G., Rhythms of the Brain, Oxford University Press, New York, NY 2006.
6. Tanaka G., Yamane T., Héroux J. B., Nakane R., Kanazawa N., Takeda S., Numata H., Nakano D., Hirose A., Neural Netw. 2019, 115, 100.
7. Roy K., Jaiswal A., Panda P., Nature 2019, 575, 607.
8. Merolla P. A., Arthur J. V., Alvarez‐Icaza R., Cassidy A. S., Sawada J., Akopyan F., Jackson B. L., Imam N., Guo C., Nakamura Y., Brezzo B., Vo I., Esser S. K., Appuswamy R., Taba B., Amir A., Flickner M. D., Risk W. P., Manohar R., Modha D. S., Science 2014, 345, 668.
9. Indiveri G., Chicca E., Douglas R., IEEE Trans. Neural Netw. 2006, 17, 211.
10. Indiveri G., Linares‐Barranco B., Hamilton T., van Schaik A., Etienne‐Cummings R., Delbruck T., Liu S.‐C., Dudek P., Häfliger P., Renaud S., Schemmel J., Cauwenberghs G., Arthur J., Hynna K., Folowosele F., Saïghi S., Serrano‐Gotarredona T., Wijekoon J., Wang Y., Boahen K., Front. Neurosci. 2011, 5, 73.
11. Joubert A., Belhadj B., Temam O., Héliot R., The 2012 International Joint Conference on Neural Networks, 2012.
12. Wu X., Saxena V., Zhu K., Balagopal S., IEEE Trans. Circuits and Syst. II: Express Br. 2015, 62, 1088.
13. Moradi S., Bhave S. A., Manohar R., in IEEE Symp. Series on Computational Intelligence, IEEE, Honolulu 2017.
14. Aamir S. A., Muller P., Kiene G., Kriener L., Stradmann Y., Grubl A., Schemmel J., Meier K., IEEE Trans. Biomed. Circuits Syst. 2018, 12, 1027.
15. Chen X., Byambadorj Z., Yajima T., Inoue H., Inoue I. H., Iizuka T., Appl. Phys. Lett. 2023, 122, 074102.
16. Chen X., Yajima T., Inoue I. H., Iizuka T., Jpn. J. Appl. Phys. 2022, 61, SC1051.
17. Waser R., Aono M., Nat. Mater. 2007, 6, 833.
18. Lee M.‐J., Lee C. B., Lee D., Lee S. R., Chang M., Hur J. H., Kim Y.‐B., Kim C.‐J., Seo D. H., Seo S., Chung U.‐I., Yoo I.‐K., Kim K., Nat. Mater. 2011, 10, 625.
19. Wang Z., Joshi S., Savel'ev S. E., Jiang H., Midya R., Lin P., Hu M., Ge N., Strachan J. P., Li Z., Wu Q., Barnell M., Li G.‐L., Xin H. L., Williams R. S., Xia Q., Yang J. J., Nat. Mater. 2017, 16, 101.
20. Zhang X., Wang W., Liu Q., Zhao X., Wei J., Cao R., Yao Z., Zhu X., Zhang F., Lv H., Long S., Liu M., IEEE Electron Device Lett. 2018, 39, 308.
21. Choi S., Tan S. H., Li Z., Kim Y., Choi C., Chen P.‐Y., Yeon H., Yu S., Kim J., Nat. Mater. 2018, 17, 335.
22. Lee D., Kwak M., Moon K., Choi W., Park J., Yoo J., Song J., Lim S., Sung C., Banerjee W., Hwang H., Adv. Electron. Mater. 2019, 5, 1800866.
23. Yang J.‐Q., Wang R., Wang Z.‐P., Ma Q.‐Y., Mao J.‐Y., Ren Y., Yang X., Zhou Y., Han S.‐T., Nano Energy 2020, 74, 104828.
24. Kumar S., Williams R. S., Wang Z., Nature 2020, 585, 518.
25. Duan Q., Jing Z., Zou X., Wang Y., Yang K., Zhang T., Wu S., Huang R., Yang Y., Nat. Commun. 2020, 11, 3399.
26. Luo J., Yu L., Liu T., Yang M., Fu Z., Liang Z., Chen L., Chen C., Liu S., Wu S., Huang Q., Huang R., in IEEE Int. Electron Devices Meeting, IEEE, San Francisco 2019.
27. Wang Z., Crafton B., Gomez J., Xu R., Luo A., Krivokapic Z., Martin L., Datta S., Raychowdhury A., Khan A. I., in IEEE Int. Electron Devices Meeting, IEEE, San Francisco 2018.
28. Khan A. I., Keshavarzi A., Datta S., Nat. Electron. 2020, 3, 588.
29. Dutta S., Schafer C., Gomez J., Ni K., Joshi S., Datta S., Front. Neurosci. 2020, 14, 634.
30. Cao R., Zhang X., Liu S., Lu J., Wang Y., Jiang H., Yang Y., Sun Y., Wei W., Wang J., Xu H., Li Q., Liu Q., Nat. Commun. 2022, 13, 7018.
31. Lee G., Kim H. J., Shin E. J., Kim S., Lee T. I., Cho B. J., IEEE Electron Device Lett. 2022, 43, 1375.
32. Sengupta A., Panda P., Wijesinghe P., Kim Y., Roy K., Sci. Rep. 2016, 6, 30039.
33. Wu M.‐H., Hong M.‐C., Chang C.‐C., Sahu P., Wei J.‐H., Lee H.‐Y., Shcu S.‐S., Hou T.‐H., in Symp. on VLSI Technology, IEEE, Kyoto 2019.
34. Wright C. D., Hosseini P., Diosdado J. A. V., Adv. Funct. Mater. 2013, 23, 2248.
35. Tuma T., Pantazi A., Le Gallo M., Sebastian A., Eleftheriou E., Nat. Nanotechnol. 2016, 11, 693.
36. Stoliar P., Tranchant J., Corraze B., Janod E., Besland M.‐P., Tesler F., Rozenberg M., Cario L., Adv. Funct. Mater. 2017, 27, 1604740.
37. Yi W., Tsang K. K., Lam S. K., Bai X., Crowell J. A., Flores E. A., Nat. Commun. 2018, 9, 4661.
38. Park S.‐O., Jeong H., Park J., Bae J., Choi S., Nat. Commun. 2022, 13, 2888.
39. Smith A. J., Blumenfeld H., Behar K. L., Rothman D. L., Shulman R. G., Hyder F., Proc. Natl. Acad. Sci. USA 2002, 99, 10765.
40. Huang M., Schwacke M., Onen M., Del Alamo J., Li J., Yildiz B., Adv. Mater. 2023, 35, 2205169.
41. Zidan M. A., Strachan J. P., Lu W. D., Nat. Electron. 2018, 1, 22.
42. Li Y., Wang Z., Midya R., Xia Q., Yang J. J., J. Phys. D: Appl. Phys. 2018, 51, 503002.
43. Zhou G., Wang Z., Sun B., Zhou F., Sun L., Zhao H., Hu X., Peng X., Yan J., Wang H., Wang W., Li J., Yan B., Kuang D., Wang Y., Wang L., Duan S., Adv. Electron. Mater. 2022, 8, 2101127.
44. Tufte O. N., Chapman P. W., Phys. Rev. 1967, 155, 796.
45. Frederikse H. P. R., Hosler W. R., Phys. Rev. 1967, 161, 822.
46. Inoue H., Tamura H., Kitoh A., Chen X., Byambadorj Z., Yajima T., Hotta Y., Iizuka T., Tanaka G., Inoue I. H., in IEEE Symp. on VLSI Technology and Circuits, IEEE, Kyoto 2023.
47. Tamura H., Tanaka G., Artificial Neural Networks and Machine Learning, ICANN 2020, Lecture Notes in Computer Science, (Eds: Farkaš I., Masulli P., Wermter S.), 12397, Springer, Cham 2020, pp. 459–469.
48. Tamura H., Tanaka G., Neural Netw. 2021, 143, 550.
49. Tamura H., Fujiwara K., Aihara K., Tanaka G., TechRxiv 2023, 22678774.
50. Stoliar P., Schulman A., Kitoh A., Sawa A., Inoue I. H., in IEEE Int. Electron Devices Meeting, IEEE, San Francisco 2017.
51. Kumar N., Kitoh A., Inoue I. H., Sci. Rep. 2016, 6, 25789.
52. Kalabukhov A., Gunnarsson R., Börjesson J., Olsson E., Claeson T., Winkler D., Phys. Rev. B 2007, 75, 121404.
53. Moos R., Hardtl K. H., J. Am. Ceram. Soc. 1997, 80, 2549.
54. Inoue H., Yoon H., Merz T. A., Swartz A. G., Hong S. S., Hikita Y., Hwang H. Y., Appl. Phys. Lett. 2019, 114, 231605.
55. Sze S. M., Lee M. K., Semiconductor Devices: Physics and Technology, John Wiley & Sons, New York, NY 2012.
56. De Souza R. A., Metlenko V., Park D., Weirich T. E., Phys. Rev. B 2012, 85, 174109.
57. De Souza R. A., Martin M., Phys. Chem. Chem. Phys. 2008, 10, 2356.
58. Ohta H., Sato Y., Kato T., Kim S., Nomura K., Ikuhara Y., Hosono H., Nat. Commun. 2010, 1, 118.
59. Kreuer K. D., Chem. Mater. 1996, 8, 610.
60. Zhu L. Q., Wan C. J., Guo L. Q., Shi Y., Wan Q., Nat. Commun. 2014, 5, 3158.
61. Guo L. Q., Han H., Zhu L. Q., Guo Y. B., Yu F., Ren Z. Y., Xiao H., Ge Z. Y., Ding J. N., ACS Appl. Mater. Interfaces 2019, 11, 28352.
62. Nikam R. D., Lee J., Choi W., Banerjee W., Kwak M., Yadav M., Hwang H., Small 2021, 17, 2103543.
63. Onen M., Emond N., Li J., Yildiz B., del Alamo J. A., Nano Lett. 2021, 21, 6111.
64. Tanaka G., Matsumori T., Yoshida H., Aihara K., Phys. Rev. Res. 2022, 4, L032014.
65. Coll M., Fontcuberta J., Althammer M., Bibes M., Boschker H., Calleja A., Cheng G., Cuoco M., Dittmann R., Dkhil B., Baggari I. E., Fanciulli M., Fina I., Fortunato E., Frontera C., Fujita S., Garcia V., Goennenwein S., Granqvist C.‐G., Grollier J., Gross R., Hagfeldt A., Herranz G., Hono K., Houwman E., Huijben M., Kalaboukhov A., Keeble D., Koster G., Kourkoutis L., et al., Appl. Surf. Sci. 2019, 482, 1.