Advanced Science
. 2025 Jun 25;12(33):e07060. doi: 10.1002/advs.202507060

Programmable DNA‐Based Molecular Neural Network Biocomputing Circuits for Solving Partial Differential Equations

Yijun Xiao 1, Alfonso Rodríguez‐Patón 2, Jianmin Wang 3, Pan Zheng 4, Tongmao Ma 1, Tao Song 1
PMCID: PMC12412475  PMID: 40558108

Abstract

Partial differential equations (PDEs), essential for modeling dynamic systems, persistently confront computational complexity bottlenecks in high‐dimensional problems; DNA‐based parallel computing architectures, leveraging their discrete mathematics merits, provide transformative potential by harnessing inherent molecular parallelism. This research introduces an augmented matrix‐based DNA molecular neural network to achieve molecular‐level solving of the biological Brusselator PDEs. Two crucial innovations address existing technological constraints: (i) an augmented matrix‐based error‐feedback DNA molecular neural network, enabling multidimensional parameter integration through DNA strand displacement cascades and iterative weight optimization; (ii) the incorporation of membrane diffusion theory and division operation principles into DNA circuits to develop partial differential calculation modules. Simulation results demonstrate that the augmented matrix‐based DNA neural network efficiently and accurately learns target functions; integrating the proposed partial derivative computation strategy, the architecture solves the biological Brusselator PDE numerically with errors below 0.02 within 12,500 s. This work establishes a novel intelligent non‐silicon‐based computational framework, providing theoretical foundations and potential implementation paradigms for future bio‐inspired computing and unconventional computing devices in life science research.

Keywords: chemical reaction networks (CRNs), DNA computing, DNA strand displacement reactions, neural networks circuits, partial differential equations


This research introduces an augmented matrix‐based DNA molecular neural network to achieve molecular‐level solving of biological Brusselator PDEs. Crucial innovations include: (i) an augmented matrix‐based DNA molecular neural network, enabling multidimensional parameter integration through DNA strand displacement cascades and iterative weight optimization; (ii) incorporating membrane diffusion theory with division operation principles into DNA circuits to develop partial differential calculation modules.


1. Introduction

Partial differential equations (PDEs) are the core mathematical tools used to characterize the dynamic behavior of physical, biological, and engineering systems. Efficient solutions to PDEs are essential for critical areas such as climate modeling and drug diffusion simulation.[ 1 , 2 , 3 ] However, traditional numerical methods, such as the finite element method (FEM) and the finite difference method (FDM), encounter fundamental challenges in solving high‐dimensional nonlinear PDEs: the computational complexity and cost increase exponentially as dimensionality grows, a phenomenon known as the curse of dimensionality. In recent years, biomolecular systems have garnered significant attention due to their natural parallelism and energy‐efficient properties,[ 4 , 5 , 6 , 7 , 8 ] providing new avenues for nontraditional computational frameworks. The programmable and parallel computational capabilities of DNA molecules,[ 9 ] realized through strand displacement reactions, provide a new route to continuous‐space differential arithmetic: through gradient distributions of molecular concentration and dynamic reaction networks, the differential operators of PDEs can be mapped directly, circumventing the accuracy loss caused by traditional discretization. This bio‐physical fusion computing architecture makes it possible to break through traditional computational complexity boundaries in the development of a new generation of PDE solvers.

Solving high‐dimensional PDEs remains a formidable challenge in computational science, where conventional numerical methods suffer from the curse of dimensionality that compromises both accuracy and efficiency. Recent advances in deep neural networks leverage nonlinear activation functions and continuous function approximation to adaptively capture high‐dimensional and nonlinear features, eliminating the requirement for precise analytical models or mesh generation. However, conventional silicon‐based neural architectures face inherent limitations in energy efficiency and physical interconnect density imposed by von Neumann architectures, hindering their capability for massively parallel computation of ultra‐large‐scale PDE systems.[ 10 , 11 ]

Non‐silicon‐based DNA neural networks provide a transformative paradigm to resolve this conflict. Molecular reactions in DNA computation exhibit extraordinary parallelism ("epi‐bit" parallel DNA molecular data storage[ 12 ]) and ultra‐low energy consumption (DNA‐based programmable gate arrays, DPGA[ 13 ]). While neural networks demonstrate superior approximation capacity for PDE solutions, their electronic implementations encounter fundamental bottlenecks in power dissipation and scalability. The chemical implementation of neural networks through programmable DNA reactions offers a disruptive computational framework that inherently bypasses these limitations through massive molecular parallelism. Cherry and Qian[ 14 ] extended the molecular pattern recognition capabilities of DNA neural networks by constructing a winner‐takes‐all neural network strategy that recognizes nine molecular patterns. Zou et al.[ 15 , 16 ] designed a nonlinear neural network based on DNA strand displacement reactions, which was then utilized to perform standard quadratic function learning; they also tested the robustness of the nonlinear neural network to variations in DNA strand concentration, strand displacement reaction rate, and noise. Their DNA molecular neural network system exhibits adaptive behavior and supervised learning capabilities, demonstrating the promise of DNA strand displacement circuits for artificial intelligence applications. Zou[ 17 ] also proposed a novel activation function that was embedded in a DNA nonlinear neural network to achieve the fitting and prediction of specific nonlinear functions. The function can be realized by enzyme‐free DNA hybridization reactions and has good nesting properties, allowing complete circuits to be constructed in cascade with other DNA reactions.
Xiong et al.[ 18 ] addressed the key scientific challenge of developing ultra‐large‐scale DNA molecular reaction networks: with a molecular switch gate architecture as the basic circuit component, they achieved independent regulation of the signal transmission function and the weight‐assignment function, enabling weight sharing through modularization of the weight‐regulating region and the recognition region. A large‐scale DNA convolutional neural network (ConvNet) was constructed by cascading several modular molecular computing units with synthetic DNA regulatory circuits, capable of recognizing and classifying 32 classes of 144‐bit molecular information profiles. The developed DNA neural network has robust molecular information processing capacity and is expected to be utilized in intelligent biosensing and other fields.

DNA's programmability, adaptability, and biocompatibility[ 19 ] make it an ideal candidate for integrating computational and control applications in synthetic biology. DNA strand displacement (DSD) reaction‐driven architectures enable the development of biochemical circuits from DNA strand sequences, allowing for digital computing and analog simulation.[ 20 , 21 , 22 , 23 , 24 , 25 ] Previous research has demonstrated the promise of DNA circuits for pattern classification and optimization issues,[ 26 , 27 , 28 ] but the implementation of continuous mathematical operations (such as partial derivative computations) remains a challenge.

This research provides a methodology for solving partial differential equations using a DNA molecular neural network based on an augmented matrix. Its main contributions include: (i) constructing a DSD reaction‐driven augmented matrix‐based DNA molecular neural network, mapping the weights to DNA strand concentrations through an augmented matrix encoding strategy, and realizing multi‐parameter combination output by exploiting the parallelism of DNA toehold‐mediated strand displacement reactions; (ii) deeply integrating membrane diffusion theory and division operation principles into DNA circuits, proposing a scheme for calculating partial derivatives using DSD reactions and then simulating the partial differentiation process, which breaks through the theoretical limitations of molecular systems in continuous mathematical operations; (iii) constructing the first DNA molecular neural network architecture to support the solution of partial differential equations, which achieves adaptive adjustment of the weight parameters through an error‐feedback mechanism. This research develops new partial differential computational strategies for biomolecular computers while also providing a non‐silicon‐based computational device, facilitating a significant transition from discrete logic to continuous mathematics in biocomputing.

2. Methodology

2.1. CRNs‐Based Multi‐Combinatorial Parameter Parallel Output Neural Networks

2.1.1. CRNs‐Based Error Back Propagation Neural Network

Assuming N input parameters of a neural network, that is $X(k) = [x_1(k), x_2(k), \ldots, x_N(k)]^T$, weights $(W_i^1)^T = [w_{i0}^1, w_{i1}^1, \ldots, w_{iL}^1]$ are utilized to connect the i‐th unit of the input layer to multiple units of the hidden layer, where i = 1, 2, 3, …, N and (L + 1) denotes the number of units in the hidden layer; weights $(W_n^2)^T = [w_{n0}^2, w_{n1}^2, \ldots, w_{nM}^2]$ are used to connect the n‐th unit of the hidden layer with multiple units of the output layer, where (M + 1) denotes the number of units in the output layer; $\theta_n^1$ and $\theta_j^2$ are the thresholds of the hidden and output layers, respectively. The outputs of the hidden and output layers are denoted by $y_n^1(k)$ and $y_j(k)$, respectively, where n = 1, 2, 3, …, L and j = 1, 2, 3, …, M. The entire procedure depicted in Figure 1 can be characterized as:

$s_n^1(k) = \sum_{i=1}^{N} w_{in}^1 x_i(k) - \theta_n^1, \quad y_n^1(k) = \psi(s_n^1(k)), \quad s_j^2(k) = \sum_{n=1}^{L} w_{nj}^2 y_n^1(k) - \theta_j^2, \quad y_j(k) = \psi(s_j^2(k))$ (1)

where ψ(*) represents the activation function.
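As a numerical companion to Equation (1), the forward pass below sketches the two-layer network with the quadratic activation ψ(s) = s² used as the example later in this section; the dimensions, random weights, and zero thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forward(x, W1, theta1, W2, theta2, psi=lambda s: s ** 2):
    """Forward pass of the two-layer network in Equation (1).

    x: input vector X(k); W1: input-to-hidden weights; theta1: hidden thresholds;
    W2: hidden-to-output weights; theta2: output thresholds; psi: activation.
    """
    s1 = W1.T @ x - theta1   # s_n^1(k) = sum_i w_in^1 x_i(k) - theta_n^1
    y1 = psi(s1)             # y_n^1(k) = psi(s_n^1(k))
    s2 = W2.T @ y1 - theta2  # s_j^2(k) = sum_n w_nj^2 y_n^1(k) - theta_j^2
    return psi(s2)           # y_j(k)

# Toy dimensions (hypothetical): N = 2 inputs, L = 3 hidden units, M = 1 output.
rng = np.random.default_rng(0)
x = np.array([0.5, 0.2])
W1 = rng.uniform(0.0, 1.0, (2, 3))
W2 = rng.uniform(0.0, 1.0, (3, 1))
y = forward(x, W1, np.zeros(3), W2, np.zeros(1))
print(y.shape)  # (1,)
```

With the quadratic activation every layer output is nonnegative, which matches the concentration-based encoding used by the CRNs below.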

Figure 1.

Figure 1

CRNs‐based error back propagation neural network.

CRNs‐based input layer weighted sum (including threshold) calculation:

$X_i + W_{in} + I_n \xrightarrow{k_1} X_i + 2I_n, \quad I_n + S_n \xrightarrow{k_1} S_n, \quad I_n + \theta_n \xrightarrow{k_1} \varnothing, \qquad i = 1, 2, \ldots, N, \; n = 1, 2, \ldots, L$ (2)

CRNs‐based hidden layer activation function: Taking the activation function ψ(*) = (*)2 as an example, the corresponding CRNs can be described as follows:

$2s_n^1 \xrightarrow{k} 2s_n^1 + y_n^1, \quad y_n^1 \xrightarrow{k} \varnothing$ (3)

CRNs‐based output layer activation function: Taking the activation function ψ(*) = (*)2 as an example, the corresponding CRNs can be given:

$2s_j^2 \xrightarrow{k} 2s_j^2 + y_j, \quad y_j \xrightarrow{k} \varnothing$ (4)

CRNs‐based output layer weighted sum (including threshold) calculation:

(5)

CRNs‐based weight update mechanism: The weight update of the back‐propagation neural network can be achieved utilizing the gradient descent algorithm:

$w_{in}^1(k+1) = w_{in}^1(k) + \alpha \Delta w_{in}^1(k), \quad w_{nj}^2(k+1) = w_{nj}^2(k) + \alpha \Delta w_{nj}^2(k)$ (6)

where α (0 < α < 1) is the learning step size, $\Delta w_{in}^1(k) = \delta_n^1(k)\, x_i(k)$, and $\Delta w_{nj}^2(k) = \delta_j^2(k)\, y_n^1(k)$; $\delta_n^1$ and $\delta_j^2$ are the back‐propagated errors.

Remark 1

The relationship between the neural network weight update and the loss function Loss can be indicated as follows:

$w_i^{new} = w_i^{old} - \eta \dfrac{\partial (Loss)}{\partial w}$ (7)

When substituting Equation (7) into Equation (6), the expression can be reformulated as:

$w_{in}^1(k+1) = w_{in}^1(k) + \alpha \Delta w_{in}^1(k) = w_{in}^1(k) - \alpha \dfrac{\partial (Loss)}{\partial w_{in}^1}, \quad w_{nj}^2(k+1) = w_{nj}^2(k) + \alpha \Delta w_{nj}^2(k) = w_{nj}^2(k) - \alpha \dfrac{\partial (Loss)}{\partial w_{nj}^2}$ (8)
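The update rule of Equations (6)-(8) can be sketched in silico as plain gradient descent; the single linear unit, quadratic loss, input, target, and step size below are hypothetical choices for illustration only.

```python
import numpy as np

def update(w, grad_loss, alpha=0.1):
    # w(k+1) = w(k) - alpha * dLoss/dw, as in Equation (8); alpha in (0, 1)
    return w - alpha * grad_loss

# Hypothetical one-unit example: Loss = (w*x - d)^2 with input x and target d.
w = np.array([0.8])
x, d = 2.0, 1.0
for _ in range(50):
    grad = 2.0 * (w * x - d) * x  # dLoss/dw for the quadratic loss above
    w = update(w, grad, alpha=0.05)
print(float(w[0]))  # converges toward d/x = 0.5
```

Each iteration contracts the error by a constant factor, which mirrors the iterative weight optimization the CRN realizes through the reversible reactions of Equation (9).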

CRNs‐based weight update mechanism can be described as:

$W_{in} + La_{in} \underset{k_6}{\overset{k_5}{\rightleftharpoons}} Lb_{in}, \quad V_{nj} + Lc_{in} \underset{k_8}{\overset{k_7}{\rightleftharpoons}} Ld_{in}, \quad S_n + Le_n \underset{k_{10}}{\overset{k_9}{\rightleftharpoons}} Lf_n, \quad y_n + Lg_n \underset{k_{12}}{\overset{k_{11}}{\rightleftharpoons}} Lh_n, \quad Y_j + Lm_j \underset{k_{14}}{\overset{k_{13}}{\rightleftharpoons}} Ln_j$ (9)

where $k_m$ (m = 5, 7, 9, 11, 13) is the forward reaction rate and $k_n$ (n = 6, 8, 10, 12, 14) is the reverse reaction rate. The initial concentrations of [$W_{in}$, $La_{in}$, $Lb_{in}$], [$V_{nj}$, $Lc_{in}$, $Ld_{in}$], [$S_n$, $Le_n$, $Lf_n$], [$y_n$, $Lg_n$, $Lh_n$], and [$Y_j$, $Lm_j$, $Ln_j$] satisfy the following conditions:

$[W_{in}]_0 \gg [Lb_{in}]_0,\; [La_{in}]_0 \gg [Lb_{in}]_0; \quad [V_{nj}]_0 \gg [Ld_{in}]_0,\; [Lc_{in}]_0 \gg [Ld_{in}]_0; \quad [S_n]_0 \gg [Lf_n]_0,\; [Le_n]_0 \gg [Lf_n]_0; \quad [y_n]_0 \gg [Lh_n]_0,\; [Lg_n]_0 \gg [Lh_n]_0; \quad [Y_j]_0 \gg [Ln_j]_0,\; [Lm_j]_0 \gg [Ln_j]_0$ (10)

According to Reactions (2)-(5), the ordinary differential equations (ODEs) of substance $I_n$ and the associated hidden‐ and output‐layer species can be written as:

(11)

When the concentrations of these substances achieve steady‐state equilibrium, that is, $\frac{dI_n}{dt} = 0$ and likewise for the remaining species, the following results can be obtained:

(12)

2.1.2. Realizing Multi‐Combination Variable Output Using Augmented Matrix Neural Network

Figure 2 depicts the structure of a backpropagation neural network utilizing the augmented matrix. Through the augmented matrix shown in the left sub‐figure, the network's input parameter setting can be controlled indirectly via a multiplication operation, allowing the network to produce outputs for multiple parameter combinations.

Figure 2.

Figure 2

Error back propagation neural network architecture diagram based on augmented matrix.

The functional relationship between the input and output layers of the right sub‐figure has been rigorously demonstrated in Equation (12). Applying the augmented matrix to the first sub‐equation is equivalent to introducing an additional parameter ξ into $\sum_{i=1}^{N} w_{in}^1 x_i(k)$, denoted as $\sum_{i=1}^{N} \xi\, w_{in}^1 x_i(k)$, where ξ is assigned a value of either 0 or 1. Note that the augmented matrix in Figure 2 specifically refers to an augmented identity matrix, constructed by appending an additional row of unity elements to the conventional identity matrix. This particular matrix configuration constitutes a critical implementation aspect of the proposed methodology.

Assuming that the augmented matrix is the augmented identity matrix, the following results can be obtained:

$X(k) \cdot Matrix = \left[x_1(k), x_2(k), \ldots, x_N(k)\right]^T \circ \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} x_1(k) & 0 & \cdots & 0 \\ 0 & x_2(k) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & x_N(k) \\ x_1(k) & x_2(k) & \cdots & x_N(k) \end{bmatrix}$ (13)
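A minimal sketch of Equation (13): multiplying the augmented identity matrix row-wise against the input vector routes each input through on its own while the appended all-ones row passes every input simultaneously. NumPy broadcasting stands in here for the molecular multiplication; the input values are illustrative.

```python
import numpy as np

def augmented_routing(x):
    """Row-wise product of the augmented identity matrix with input x.

    The first N rows each select a single input; the appended all-ones
    row carries the full input combination, as in Equation (13).
    """
    N = len(x)
    M = np.vstack([np.eye(N), np.ones(N)])  # augmented identity matrix
    return M * x                            # x broadcasts across each row

x = np.array([2.0, 3.0, 5.0])
R = augmented_routing(x)
print(R)  # diagonal of x on top, full row [2, 3, 5] appended
```

The last row is what lets a single network expose both single-variable and combined-variable outputs in parallel.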

Equation (12) describes the relationship between each neural network output $d_j$ and the input parameters $x_i$. By combining Equations (12) and (13), the following neural network output matrix is obtained.

(14)

2.2. Principle of CRNs‐Based Partial Derivative Calculation

Further analysis of Equation (14) yields:

(15)

In Equation (15), $d_{combi}(x_1, x_2, x_3, \ldots, x_N)$ denotes the sum of all output terms that combine at least two of the variables $x_1, x_2, x_3, \ldots, x_N$. If $d_{combi}(x_1, x_2, x_3, \ldots, x_N) = 0$, the function learned by the neural network contains no composite terms; conversely, $d_{combi}(x_1, x_2, x_3, \ldots, x_N) \neq 0$ means that the learned function has composite terms. To determine whether composite terms are present, the partial derivatives with respect to the input variables $x_1, x_2, x_3, \ldots, x_N$ are considered.

According to the partial derivative calculation principle, the corresponding partial derivative is calculated for the neural network output $\hat{d}_j$:

$\dfrac{\partial [\hat{d}_j(x_1, x_2, x_3, \ldots, x_N)]}{\partial x_i} = \dfrac{\partial [\hat{d}_j(x_i)]}{\partial x_i} + \dfrac{\partial [d_{combi}(x_1, x_2, x_3, \ldots, x_N)]}{\partial x_i}, \quad \dfrac{\partial^2 [\hat{d}_j(x_1, x_2, x_3, \ldots, x_N)]}{\partial x_i^2} = \dfrac{\partial^2 [\hat{d}_j(x_i)]}{\partial x_i^2} + \dfrac{\partial^2 [d_{combi}(x_1, x_2, x_3, \ldots, x_N)]}{\partial x_i^2}, \quad i = 1, 2, 3, \ldots, N$ (16)

2.2.1. Partial Derivatives Calculation of Single Variable Terms

If a function learned by the neural network does not include any composite terms, that is, $d_{combi}(x_1, x_2, x_3, \ldots, x_N) = 0$, the following considerations are necessary when calculating partial derivatives:

$\dfrac{dX_i}{dt}(t) = \lim_{\varepsilon \to 0^+} \dfrac{X_i(t) - X_i(t-\varepsilon)}{\varepsilon}, \quad i = 1, 2, 3, \ldots, N$ (17)

where $X_i$ is the output of the first N neural networks in Equation (14).

The partial derivative computation process based on the membrane diffusion method, shown in Figure 3, consists of two stages: a first‐order stage and a second‐order stage. The second‐order derivative is obtained by differentiating the result of the first‐order derivative calculation a second time.

Figure 3.

Figure 3

Architecture diagram of the partial differential derivation solution based on membrane diffusion theory.

Consider the first‐order derivative CRN process based on membrane diffusion, which consists of one input, an intermediate species, and two outputs. The external input $U_{aext}$ exists in large quantities outside the membrane, so its concentration remains unaffected by the dynamics. Once it passes through the membrane, it is labeled $U_{ain}$. Substance $U_{aext}$ activates substance $D_{au}^+$, and substance $U_{ain}$ activates substance $D_{au}^-$. In addition, a rapid annihilation reaction removes $D_{au}^+$ and $D_{au}^-$.

The first‐order core derivative CRN can be composed of exactly the following 12 reactions:

$U_{aext} \xrightarrow{k_{diff}} U_{aext} + U_{ain}, \quad U_{ain} \xrightarrow{k_{diff}} \varnothing, \quad U_{aext} \xrightarrow{k \cdot k_{diff}} U_{aext} + D_{au}^+, \quad D_{au}^+ \xrightarrow{k} \varnothing, \quad U_{ain} \xrightarrow{k \cdot k_{diff}} U_{ain} + D_{au}^-, \quad D_{au}^- \xrightarrow{k} \varnothing, \quad D_{au}^+ + D_{au}^- \xrightarrow{k_{fast}} \varnothing, \quad D_{au}^+ \xrightarrow{\alpha_a} D_{au}^+ + U_{mm}, \quad D_{au}^- \xrightarrow{\beta_a} D_{au}^- + U_{nn}, \quad U_{mm} \xrightarrow{\delta_a} \varnothing, \quad U_{nn} \xrightarrow{\delta_a} \varnothing, \quad U_{mm} + U_{nn} \xrightarrow{\gamma_a} \varnothing$ (18)

By applying mass action kinetics to the initial seven reactions outlined in Equation (18), the following outcomes can be derived:

$\dfrac{dU_{ain}}{dt} = k_{diff}(U_{aext} - U_{ain}), \quad \dfrac{dD_{au}^+}{dt} = k \cdot k_{diff}\, U_{aext} - k\, D_{au}^+ - k_{fast}\, D_{au}^+ D_{au}^-, \quad \dfrac{dD_{au}^-}{dt} = k \cdot k_{diff}\, U_{ain} - k\, D_{au}^- - k_{fast}\, D_{au}^+ D_{au}^-$ (19)

Assuming $U_{au} = D_{au}^+ - D_{au}^-$, Equation (20) can be obtained from Equation (19):

$\dfrac{dU_{au}}{dt} = \dfrac{dD_{au}^+}{dt} - \dfrac{dD_{au}^-}{dt} = k \cdot k_{diff}(U_{aext} - U_{ain}) - k(D_{au}^+ - D_{au}^-) = k\left(\dfrac{U_{aext} - U_{ain}}{1/k_{diff}} - U_{au}\right)$ (20)

A sine wave, offset to ensure positive values, serves as input. The CRNs of the sine wave $U_{aext}(t) = 1 + \sin(t)$ can be expressed as:

$A_p \xrightarrow{l} A_p + B_p + U_{aext}, \quad A_m \xrightarrow{l} A_m + B_m + U_{bext}, \quad B_m \xrightarrow{l} A_p + B_m, \quad B_p \xrightarrow{l} A_m + B_p, \quad B_m + B_p \xrightarrow{fast} \varnothing, \quad A_m + A_p \xrightarrow{fast} \varnothing, \quad U_{aext} + U_{bext} \xrightarrow{fast} \varnothing$ (21)

The parameters l and fast represent the catalytic reaction rate and the annihilation reaction rate, respectively, and are set to l = 1.0 and fast = 10⁵. Figure 4a depicts the offset sine wave resulting from Equation (21); the simulated first‐order derivative of the sine wave function using the membrane diffusion theory is shown in Figure 4c.
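The first-order estimator of Equations (18)-(20) can be checked with a plain Euler integration: the filtered species U_ain lags U_aext by τ = 1/k_diff, so k_diff·(U_aext − U_ain) approximates the derivative. In this sketch the input 1 + sin(t) is injected analytically rather than generated by the oscillator CRN of Equation (21), and the values of k_diff, dt, and T are illustrative.

```python
import math

# Euler simulation of dU_ain/dt = k_diff * (U_aext - U_ain), Equation (19).
k_diff = 100.0          # illustrative diffusion rate; tau = 1/k_diff = 0.01
dt, T = 1e-4, 10.0
u_in = 1.0              # U_ain(0) = U_aext(0) = 1 + sin(0)
t = 0.0
while t < T:
    u_ext = 1.0 + math.sin(t)
    u_in += dt * k_diff * (u_ext - u_in)
    t += dt

# U_mm-style readout, cf. Equation (37): derivative ~ (U_aext - U_ain)/(1/k_diff)
estimate = k_diff * ((1.0 + math.sin(T)) - u_in)
print(estimate, math.cos(T))  # estimator vs the true derivative cos(T)
```

After the short transient the estimate tracks cos(t) up to an O(τ) lag error, which is the behavior reported for Figure 4c.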

Figure 4.

Figure 4

CRN simulation and derivative calculation of equation 1+Sin(t) and 1+Cos(t). a) CRNs representations of the equation 1+Sin(t), b) CRNs representations of the equation 1+Cos(t), c) Simulation results of the derivative of equation 1+Sin(t), and d) Simulation results of the derivative of equation 1+Cos(t).

A cosine wave, offset to remain positive, likewise serves as input. The CRNs of the cosine wave $U(t) = 1 + \cos(t)$ can be expressed as:

(22)

The parameters l and fast again represent the catalytic reaction rate and the annihilation reaction rate, respectively, and are set to l = 1.0 and fast = 10⁵. Figure 4b illustrates the cosine wave represented by Equation (22), while Figure 4d displays the simulation results of the first‐order derivative of the cosine function using the membrane diffusion theory.

Remark 2

Assuming $U_{ain}(t) = \int_0^t k_{diff}\left[U_{aext}(s) - U_{ain}(s)\right] ds$, the first formula of Equation (19) can be symbolically integrated and transformed into

$U_{ain}(t) = k_{diff}\int_0^t U_{aext}(s)\, ds - k_{diff}\int_0^t U_{ain}(s)\, ds$ (23)

Let u = t − s; then s = t − u and ds = −du. When s = 0, u = t; when s = t, u = 0. Equation (23) can then be rewritten as:

$U_{ain}(t) = k_{diff}\int_t^0 U_{aext}(t-u)\,(-du) - k_{diff}\int_0^t U_{ain}(s)\, ds$ (24)

Solving the resulting linear equation and using the property that exp(−k_diff s) tends to 0 as s → ∞, the upper limit of the integral can be extended to ∞:

$U_{ain}(t) = k_{diff}\int_0^\infty \exp(-k_{diff}\, s)\, U_{aext}(t-s)\, ds$ (25)

If $k_{diff} \to \infty$, then $\tau = 1/k_{diff} \to 0$. Assuming $U_{aext}$ is infinitely differentiable, the Taylor expansion of $U_{aext}(t-s)$ at s = 0 gives the following result:

$U_{aext}(t-s) = \sum_{n=0}^{\infty} \dfrac{(-s)^n}{n!}\, U_{aext}^{(n)}(t)$ (26)

Substituting Equation (26) into Equation (25), Equation (27) can be obtained.

$U_{ain}(t) = \int_0^\infty k_{diff}\exp(-k_{diff}\, s) \sum_{n=0}^{\infty} \dfrac{(-s)^n}{n!}\, U_{aext}^{(n)}(t)\, ds = \sum_{n=0}^{\infty} \dfrac{k_{diff}}{n!}\, U_{aext}^{(n)}(t) \int_0^\infty (-s)^n \exp(-k_{diff}\, s)\, ds$ (27)

For the integral $I_n = \int_0^\infty (-s)^n \exp(-k_{diff}\, s)\, ds$, integration by parts can be utilized. Let $u = (-s)^n$ and $dv = \exp(-k_{diff}\, s)\, ds$; then $du = -n(-s)^{n-1}\, ds$ and $v = -\frac{1}{k_{diff}}\exp(-k_{diff}\, s)$.

$I_n = \left[-(-s)^n \dfrac{1}{k_{diff}}\exp(-k_{diff}\, s)\right]_0^\infty - \dfrac{n}{k_{diff}}\int_0^\infty (-s)^{n-1}\exp(-k_{diff}\, s)\, ds$ (28)

The boundary term vanishes at both limits: when s → ∞, $(-s)^n \exp(-k_{diff}\, s)/k_{diff} \to 0$, and at s = 0 it equals 0 for n ≥ 1. Accordingly, $I_n = -\frac{n}{k_{diff}} I_{n-1}$ can be inferred, and with $I_0 = 1/k_{diff}$ this recursion can be developed further into Equation (29).

$I_n = (-1)^n\, n!\, \tau^{n+1}, \quad \tau = \dfrac{1}{k_{diff}}$ (29)
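The closed form in Equation (29) can be sanity-checked numerically; the left Riemann sum below is an illustrative stand-in for the improper integral, and the n! factor is consistent with the recursion I_n = −(n/k_diff) I_{n−1}.

```python
import math

def I_numeric(n, k, ds=1e-3, smax=30.0):
    """Left Riemann approximation of I_n = int_0^inf (-s)^n exp(-k s) ds."""
    total, s = 0.0, 0.0
    while s < smax:
        total += ((-s) ** n) * math.exp(-k * s) * ds
        s += ds
    return total

k = 1.0  # illustrative k_diff
results = [(I_numeric(n, k),
            ((-1) ** n) * math.factorial(n) * (1.0 / k) ** (n + 1))
           for n in range(4)]
print(results)  # (numeric, closed-form) pairs for n = 0..3
```

The alternating signs and factorial growth are exactly what collapses the series in Equation (27) into the delayed signal of Equation (30).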

Equation (29) can be substituted into Equation (27), yielding the following result:

$U_{ain}(t) = \sum_{n=0}^{\infty} \dfrac{k_{diff}}{n!}\, U_{aext}^{(n)}(t)\, (-1)^n\, n!\, \tau^{n+1} = \sum_{n=0}^{\infty} (-\tau)^n\, U_{aext}^{(n)}(t) = U_{aext}(t-\tau) + o(\tau^2)$ (30)

Substituting Equation (30) into Equation (20), Equation (31) can be obtained:

$\dfrac{dU_{au}}{dt} = \dfrac{dD_{au}^+}{dt} - \dfrac{dD_{au}^-}{dt} = k \cdot k_{diff}(U_{aext} - U_{ain}) - k(D_{au}^+ - D_{au}^-) = k \cdot k_{diff}\left(U_{aext}(t) - U_{aext}(t-\tau) + o(\tau^2)\right) - k\, U_{au}$ (31)

Further simplification yields Equation (32).

$\dfrac{dU_{au}}{dt} = k \cdot k_{diff}\left(U_{aext}(t) - U_{aext}(t-\tau)\right) - k\, U_{au} + o(\tau)$ (32)

Introducing a delay ε and setting $\varepsilon = 1/k$, the following result can be achieved:

$D(t) = \dfrac{U_{aext}(t-\varepsilon) - U_{aext}(t-\tau-\varepsilon)}{\tau} + o(\tau) + o(\varepsilon^2)$ (33)

Here o(τ) is the higher‐order infinitesimal term in τ generated in the preceding simplification, and o(ε²) is the higher‐order infinitesimal term generated by the approximation in ε.

As illustrated in Figure  5 , the subtraction operation consists of the last five reactions in Equation (18), and the corresponding ordinary differential equations (ODEs) can be mathematically derived by

$\dfrac{dU_{mm}}{dt} = \alpha_a\, D_{au}^+ - \delta_a\, U_{mm} - \gamma_a\, U_{mm} U_{nn}, \quad \dfrac{dU_{nn}}{dt} = \beta_a\, D_{au}^- - \delta_a\, U_{nn} - \gamma_a\, U_{mm} U_{nn}$ (34)

Assuming that the reaction rates satisfy $\alpha_a = \beta_a = \delta_a$ and $U_{nn} \to 0$, when the reaction system described by Equation (18) achieves steady‐state equilibrium, the left‐hand terms $\frac{dU_{mm}}{dt}$ and $\frac{dU_{nn}}{dt}$ of Equation (34) can be set to zero, resulting in the following:

$U_{mm} = D_{au}^+ - D_{au}^-$ (35)

Combining Equation (20), Equation (36) can be obtained:

$\dfrac{dU_{mm}}{dt} = \dfrac{dD_{au}^+}{dt} - \dfrac{dD_{au}^-}{dt} = k \cdot k_{diff}(U_{aext} - U_{ain}) - k(D_{au}^+ - D_{au}^-) = k\left(\dfrac{U_{aext} - U_{ain}}{1/k_{diff}} - U_{mm}\right)$ (36)

According to Equation (36), when the reaction system (18) achieves a steady state, the first‐order partial differential result U mm of substance U aext can be produced:

$U_{mm} = \dfrac{U_{aext} - U_{ain}}{1/k_{diff}}$ (37)

Combining Equations (20), (30) and (37), the following result can be obtained:

$\dfrac{dU_{aext}}{dt}(t) = \lim_{\varepsilon \to 0^+} \dfrac{U_{aext}(t) - U_{aext}(t-\varepsilon)}{\varepsilon} \approx \dfrac{U_{aext} - U_{ain}}{1/k_{diff}} = U_{mm}$ (38)

where $\varepsilon = 1/k_{diff}$. Similarly, the second‐order derivative output can be derived as:

$\dfrac{dO_{mm}}{dt} = \dfrac{dO_{au}^+}{dt} - \dfrac{dO_{au}^-}{dt} = k \cdot k_{diff2}(D_{mm} - U_{acore}) - k(O_{au}^+ - O_{au}^-) = k\left(\dfrac{D_{mm} - U_{acore}}{1/k_{diff2}} - O_{mm}\right)$ (39)

The parameter $k_{diff2}$ denotes the diffusion rate in the second‐order derivative stage. Analogously to Equation (38), $O_{mm}$ is obtained as the first‐order derivative of the intermediate signal $D_{mm}$, and therefore as the second‐order partial differential result of $U_{aext}$:

$O_{mm} = \dfrac{D_{mm} - U_{acore}}{1/k_{diff2}}$ (40)

The differential CRN aims to compute a given function f of an unknown input signal $U_{aext}$, i.e., $f(U_{aext}(t))$. However, the differential CRN only provides an approximation of the input signal's derivative. The first‐order calculation equation can be expressed as:

$\dfrac{dY}{dt} = f'(U_{aext}(t))\dfrac{dU_{aext}}{dt} = f'(U_{aext}(t))\lim_{\varepsilon \to 0^+}\dfrac{U_{aext}(t) - U_{aext}(t-\varepsilon)}{\varepsilon} \approx f'(U_{aext}(t))\dfrac{U_{aext} - U_{ain}}{1/k_{diff}} = f'(U_{aext}(t))\, U_{mm}, \quad Y(0) = f(U_{aext}(0)), \quad \varepsilon = \dfrac{1}{k_{diff}}$ (41)

The second‐order calculation equation can be described by Equation (42).

$\dfrac{d^2Y}{dt^2} = \dfrac{d}{dt}\left(f'(U_{aext}(t))\right)\dfrac{dU_{aext}}{dt} + f'(U_{aext}(t))\dfrac{d}{dt}\dfrac{dU_{aext}}{dt} = f''(U_{aext}(t))\left(\dfrac{dU_{aext}}{dt}\right)^2 + f'(U_{aext}(t))\dfrac{d^2U_{aext}}{dt^2}, \quad Y(0) = f(U_{aext}(0))$ (42)
Figure 5.

Figure 5

Block diagram of the subtraction operator.

Let us consider the cube function depicted in Figure 6:

$\dfrac{dY}{dt} = U_{aext}^2\, U_{mm}, \quad \dfrac{d^2Y}{dt^2} = 2\, U_{aext}\, U_{mm}^2 + U_{aext}^2\, O_{mm}$ (43)

Taking again a sine wave offset to remain positive as input, $U_{aext}(t) = 1 + \sin(t)$, the higher‐order derivative results for the function 1 + sin(t) can be verified theoretically based on membrane diffusion theory:

$x(t) = 1 + \sin(t), \quad x'(t-\tau) = \cos(t-\tau) = \cos(t) + \tau\sin(t) + O(\tau^2), \quad x''(t-\tau) = -\sin(t-\tau) = -\sin(t) + \tau\cos(t) + O(\tau^2)$ (44)

The output can then be computed from the second‐order and first‐order terms:

$y'(t) = \int \left[2x(s)\, x'(s-\tau)^2 + x^2(s)\, x''(s-\tau)\right] ds = \int 2(1+\sin s)\left[\cos s + \tau\sin s\right]^2 ds + \int (1+\sin s)^2\left[-\sin s + \tau\cos s\right] ds = \int \left[2(1+\sin s)\cos^2 s - (1+\sin s)^2 \sin s\right] ds + O(\tau t) = \left[1+\sin(t)\right]^2 \cos(t) + O(\tau t), \qquad y(t) = \int \left[1+\sin(s)\right]^2 \cos(s)\, ds + O(\tau t^2) \approx \dfrac{1}{3}\left[1+\sin(t)\right]^3$ (45)
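The closing line of Equation (45) can be checked by integrating Equation (43) with the ideal derivative substituted for U_mm; the trapezoid stepping and the step size are illustrative numerical choices.

```python
import math

def rate(t):
    # dY/dt = U_aext^2 * U_mm with the ideal U_mm = cos(t), per Equation (43)
    u = 1.0 + math.sin(t)
    return u * u * math.cos(t)

dt, T = 1e-3, 10.0
Y = 1.0 / 3.0               # Y(0) = f(U_aext(0)) = 1^3 / 3
t = 0.0
while t < T:
    # trapezoid step for the quadrature of dY/dt
    Y += 0.5 * dt * (rate(t) + rate(t + dt))
    t += dt
print(Y, (1.0 + math.sin(T)) ** 3 / 3.0)  # integrated vs closed form
```

With the ideal derivative the accumulated output matches (1 + sin t)³/3 to quadrature accuracy; the CRN version adds only the O(τ) lag terms of Equation (45).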
Figure 6.

Figure 6

Calculation of higher‐order derivatives based on membrane diffusion theory.

2.2.2. Partial Derivatives Calculation of Compound Terms

If the function learned by the neural network contains composite terms, i.e., $d_{combi}(x_1, x_2, x_3, \ldots, x_N) \neq 0$, the mutual influence between distinct variables in the composite terms needs to be addressed while calculating partial derivatives.

Case 1

If $d_{combi}(x, t) = xt + x^2t^2$, the corresponding CRNs can be described by:

$X + T \xrightarrow{k} X + T + B, \quad B \xrightarrow{k} \varnothing, \quad 2X + 2T \xrightarrow{k} 2X + 2T + C, \quad C \xrightarrow{k} \varnothing, \quad B \xrightarrow{k} B + D, \quad C \xrightarrow{k} C + D, \quad D \xrightarrow{k} \varnothing$ (46)

Using mass action kinetics, the ordinary differential equations (ODEs) of Equation (46) could be derived:

$\dfrac{dB}{dt} = kXT - kB, \quad \dfrac{dC}{dt} = kX^2T^2 - kC, \quad \dfrac{dD}{dt} = kB + kC - kD$ (47)

When the reaction system (46) reaches a steady state, the terms $\frac{dB}{dt}$, $\frac{dC}{dt}$, and $\frac{dD}{dt}$ on the left side of Equation (47) can be set to zero, resulting in Equation (48).

$B = XT, \quad C = X^2T^2, \quad D = B + C = XT + X^2T^2$ (48)

The partial differential derivation principle can yield the following results:

$\dfrac{\partial d_{combi}(x,t)}{\partial x} = \dfrac{1}{x}\, xt + \dfrac{2}{x}\, x^2t^2$ (49)

The first‐order partial derivative of a single variable x computed via CRNs is given as:

$\mathrm{div}[B, X, E]: \; B \xrightarrow{k} B + E, \quad X + E \xrightarrow{k} X; \qquad \mathrm{div}[C, X, F]: \; C \xrightarrow{2k} C + F, \quad X + F \xrightarrow{k} X; \qquad \mathrm{add}[E, F, G]: \; E \xrightarrow{k} E + G, \quad F \xrightarrow{k} F + G, \quad G \xrightarrow{k} \varnothing$ (50)

The ODEs of Equation (50) are indicated as:

$\dfrac{dE}{dt} = kB - kXE, \quad \dfrac{dF}{dt} = 2kC - kXF, \quad \dfrac{dG}{dt} = kE + kF - kG$ (51)

When the reaction system (50) reaches a steady state, it can be obtained:

$E = \dfrac{B}{X}, \quad F = \dfrac{2C}{X}, \quad G = E + F = \dfrac{B}{X} + \dfrac{2C}{X} = \dfrac{1}{x}\, xt + \dfrac{2}{x}\, x^2t^2$ (52)

Obviously, the outcome of Equation (52) is consistent with that of Equation (49). Furthermore, the CRNs generated by taking the mixed first‐order partial derivative with respect to both variables x and t can be represented as:

$\mathrm{div}[B, (XT), K]: \; B \xrightarrow{k} B + K, \quad X + T + K \xrightarrow{k} X + T; \qquad \mathrm{div}[C, (XT), L]: \; C \xrightarrow{2 \cdot 2k} C + L, \quad X + T + L \xrightarrow{k} X + T; \qquad \mathrm{add}[K, L, M]: \; K \xrightarrow{k} K + M, \quad L \xrightarrow{k} L + M, \quad M \xrightarrow{k} \varnothing$ (53)
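Equations (47) and (51) can be integrated directly to confirm the steady-state division result of Equation (52); in this sketch X and T are clamped, and the rate constant and concentration values are illustrative.

```python
# Euler integration of Equations (47) and (51) with X and T held constant.
k = 1.0
X, Tv = 2.0, 0.5          # clamped inputs x = 2, t = 0.5 (illustrative)
B = C = D = E = F = G = 0.0
dt = 0.01
for _ in range(5000):     # integrate to t = 50, many time constants
    B += dt * (k * X * Tv - k * B)            # dB/dt = kXT - kB
    C += dt * (k * X**2 * Tv**2 - k * C)      # dC/dt = kX^2T^2 - kC
    D += dt * (k * B + k * C - k * D)         # dD/dt = kB + kC - kD
    E += dt * (k * B - k * X * E)             # div[B, X, E]
    F += dt * (2 * k * C - k * X * F)         # div[C, X, F]
    G += dt * (k * E + k * F - k * G)         # add[E, F, G]

expected = Tv + 2 * X * Tv**2  # d/dx (xt + x^2 t^2) evaluated at x = 2, t = 0.5
print(G, expected)
```

At steady state the division modules deliver E = B/X and F = 2C/X, so G reproduces the analytic partial derivative of Equation (49).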
Remark 3

Assuming that $d_{combi}(x, t) = a\,xt + b\,xt^2 + c\,x^2t^2 + d\,x^3t + e\,x^3t^2 + f\,x^3t^3$ is the composite term, the following result can be reached using the partial differential derivation module.

$\dfrac{\partial y_{combi}(x,t)}{\partial x} = \dfrac{1}{x}\, x(at+bt^2) + \dfrac{2}{x}\, cx^2t^2 + \dfrac{3}{x}\, x^3(dt+et^2+ft^3), \quad \dfrac{\partial y_{combi}(x,t)}{\partial t} = \dfrac{1}{t}\, t(ax+dx^3) + \dfrac{2}{t}\, t^2(bx+cx^2+ex^3) + \dfrac{3}{t}\, t^3(fx^3), \quad \dfrac{\partial^2 y_{combi}(x,t)}{\partial x\, \partial t} = \dfrac{1\cdot1}{xt}\, axt + \dfrac{1\cdot2}{xt}\, bxt^2 + \dfrac{2\cdot2}{xt}\, cx^2t^2 + \dfrac{3\cdot1}{xt}\, dx^3t + \dfrac{3\cdot2}{xt}\, ex^3t^2 + \dfrac{3\cdot3}{xt}\, fx^3t^3$ (54)

According to the two expressions $\frac{\partial y_{combi}(x,t)}{\partial x}$ and $\frac{\partial y_{combi}(x,t)}{\partial t}$ in Equation (54), the calculation rule of the first‐order partial derivative with respect to the variable x or t can be derived:

$\dfrac{\text{Exponent of the variable } x \text{ or } t \text{ in } y_{combicurrent}}{\text{Variable } x \text{ or } t} \times y_{combicurrent}$ (55)

The CRNs calculated by the first‐order partial derivative of a single variable x can be expressed as:

$y_{combicurrent} \xrightarrow{k_1} y_{combicurrent} + O, \quad X + O \xrightarrow{k_2} X$ (56)

where $k_1/k_2$ is the exponent of the variable x in $y_{combicurrent}$.

The CRNs calculated by the first‐order partial derivative of a single variable t can be described as:

$y_{combicurrent} \xrightarrow{k_3} y_{combicurrent} + P, \quad T + P \xrightarrow{k_4} T$ (57)

where $k_3/k_4$ is the exponent of the variable t in $y_{combicurrent}$. According to the expression $\frac{\partial^2 y_{combi}(x,t)}{\partial x\, \partial t}$ in Equation (54), the calculation rule of the mixed first‐order partial derivatives with respect to the variables x and t can be obtained:

$\dfrac{\text{Product of the exponents of the variables } x \text{ and } t \text{ in } y_{combicurrent}}{\text{Variable } xt} \times y_{combicurrent}$ (58)

The CRNs generated using the first‐order partial derivatives of variables x and t could be represented as:

$y_{combicurrent} \xrightarrow{k_5} y_{combicurrent} + Q, \quad X + T + Q \xrightarrow{k_6} X + T$ (59)

where $k_5/k_6$ represents the product of the exponents of the variables x and t in $y_{combicurrent}$.

2.3. CRNs‐Based Objective Function Verification Module

Figure 7 depicts the architecture of the augmented matrix neural network for solving partial differential equations, which is based on the objective function verification module. Adding a function verification module has two advantages: first, it characterizes the learning process of the neural network and dynamically calculates the real‐time result of the objective function, that is, the signal P* in Figure 7; second, it calculates the error between the neural network output and the expected result of the objective function, that is, the signal Error 1 in Figure 7.

Figure 7.

Figure 7

Augmented matrix neural network based on function verification module for solving partial differential equations.

Assume that the objective function required for the neural network to learn is a typical quadratic function, which is:

$f(x,t) = ax^2 + bt^2 + cxt$ (60)

The CRNs‐based target function module in the function verification model could be stated as follows:

(61)

The CRNs‐based error comparator module can be described as:

(62)

Figure 7 illustrates how the error Error 2 (the equation error) could be derived while computing the partial differential equation. When combined with the actual target error Error 1 as specified by Equation (62), the computation error Loss of the entire system could be obtained:

(63)

2.4. Augmented Matrix Neural Network for Solving Biological Brusselator Partial Differential Equation

2.4.1. Biological Brusselator Partial Differential Equation Based on Diffusion Term

The Brusselator model is a classic theoretical model for studying the dissipative structure of nonlinear chemical systems and the formation of biological patterns. The Brusselator model (Figure 8) based on CRNs can be described as:

$A \xrightarrow{k_a} U, \quad B + U \xrightarrow{k_b} V + D, \quad 2U + V \xrightarrow{k_c} 3U, \quad U \xrightarrow{k_d} E$ (64)
Figure 8.

Figure 8

Biological Brusselator reaction model.

Assuming constant concentrations of substances A and B, the differential equations representing the concentration changes of the intermediate species U and V with time and space, including diffusion terms in a one‐dimensional domain, can be described as:

$\dfrac{\partial U}{\partial t} = D_U \dfrac{\partial^2 U}{\partial x^2} + k_a A - k_b B U + k_c U^2 V - k_d U, \quad \dfrac{\partial V}{\partial t} = D_V \dfrac{\partial^2 V}{\partial x^2} + k_b B U - k_c U^2 V$ (65)

where $D_U$ and $D_V$ are the diffusion coefficients of U and V, respectively, and $k_a$, $k_b$, $k_c$, $k_d$ are non‐negative, non‐zero reaction rates. Constraints are implemented by applying homogeneous Neumann boundary conditions:

$\dfrac{\partial U}{\partial x}(0, t) = \dfrac{\partial U}{\partial x}(L, t) = 0, \quad \dfrac{\partial V}{\partial x}(0, t) = \dfrac{\partial V}{\partial x}(L, t) = 0, \quad t \geq 0$ (66)

Nonnegative initial conditions:

$U(x, 0) = U_0(x) \geq 0, \quad V(x, 0) = V_0(x) \geq 0, \quad x \in [0, L]$ (67)

The Brusselator reaction‐diffusion system has no known analytical solution and must be solved numerically. In the absence of diffusion, if Equation (65) obtains a steady‐state solution, the parameters (a, b) must satisfy:

(68)

The parameters of the biological Brusselator model represented by Equation (64) are established as follows: the starting concentrations of the two input chemicals A and B are set to 3 nM, and the reaction rates are set to $k_a$ = 0.003, $k_b$ = 0.002, $k_c$ = 0.01, and $k_d$ = 0.001. The simulation results are depicted in Figure 9.
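A well-stirred (diffusion-free) Euler integration of the Brusselator mass-action ODEs with the stated settings can be sketched as follows; the canonical Brusselator reaction set behind these rate laws and the initial intermediate concentrations are assumptions of this sketch.

```python
# Mass-action ODEs for the well-stirred Brusselator with A and B clamped:
#   dU/dt = ka*A - kb*B*U + kc*U^2*V - kd*U
#   dV/dt = kb*B*U - kc*U^2*V
A = B = 3.0                              # clamped inputs, 3 nM
ka, kb, kc, kd = 0.003, 0.002, 0.01, 0.001
U, V = 1.0, 1.0                          # illustrative initial intermediates
dt, steps = 0.05, 200000                 # integrate to t = 10,000 s
for _ in range(steps):
    dU = ka * A - kb * B * U + kc * U * U * V - kd * U
    dV = kb * B * U - kc * U * U * V
    U += dt * dU
    V += dt * dV
print(U, V)  # intermediate concentrations at the end of the run
```

The concentrations remain positive and bounded throughout, which is the qualitative behavior the CRN-level simulation of Figure 9 relies on.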

Figure 9.

Figure 9

Simulation results of the biological Brusselator model.

2.4.2. Solving the Biological Brusselator Partial Differential Equations

The Brusselator RD (reaction‐diffusion) system described by Equation (65) primarily involves differential calculations, second‐order partial derivative calculations, and addition and subtraction operations. To simplify the solution process, the first equation of Equation (65) is chosen as the research target. The major goal of this research is to solve partial differential equations using the proposed augmented matrix‐based back propagation neural network together with the partial differential calculation theory.

$\dfrac{\partial U}{\partial t} = D_U \dfrac{\partial^2 U}{\partial x^2} + k_a A - k_b B U + k_c U^2 V - k_d U$ (69)

Assume that the parameters A, B, and V are constants, and that U is a function of x and t, i.e., U = U(x, t), with U also serving as the output of the augmented matrix‐based back propagation neural network. Equation (69) can then be rewritten as:

(70)

Introducing auxiliary species to represent the partial derivative terms and the remaining reaction terms of Equation (70), the CRNs of Equation (70) can be expressed as:

(71)

It is worth noting that the actual output species of the chemical system given by Equation (71) is Error 2, whereas Equation (70) produces R 11. The two output species demonstrate the following relationship:

\[
\frac{d\,\mathrm{Error}_2}{dt} = kR_{11}^{2} - k\,\mathrm{Error}_2 \tag{72}
\]

When the reaction system is stable, the following result can be obtained through Equation (72).

\[
\mathrm{Error}_2 = R_{11}^{2} \tag{73}
\]

From the perspective of solving partial differential equations, the final concentration of substance Error 2 can be utilized to represent the equation loss error.
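The relationship in Equations (72) and (73) can be checked with a few lines of arithmetic: integrating d[Error2]/dt = kR₁₁² − k[Error2] for a held R₁₁ relaxes [Error2] to R₁₁². The rate and concentration values below are illustrative, not taken from the paper's tables.

```python
# Euler integration of Equation (72): d[Error2]/dt = k*R11^2 - k*[Error2].
# R11 is held constant; [Error2] relaxes exponentially to R11^2
# (Equation 73) with time constant 1/k.  All numbers are illustrative.
def error2_trajectory(r11=1.2, k=0.05, e0=0.0, dt=0.1, steps=5000):
    e = e0
    for _ in range(steps):
        e += (k * r11 ** 2 - k * e) * dt
    return e
```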

3. Implementations With DSD Reaction Networks

In the second section, an augmented matrix neural network (AMN) is built using chemical reaction networks (CRNs) to produce multivariable combination items. The third section applies the membrane diffusion principle and the division calculation principle to calculate the partial derivatives of single‐variable and combination items based on CRNs. In the fourth section, the biological Brusselator model with diffusion terms is studied, the objective function verification module is designed, and the biological Brusselator partial differential equation is solved using the augmented matrix neural network together with the solving–verification module.

To simplify the description, this section focuses on describing the DNA strand displacement reaction implementation pathways of typical catalytic, degradation, and annihilation reactions in different functional modules.

3.1. DNA Implementation of Back Propagation Neural Network Based on Augmented Matrix

Catalysis reaction model 1 (Figure 10 )

\[
X_i + W_i + I_n \xrightarrow{k_i} X_i + 2I_n \;\Longrightarrow\;
\begin{aligned}
& I_n + A_n \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} Ha_i + Hb_i \\
& Hb_i + W_i \underset{q_{\max}}{\overset{q_m}{\rightleftharpoons}} B_i + Hc_i \\
& B_i + Hd_i \xrightarrow{q_{\max}} He_i + \mathrm{Waste} \\
& He_i + X_i \xrightarrow{q_{\max}} X_i + 2I_n + \mathrm{Waste}
\end{aligned}
\qquad q_i = k_i \tag{74}
\]

Figure 10. Schematic representation of DNA reactions of Equation (74).
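The net effect of the target reaction of catalysis model 1 in Equation (74) can be seen in a direct mass‐action integration: X acts as a catalyst, each event consumes one W and leaves one extra I, so I grows by exactly the amount of W consumed. The concentrations and rate below are illustrative.

```python
# Mass-action sketch of the target CRN of Equation (74),
# X + W + I --k--> X + 2I: X catalyzes the conversion of W into an extra I,
# so W + I is conserved and I_final -> I0 + W0.  Values are illustrative.
def catalysis_model1(x=1.0, w=2.0, i=0.5, k=0.05, dt=0.01, steps=200_000):
    for _ in range(steps):
        flux = k * x * w * i * dt   # trimolecular mass-action rate
        w -= flux                   # one W consumed per event
        i += flux                   # net gain of one I per event
    return w, i
```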

Degradation reaction model (Figure 11 )

\[
I_n + S_n \xrightarrow{k_i} S_n \;\Longrightarrow\;
\begin{aligned}
& I_n + Oa_i \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} C_i + Ob_i \\
& Ob_i + S_n \underset{q_{\max}}{\overset{q_m}{\rightleftharpoons}} D_i + Oc_i \\
& D_i + Od_i \xrightarrow{q_{\max}} S_n + \mathrm{Waste}
\end{aligned}
\qquad q_i = k_i \tag{75}
\]

Figure 11. Schematic representation of DNA reactions of Equation (75).

Annihilation reaction module (Figure 12 )

\[
I_n + \theta_n \xrightarrow{k_i} \varnothing \;\Longrightarrow\;
\begin{aligned}
& I_n + Qa_i \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} C_i + Qb_i \\
& Qb_i + \theta_n \xrightarrow{q_{\max}} \mathrm{Waste}
\end{aligned}
\qquad q_i = k_i \tag{76}
\]

Figure 12. Schematic representation of DNA reactions of Equation (76).

Catalysis reaction model 2 (Figure 13 )

\[
2Y_i + I_i \xrightarrow{k_i} 2I_i \;\Longrightarrow\;
\begin{aligned}
& Y_i + Ga_i \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} E_i + Gb_i \\
& Gb_i + I_i \underset{q_{\max}}{\overset{q_m}{\rightleftharpoons}} F_i + Gc_i \\
& Gc_i + Y_i \underset{q_{\max}}{\overset{q_m}{\rightleftharpoons}} H_i + Gd_i \\
& H_i + Ge_i \underset{q_{\max}}{\overset{q_m}{\rightleftharpoons}} J_i + Gf_i \\
& Gf_i + K_i \xrightarrow{q_{\max}} 2I_i + \mathrm{Waste}
\end{aligned}
\qquad q_i = k_i \tag{77}
\]

Figure 13. Schematic representation of DNA reactions of Equation (77).

Adjustment reaction model (Figure 14 )

\[
W_i + La_i \underset{k_{bi}}{\overset{k_{ai}}{\rightleftharpoons}} Lb_i \;\Longrightarrow\;
W_i + La_i \underset{q_{bi}}{\overset{q_{ai}}{\rightleftharpoons}} Lb_i,
\qquad q_{ai} = k_{ai},\; q_{bi} = k_{bi} \tag{78}
\]

Figure 14. Schematic representation of DNA reactions of Equation (78).

3.2. DNA Implementation of Partial Derivative Calculation

3.2.1. Calculation of Partial Derivatives of Single Variable Terms

Catalysis reaction model (Figure 15 )

\[
Ua^{ext} \xrightarrow{k_{diff}} Ua^{ext} + D_{au}^{+} \;\Longrightarrow\;
\begin{aligned}
& Ua^{ext} + Aa^{ext} \xrightarrow{q_i} B^{ext} + \mathrm{Waste} \\
& B^{ext} + C^{ext} \xrightarrow{q_{\max}} Ab^{ext} + \mathrm{Waste} \\
& Ab^{ext} + Ac^{ext} \xrightarrow{q_{\max}} Ua^{ext} + D_{au}^{+} + \mathrm{Waste}
\end{aligned}
\qquad q_i = \frac{k_{diff}}{C_{\max}} \tag{79}
\]
Figure 15. Schematic representation of DNA reactions of Equation (79).

Annihilation reaction module

\[
U_{mm} + U_{nn} \xrightarrow{\gamma_a} \varnothing \;\Longrightarrow\;
\begin{aligned}
& U_{mm} + Ra_i \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} R_i + Rb_i \\
& Rb_i + U_{nn} \xrightarrow{q_{\max}} \mathrm{Waste}
\end{aligned}
\qquad q_i = \gamma_a \tag{80}
\]

Degradation reaction model (Figure 16 )

\[
D_{au}^{+} \xrightarrow{k} \varnothing \;\Longrightarrow\;
D_{au}^{+} + M_n \xrightarrow{q_i} \mathrm{Waste},
\qquad q_i = \frac{k}{C_{\max}} \tag{81}
\]
Figure 16. Schematic representation of DNA reactions of Equation (81).

3.2.2. Calculation of Partial Derivatives of Compound Terms

Catalysis reaction model

\[
B \xrightarrow{k} B + E \;\Longrightarrow\;
\begin{aligned}
& B + Aa \xrightarrow{q_i} B' + \mathrm{Waste} \\
& B' + C \xrightarrow{q_{\max}} Ab + \mathrm{Waste} \\
& Ab + Ac \xrightarrow{q_{\max}} B + E + \mathrm{Waste}
\end{aligned}
\qquad q_i = \frac{k}{C_{\max}} \tag{82}
\]

Degradation reaction model 1

\[
X + E \xrightarrow{k_i} E \;\Longrightarrow\;
\begin{aligned}
& X + Oa \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} C + Ob \\
& Ob + E \underset{q_{\max}}{\overset{q_m}{\rightleftharpoons}} D + Oc \\
& D + Od \xrightarrow{q_{\max}} E + \mathrm{Waste}
\end{aligned}
\qquad q_i = k_i \tag{83}
\]

Degradation reaction model 2

\[
G \xrightarrow{k} \varnothing \;\Longrightarrow\;
G + M \xrightarrow{q_i} \mathrm{Waste},
\qquad q_i = \frac{k}{C_{\max}} \tag{84}
\]
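The rate convention q_i = k/C_max (see Remark 4) can be illustrated for this module: the ideal degradation G → ∅ of Equation (84) is emulated by a bimolecular strand‐displacement step with an auxiliary species initialized at C_max, so the effective first‐order rate q_i·[M] ≈ k while [M] stays near C_max. The concentrations below are illustrative.

```python
# Comparison of the ideal degradation G --k--> 0 (Equation 84) with its
# bimolecular emulation G + M --q_i--> Waste, q_i = k / C_max.
# While [M] stays close to C_max the effective first-order rate is
# q_i * [M] ~= k.  Values are illustrative.
def compare_degradation(g0=5.0, k=3e-3, c_max=1000.0, dt=0.1, steps=10_000):
    q = k / c_max
    g_ideal, g_dsd, m = g0, g0, c_max
    for _ in range(steps):
        g_ideal += -k * g_ideal * dt
        flux = q * g_dsd * m * dt    # bimolecular consumption of G
        g_dsd -= flux
        m -= flux                    # auxiliary strand is used up slowly
    return g_ideal, g_dsd
```

Because at most g0 = 5 of the 1000 units of M are ever consumed, the two trajectories stay within a fraction of a percent of each other, which is the point of initializing auxiliary species at C_max.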

3.3. DNA Implementation of Partial Differential Equation Solving & Objective Function Verification

Catalysis reaction model 1

3.3. (85)

Degradation reaction model

\[
R_1 \xrightarrow{k} \varnothing \;\Longrightarrow\;
R_1 + M_n \xrightarrow{q_i} \mathrm{Waste},
\qquad q_i = \frac{k}{C_{\max}} \tag{86}
\]

Catalysis reaction model 2

3.3. (87)

Catalysis reaction model 3

3.3. (88)

Catalysis reaction model 4

\[
X + T \xrightarrow{k} X + T + C_n^{ext} \;\Longrightarrow\;
\begin{aligned}
& X + A_n \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} Ea_i + Eb_i \\
& T + Eb_i \xrightarrow{q_{\max}} Ec_i + \mathrm{Waste} \\
& Ec_i + Dc_i \xrightarrow{q_{\max}} Gc_i + \mathrm{Waste} \\
& Gc_i + Dd_i \xrightarrow{q_{\max}} X + T + C_n^{ext} + \mathrm{Waste}
\end{aligned}
\qquad q_i = k \tag{89}
\]

Remark 4

For the DNA realization, Set AS listed in Table 1 represents auxiliary substances involved in the reactions, whereas Set IS denotes intermediate substances. The subset Set IW comprises inert wastes that do not engage in subsequent interactions. In addition, C_max indicates the initial concentration of auxiliary substances, while q_max denotes the maximum strand‐displacement rate; the actual reaction rate q_i is obtained from the corresponding DNA implementation. To mitigate the impact of auxiliary‐species consumption on the kinetic process, the concentrations of the Set AS species in Equations (74)–(89) are initialized at this maximum value.

Table 1.

Classification of species in the DSD reactions of Equations (74)‐(89).

Classification Species
Set AS: An, Hdi, Oai, Odi, Qai, θn, Gai, Gei, Ki, Aaext, Cext, Acext, Rai, Aa, C, Ac, Oa, Od, Pi, Ta, Dci, Ddi
Set IS: Hbi, Bi, Hei, Obi, Di, Qbi, Gbi, Gci, Hi, Gfi, Bext, Abext, Rbi, B, Ab, Ob, D, Qa, Qb, Ebi, Eci
Set IW: Hai, Hci, Waste, Ci, Oci, Ei, Fi, Gdi, Ji, Ri, C, Oc, Sp1, Sp2, Sp3, Eai

4. Results

Based on the design ideas described above, the augmented matrix‐based neural network scheme for solving the biological Brusselator partial differential equations, verified by the function verification module, can be implemented through the DSD mechanism. The feasibility of the DSD reaction realization has been confirmed using Visual DSD software. All reaction rates and total substrate values involved in the CRN‐based neural networks with augmented matrices, the partial derivative calculations (integrating the principles of membrane diffusion and division operations), and the Brusselator partial differential equation are provided in Tables 2, 3, and 4, with feasible values of C_max = 1000 nM and q_max = 10⁷ M⁻¹ s⁻¹. The simulation results are then methodically examined, with an emphasis on three aspects: the overall error Loss, the objective function verification module error Error 1, and the partial differential equation computation error Error 2. A quantitative comparison of actual simulation results and theoretical values in the biological PDE calculations is shown in Table 5.

Table 2.

Parametric representation for augmented matrix‐based neural network response rates and weights.

Parameters Descriptions Nominal Values
k 1 Input layer rate 6.0 × 10⁻⁵ s⁻¹
k 2 Hidden layer rate 6.0 × 10⁻⁵ s⁻¹
k 3 Output layer rate 6.0 × 10⁻⁵ s⁻¹
k 4 Activation function rate 6.0 × 10⁻⁵ s⁻¹
k 5, k 7, k 9, k 11, k 13 Forward reaction rate 3.65 × 10⁻⁶ s⁻¹
k 6, k 8, k 10, k 12, k 14 Reverse reaction rate 9.0 × 10⁻⁵ s⁻¹
W 11 Input layer weight 0.63
W 12 Input layer weight 0.24
W 13 Input layer weight 0.06
W 14 Input layer weight 0.42
W 21 Input layer weight 0.39
W 22 Input layer weight 0.36
W 23 Input layer weight 0.06
W 24 Input layer weight 0.26
W 31 Hidden layer weight 0.11
W 32 Hidden layer weight 0.42
W 33 Hidden layer weight 0.09
W 34 Hidden layer weight 0.01
W 41 Hidden layer weight 0.16
W 42 Hidden layer weight 0.06
W 43 Hidden layer weight 0.35
W 44 Hidden layer weight 0.38
W 51 Output layer weight 0.19
W 52 Output layer weight 0.21

Table 3.

Parametric representation for partial derivative calculations.

Parameters Descriptions Nominal Values
k diff Reaction rate 4.0 × 10⁻⁴ s⁻¹
k Reaction rate 3.0 × 10⁻³ s⁻¹
k fast Degradation reaction rate 10
α a Catalytic reaction rate 5.0 × 10⁻⁴ s⁻¹
β a Catalytic reaction rate 5.0 × 10⁻⁴ s⁻¹
δ a Degradation reaction rate 5.0 × 10⁻⁴ s⁻¹
γ a Annihilation reaction rate 1.0
k x Reaction rate 6.0 × 10⁻⁴ s⁻¹

Table 4.

Parametric representation for the Brusselator partial differential equation.

Parameters Descriptions Nominal‐Values
D u Constant 0.5
k a Constant 1.0
k b Constant 0.3
k c Constant 0.2
k d Constant 0.1
A Constant 2
B Constant 2
V Constant 1

Table 5.

Quantitative comparison of actual simulation results and theoretical values in biological PDE calculations.

Parameters Descriptions Actual concentration (nM) Theoretical concentration (nM)
Error 1 Neural network output error 0 0
Error 2 Partial differential equation (PDE) error 0.02 0
Loss Overall error 0.02 0
P Neural network output 1.34 1.34
P* Target function output 1.34 1.34
∂U(x,t)/∂x First order partial derivative of variable x 1.86 1.85
∂²U(x,t)/∂x² Second order partial derivative of variable x 2 2
∂U(x,t)/∂t First order partial derivative of variable t 2.4 2.4
U_x Sum of terms of the single variable x 0.64 0.64
∂U_x/∂x First order partial derivative of U_x with respect to x 1.6 1.6
U_comb Sum of composite terms 0.2 0.2
∂U_comb/∂x First order partial derivative of U_comb with respect to x 0.25 0.25
U_t Sum of terms of the single variable t 0.5 0.5
∂U_t/∂t First order partial derivative of U_t with respect to t 2 2
∂U_comb/∂t First order partial derivative of U_comb with respect to t 0.4 0.4

4.1. Overall Error Loss

According to the architecture depicted in Figure 8, the overall error Loss is a critically important metric for the augmented matrix‐based neural network in solving the partial differential equations of the biological Brusselator. The overall error Loss consists of two parts: the error Error 1 between the actual output of the DNA‐based neural network and the ideal result of the objective function, and the error Error 2 generated while the neural network solves the partial differential equation. The simulation results are illustrated in Figure 17.

Figure 17. Error analysis of the augmented matrix‐based neural network for solving partial differential equations in the biological Brusselator.

All three error curves in Figure 17 follow the same trend, steadily approaching zero over time. Error 1 (red curve) converges first, ensuring precise neural network output by t = 20000 s, whereas Error 2 (green curve) and Loss (orange curve) stabilize and converge to 0.02 nM. The approach of solving the biological Brusselator partial differential equations with the augmented matrix‐based neural network is therefore viable and effective within the acceptable error range.

4.2. Objective Function Verification Module Error Error 1

This section focuses on the analysis of the results around the error Error 1 between the augmented matrix‐based neural network output P and the objective function result P*, as well as the calculation of partial derivatives for each variable.

4.2.1. Error 1 Between The Augmented Matrix‐Based Neural Network‐Based Output P and The Objective Function Result P*

The augmented matrix‐based neural network output error analysis is depicted in Figure 18. In comparison with the target function output P* (green curve), the neural network output P (red curve) reaches the same target concentration level with a faster response time and greater efficiency. In addition, the neural network output error gradually approaches zero over time, indicating that the augmented matrix‐based neural network has strong tuning ability.

Figure 18. Neural network output error analysis.

4.2.2. Partial Derivative Calculation of Various Variables

Partial derivative calculation of the various variables is an important measure of the performance of the augmented matrix‐based neural network. Figure 19 presents the computed first‐order and second‐order partial derivatives of the neural network outputs with respect to the variables x and t, respectively (i.e., with reference to the steady‐state output concentration levels).

Figure 19. Partial derivative calculation analysis of various variables.

4.3. Calculation Error Error 2 in Partial Differential Equations

The derivation of the partial derivatives of the various variables, as well as the computational error of the partial differential equation, is critical for solving the partial differential equations in the biological Brusselator. The first‐order and second‐order partial derivatives of the variable x are examined next, together with the first‐order partial derivative of the variable t and the computational error Error 2 of the partial differential equation.

4.3.1. Partial Derivative Calculation Analysis

Figure 20a demonstrates the first‐order partial derivative computation for variable x. Evidently, ∂U(x,t)/∂x (blue curve) is divided into two parts: ∂U_x/∂x (the first‐order partial derivative with respect to x of the single‐variable term accumulation U_x, green curve) and ∂U_comb/∂x (the first‐order partial derivative with respect to x of the composite term accumulation U_comb, purple curve).

Figure 20. Partial derivative calculation analysis. a) First order partial derivative of variable x, b) First order partial derivative of variable t, and c) Second order partial derivative of variable x.

∂U(x,t)/∂t likewise consists of two parts: ∂U_t/∂t (the first‐order partial derivative with respect to t of the single‐variable term accumulation U_t) and ∂U_comb/∂t (the first‐order partial derivative with respect to t of the composite term accumulation U_comb). The computed first‐order partial derivatives of the variable t are shown in Figure 20b.

The second‐order partial derivative of the variable x is essentially a further derivative calculation applied to the first‐order partial derivative results; the simulation results are illustrated in Figure 20c.

4.3.2. Partial Differential Equation (PDE) Error Error 2

The augmented matrix‐based neural network scheme for solving the biological Brusselator partial differential equations is feasible and effective, as demonstrated by Figure 21. The error (blue curve) generated during the solving process gradually tends to zero over time, remaining within the allowable error range.

Figure 21. Partial differential equation (PDE) error analysis.

5. Conclusion 

In this paper, the theoretical and technological development of molecular computing is advanced through three innovations. First, the augmented matrix encoding strategy transforms abstract neural network parameters into tunable DNA biochemical reaction rates and realizes multi‐parameter parallel computation using strand displacement cascade reactions. Second, the partial derivative computation scheme, based on the synergy of membrane diffusion and division operations, fills a theoretical gap in the spatial differential computation of molecular systems by emulating partial differential terms through precisely designed reaction networks. Third, the adaptive DNA neural network architecture supports the solving of biological partial differential equations, verifying the feasibility of continuous mathematical modeling in biochemical systems. Especially crucial is that the error feedback‐driven weight adjustment mechanism mirrors the principle of biological homeostatic regulation, which provides a new idea for constructing autonomous molecular controllers. The framework not only extends DNA computation from discrete logic to continuous differential mathematics; its molecular parallelism and continuous computing capability also enable real‐time dynamic analysis of complex biological networks (e.g., metabolic flux redistribution simulation). This breakthrough provides new ideas for synthetic biology and opens up a new path for developing molecular‐scale artificial intelligence systems with embedded physical intelligence.

6. Experimental Section

Chemical reaction networks (CRNs): Complex chemical reaction networks provide a framework to capture the intricate behaviors of chemical systems, even when such systems are purely theoretical. These networks are typically modeled by differential equations based on mass action kinetics, as exemplified by the representation of system (90):

\[
\begin{aligned}
\mathrm{i}&: S_a + S_b \xrightarrow{\theta} S_c &
\mathrm{ii}&: S_a \xrightarrow{k} S_a + S_b \\
\mathrm{iii}&: S_b \xrightarrow{\gamma} \varnothing &
\mathrm{iv}&: S_c \xrightarrow{\mu} 2S_c
\end{aligned}
\tag{90}
\]

The parameters θ and γ indicate the binding and degradation rates, k denotes the catalytic rate, and μ denotes the autocatalytic rate of Sc. The differential equations of Equation (90) can be written as:

\[
\begin{aligned}
\frac{dS_a}{dt} &= -\theta S_a S_b \\
\frac{dS_b}{dt} &= -\theta S_a S_b + k S_a - \gamma S_b \\
\frac{dS_c}{dt} &= \theta S_a S_b + \mu S_c
\end{aligned}
\tag{91}
\]

In addition, the corresponding stoichiometry matrix M, which encapsulates the fundamental stoichiometric relationships, can be systematically derived.

6. (92)
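Working from the reaction list in Equation (90), the stoichiometry matrix can be written out explicitly; the ordering used in this sketch (species Sa, Sb, Sc as rows and reactions i–iv as columns) is an assumption, since the original rendering of Equation (92) is not reproduced here.

```latex
M =
\begin{pmatrix}
-1 &  0 &  0 & 0 \\  % S_a: consumed in i; unchanged net in ii
-1 &  1 & -1 & 0 \\  % S_b: consumed in i and iii, produced in ii
 1 &  0 &  0 & 1     % S_c: produced in i; net +1 in iv
\end{pmatrix}
```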

Biological signaling in biochemical circuits fundamentally relies on concentration‐based quantification, but the inherent non‐negativity of chemical species concentrations precludes direct expression of negative values. The dual‐rail encoding strategy resolves this limitation through a differential pair representation: a signal r is defined as r = r⁺ − r⁻ via two opposing molecular species concentrations, with the corresponding kinetic relation ṙ = ṙ⁺ − ṙ⁻ maintaining mathematical consistency while allowing comprehensive signal representation.

DNA strand displacement reaction (DSD): Integrated complex chemical reactions are utilized to design circuits capable of realizing the desired behavior; to meet the functional requirements, the molecules are structured as DNA strands that interact through toeholds and their complementary toeholds.

For example, the DSD reactions implementing the step Sa + Sb →θ Sc of Equation (90) are given in Equation (93).

\[
\begin{aligned}
& S_a + R_a \underset{q_{\max}}{\overset{q_i}{\rightleftharpoons}} Zp_1 + Zp_2 \\
& S_b + Zp_1 \xrightarrow{q_{\max}} Zp_3 + Zp_4 \\
& Zp_4 + R_b \xrightarrow{q_{\max}} S_c + Zp_5
\end{aligned}
\tag{93}
\]

The chemical species Ra and Rb are classified as substrates, whereas S a and S b are identified as reactants, with S c serving as the target product. As illustrated in Figure  22 , the reaction mechanism comprises a combination of multiple reactions involving single‐stranded and double‐stranded DNA species. Additionally, methodologies for implementing DNA strand displacement (DSD) in broader chemical reaction networks (CRNs), including degradation pathways, are systematically detailed in Section 4.
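The cascade of Equation (93) can be checked numerically: with the substrates Ra and Rb in excess, material entering the reversible first step is drained forward by Sb and converted into Sc, and stoichiometry forces Sa consumed = Zp1 + Zp4 + Sc at all times. The rate constants and concentrations below are illustrative rather than the paper's tabulated values.

```python
# Euler mass-action simulation of the three DSD steps of Equation (93):
#   Sa + Ra <=> Zp1 + Zp2   (forward q_i, reverse q_max)
#   Sb + Zp1 -> Zp3 + Zp4   (q_max)
#   Zp4 + Rb -> Sc + Zp5    (q_max)
# Ra and Rb are auxiliary substrates in excess; all numbers illustrative.
def simulate_dsd93(sa=2.0, sb=2.0, ra=100.0, rb=100.0,
                   qi=1e-3, qmax=1e-2, dt=0.01, steps=100_000):
    zp1 = zp2 = zp4 = sc = 0.0
    for _ in range(steps):
        f1 = qi * sa * ra * dt       # Sa + Ra -> Zp1 + Zp2
        r1 = qmax * zp1 * zp2 * dt   # reverse of the first step
        f2 = qmax * sb * zp1 * dt    # Sb + Zp1 -> Zp3 + Zp4
        f3 = qmax * zp4 * rb * dt    # Zp4 + Rb -> Sc + Zp5
        sa += -f1 + r1
        ra += -f1 + r1
        zp1 += f1 - r1 - f2
        zp2 += f1 - r1
        sb -= f2
        zp4 += f2 - f3
        rb -= f3
        sc += f3
    return sa, sb, sc, zp1, zp4
```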

Figure 22. Schematic representation of DSD reactions of Equation (93).

Simulation Methods

This work relies on Visual DSD software for verification, and the overall research framework adheres to the technical roadmap depicted in Figure  23 .

Figure 23. Technical roadmap for the entire research framework.

Mathematical derivation and verification

The partial differential equation that should be solved is:

\[
\frac{\partial U(x,t)}{\partial t} = D_u \frac{\partial^2 U(x,t)}{\partial x^2} + k_c V' U^2(x,t) + k_a A' - k_b B' U(x,t) - k_d U(x,t) \tag{94}
\]

Assume the solution is a standard quadratic function:

\[
U(x,t) = ax^2 + bt^2 + cxt \tag{95}
\]

where a, b, c > 0.

Calculation of partial derivatives of solution functions:

\[
\begin{aligned}
\frac{\partial U(x,t)}{\partial t} &= \frac{\partial (ax^2 + bt^2 + cxt)}{\partial t} = 2bt + cx \\
\frac{\partial U(x,t)}{\partial x} &= \frac{\partial (ax^2 + bt^2 + cxt)}{\partial x} = 2ax + ct \\
\frac{\partial^2 U(x,t)}{\partial x^2} &= \frac{\partial (2ax + ct)}{\partial x} = 2a
\end{aligned}
\tag{96}
\]

Let D u = m 1, k c = m 2, V′ = v 1, k a = m 3, A′ = v 2, k b = m 4, B′ = v 3, k d = m 5; substituting Equation (96) into Equation (94), the following result can be obtained:

\[
(2bt + cx) - m_1(2a) - m_2 v_1 (ax^2 + bt^2 + cxt)^2 - m_3 v_2 + m_4 v_3 (ax^2 + bt^2 + cxt) + m_5 (ax^2 + bt^2 + cxt) = 0 \tag{97}
\]

Additional sorting can be obtained:

\[
(2bt + cx) - m_1(2a) - m_2 v_1 (ax^2 + bt^2 + cxt)^2 - m_3 v_2 + (m_4 v_3 + m_5)(ax^2 + bt^2 + cxt) = 0 \tag{98}
\]

Steady‐state condition analysis:

In order for x = 0.8, t = 0.5 to be a steady‐state solution, the partial differential equation is required to hold at that point. Substituting x = 0.8, t = 0.5 into Equation (95), Equation (99) can be produced.

\[
U(0.8, 0.5) = a(0.8)^2 + b(0.5)^2 + c(0.8)(0.5) = 0.64a + 0.25b + 0.4c \tag{99}
\]

Compute partial derivatives and other terms:

\[
\begin{aligned}
\left.\frac{\partial U(x,t)}{\partial t}\right|_{x=0.8,\,t=0.5} &= 2b(0.5) + c(0.8) = b + 0.8c \\
\left.\frac{\partial^2 U(x,t)}{\partial x^2}\right|_{x=0.8,\,t=0.5} &= 2a
\end{aligned}
\tag{100}
\]

Substituting Equations (99) and (100) into Equation (94), the following results are obtained:

\[
(b + 0.8c) - 2m_1 a - m_2 v_1 (0.64a + 0.25b + 0.4c)^2 - m_3 v_2 + (m_4 v_3 + m_5)(0.64a + 0.25b + 0.4c) = 0 \tag{101}
\]

Parameter determination:

Constraints

  • Parameters a, b, c, m 1, m 2, m 3, m 4, m 5, v 1, v 2, v 3 > 0

  • The equation satisfies the steady‐state solution at x = 0.8, t = 0.5.

Parameter Design

a = 1, b = 2, c = 0.5, m₁ = 0.5, m₂ = 0.2, m₃ = 1, m₄ = 0.3, m₅ = 0.1, v₁ = 1, v₂ = 2, v₃ = 2 (102)

Substituting the parameters into Equation (95), we obtain:

\[
U(x,t) = x^2 + 2t^2 + 0.5xt \tag{103}
\]

Calculate the terms for x = 0.8, t = 0.5:

\[
\begin{aligned}
U(0.8, 0.5) &= (0.8)^2 + 2(0.5)^2 + 0.5(0.8)(0.5) = 1.34 \\
\frac{\partial U}{\partial t} &= 2bt + cx = 2(2)(0.5) + 0.5(0.8) = 2.4 \\
\frac{\partial^2 U}{\partial x^2} &= 2a = 2
\end{aligned}
\tag{104}
\]

Substitute Equation (104) into the Equation (94) to verify:

\[
(2.4) - (0.5)(2) - 0.2(1)(1.34)^2 - (1)(2) + \big(0.3(2) + 0.1\big)(1.34) = -0.02112 \approx 0 \tag{105}
\]

Equation (103) is thus confirmed to satisfy Equation (94) within the allowable error; therefore,

\[
U(x,t) = x^2 + 2t^2 + 0.5xt \tag{106}
\]

is the desired steady‐state solution.
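The verification in Equation (105) is easy to re-run numerically; the term grouping below follows the mapping D_u = m₁, k_c = m₂, V′ = v₁, k_a = m₃, A′ = v₂, k_b = m₄, B′ = v₃, k_d = m₅ used in the derivation above.

```python
# Numeric re-check of the steady-state verification: substituting U = 1.34,
# dU/dt = 2.4 and d2U/dx2 = 2 (Equation 104) into the PDE leaves a residual
# of about -0.02112, i.e. approximately zero (Equation 105).
u, u_t, u_xx = 1.34, 2.4, 2.0
m1, m2, m3, m4, m5 = 0.5, 0.2, 1.0, 0.3, 0.1
v1, v2, v3 = 1.0, 2.0, 2.0
residual = u_t - m1 * u_xx - m2 * v1 * u ** 2 - m3 * v2 + (m4 * v3 + m5) * u
```

The residual matches the value reported in Equation (105) up to floating-point rounding.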

Conflict of Interest

The authors declare no conflict of interest.

Author Contributions

Y.X. and T.S. conceived the study. Y.X. designed the methodology and prepared the original draft. T.M., A.R.‐P. and T.S. provided conceptual guidance and wrote the manuscript. J.W., P.Z. and T.S. reviewed and edited the manuscript. T.S. supervised the study and acquired funding.

Acknowledgements

The authors gratefully acknowledge Postdoc Mathieu Hemery for help with understanding the differential calculations utilizing membrane diffusion theory. This work was supported by National Natural Science Foundation of China (Grant Nos. 62272479, 62372469, 62202498), Taishan Scholarship (tstp20240506).

Xiao Y., Rodríguez‐Patón A., Wang J., Zheng P., Ma T., and Song T., “Programmable DNA‐Based Molecular Neural Network Biocomputing Circuits for Solving Partial Differential Equations.” Adv. Sci. 12, no. 33 (2025): e07060. 10.1002/advs.202507060

Contributor Information

Tongmao Ma, Email: tongmao.ma@upc.edu.cn.

Tao Song, Email: tsong@upc.edu.cn.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.



Articles from Advanced Science are provided here courtesy of Wiley
