Abstract
Sparse connectivity is a hallmark of the brain and a desired property of artificial neural networks. It promotes energy efficiency, simplifies training, and enhances the robustness of network function. Thus, a detailed understanding of how to achieve sparsity without jeopardizing network performance is beneficial for neuroscience, deep learning, and neuromorphic computing applications. We use an exactly solvable model of associative learning to evaluate the effects of various sparsity-inducing constraints on connectivity and function. We determine the optimal level of sparsity achieved by the ℓ0 norm constraint and find that nearly the same efficiency can be obtained by eliminating weak connections. We show that this method of achieving sparsity can be implemented online, making it compatible with neuroscience and machine learning applications.
Sparsity is a central theme across disciplines. In neuroscience, it is a characteristic feature of brain networks [1], offering advantages such as simplified circuit development, reduced brain volume and wiring, lower metabolic cost, and more efficient learning. In compressed sensing, sparsity enables signal recovery from a limited number of measurements [2]. In neuromorphic computing, it reduces latency and power consumption while enhancing scalability and robustness [3]. In machine learning, it improves generalizability and interpretability of the models, while reducing computational cost and training time [4]. Across all fields, sparsity simplifies the construction and maintenance of the systems, albeit at the expense of functionality, as fewer connections can simply do less. Thus, understanding how to achieve sparsity without significantly compromising the system’s function is of paramount importance.
To study the sparsity-function tradeoff in detail, we considered a model of associative memory storage by a single perceptron [5,6]. This model is inspired by the brain and serves as a building block in machine learning networks (Figure 1A). It offers the advantage of analytical tractability while capturing key properties of larger networks with regard to learning capacity and connectivity. In its simplest form, the model was first solved by Cover [7], who showed that a perceptron with N inputs can perform binary classification of up to 2N random input patterns. Subsequently, advances in statistical physics, such as replica and cavity methods [8–10], have enabled solutions of more general perceptron models [11–17], albeit in the N → ∞ limit. Notable examples relevant to this work include the study of Gardner and Derrida [11], who determined the perceptron’s capacity for robustly learning patterns under an ℓ2 norm constraint on connection weights; Bouten et al. [12], who extended this approach by including an ℓ0 norm constraint; Brunel et al. [13], who introduced a fixed firing threshold and all excitatory inputs to model learning by cerebellar Purkinje cells; and Chapeton et al. [14], who incorporated both excitatory and inhibitory inputs to model cortical neurons.
Figure 1:

Constrained perceptron as a neuron-based model of associative learning. A. Schematic of a cortical neuron receiving inputs from multiple axons (red) carrying action potentials. The neuron is modeled by a binary perceptron that learns by adjusting its connection strengths under biologically inspired constraints. B. Probability of successfully learning a set of associations vs. their number, m. C. At capacity, the distribution of connection strengths of a perceptron receiving N = 250 inputs (green bars) agrees with the prediction of replica theory (black line). Results were obtained for an ℓ1 norm constrained perceptron (base model) using the parameters shown in (B, C).
Parallel to analytical efforts, the perceptron learning rule was developed to solve learning problems numerically, ensuring convergence in feasible cases [18–20]. The rule applies in online settings, where examples arrive sequentially, making it highly relevant for neuroscience and machine learning. However, findings from replica theory and applications of the perceptron learning rule have revealed that the high learning capacity of the simple perceptron is achieved through dense connectivity.
The central message of this letter is that sparsity in learning emerges from constraints on connectivity and function. For example, one way to impose sparsity is through a quenched ℓ0 norm constraint (qℓ0), where a fixed subset of connections is removed before learning begins. While simple, this method is inefficient, as connections are cut without knowledge of the data. In contrast, the annealed ℓ0 norm constraint fixes the total number of connections but allows learning to select them based on the examples. This data-driven flexibility yields optimal sparsity but results in an NP-hard problem [21], making it impractical for online learning. Still, an analytical solution in this case can be obtained in the N → ∞ limit, providing a valuable benchmark for evaluating other methods.
In this letter, we explore additional methods of inducing sparsity during learning by imposing constraints, focusing on those inspired by the brain. We show how the strengths of the constraints mediate tradeoffs between sparsity and learning capacity. Notably, we introduce gap constraints that, by disallowing small-weight connections, yield a near-optimal tradeoff between sparsity and learning capacity, like that achieved by the ℓ0 norm. We develop a perceptron-type learning rule for online associative learning under gap constraints, showing that, beyond promoting sparsity, these constraints improve both accuracy and generalizability.
Associative memory storage with sparsity-inducing constraints (SICs)—
To study how SICs affect network connectivity and function, we considered a model of associative memory storage by a single perceptron (Figure 1A). In its basic form, the model works as follows: a perceptron of N inputs (indexed by j) stores m binary (0, 1) input-output associations, {xμ → yμ}, μ = 1, …, m, by adjusting its connection strengths, Jj. Components xjμ of the input vectors xμ and the scalars yμ are drawn independently from Bernoulli distributions with respective probabilities fj and fout of being 1. An association μ is successfully stored if the postsynaptic input, Σj Jj xjμ, is < 0 for yμ = 0, or > 0 for yμ = 1. These conditions can be combined using the Heaviside step function,
Θ((2yμ − 1) Σj Jj xjμ) = 1,  μ = 1, …, m.  (1)
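As a concrete illustration of Eq. (1), the storage condition can be checked directly. The following minimal Python sketch is our own; the helper name `is_stored` and the toy data are illustrative choices, not part of the model definition:

```python
def is_stored(J, X, y):
    """Check the storage condition of Eq. (1): the postsynaptic input
    must be > 0 for y = 1 and < 0 for y = 0, for every association."""
    for x_mu, y_mu in zip(X, y):
        s = sum(Jj * xj for Jj, xj in zip(J, x_mu))  # postsynaptic input
        if not (s > 0 if y_mu == 1 else s < 0):
            return False
    return True

# Toy example with N = 2 inputs and m = 2 associations.
J = [1.0, -1.0]            # connection strengths J_j
X = [[1, 0], [0, 1]]       # binary input patterns x^mu
y = [1, 0]                 # desired binary outputs y^mu
print(is_stored(J, X, y))  # prints True: both associations are stored
```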
The properties of this basic model have been extensively studied analytically and numerically. As shown in Figure 1B, the probability of successfully storing a set of m associations decreases with memory load m/N, exhibiting a transition from 1 to 0 as the problem becomes increasingly infeasible. The memory load yielding a success probability of 0.5 is called the perceptron’s associative memory storage capacity, α. With increasing N, the transition from successful learning to inability to learn the entire set of associations sharpens, and α approaches the critical capacity, αc, in the N → ∞ limit.
Since the solution space of Eq. (1) is open—if {Jj} is a solution, so is {cJj} for all c > 0—regularization is needed to study the perceptron’s connections. We do this by fixing the ℓ1 norm of connection strengths, Σj |Jj| = Nw, where w denotes the average connection magnitude. This regularization is inspired by the brain, where the strength of a synaptic connection correlates with limiting resources like presynaptic vesicle count and postsynaptic spine volume [22]. While the ℓ1 norm is widely used to promote sparsity in regression and deep learning, in this model it alone does not induce sparsity during learning. Instead, we find that at capacity, the distribution of Jj in this base model is Gaussian (Figure 1C) [23], and associations are stored with dense connectivity.
In the following, we investigated the effects of several brain-inspired constraints, added to the base model to induce sparsity during learning. (1) Since cortical neurons are believed to operate with fixed firing thresholds [14], we added a fixed threshold h ≥ 0 to the base model of Eq. (1). (2) Memory retrieval in the brain must tolerate some postsynaptic noise [24]; therefore, we included a robustness constraint ensuring perfect recall of stored associations under bounded noise η, |η| ≤ κNw, where κ is called the robustness parameter. (3) Synaptic strengths in the cerebral cortex can change during learning, but excitatory connections (gj = +1) typically do not become inhibitory (gj = ‒1), and vice versa, consistent with Dale’s principle [25]. We modeled this as sign constraints, gjJj ≥ 0. (4) We also considered an ℓ0 norm constraint that keeps the number of non-zero-weight connections fixed during learning. It can be expressed as Σj Θ(|Jj|) = pN, where p denotes the fraction of non-zero-weight connections. Although there is no direct evidence that neurons in the brain maintain a constant synapse count during learning, this constraint yields optimal sparsity and serves as a benchmark for other models. (5) Due to the brain’s discrete molecular machinery, functional synapses have a minimum weight known as quantal amplitude [26,27]. We modeled this with gap constraints, requiring weights to be either zero (no connection or silent synapse) or above a threshold, |Jj| ≥ Δj (functional synapse), creating gaps in the connection strength distribution. A sparse associative learning model may incorporate some or all of these constraints,
Θ((2yμ − 1)(Σj Jj xjμ − h) − κNw) = 1,  μ = 1, …, m;
Σj |Jj| = Nw;  gjJj ≥ 0;  Σj Θ(|Jj|) = pN;  Jj = 0 or |Jj| ≥ Δj.  (2)
For an intuitive explanation of how the selected constraints promote sparsity during learning, see Appendix A.
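To illustrate how the constraints of Eqs. (2) can be enforced numerically, the sketch below projects a weight vector onto the sign, gap, and ℓ1 constraints. The function name and the single-pass projection order are our simplifying assumptions, not the learning rule of this letter; in particular, when the ℓ1 rescaling shrinks the weights it can reopen gap violations, so in practice the pass may need to be iterated:

```python
def enforce_sics(J, g, delta, w):
    """One projection pass onto the sign, gap, and l1 constraints of
    Eqs. (2). Single-pass is a simplification: if the l1 rescaling
    shrinks the weights, it can push some back into the gap."""
    N = len(J)
    # sign constraints (Dale's principle): weights may not cross zero
    J = [Jj if Jj * gj >= 0 else 0.0 for Jj, gj in zip(J, g)]
    # gap constraints: weights below the minimal magnitude are silenced
    J = [0.0 if 0 < abs(Jj) < delta else Jj for Jj in J]
    # l1 norm constraint: rescale so that sum_j |J_j| = N * w
    l1 = sum(abs(Jj) for Jj in J)
    return [Jj * N * w / l1 for Jj in J] if l1 > 0 else J

# Toy example: N = 4 weights, mixed excitatory/inhibitory signs.
J = enforce_sics([0.6, -0.3, 0.2, -1.1], g=[1, -1, 1, -1], delta=0.5, w=1.0)
# the two weak weights are silenced and the rest are rescaled to l1 = 4
```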
Analytical solution of the sparse learning model in the large N limit—
Since neurons in biological and artificial networks typically receive a large number of inputs (N ~ 10³–10⁶), we first applied the replica method to analyze the sparse learning model of Eqs. (2) in the N → ∞ limit. Our previous findings, along with Figures 1B and 1C, show that replica solutions at capacity closely approximate the properties of finite-size models even for N ~ 100. Extending previous work [11–14,17], we derived a general analytical solution that encompasses all 2⁶ = 64 combinations of constraints of Eqs. (2) [23].
Figure 2 (black lines) shows how the critical capacity, αc, sparsity, S (fraction of zero-weight connections), and the probability density of normalized connection strengths, p(J/w), depend on the strengths of the five SICs. The ℓ1 norm constraint was included in all cases to ensure finite solutions, and all relevant quantities were normalized by w. For brevity, we only show the results for homogeneous models with fj = fout = 0.5 and Δj = Δ. At its maximum, αc = 2, in agreement with the result of Cover [7]. This maximal capacity is achieved only by a fully connected perceptron (S = 0) in the absence of SICs. Constraints increase S at the expense of αc, establishing tradeoffs between these structural and functional properties of the perceptron.
Figure 2:

Effects of constraints on critical capacity and connectivity. Critical capacity (A1) and sparsity (A2) obtained with replica theory (black lines) and linear optimization (green points) in the h + ℓ1 case are shown as functions of the threshold. Model parameters are in (A1). A3. The probability density of connection strengths for h/Nw = 0.4 (blue circle in A1,2) contains a fraction S = 0.34 of zero-weight connections (dashed arrow at 0 representing a Dirac delta function). Same for the κ + ℓ1 (B1–3), sign + ℓ1 (C1–3), ℓ0 + ℓ1 (D1–3), and gap + ℓ1 (E1–3) cases. The dashed line in (D1) corresponds to the quenched ℓ0 norm constraint case (qℓ0). In (D3), the distribution obtained with replica theory exhibits symmetric gaps around zero. In (E3), finite fractions of weights accumulate at the outer edges of the gaps (s±).
At critical capacity, the connection strength distribution p(J/w) of a constrained perceptron is generally composed of two truncated Gaussians (for J > 0 and J < 0) and a finite fraction of J = 0 connections (Figure 2, bottom row). In the ℓ0 + ℓ1 case, p(J/w) contains a gap on each side of J = 0, indicating the absence of weak non-zero-weight connections. This gap, induced by the ℓ0 norm, promotes sparsity with minimal impact on memory storage capacity. Since the ℓ0 constraint provides optimal sparsity for a given αc by creating gaps in p(J/w), we expected to achieve a similar result with engineered gaps that are biologically inspired. Indeed, the solution of the model in the gap + ℓ1 case (Figure 2E) confirms that local gap constraints can promote sparsity nearly as effectively as the global ℓ0 constraint. However, in the gap case, p(J/w) contains additional finite fractions of connections at the outer edges of the gaps, s±, reflecting a slightly lower efficiency of these constraints in comparison with ℓ0. All replica results were validated with numerical simulations (green points and bars in Figures 2 and S1) as detailed in Appendix B.
The tradeoff between capacity and sparsity is mediated by constraints—
Figure 3A explicitly shows the tradeoff between critical capacity and sparsity across the five constraint types. The ℓ0 + ℓ1 constraint (blue line) yields the optimal tradeoff, i.e., it is the closest curve to the unattainable S = 1, αc = 2 point (top right corner). Remarkably, this constraint makes it possible to achieve nearly maximal capacity (αc = 1.86) with only 50% of the input connections (S = 0.5), a result first reported by Bouten et al. [12] in a related model. The gap + ℓ1 case closely follows the optimal tradeoff line, reaching αc = 1.82 at the same sparsity level. This is much better than the trivial way of achieving sparsity by simply pruning a set fraction of connections before learning (qℓ0, thick gray line). The latter is equivalent to the h + ℓ1 and sign + ℓ1 cases, which, however, have more limited ranges. The κ + ℓ1 constraint offers the least effective tradeoff between capacity and sparsity. However, by directly enforcing the robustness of stored memories, this constraint provides an added functional benefit.
Figure 3:

Sparsity in constrained perceptron models. A. Tradeoff between sparsity and critical capacity across models (key in B). Arrows show directions of increasing constraint strength. The gray line (qℓ0 + ℓ1 case) is thickened for clarity. B. In the ℓ0 + ℓ1 case (blue), sparsity induces a gap in connection strength distribution, while in the gap + ℓ1 case (green), the gap is engineered to induce sparsity.
Notably, the two best tradeoff cases, ℓ0 + ℓ1 and gap + ℓ1, function with no weak non-zero-weight connections. This is not unexpected, as eliminating a weak connection has a relatively small effect on the perceptron’s function while increasing sparsity by the same fixed amount that removal of any connection would produce. Figure 3B shows that in both cases, sparsity correlates with gap size, with the gap + ℓ1 model requiring a larger gap to match the sparsity of ℓ0 + ℓ1.
Online learning with SICs—
We next tested whether the insights from the analytical solution extend to practical problems, such as learning to discriminate finite and correlated input patterns online. Such problems arise in machine learning and are arguably encountered by neurons in the brain. They are typically infeasible and only require “good enough” or approximate solutions that succeed in most cases. In online learning, examples are presented sequentially, and learning proceeds stochastically for a fixed number of steps or until the desired accuracy is achieved. To explore this, we modified the perceptron learning rule [19] so that each update step not only learns associations but also enforces the SICs of Eqs. (2) [23]. Given the lack of biological plausibility and the NP-hard nature of the ℓ0 norm constraint, we did not attempt to extend the sparse learning rule to the ℓ0 + ℓ1 case but considered qℓ0 + ℓ1 instead. While the computational complexity of the gap-constrained problem is unknown, it appears more tractable for online algorithms, likely due to the local nature of these constraints.
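The flavor of such a rule can be conveyed with a small sketch. The code below is an illustrative stand-in, not the exact sparse learning rule of this letter: it combines mistake-driven perceptron updates with a stochastic gap projection (in-gap weights are sent to zero or to the gap edge), and omits the ℓ1 normalization for simplicity:

```python
import random

def train_gap_perceptron(X, y, delta=0.15, lr=0.1, max_epochs=3000, seed=0):
    """Perceptron-type online rule with a gap constraint: after each
    mistake-driven update, weights inside the gap (0 < |J| < delta) are
    stochastically set to zero or to the outer edge of the gap."""
    rng = random.Random(seed)
    N = len(X[0])
    J = [0.0] * N
    for _ in range(max_epochs):
        mistakes = 0
        for x_mu, y_mu in zip(X, y):
            s = sum(Jj * xj for Jj, xj in zip(J, x_mu))
            if (1 if s > 0 else 0) != y_mu:  # s == 0 treated as output 0
                mistakes += 1
                step = lr if y_mu == 1 else -lr
                for j, xj in enumerate(x_mu):
                    if xj:
                        J[j] += step
                # stochastic gap projection step
                for j in range(N):
                    if 0 < abs(J[j]) < delta:
                        J[j] = 0.0 if rng.random() < 0.5 else \
                               (delta if J[j] > 0 else -delta)
        if mistakes == 0:
            break
    return J

# Toy problem: N = 20 inputs, m = 8 random binary associations.
rng = random.Random(1)
N, m = 20, 8
X = [[rng.randint(0, 1) for _ in range(N)] for _ in range(m)]
y = [rng.randint(0, 1) for _ in range(m)]
J = train_gap_perceptron(X, y)
sparsity = sum(1 for Jj in J if Jj == 0.0) / N  # fraction of zero weights
```

By construction, the returned weights satisfy the gap constraint: every connection is either silent or has a magnitude of at least delta.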
We evaluated the sparse learning rule on the classification task of the MNIST handwritten digits [28] (see Appendix B). Figure 4 shows that the results are in qualitative agreement with the analytical results of Figures 2 and 3. Differences could be attributed to the fact that the MNIST images are spatially correlated, while replica results were derived for random input patterns. The highest balanced accuracy during training is achieved without SICs and with dense connectivity (S = 0). With increasing strengths of the constraints, the accuracy declines monotonically while S rises, revealing a tradeoff similar to the analytical results of Figure 2.
Figure 4:

Sparse learning with an online perceptron-type rule. Balanced accuracy (A1) and sparsity (A2) during training and testing on the MNIST dataset in the h + ℓ1 constrained case, plotted vs. threshold. A3. Tradeoff between sparsity and balanced accuracy. Same in the κ + ℓ1 (B1–3), sign + ℓ1 (C1–3), qℓ0 + ℓ1 (D1–3), and gap + ℓ1 (E1–3) cases.
All SICs improve generalizability, as evidenced by the convergence of the training and testing accuracy curves with increasing strengths of the constraints. As expected, robustness to noise can improve testing accuracy, as shown by the initial rise of the green curve in Figure 4B1. Interestingly, a similar effect is seen in the gap + ℓ1 case, demonstrating that removing weak connections, in addition to increasing generalizability, can improve the model’s robustness to noise and small input variations.
Figure 4 (bottom row) shows that the tradeoff curves of balanced accuracy and sparsity follow a similar ordering to the analytical results of Figure 3A. In the absence of the ℓ0 + ℓ1 case, the best tradeoff is achieved by the gap constraints (Figure 4E3). A balanced test accuracy of 0.92 is attained by an unconstrained, fully connected system (S = 0). Remarkably, the same test accuracy could be achieved using gap constraints with only 15% of connections, while the maximum test accuracy of 0.94 required 40% of connections. Whether gap constraints contribute to the sparse connectivity observed in the cerebral cortex (Appendix C) and whether they can be effectively implemented in deep learning applications (Appendix D), remains an open question.
Supplementary Material
Acknowledgments—
This work was supported by AFOSR grant FA9550-15-1-0398, NSF grant IIS-1526642, and NIH grant R56 NS128413.
End Matter
Appendix A: How constraints lead to sparsity during learning—
We note that the problem of memory storage to capacity, i.e., maximization of memory load for a fixed w, is equivalent to the problem of minimization of w for a given load. The solution region of the former problem is not easy to visualize as it involves an increasing number of associations, each represented by a hyperplane that selects half of the connection strength space. The solution region of the latter problem can be visualized more easily, as it involves a continuously changing variable w. Figure 5 illustrates the solution region for a perceptron with N = 2 inputs (axes) learning m = 2 associations (blue half-planes) in the presence of h, κ, ℓ0, and gap constraints. In these cases, minimization of w selects a unique solution that is sparse.
Figure 5:

Constraints can induce sparse connectivity. A. A binary perceptron with N = 2 inputs and a threshold h > 0 is loaded with m = 2 associations (blue half-planes) in the presence of an ℓ1 norm constraint (orange square). The solution region of this problem is indicated with black line segments. Minimization of ℓ1 norm, which is achieved by scaling down the orange square, leads to a unique solution that is sparse (corner marked by a black dot). Similar arguments hold in the presence of robustness constraint, κ (B), and gap constraints, Δ (C). D. In the presence of an ℓ0 norm constraint (blue arrows, p = 0.5 in the example), the solution is sparse by design.
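The geometry of Figure 5 can be checked numerically. Below is an illustrative brute-force search of our own, with an assumed pair of associations (x = (1,0) → y = 1 and x = (1,1) → y = 1) and threshold h = 1; minimizing the ℓ1 norm over the feasible region indeed selects a sparse corner solution with one zero-weight connection:

```python
def min_l1_solution(h=1.0, step=0.01, bound=2.0):
    """Brute-force search over an N = 2 weight grid for the minimum
    l1-norm weights storing two associations with threshold h:
    x = (1,0) -> y = 1 requires J1 >= h, and
    x = (1,1) -> y = 1 requires J1 + J2 >= h
    (strict inequalities are relaxed to >= on the grid)."""
    best, best_norm = None, float("inf")
    n = int(round(bound / step))
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            J1, J2 = i * step, j * step
            if J1 >= h and J1 + J2 >= h:       # both associations stored
                norm = abs(J1) + abs(J2)
                if norm < best_norm:
                    best, best_norm = (J1, J2), norm
    return best

J_opt = min_l1_solution()  # the l1-minimal solution is the sparse corner (h, 0)
```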
Appendix B: Validation of the results of the replica method and simulations with the sparse learning rule—
Linear programming was used to validate the results of the replica method for models under h, κ, and sign constraints, while integer linear programming was used for the ℓ0 and gap constrained cases (Figures 2 and S1) [23]. All results were averaged over 100 runs for every parameter setting. Due to the higher computational complexity of the latter method, numerical solutions in the last two cases were limited to comparatively small values of N, leading to an underestimation of critical capacity (as seen in Figure 1B), though sparsity estimates agreed with the replica results (Figure 2).
To evaluate the performance of the sparse learning rule on the task of classification of handwritten digits of the MNIST dataset [28], a separate constrained perceptron model was trained to classify each digit. Each model was trained on 60,000 and tested on 10,000 examples in which all images of that digit, Xμ, were associated with yμ = 1 and the rest with yμ = 0. Because the binary classification problems solved by these models are highly unbalanced (Positive/Negative examples ~ 1/9 ratio), Balanced Accuracy [29] was used instead of capacity to evaluate their performance. The model was trained for 106 learning steps, and the training and testing results were averaged over all digits.
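For reference, Balanced Accuracy is the mean of the per-class recalls, which makes it insensitive to the ~1/9 class imbalance. A minimal sketch (our own helper, assuming both classes are present in y_true):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of the true positive rate and the true negative rate.
    Assumes y_true contains at least one example of each class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tpr = tp / (tp + fn)  # recall on the positive (digit) class
    tnr = tn / (tn + fp)  # recall on the negative (rest) class
    return 0.5 * (tpr + tnr)

# With a 1/9 imbalance, always predicting the majority class scores
# roughly 0.9 plain accuracy but only 0.5 balanced accuracy.
```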
Appendix C: On comparison with brain connectivity—
Although a direct quantitative comparison of learning in the brain and the constrained learning models presented here is unwarranted, we provide some numerical values to stimulate further research. It is well established that the brain operates with sparse connectivity, with S ≈ 0.8–0.9 in the cerebral cortex [17]. The quantal amplitude can be estimated from paired recordings combined with the synaptic transmission model of del Castillo & Katz [26], yielding Δ/w ≈ 0.2–0.4 [27]. In contrast, the maximum testing accuracy in Figure 4E1 corresponds to S = 0.55 and Δ/w = 1.1, and to achieve the level of sparsity observed in the brain (e.g., S = 0.85), a much larger gap is required, Δ/w = 4.8. This discrepancy is likely to be reduced by combining several SICs of Eqs. (2), which would provide a more appropriate model of brain neurons.
Appendix D: Deep learning implementation of SICs and comparison to traditional pruning methods—
SICs can be incorporated into traditional deep learning workflows by adding them directly into the loss function, enforcing them after each weight update step, or augmenting the training data. Specifically, the ℓ1 norm constraint is already widely used in deep learning and is typically included as a regularization term in the loss function [30]. The fixed threshold constraint, h, can be directly embedded into the neuron model itself. The robustness constraint, κ, can be implemented by augmenting training examples, weights, or neuron inputs with noise [24,31]. The sign and gap inequality constraints can either be incorporated into the loss function using parametrization, penalty, or barrier methods [32], or they can be enforced after each weight update via weight projection (clipping) [33], similar to the way this is done by the sparse learning rule presented here. Our theoretical results suggest that the gap + ℓ1 combination will lead to a promising balance between accuracy and sparsity. However, the practical effectiveness of the approach may depend heavily on the details of implementation.
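As one hedged illustration of the penalty approach for the gap constraints, each weight can be charged its distance to the nearest feasible value (zero, or a magnitude of at least Δ). The function below is our own construction, not a published formulation; since it is piecewise linear, it would be handled with subgradients or a smooth surrogate in practice:

```python
def gap_penalty(weights, delta):
    """Sum over weights of the distance to the nearest feasible value:
    zero for weights outside the gap, min(|J|, delta - |J|) inside it."""
    total = 0.0
    for Jw in weights:
        a = abs(Jw)
        if 0.0 < a < delta:
            total += min(a, delta - a)  # cheaper to snap to 0 or to the edge
    return total

# Usage in a training loop (lam, a penalty strength, is an assumption):
# loss = task_loss + lam * gap_penalty(model_weights, delta)
```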
The described methods of inducing sparsity through constraints parallel pruning strategies commonly used in deep learning [34]. Specifically, our sparse learning rule under the gap constraint resembles Dynamic Sparse Training with Rewiring [34]. This is a class of methods that begin with randomly initialized sparse networks and dynamically reallocate unimportant weights to more promising locations during training. Our theoretical solution suggests that the best implementation of this strategy would involve stochastically setting connection weights that fall within the gap regions either to zero or to the outer edges of the gaps.
References
- [1].Thomson AM and Lamy C, Front Neurosci 1, 19 (2007).
- [2].Donoho DL, IEEE Transactions on Information Theory 52, 1289 (2006).
- [3].Davies M, Wild A, Orchard G, Sandamirskaya Y, Guerra GAF, Joshi P, Plank P, and Risbud SR, Proceedings of the IEEE 109, 911 (2021).
- [4].Louizos C, Welling M, and Kingma DP, arXiv preprint arXiv:1712.01312 (2017).
- [5].McCulloch WS and Pitts W, The Bulletin of Mathematical Biophysics 5, 115 (1943).
- [6].Rosenblatt F, Cornell Aeronautical Laboratory Report 85-460-1 (1957).
- [7].Cover TM, IEEE Trans. EC 14, 326 (1965).
- [8].Edwards SF and Anderson PW, J. Phys. F: Metal Phys. 5, 965 (1975).
- [9].Sherrington D and Kirkpatrick S, Physical Review Letters 35, 1792 (1975).
- [10].Mézard M, Parisi G, and Virasoro MA, Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications (World Scientific Publishing Company, 1987), Vol. 9.
- [11].Gardner E and Derrida B, J. Phys. A: Math. Gen. 21, 271 (1988).
- [12].Bouten M, Engel A, Komoda A, and Serneels R, Journal of Physics A: Mathematical and General 23, 4643 (1990).
- [13].Brunel N, Hakim V, Isope P, Nadal JP, and Barbour B, Neuron 43, 745 (2004).
- [14].Chapeton J, Fares T, LaSota D, and Stepanyants A, Proc Natl Acad Sci U S A 109, E3614 (2012).
- [15].Brunel N, Nat Neurosci 19, 749 (2016).
- [16].Rubin R, Abbott LF, and Sompolinsky H, Proc Natl Acad Sci U S A 114, E9366 (2017).
- [17].Zhang D, Zhang C, and Stepanyants A, J Neurosci 39, 6888 (2019).
- [18].Hebb DO, The Organization of Behavior: A Neuropsychological Theory (Wiley, New York, 1949).
- [19].Rosenblatt F, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Spartan Books, Washington, 1962).
- [20].Novikoff AB, SRI Menlo Park, CA (1963).
- [21].Natarajan BK, SIAM Journal on Computing 24, 227 (1995).
- [22].Holtmaat A and Svoboda K, Nat Rev Neurosci 10, 647 (2009).
- [23].See Supplemental Material at http://link.aps.org/supplemental/10.1103/918k-x6np, which includes Ref. [35] for details.
- [24].Zhang C, Zhang D, and Stepanyants A, eNeuro 8 (2021).
- [25].Dale H, Proc R Soc Med 28, 319 (1935).
- [26].Del Castillo J and Katz B, J Physiol 124, 560 (1954).
- [27].Markram H, Lubke J, Frotscher M, Roth A, and Sakmann B, J Physiol 500 (Pt 2), 409 (1997).
- [28].LeCun Y, http://yann.lecun.com/exdb/mnist/ (1998).
- [29].Mower JP, BMC Bioinformatics 6, 1 (2005).
- [30].Hoefler T, Alistarh D, Ben-Nun T, Dryden N, and Peste A, Journal of Machine Learning Research 22, 1 (2021).
- [31].Shorten C and Khoshgoftaar TM, Journal of Big Data 6, 1 (2019).
- [32].Luenberger DG and Ye Y, Linear and Nonlinear Programming, 397 (2016).
- [33].Arjovsky M, Chintala S, and Bottou L, in International Conference on Machine Learning (PMLR, 2017), pp. 214.
- [34].Cheng H, Zhang M, and Shi JQ, IEEE T Pattern Anal (2024).
- [35].van Vreeswijk C and Sompolinsky H, Neural Comput 10, 1321 (1998).