Proceedings of the National Academy of Sciences of the United States of America
2023 Jul 24;120(31):e2305273120. doi: 10.1073/pnas.2305273120

Modeling and design of heterogeneous hierarchical bioinspired spider web structures using deep learning and additive manufacturing

Wei Lu a,b, Nic A Lee a,c, Markus J Buehler a,b,d,e,1
PMCID: PMC10401013  PMID: 37487072

Significance

We report a graph-focused deep learning technique to capture the complex design principles of graph architectures—here exemplified for 3D spider webs—and use the model to generate a diverse array of de novo bioinspired structural designs. The work lays the foundation for spider web generation and explores bioinspired design using rigorous principles. A set of innovative spider web designs is constructed and manufactured, consisting of varied designs with diverging architectural complexity. In future work, this method can be applied to other heterogeneous hierarchical structures, including a broad class of architected materials, and hence can offer opportunities for fundamental biological understanding and meet a set of diverse design objectives via the use of generative artificial intelligence for materials applications.

Keywords: hierarchical materials, transformer, generative deep learning, bioinspired, mechanics

Abstract

Spider webs are incredible biological structures, comprising thin but strong silk filaments arranged into complex hierarchical architectures with striking mechanical properties (e.g., lightweight but high strength, achieving diverse mechanical responses). While simple 2D orb webs can easily be mimicked, the modeling and synthesis of 3D-based web structures remain challenging, partly due to the rich set of design features. Here, we provide a detailed analysis of the heterogeneous graph structures of spider webs and use deep learning as a way to model and then synthesize artificial, bioinspired 3D web structures. The generative models are conditioned based on key geometric parameters (including average edge length, number of nodes, average node degree, and others). To identify graph construction principles, we use inductive representation sampling of large experimentally determined spider web graphs to yield a dataset that is used to train three conditional generative models: 1) an analog diffusion model inspired by nonequilibrium thermodynamics, with sparse neighbor representation; 2) a discrete diffusion model with full neighbor representation; and 3) an autoregressive transformer architecture with full neighbor representation. All three models are scalable, produce complex, de novo bioinspired spider web mimics, and successfully construct graphs that meet the design objectives. We further propose an algorithm that assembles web samples produced by the generative models into larger-scale structures based on a series of geometric design targets, including helical and parametric shapes, mimicking and extending natural design principles toward integration with diverging engineering objectives. Several webs are manufactured using 3D printing and tested to assess mechanical properties.


Many hierarchical material architectures are found in biological materials (1–5), such as in bone (2, 6, 7), wood (8), sea sponges and diatoms (9–13), honeycomb structures (14–16), and various spider webs (17–28). Like all biological structures, spider webs have been well adapted to their natural environments over hundreds of millions of years of evolution (29, 30), and a diversity of web structures exists across spider species. The underlying strong silk filament and complex hierarchical architectures, ranging from protein-scale amino acid chains to macroscale web geometries, provide the structure with 1) exceptional mechanical and biological properties, 2) multiple incorporated functions, and 3) diverse geometries (29, 31, 32). With emerging additive manufacturing tools (33–36), it is now possible to generate physical representations of many hierarchical designs, although the synthesis of design solutions remains a challenge, especially when it comes to generating bioinspired architectures that are rigorously derived from their biological design principles (37–40).

Spider webs (Fig. 1) exhibit remarkable mechanical properties, including strength, extensibility, and toughness despite their lightweight nature. Moreover, spider silk is biodegradable, biocompatible, and produced from abundant, renewable resources (32). Spider web structures are also highly adaptive and provide integrated functions such as capturing prey, protection from predators, and structural stabilization under external actions from wind, projectiles, and shifting foundations. In operando, spiders modify webs for multiple needs and self-monitor webs by sensing vibrational information to locate defects and external disturbances. These monitoring processes then facilitate prompt responses and proper adjustments to the web architecture (21). It is known that the primary web architecture and functions are typically constructed within the first few days, and silk is then recycled and produced for structural repairs thereafter (32). An orb web is the archetypical 2D spider web architecture, which consists of spiral silks and radial threads (41), while 3D webs such as cobwebs, tangle webs, and funnel webs display more complex structures, higher degrees of randomness, and uncertainty regarding interrelationships among web entities. In addition, local variations with distinct geometries and mechanical properties are observed within 3D spider webs (42). Both 2D and 3D web geometries naturally vary according to disparities in construction, orientation to the surrounding environment, prey-catching mechanisms (43–45), and defense mechanisms (44–46).

Fig. 1.

Fig. 1.

Overview of the work, including details on graph representation strategies. (A) Summary of the overall approach reported in this paper. Based on a spider web scanned using laser tomography (24–26, 29, 32), we develop a graph model, which is then modeled and transformed into bioinspired, synthetic structures. As shown in panels B and C for two webs, the structures feature a high degree of heterogeneity; via coloring, we show the variation of edge length, clustering coefficient (47), and valence (number of neighbors for each node) (for further analysis, see SI Appendix, Fig. S2). (D–F) Development of web construction representations using inductive sampling (48). Samples of large webs (D) are developed into a large training set (F) via inductive sampling, considering up to fourth nearest neighbors. (This process is repeated for all nodes in each of the large webs shown in panel D.) Panel G shows the length distribution of the resulting graphs. Panel H shows the hierarchical structuring used here, by breaking down a complex graph (level 4) into a local graph assembly (level 3), a local web segment as defined by inductive sampling (level 2), individual silk segments (level 1, defined by a finite length fiber with a start and end point), and the elementary fiber (level 0). At the ultrascale, silk consists of a molecular-level nanocomposite, which is not considered here.

Owing to these exceptional properties and appealing features, spider webs have been researched in various fields, such as in the study of biomaterial platforms (49, 50), structural engineering (51, 52), battery electrodes (53), and art and music (5, 24, 26, 46). To increase the accessibility of spider web geometries for various downstream design applications, a digitalization technique has been developed in earlier work to transform spider web structures into graph data (25). The webs are constructed by spiders in a water-trapped cube frame in the laboratory, and high-resolution 2D images of the webs are then acquired via a sheet laser with a moving-rail setup for image processing. An image-to-line algorithm is implemented to translate the spider web images into a bead-spring model described in ref. 25, using the 3D graph data from ref. 26.

Although spider webs have been imaged in earlier work (25, 52), models for the construction of 3D webs do not yet exist, preventing the systematic translation of web design principles into engineering solutions. Since spider webs have a high degree of geometric diversity, and complex variations in scale and functions, it is hard to capture the structure–property relationships through observation-based model building. Although a digitalization technique has been developed for spider web structures (25), high computational costs are required, especially for large-scale 3D webs with more convoluted structures and richer sets of design features.

To address these shortcomings, we propose a deep learning strategy to model 3D spider webs and use the learned representations to construct de novo, engineered webs using generative deep learning methods. This method innately captures the variations and mutations discovered from synthetic web design that can benefit bioinspired design (Fig. 1A). While many graph generative models are used for molecular design (54–58), other common issues in current models, including graph size constraints, lack of options for training and generating both node labels and edge connectivity, lack of generalizability regarding input data, and limited learning ability for highly complex structures, present challenges in the generation of spider web structures. Thus, we propose a set of deep graph generative models for synthetic web design, with geometric parameters conditioned to better capture the inherent behavior and properties, and with permutation invariance considered to improve generation efficiency. Three models are proposed, each with a different model architecture (diffusion model or autoregressive transformer) and a different neighbor representation (a sparse neighbor list or a full neighbor, i.e., adjacency, matrix) for performance comparison.

Graphs are ubiquitous data representations, suitable for describing a variety of scientific and engineering systems, such as molecules, truss structures, social networks, transportation systems, and networks of neurons (59, 60). Graph data are increasingly used for multiple tasks including clustering, classification, regression, and generation in various areas, such as bioinformatics (61–63), natural language processing (64, 65), structural design (52, 66), and recommendation systems (67, 68). This is mainly due to their potential and scalability to describe complex data structures, the interactions among entities, and information flow in real-world systems. Moreover, graphs offer universality and versatility for data implementation across fields (57, 69). Sequences are another flexible data format used in deep learning models, which allows sequence-to-graph data translation but normally requires specific orderings of nodes (70). Two types of graph datasets are commonly accessed: 1) molecule-centric graph data that consist of atom information (normally type, positions, and charges) and SMILES (Simplified Molecular Input Line Entry System) strings for graph structural representation, such as ZINC, QM9, and GEOM (59, 71); and 2) generic graph data represented by node coordinates and edge connectivity, which we apply here. Other examples include the Enzymes and CiteSeer datasets, as well as synthetic datasets such as Grid and Ego (58). Compared with these datasets, spider web graphs typically have much larger sizes (hundreds to thousands of nodes, tens of thousands of edges), higher structural complexity, and a much higher degree of variability in node and edge arrangements.

Graph Generation Methods

Synthetic graph generation is a critical task in understanding and modeling complex structures or realistic network systems, discovering new patterns and relationships for structures or materials, and optimizing structures toward targeted properties (57, 58). In the generation process, graphs are typically learned through the underlying distribution p(G) over graph entities (e.g., nodes and edges), where p and G denote the data distribution and the graph data input, respectively. G = (V, E) specifies the undirected graph, with V = {v_1, v_2, ..., v_n} and E = {e_ij = (v_i, v_j) | v_i, v_j ∈ V} representing the node and edge sets of the graph, respectively; n is the total number of nodes; and each edge e_ij connects nodes i and j. Once models are trained, new graphs are generated using sampling (72). Traditional graph generation methods that apply mathematical probabilistic models such as Erdős-Rényi and Stochastic Block Models are relatively simple to implement but restricted by their learning capacity for realistic graphs (59) that feature intricate relationships and property conditions. Here, deep graph generation models outperform traditional graph generation strategies since they learn the underlying characteristics directly from data and allow larger and more flexible data as input. A general process of deep graph generation typically involves processing data, identifying the model architecture, training, sampling, and lastly evaluating synthetic graphs (58). We follow a similar strategy here.
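As a minimal illustration of this graph data structure (a sketch only, using networkx for convenience; the nodes, positions, and edges below are made up):

```python
import networkx as nx

# Undirected graph G = (V, E): nodes carry 3D positions, edges connect node pairs (v_i, v_j).
G = nx.Graph()
G.add_node(0, pos=(0.0, 0.0, 0.0))
G.add_node(1, pos=(1.0, 0.2, -0.1))
G.add_node(2, pos=(0.5, 0.9, 0.3))
G.add_edges_from([(0, 1), (1, 2), (0, 2)])

n = G.number_of_nodes()   # total number of nodes n
edges = list(G.edges())   # edge set E
```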

Graph generation has wide impacts and applications, and deep learning techniques offer superior learning capacity as mentioned in the previous section. However, challenges exist in training deep models to generate high-quality graphs, owing in part to the size and complexity of graph data, long-range node and edge dependencies, the difficulty of learning discrete variables, the computational demand due to the innate node orderings of isomorphic graphs, and the limited availability of domain-agnostic data (57, 69). Different types of deep graph generation models have been proposed, each with distinct characteristics and learning capabilities. The types of model architectures that are commonly used are variational autoencoders (VAEs), generative adversarial networks (GANs), autoregressive models (ARs), and diffusion models (58, 73). An illustration of each architecture is shown in SI Appendix, Fig. S1, and existing generators are listed in SI Appendix, Table S1. SI Appendix, section S1 (Overview of graph generation models) provides an in-depth discussion of each method.

Results and Discussion

We first focus on the types of physical graph structures studied here, specifically spider webs. Fig. 1A shows the overall approach of digitizing spider web structures and representing them as graphs. Based on data gathered from tomography scans of spider webs (24–26, 29, 32), we develop a graph model, which is then modeled and transformed into new bioinspired, synthetic structures. As shown in Fig. 1 B and C, the structures feature a high degree of spatial heterogeneity. We show this via coloring, featuring the variation of fiber edge length, clustering coefficient (47), and number of neighbors at each node. SI Appendix, Fig. S2 provides additional statistical analysis of key geometric properties of the web graphs, complementing the visual analysis shown in Fig. 1 B and C. SI Appendix, Fig. S2A shows the length of each edge, projected to its connected nodes and averaged, and SI Appendix, Fig. S2B shows the clustering coefficient. SI Appendix, Fig. S2C depicts the histogram of the reciprocal density of neighbors (calculated as the distance to each neighbor divided by the number of neighbors, where low values indicate high density). SI Appendix, Fig. S2D depicts the geodesic distance of each node from an arbitrary point on the web’s edge (74), which may also provide a measure of signal transduction. Its variance gives a metric of connectivity and efficiency of travel across the web (high variance means more localized regions; low variance means more consistent density and connectivity). SI Appendix, Fig. S2E shows the normalized second eigenvector statistics of the graph Laplacian projected on a per node basis (75), where more similar values indicate “neighborhoods” of anatomy and high variance indicates more isolated regions of anatomy. The detailed analyses of the structural anatomy of the webs provide key evidence that spider webs are highly heterogeneous and that, in order to understand their construction principles, we must find ways to learn these in appropriate representations.

Fig. 1 D–G shows the development of web construction representations using inductive sampling (48). Samples of large webs (Fig. 1D) are developed into a large training set (Fig. 1F) via inductive sampling (Fig. 1E) considering up to fourth nearest neighbors, and their length distribution is indicated in Fig. 1G. Fig. 1H shows the hierarchical structuring used here, by breaking down a complex graph (level 4) into a local graph assembly (level 3), a local web segment as defined by inductive sampling (level 2), individual silk segments (level 1, defined by a finite length fiber with a start and end point), and the elementary fiber (level 0). In the strategy pursued here, we seek ways to mimic the natural spider web construction principles up to level 2 in the hierarchy but use synthetic strategies to then assemble these structures into bioinspired materials at levels 3 and 4. It is noted that we also conducted experiments to confirm that the algorithm proposed here can model and generate graph structures of the entire web (as shown, for instance, in Fig. 1D); however, this is of less interest since we want to identify construction principles and then use these to develop synthetic de novo web structures. Moreover, we have only a limited number of unique large webs and hence not enough data to learn from. This is why we use inductive sampling to capture the spatial heterogeneity identified in Fig. 1 B and C.
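As a minimal sketch of this inductive sampling step (assuming the digitized web is available as a networkx graph; the function name and attribute layout are illustrative, not the authors' implementation):

```python
import networkx as nx

def sample_local_segments(web: nx.Graph, depth: int = 4):
    """Extract one local web segment per node by collecting all neighbors up to
    `depth` hops away (here, up to fourth nearest neighbors); each returned
    subgraph corresponds to one training sample."""
    samples = []
    for node in web.nodes:
        segment = nx.ego_graph(web, node, radius=depth)
        samples.append(segment.copy())
    return samples
```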

Three deep learning models are used in this paper (for details, see Materials and Methods), based on either a diffusion architecture or a generative pretrained transformer approach. Both approaches utilize self- and cross-attention as a mechanism to represent long-range feature dependencies. Fig. 2 A–D summarizes the diffusion algorithm, which translates noise to graphs (Fig. 2A). Fig. 2B shows the U-net architecture used here, along with details on the combined cross-attention architecture to realize conditional denoising (Fig. 2C) with respect to both the conditioning variables c and the time step t. Fig. 2D shows the forward Markov chain and the trained reverse Markov chain that is learned by the U-net architecture. Fig. 2 E–G shows the autoregressive transformer architecture.

Fig. 2.

Fig. 2.

Deep learning architectures used in this work, featuring attention-based diffusion models and autoregressive transformer architectures, both of which rely on the attention mechanism. (A–D) Summary of the diffusion algorithm used to translate noise to graphs (A). Panel B shows the U-net architecture, along with details on the combined cross-attention architecture to realize conditional denoising (C) with respect to both the conditioning variables C and the time step t. Panel D shows the forward Markov chain and the trained reverse Markov chain that is learned by the U-net architecture to predict a conditioned denoising process, whereby a multiheaded attention mechanism is used to learn long-range relationships (Eqs. 13 and 14). (E) Summary of the transformer architecture, representing an autoregressive architecture (E and F) that produces solutions iteratively from a start token during inference. During training (G), the target represents web samples, conditioned by the associated graph properties, and the model is trained to predict the proper output. A start token is added at the beginning of the graph descriptor sequence. During generation, the start token is first fed into the model and the output is predicted from it. This process is repeated until the full graph is produced.

Both general modeling strategies use a similar graph representation structure (SI Appendix, Fig. S3). Each inductively generated sample is encoded as described in the schematic, using either a full adjacency matrix or a sparse neighbor representation. SI Appendix, Fig. S4 provides a summary of the properties used to condition the generative models, featuring a total of 7 geometric properties defined in Eqs. 1–4. The plots show histograms of each of the variables. Details on the dataset, including data construction and normalization, are included in Materials and Methods.

Fig. 3 A–C shows samples of generated web sections using the sparse analog diffusion model (Fig. 3A), the discrete diffusion model (Fig. 3B), and the autoregressive transformer architecture (Fig. 3C). Fig. 3 D–F compares the performance of these models via correlation of the conditioning variables C (x axis, labeled GT, over the entire test set {C_i}_test,GT) with measured properties of the generated graphs (y axis, labeled Predicted, {C_i}_predicted). The resulting R² values, measuring how the conditioned graph generation parameters relate to those measured from the generated graphs, are 0.89, 0.84, and 0.83, which shows that the sparse analog diffusion model has the best performance. It is emphasized that each of the test samples is unique (since it was sampled from a unique node in the original experimental spider web data); the testing conducted and shown in Fig. 3 D–F is therefore significant in terms of validating the models’ predictive generalization capacity. This is because the models have never seen the particular combinations of conditioning parameters used in these tests, but as the results show, the generative algorithm can successfully produce webs that meet these design demands well.
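One common way such an agreement value can be computed is the coefficient of determination between the conditioning targets and the properties measured on the generated graphs; whether the paper uses exactly this definition is not stated, so the following is only an illustrative sketch:

```python
import numpy as np

def r_squared(c_gt, c_pred):
    """Coefficient of determination between conditioning targets {C_i}_GT and the
    corresponding properties {C_i}_predicted measured on the generated graphs."""
    c_gt = np.asarray(c_gt, dtype=float).ravel()
    c_pred = np.asarray(c_pred, dtype=float).ravel()
    ss_res = np.sum((c_gt - c_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((c_gt - c_gt.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```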

Fig. 3.

Fig. 3.

Sampling results using all three models, including validation experiments to confirm that the conditioning produces graphs with the required properties. Sample generated web sections using (A) the sparse analog diffusion model, (B) the diffusion model with full neighbor representation, and (C) the autoregressive transformer architecture. Panels D–F show results of testing the correlation of conditioning variables (x axis, labeled GT, over the entire test set {C_i}_test,GT) with measured properties of the generated graphs in a test set of webs not used for training (y axis, labeled Predicted, {C_i}_predicted). The analysis is conducted over the entire distribution of samples as depicted in SI Appendix, Fig. S4. The results are shown for the three models: (D) the sparse analog diffusion model, (E) the analog diffusion model (nonsparse), and (F) the autoregressive transformer model (nonsparse). The R² values (from top to bottom) are 0.89, 0.84, and 0.83. The sparse analog diffusion model (panel D, shown in detail in Fig. 2 A–D) features the best performance. Since each of the samples is unique, the testing conducted and shown in Fig. 3 D–F is significant in terms of predictive generalization capacity.

With these models, we can generate a variety of complex graph sections and use these to assemble larger-scale webs. These are intended to amalgamate natural design principles captured through deep learning with synthetic design objectives; in our study, these are mathematically parameterized to yield multilevel architectures. We use generated graph sections to design mathematically parameterized distributions of webs in 3D space, as shown in Fig. 4. The figure summarizes various aspects of the algorithm and the results of larger-scale generated spider web–based graphs. At the onset, each generative model produces small, local inductive graph sections that can be assembled into larger-scale architectures. There are various dimensions to the process, including how individual sections are integrated. Fig. 4A shows a shuffling algorithm by which the distance matrix is shuffled symmetrically to obtain more diverse stacking results, as multiple graphs are integrated via operations in the z-space of coordinates and adjacency matrix. Indeed, this strategy helps us to randomize the inductively sampled graphs and achieve greater diversity in how nodes of stacked graphs are connected. This becomes clear as we look at Fig. 4B, which depicts an example of how two identical graphs are stacked, forming a larger graph. In this process, coordinates in the overlapping region are either averaged or taken from the second graph, and coordinates of identical graphs are shifted by dx, dy, and dz (in the example shown on the right, extreme choices of displacements are used to visually represent the new connections, and new graph, formed). Fig. 4C shows an example of a larger-scale stacking, repeated multiple times, and defining dx, dy, and dz to form a helix. The resulting graph is shown on the right. Fig. 4D shows another example, here without shuffling, forming a larger helical graph. The center graphs in panels C and D show the coordinates of the nodes, over node numbers.
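The following sketch illustrates the two operations described above, symmetric shuffling and z-space stacking, under simplifying assumptions (NumPy arrays for coordinates and adjacency, a fixed number of overlapping nodes); it is not the authors' implementation:

```python
import numpy as np

def symmetric_shuffle(coords, adj, rng=None):
    """Randomly permute the node order, applying the same permutation to the rows
    and columns of the adjacency matrix; the graph itself is unchanged, only its
    representation, which diversifies how stacked graphs become connected."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(coords.shape[0])
    return coords[perm], adj[np.ix_(perm, perm)]

def stack_graphs(coords_a, adj_a, coords_b, adj_b, shift, n_overlap, average=True):
    """Stack graph B onto graph A: shift B by (dx, dy, dz) and merge the first
    `n_overlap` nodes of B with the last `n_overlap` nodes of A; overlapping
    coordinates are either averaged or taken from the second graph."""
    coords_b = np.asarray(coords_b) + np.asarray(shift)
    overlap = (0.5 * (coords_a[-n_overlap:] + coords_b[:n_overlap])
               if average else coords_b[:n_overlap])
    coords = np.vstack([coords_a[:-n_overlap], overlap, coords_b[n_overlap:]])
    n_a, n_b = len(coords_a), len(coords_b)
    adj = np.zeros((n_a + n_b - n_overlap,) * 2, dtype=adj_a.dtype)
    adj[:n_a, :n_a] = adj_a                                # connectivity of graph A
    block = adj[n_a - n_overlap:, n_a - n_overlap:]
    adj[n_a - n_overlap:, n_a - n_overlap:] = np.maximum(block, adj_b)  # add graph B
    return coords, adj
```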

Fig. 4.

Fig. 4.

Algorithm utilized to generate larger-scale de novo graphs, based on the concept of stacking graph samples into larger graph assemblies. Panel A shows a shuffling algorithm by which the node positions and adjacency matrix are shuffled symmetrically. While this operation leaves the underlying graph invariant, it allows us to obtain more diverse stacking results and hence helps to randomize the inductively sampled graphs and achieve greater diversity in how nodes of stacked graphs are connected. Panel B shows an example of how graphs are stacked in z space, forming a larger graph (here, shown for two identical graphs as the basis). Coordinates in the overlapping region are either averaged or taken from the second graph, and coordinates of identical graphs are shifted by dx, dy, and dz (in the example shown on the right, extreme choices of displacements are used to visually represent the new connections, and new graph, formed). Panel C shows an example of a larger-scale stacking, repeated multiple times, and defining dx, dy, and dz to form a helix (resulting graph on the right). Panel D shows another example, without shuffling, forming a larger helical graph. The center graphs in panels C and D show the coordinates of the nodes, over node numbers. Panels E and F show examples of continuously sampled graphs, producing complex architectures. In the two examples, we use the autoregressive transformer model (E) and the analog diffusion model (F) to realize graphs. At each stacking step, a new graph is sampled, conditioned based on a randomly generated conditioning vector c.

Generation of larger synthetic web geometries can be achieved by using either a single graph as a basis or a set of graphs. Alternatively, we can use the generative models to continuously produce new graphs (either randomly conditioned or conditioned to meet certain target needs). Fig. 4 E and F shows examples of continuously sampled graphs, producing complex architectures. In these examples, we use the autoregressive transformer model (Model 3, Fig. 4E) and the analog diffusion model (Model 1, Fig. 4F) to realize graphs. At each stacking step, a new graph is sampled, conditioned based on a randomly generated conditioning vector c. SI Appendix, Fig. S5 depicts a gallery of designs, showing the architectural complexity and variety that can be achieved using the algorithm. Different placement methods are used to shift and transform the positioning of graphs in space at each generative step.

As an implementation of the generative model, Fig. 5 A and B demonstrates further how spiderweb geometries can be generated along input paths in order to create more complex geometries. For example, parametric equations (76) can be used to create a closed bounding path, which is then infilled with web geometries generated using the nonsparse diffusion model. The resulting structure comprises a truss-like path oriented along the bounding curve with connections formed based on the input spiderweb graph information. A closed parametric bounding curve is defined by a set of three equations designating the position of points in Cartesian space (for details, see Materials and Methods). These points were then joined into a curve along which a web could be generated. The parametric curve described by this function is shown in Fig. 5A and loops in on itself in three distinct regions before closing. The complex web geometry developed by the diffusion model along this path is shown in Fig. 5B. The generated web formations were meshed and constructed via additive manufacturing in order to observe their behavior as physical objects. Fig. 5 C and D displays two distinct web geometries generated along the same parametric bounding curve. Both objects were manufactured via polyjet 3D printing with a maximum bounding dimension of 15 cm and a strut radius of 0.4 mm (for details, see Materials and Methods). These designs, which embody the design principles of spider webs and integrate geometric functions, present both innovative and tangible examples of bioinspired design. They have the potential to inspire additional, deeper investigation into natural design principles across diverse domains. Moreover, as part of our future work, the characteristics and advantages of designs incorporating spider web design principles could be further examined through tests or simulations that compare the original geometries with their web-composite counterparts.
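As an illustration of this workflow, a closed bounding path can be defined by three parametric equations and sampled into a polyline; the curve below is purely hypothetical (the actual equations are given in Materials and Methods and SI Appendix) and only shows the general form of such an input path:

```python
import numpy as np

def bounding_curve(n_points=400):
    """Hypothetical closed parametric curve in Cartesian space; the sampled points
    are joined into the path along which the web geometry is generated."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points)
    x = (2.0 + np.cos(3.0 * t)) * np.cos(t)
    y = (2.0 + np.cos(3.0 * t)) * np.sin(t)
    z = np.sin(3.0 * t)
    return np.stack([x, y, z], axis=1)   # (n_points, 3) polyline forming a closed loop
```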

Fig. 5.

Fig. 5.

Complex web formations were generated along paths defined by bounding curves. A system of parametric equations in Cartesian space defined a closed bounding curve (A) that served as an input geometry for the nonsparse analog diffusion model, outputting a lattice-like web formed along the bounding curve (B). Panels C and D show webs generated along parametric bounding curves with a strut radius of 0.4 mm and 3D printed via polyjet. Variations of thin (C) and thick (D) path radii were used to generate distinct geometries.

Helical geometries were additionally generated in order to provide a form suitable for tensile testing. Three distinct helices were used to generate three web patterns, which were merged with block regions that could be clamped during mechanical testing. Fig. 6 shows the generated models and manufactured 3D prints during the mechanical testing process (recorded testing videos are provided in SI Appendix; see Movies S1–S3). Movie S4 shows 3D renderings of the structure shown in Fig. 5A, as well as H2 and H3 in Fig. 6B; STL files (77) of all five designs are included in SI Appendix. The geometries yielded distinct force-extension behaviors, and the impact of individual struts breaking can be observed in the output data. Samples H1, H2, and H3 vary in coil number (2:2.5:4), and sample H3 is composed of wider-radius web sections compared with H1 and H2. Strut breakage is observed near the middle of all helix designs (Fig. 6B), which suggests that these designs, incorporating web sections, may undergo relatively evenly distributed forces when stretched along the indicated direction; however, more detailed analysis is required to gain deeper insights into the exact behavior. Based on the force-strain plot (Fig. 6C), H1 and H2 show similar stretching behavior, and minor local breaks are noted during the process, possibly because the design sections have random fiber segment arrangements and the materials are 3D-printed layer by layer during additive manufacturing. The samples deform up to about 10% strain (H1 and H2) and up to 25% strain (H3), indicating that increased deformability can be engineered by designing the helical microstructure. The increased deformability may be attributed to factors such as the web scale of composite sections or the pitch of the helical geometries. Considering the similar cross-sectional areas of the three samples as indicated in Fig. 6A, H1 and H2 present higher strength and stiffness than H3, which may potentially result from the more densely connected web structure of H3. Yet, more research would be needed to confirm these presumptions. Moreover, to gain further insight into the characteristics of the web-inspired designs, for instance, how spider webs contribute to actual designs and how geometric variation influences structures that incorporate spider web design principles, more detailed comparisons and robust analyses should be conducted through additional experiments involving systematic design variations.

Fig. 6.

Fig. 6.

Helical web geometries are meshed and merged with blocks to provide a region that could be attached to a mechanical testing apparatus. Cross-sectional areas are calculated in slices perpendicular to the force vector of the mechanical test and are color coded in panel A. Slices with multiple distinct regions had their areas calculated separately. The extension of samples during the tensile test is shown in panel B, with output data displayed in panel C. Recorded testing videos are provided in SI Appendix.

Conclusion and Outlook

We showed that spider webs are highly heterogeneous structures (Fig. 1 B and C and SI Appendix, Fig. S2) that offer complex structural design cues. To capture these, we successfully developed, trained, and applied a set of deep generative modeling methods for hierarchical bioinspired de novo spider web designs, based on experimentally scanned and digitalized 3D web structures. Three deep learning models are constructed, analyzed, and compared for generation performance: a sparse analog diffusion model, a nonsparse discrete diffusion model, and an autoregressive generative pretrained transformer model that also captures the full adjacency matrix. Among these, the sparse analog diffusion model outperforms the others in terms of the correlation of graph conditioning variables (Fig. 3 D–F). In other applications, for instance, for very large graphs, this model will have additional benefits as it is computationally more effective and requires less memory due to the underlying sparse design. Sparsity is seen in many physical graph structures beyond spider webs, so we foresee many fruitful applications of this method.

We further developed a mathematically parametrized assembly method for bioinspired graph design, by defining operations in the z space of graph representations (Fig. 4). Based on the generated web sections, we provide a gallery of larger-scale 3D web designs (Figs. 4 E and F, 5, and 6 and SI Appendix, Fig. S5), depicting not only realistic spider web properties but also architectural diversity and complexity (e.g., helix or attractor-like design principles). Some designs are 3D printed, and simple mechanical tests are conducted, as reported in Fig. 6 (Movies S1–S4).

The principal contributions of this work are as follows:

  • We generated a spider web dataset based on experimental data (25, 42, 52), processed through an inductive method to provide a diverse collection of graph data that can be used to train the models, using inductive graph sampling (48) (Fig. 1E). The resulting spider web data have uniform graph sizes, and a set of geometric properties is prepared for conditioning and evaluation (SI Appendix, Fig. S4) to model, capture, and synthesize new samples that capture the rich heterogeneous feature set that natural webs have.

  • The results shown in Fig. 1 B and C and SI Appendix, Fig. S2, and in SI Appendix, Fig. S4 for the inductively sampled webs, are important as they provide fundamental insight into the design principles exploited by spiders, revealing that they modulate local structures to achieve a variety of distinct geometric, and by extension, mechanical and other properties in the web. Spider webs are not homogeneous structures but consist of a complex assembly of local variations. These natural structural designs are used here to construct bioinspired structures and could themselves be a subject of further investigation to understand spider web architectures more deeply.

  • Using the general framework of attention models, we reported three deep graph generative models that show high generalizability and allow flexible data input (Fig. 2). The models overcome some of the earlier domain-specific limitations and are applicable to other generic graph input data, including for very large graphs due to the efficient computational strategy implemented. The foundation of the models based on attention/transformer strategies intrinsically allows sequences and multiple data types as input, which broadens the data type range and modalities, and can easily be integrated with other language models (78, 79).

  • We show the general applicability of both attention-based diffusion models (Fig. 2 A–D) and autoregressive transformer architectures (Fig. 2 E–G), used here to generate diverse and novel web samples. The use of language models offers a generalizable approach toward complex hierarchical system modeling, ranging from language (78) to images (80, 81) to dynamical processes (82, 83).

  • The models show high learning capacity, generalizability, and scalability (Fig. 3). Both node labels and edge connectivity of graph data can be learned effectively through training, with conditioning available to integrate graph properties. In addition, both categorical features (e.g., node index and neighbor list) and continuous features (e.g., nodal coordinates) are considered. Due to the computational efficiency, graph generation is possible even for large-sized graph data (e.g., for spider webs: thousands of nodes and tens of thousands of edges).

  • The strategies reported here can be applied for data augmentation for spider webs and other large-scale graph data, with lower computational costs than current methods. Moreover, future studies on spider webs could incorporate 3D printing techniques and laboratory mechanical tests, to better understand structure–property relationships, optimize structural performance, and learn natural design intelligence.

This work lays the foundation for spider web generation and implements fundamental methods to explore bioinspired design (Figs. 4–6). A set of innovative spider web designs is constructed, consisting of varied de novo designs with diverging architectural complexity. Extending the scope, this method can be applied to other similar hierarchical structures and offers design opportunities. The approach described here realizes a systematic design strategy to capture a series of hierarchical levels (Fig. 1H) and then generate architectural designs. The integrated convolutional–attention approach used in the U-Net architecture of the diffusion model provides the most effective mechanism for capturing multilevel features and could provide a platform for other similar scientific machine learning applications.

The size of the web samples used to train the model, mimicking features up to level 2 in the hierarchical makeup, can be adapted by changing the depth of the inductive sampling. By including higher orders of inductive sampling depths, we can generate larger local web samples. Ultimately, these variations can be explored as design principles in the process used here. Other systematic variations of the design process could include the introduction of gradients of properties along a spatial direction, e.g., the helix length axis, to achieve additional material parameter structuring in line with the universality–diversity principle, which provides a platform to achieve distinct mechanical and other properties from the same building blocks; here, elementary silk fibers architected into a variety of forms using the principles outlined in Fig. 1H. Several explorations of de novo design concepts are shown in Figs. 4–6, including additively manufactured samples that were exposed to mechanical testing.

Whereas the work reported here focused on targeting conditioning parameters C, all three models allow for flexibility in generating increasingly diverse or more “creative” samples. For instance, we can use ancestral Gumbel-top-k sampling (84) for the transformer architecture to achieve a greater diversity of sampled webs. In the diffusion model, we can adapt the denoising time steps and use classifier-free guidance (85) during sampling. These strategies have been shown to be effective in other generative models. Another natural dimension that could be explored is masking. All models allow for masking certain regions of an existing web that should or should not be adapted, providing a mechanism to generate or redesign only parts of a web. The use of a generative pretrained transformer strategy with a decoder-only makeup provides further evidence for the universal applicability of these models beyond language (such as done in GPT-3/4, etc.), to include complex physical systems (79, 86, 87).

The results reported in this paper provide a platform to explore structure–property relationships of spider webs and other graph architectures, including structural performance optimization and design space exploration, and facilitate learning web design principles from natural intelligence; this understanding could then be leveraged in practical structural applications (e.g., trusses and bridge components). Future work includes computational studies and analyses of spider webs with augmented datasets, improving the de novo design algorithm of spider webs for practical applications, investigating the characteristics of designs that incorporate spider web design principles, and mechanical tests incorporating 3D printing techniques for a more comprehensive synthetic design assessment. In addition, the developed deep generative artificial intelligence (AI) models, featuring high generalizability and learning capacity for large-scale graph generation for spider webs, and allowing graph conditioning, both categorical and continuous label learning, and domain-agnostic data input, have the potential to be applied beyond spider webs to a broad range of bioinspired designs that exhibit similar hierarchical properties, thus providing abundant opportunities for research.

Materials and Methods

Dataset Construction and Conditioning Parameters C.

Building on the hierarchical structure of spider webs, in order to capture invariant construction principles and the high degree of heterogeneity (Fig. 1 B and C and SI Appendix, Fig. S2), we use inductive representative sampling as suggested in ref. 48 to generate smaller subgraphs and improve the construction of local structures (demonstrated in Fig. 1 D–F). For data construction details, see SI Appendix, section S2.1.

We calculate seven geometric properties to describe the characteristics of each web sample (distribution statistics in SI Appendix, Fig. S4). The parameters include the number of nodes in the graph, N_N, and the mean of the edge lengths:

$\bar{l} = \frac{1}{N_E} \sum_{i=1}^{N_E} \sqrt{dx_i^2 + dy_i^2 + dz_i^2}$, [1]

where dx_i, dy_i, and dz_i are the lengths of the x, y, and z components of edge i, and N_E is the total number of edges in the graph. Similarly, we calculate the means of the x, y, and z components of the edge lengths:

$\overline{dx} = \frac{1}{N_E} \sum_{i=1}^{N_E} dx_i, \quad \overline{dy} = \frac{1}{N_E} \sum_{i=1}^{N_E} dy_i, \quad \overline{dz} = \frac{1}{N_E} \sum_{i=1}^{N_E} dz_i$, [2]

to capture residual directional features of the edges. The node degree is defined as

$\zeta = \frac{N_E}{N_N}$. [3]

These features are summarized in a 7-dimensional feature vector C_i (used for conditioning), for each sample i:

$C_i = \left(\bar{l}, \overline{dx}, \overline{dy}, \overline{dz}, N_N, N_E, \zeta\right)$. [4]

The maximum number of nodes considered is denoted as N, where N ≥ max_i(N_{N,i}), considering all samples i in the dataset. For the cases considered here, N = 64. The conditioning features are normalized to lie between −1 and 1, where each feature within C_i is treated separately. The resulting distributions are shown in SI Appendix, Fig. S4.
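As an illustrative sketch (not the authors' code), the conditioning vector of Eq. 4 can be computed directly from the node coordinates and edge list of one web sample; normalization to the range −1 to 1 over the dataset would be applied afterward:

```python
import numpy as np

def conditioning_vector(coords, edges):
    """Compute the 7-dimensional conditioning vector C_i of Eq. 4 for one sample.

    coords: (N_N, 3) array of node positions; edges: iterable of (i, j) index pairs."""
    coords = np.asarray(coords, dtype=float)
    edges = np.asarray(edges, dtype=int)
    n_nodes, n_edges = coords.shape[0], edges.shape[0]
    d = coords[edges[:, 1]] - coords[edges[:, 0]]      # per-edge (dx_i, dy_i, dz_i)
    mean_len = np.mean(np.linalg.norm(d, axis=1))      # Eq. 1
    mean_dx, mean_dy, mean_dz = np.mean(d, axis=0)     # Eq. 2
    zeta = n_edges / n_nodes                           # Eq. 3
    return np.array([mean_len, mean_dx, mean_dy, mean_dz,
                     n_nodes, n_edges, zeta])          # Eq. 4
```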

Model 1: Analog Diffusion Model with Sparse Neighbor Representation.

In this approach, we use the graph representation defined in SI Appendix, Fig. S3. Since we use a sparse representation, the input structure consists of a list of embeddings of length N, padded with zeros for graph samples with fewer than N nodes. The representation z consists of a sequence of elements z_j for each node (where j = 1..N), defined as

$z_j = \left[x_j, y_j, z_j, N_{1,j}, N_{2,j}, \ldots, N_{\mathrm{Neigh_{max}},j}\right]^T$, [5]

where Neigh_max denotes the maximum number of neighbors considered, and N_{k,j} refers to the node number of neighbor k of node j (if a node has fewer neighbors than Neigh_max, we pad the excess entries with 0). We set Neigh_max = 6, since the spider web graphs considered have no more than six neighbors at each node. With the vectors z_j stacked, the sparse matrix representation, of size [3 + Neigh_max, N], is defined as

$z = \left[z_1, z_2, \ldots, z_N\right]$. [6]

The coordinate components x_j, y_j, z_j in z_j are normalized to lie between −1 and 1; the neighbor components are normalized in the same way, reflecting the use of an analog representation of the discrete neighbor lists and the formulation of the problem as a continuous denoising problem (details in SI Appendix, section S2.2). SI Appendix, Table S2 summarizes the parameters of the neural network architecture.
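A minimal sketch of assembling this sparse representation is shown below; node j is placed in column j of the [3 + Neigh_max, N] matrix (the orientation is inferred from the stated matrix size), and the normalization step is omitted:

```python
import numpy as np

def sparse_representation(coords, neighbor_lists, n_max=64, neigh_max=6):
    """Assemble the [3 + neigh_max, n_max] sparse representation z of Eqs. 5 and 6.

    coords: (N_N, 3) node positions; neighbor_lists[j] lists the node numbers of
    node j's neighbors. Missing neighbors and unused node slots are zero-padded."""
    z = np.zeros((3 + neigh_max, n_max))
    for j, (xyz, neighbors) in enumerate(zip(coords, neighbor_lists)):
        z[:3, j] = xyz                            # coordinate components of z_j
        for k, nb in enumerate(neighbors[:neigh_max]):
            z[3 + k, j] = nb                      # neighbor node numbers N_{k,j}
    return z
```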

Denoising Process.

Fig. 2D depicts the denoising process, where the top row defines a Markov chain operator q that adds Gaussian noise step by step (following a defined noise schedule that determines how much noise ε_i is added at each step i), translating the physical solution (here, the graph representation), z_0, into pure noise, z_F:

$z_{i+1} = q(z_i)$. [7]

The deep neural network is trained to reverse this process, by identifying an operator p that maximizes the likelihood of the training data. Once trained, it provides a means to translate noise to solutions and thereby realizing the transition illustrated in Fig. 2A, in a step-by-step fashion as indicated in the lower row of Fig. 2D:

$z_{i-1} = p(z_i)$. [8]

We use the L2 distance to define the loss, measuring the error between the actual added noise ε_i and the predicted added noise ε_i′. Hence, the trained diffusion model can predict the added noise, which allows us to realize a numerical solution to the denoising problem and hence generate the next iteration of the denoised sequence:

$z_{i-1} = z_i - \varepsilon_i$. [9]

In Eq. 9, the sequence z_i at step i is transformed by removing the noise ε_i. This process is performed iteratively, where the neural network predicts, given the current state z_i, the noise to be removed at a given time step t_i in the denoising process (Fig. 2D).

More details of the denoising process are elaborated in SI Appendix, section S2.3.
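The reverse process of Eqs. 8 and 9 can be sketched as a simple loop; the snippet below omits the noise-schedule scaling used in standard denoising diffusion samplers and assumes a hypothetical `model(z, t, cond)` that returns the predicted noise:

```python
import torch

@torch.no_grad()
def denoise(model, z_noisy, cond, n_steps):
    """Simplified reverse (denoising) process of Fig. 2D: at each step, the network
    predicts the added noise, which is subtracted from the current state (Eq. 9)."""
    z = z_noisy
    for t in reversed(range(n_steps)):
        t_batch = torch.full((z.shape[0],), t, device=z.device, dtype=torch.long)
        eps_pred = model(z, t_batch, cond)   # predicted noise epsilon_i at step t
        z = z - eps_pred                     # Eq. 9: z_{i-1} = z_i - epsilon_i
    return z
```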

Model 2: Diffusion Model with Full Neighbor Matrix Representation.

The model is constructed similarly to Model 1, except for the representation z. Here, z captures node positions and a full adjacency matrix, and the sequence of elements z_j for each node is defined as

$z_j = \left[x_j, y_j, z_j, N_{1,j}, N_{2,j}, \ldots, N_{N,j}\right]^T$, [10]

where N denotes the maximum number of nodes considered, and N_{k,j} denotes whether node j has node k as a neighbor:

$N_{k,j} = \begin{cases} 1 & \text{if node } k \text{ is a neighbor of node } j, \\ -1 & \text{otherwise.} \end{cases}$ [11]

With the vectors z_j stacked, the full matrix representation, of size [3 + N, N], is defined as

$z = \left[z_1, z_2, \ldots, z_N\right]$. [12]

Just like in the previous model, the diffusion model is trained to learn a conditional denoising process, but now on larger representations. SI Appendix, Table S3 summarizes the parameters used to construct the neural network architecture.
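For comparison with the sparse variant, a sketch of the full-matrix representation of Eqs. 10 to 12 is shown below (node j again occupies column j, inferred from the stated [3 + N, N] size; the padding convention for unused slots is an assumption):

```python
import numpy as np

def full_representation(coords, adjacency, n_max=64):
    """Assemble the [3 + n_max, n_max] representation z of Eqs. 10-12: node
    positions on top, followed by the adjacency block with +1 for connected
    node pairs and -1 otherwise (Eq. 11)."""
    coords = np.asarray(coords, dtype=float)
    adjacency = np.asarray(adjacency)
    n = coords.shape[0]
    z = np.zeros((3 + n_max, n_max))
    z[:3, :n] = coords.T                                  # normalized positions
    z[3:3 + n, :n] = np.where(adjacency > 0, 1.0, -1.0)   # Eq. 11
    return z
```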

Model 3: Autoregressive Transformer Architecture with Full Adjacency Matrix Representation.

Fig. 2 E–G depicts the transformer architecture, representing an autoregressive decoder-only architecture that produces solutions iteratively from a start token during inference, using cross-attention with the conditioning features C_i, where C_i is the output of a feed-forward layer that expands the dimension from [7, 1] to [7, d_C]. The key mathematical operation is the masked attention mechanism (79, 88), defined as

$\mathrm{Attention}(Q, K, V; M) = \mathrm{softmax}\!\left(\frac{QK^T + M}{\sqrt{d_k}}\right) V$. [13]

The multiheaded attention calculation is implemented using attention layers stacked in parallel. We divide the input into segments (in the embedding dimension) and compute the scaled dot-product attention over each segment in parallel. This allows the model to jointly attend to information from different representation subspaces at different positions:

$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O$, [14a]
$\mathrm{head}_i = \mathrm{Attention}\!\left(Q W_i^Q, K W_i^K, V W_i^V\right)$, [14b]

where the projections are parameter matrices $W_i^Q \in \mathbb{R}^{d_{\mathrm{model}} \times d_q}$, $W_i^K \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{\mathrm{model}} \times d_v}$, and $W^O \in \mathbb{R}^{h d_v \times d_{\mathrm{model}}}$. In self-attention, Q, K, and V all come from either the input or output embeddings (or other sources); cross-attention calculations are performed with Q from the conditional embeddings and K, V from the encoded desired graph properties, computed from the feature vector C_i via a fully connected feed-forward network applied to the input embedding. During training, the target represents web samples, conditioned by the associated graph properties C_i, and the model is trained to predict the output. Causal masking using a triangular masking matrix M is used in the self-attention step so that the model can only attend to tokens to the left (i.e., previous tokens).
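A compact PyTorch sketch of the masked attention of Eq. 13 and of a causal mask is given below; it is a generic illustration of the mechanism, not the specific multiheaded implementation used in the paper:

```python
import torch
import torch.nn.functional as F

def masked_attention(Q, K, V, M):
    """Scaled dot-product attention with an additive mask M (Eq. 13).
    Q, K, V: [batch, seq, d_k]; M: [seq, seq], 0 where attention is allowed."""
    d_k = Q.shape[-1]
    scores = (Q @ K.transpose(-2, -1) + M) / d_k**0.5
    return F.softmax(scores, dim=-1) @ V

# Causal (triangular) mask: token j may only attend to tokens at or before position j.
seq_len = 8
M = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
```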

As shown in Fig. 2G, a start token T is added at the beginning of the graph descriptor sequence, so that

$z = \left[T, z_1, z_2, \ldots, z_N\right]$. [15]

During generation, the start token T is first fed into the model and the output is predicted from it. During sampling iterations, this process is repeated until the full graph is produced (Fig. 2 F and G). The graph descriptor z is identical to the definition above in Model 2. SI Appendix, Table S4 summarizes the parameters for the transformer architecture.
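The inference loop can be sketched as follows; the signature of `model` and the shape conventions are assumptions made for illustration only:

```python
import torch

@torch.no_grad()
def generate_graph(model, cond, start_token, n_columns):
    """Autoregressive sampling (Fig. 2 F and G): starting from the start token T,
    the model repeatedly predicts the next element of the graph descriptor z,
    conditioned on the property vector C_i."""
    seq = start_token.unsqueeze(1)              # [d, 1], begins with T
    for _ in range(n_columns):
        out = model(seq.unsqueeze(0), cond)     # assumed output: [1, d, current length]
        next_col = out[0, :, -1:]               # prediction for the next element
        seq = torch.cat([seq, next_col], dim=1)
    return seq[:, 1:]                           # drop the start token, leaving z
```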

Training Process and Other Hyperparameters.

All code (89) is developed in PyTorch (90). All machine learning training uses the Adam optimizer (91) (learning rate = 0.0002).
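A minimal training sketch consistent with the stated choices is shown below; only the optimizer type and learning rate come from the text, while the model, data, and loss are placeholders:

```python
import torch

# Placeholder model and data standing in for the generative models and the
# inductively sampled web dataset; only Adam and lr = 0.0002 are from the text.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0002)

x = torch.randn(8, 16)                 # placeholder batch
for _ in range(10):                    # placeholder training loop
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), x)
    loss.backward()
    optimizer.step()
```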

De Novo Graph Construction.

To construct larger de novo graphs, we use the three models described above to generate smaller graph samples G_i and then combine them to form larger graphs G, where

$G = f\!\left(G_1, G_2, \ldots, G_n\right)$, [16]

where each graph is defined by z, a matrix of size [3 + N, N]. For de novo graph construction, we operate on graph representations with a full adjacency matrix; representations using sparse neighbor lists need to be converted in advance.

The function f denotes how individual graphs G_i are assembled. Two graphs G_i and G_{i+1} are combined to form a new graph Ḡ_{i+2}, and this process is repeated iteratively to form a new graph from the earlier ones,

$G_i, G_{i+1} \rightarrow \bar{G}_{i+2}$. [17]

Reflecting various choices of f, different strategies can be implemented to connect graphs G_i and G_{i+1}. If the same graph is used, symmetric shuffling is applied (Fig. 4A; for details, see SI Appendix, section S2.4). Since all graphs generated by the models are centered around (0, 0, 0), to realize spatially distributed graphs, the coordinates x, y, z of the second graph G_{i+1} are iteratively shifted by dx, dy, and dz, according to some function:

$f_c(x, y, z) = \left[x + \Delta f_{c,x},\; y + \Delta f_{c,y},\; z + \Delta f_{c,z}\right]$. [18]

Generally, the functions Δf_{c,x/y/z} depend on the iteration number in the graph generation process. Examples of coordinates produced from these calculations are shown in Fig. 4 C and D, center panel. Coordinates in the overlapping region (marked in Fig. 4B) are either averaged from G_i and G_{i+1} or taken from the second graph, G_{i+1}.

In the cases where we generate helices and closed curves, the vector functions f_c(x, y, z), as functions of the iteration number i, are defined in SI Appendix, section S2.5.
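As a hypothetical example of such shift functions (the actual definitions are in SI Appendix, section S2.5), a helix can be produced by letting the displacement depend on the iteration number i:

```python
import numpy as np

def helical_shift(i, radius=1.0, pitch=0.5, turns_per_step=0.1):
    """Hypothetical shift Delta f_{c,x/y/z}(i) of Eq. 18 placing successive graph
    samples along a helix (cf. Fig. 4 C and D)."""
    theta = 2.0 * np.pi * turns_per_step * i
    return np.array([radius * np.cos(theta), radius * np.sin(theta), pitch * i])

def place_graph(coords, i):
    """Shift the coordinates of graph G_{i+1} according to Eq. 18."""
    return np.asarray(coords) + helical_shift(i)
```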

Additive Manufacturing and Mechanical Testing.

Samples of webs were 3D printed for visualization and mechanical testing using a Stratasys Objet 500 Connex V3 polyjet 3D printer (Stratasys Inc., Rehovot, Israel) (Figs. 5 C and D and 6). Prints were manufactured using polyjet printing with the Stratasys RGD 450 resin and dissolvable support (SUP706B) and cleaned in a base bath.

Tensile testing was performed using an Instron Universal Testing Machine (Instron, Norwood, MA) with a 5 kN load cell. Samples were strained at a rate of 0.5 mm/min until failure. Movies S1–S3 show the three tests (50× speedup). For more details on 3D printing and testing, see SI Appendix, section S2.6.

Supplementary Material

Appendix 01 (PDF)

Movie S1.

Tensile deformation experiment of H1, 50x speedup.

Movie S2.

Tensile deformation experiment of H2, 50x speedup.

Movie S3.

Tensile deformation experiment of H3, 50x speedup.

Movie S4.

Rendering of various 3D geometry files generated in this paper.


Acknowledgments

We acknowledge support by NIH (5R01AR077793-03), the Office of Naval Research (N000141612333 and N000141912375), AFOSR-MURI (FA9550-15-1-0514), and the Army Research Office (W911NF1920098). Related support from the MIT-IBM Watson AI Lab, MIT Quest, and Google Cloud Computing, is acknowledged.

Author contributions

W.L., N.A.L., and M.J.B. designed research; W.L., N.A.L., and M.J.B. performed research; N.A.L. and M.J.B. contributed new reagents/analytic tools; W.L., N.A.L., and M.J.B. analyzed data; and W.L., N.A.L., and M.J.B. wrote the paper.

Competing interests

The authors declare no competing interest.

Footnotes

This article is a PNAS Direct Submission.

Data, Materials, and Software Availability

Code, dataset, and STL files have been deposited in GitHub and Zenodo (https://github.com/lamm-mit/GraphGeneration; https://doi.org/10.5281/zenodo.8003903) (77, 89).

Supporting Information

References

  • 1.Nepal D., et al. , Hierarchically structured bioinspired nanocomposites. Nat. Mater 22, 18–35 (2023), 10.1038/s41563-022-01384-1. [DOI] [PubMed] [Google Scholar]
  • 2.Sen D., Buehler M. J., Structural hierarchies define toughness and defect-tolerance despite simple and mechanically inferior brittle building blocks. Sci. Rep. 1, 35 (2011), 10.1038/srep00035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Fratzl P., Weinkamer R., Nature’s hierarchical materials. Prog. Mater. Sci. 52, 1263–1334 (2007), 10.1016/j.pmatsci.2007.06.001. [DOI] [Google Scholar]
  • 4.Guo K., Buehler M. J., Nature’s Way: Hierarchical Strengthening through Weakness. Matter 1, 302–303 (2019), 10.1016/j.matt.2019.07.011. [DOI] [Google Scholar]
  • 5.Giesa T., Spivak D. I., Buehler M. J., Reoccurring patterns in hierarchical protein materials and music: The power of analogies. Bionanoscience 1, 153–161 (2011), 10.1007/s12668-011-0022-5. [DOI] [Google Scholar]
  • 6.Tang H., Buehler M. J., Moran B., A constitutive model of soft tissue: From nanoscale collagen to tissue continuum. Ann. Biomed. Eng. 37, 1117–1130 (2009), 10.1007/s10439-009-9679-0. [DOI] [PubMed] [Google Scholar]
  • 7.Uzel S. G. M., Buehler M. J., Molecular structure, mechanical behavior and failure mechanism of the C-terminal cross-link domain in type I collagen. J. Mech. Behav. Biomed. Mater. 4, 153–161 (2011), 10.1016/j.jmbbm.2010.07.003. [DOI] [PubMed] [Google Scholar]
  • 8.Zhou S., Jin K., Buehler M. J., Understanding plant biomass via computational modeling. Adv. Mater. 33, e2003206 (2021), 10.1002/adma.202003206. [DOI] [PubMed] [Google Scholar]
  • 9.Kushwaha B., et al. , Mechanical and acoustic behavior of 3D-printed hierarchical mathematical fractal menger sponge. Adv. Eng. Mater. 23, 2001471 (2021), 10.1002/adem.202001471. [DOI] [Google Scholar]
  • 10.Aizenberg J., Sundar V. C., Yablon A. D., Weaver J. C., Chen G., Biological glass fibers: Correlation between optical and structural properties. Proc. Natl. Acad. Sci. U.S.A. 101, 3358–3363 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Garcia A. P., Pugno N., Buehler M. J., Superductile, wavy silica nanostructures inspired by diatom algae. Adv. Eng. Mater. 13, B405–B414 (2011), 10.1002/adem.201080113. [DOI] [Google Scholar]
  • 12.Garcia A. P., Sen D., Buehler M. J., Hierarchical silica nanostructures inspired by diatom algae yield superior deformability, toughness, and strength. Metall Mater. Trans. A Phys. Metall. Mater. Sci. 42, 3889–3897 (2011), 10.1007/s11661-010-0477-y. [DOI] [Google Scholar]
  • 13.Buehler M., Diatom-inspired architected materials using language-based deep learning: Perception, transformation and manufacturing. arXiv [Preprint] (2023). 10.48550/arXiv.2301.05875 (Accessed 15 January 2023). [DOI]
  • 14.Bader C., et al. , Computational methods for the characterization of Apis mellifera comb architecture. Commun. Biol. 5, 854 (2022), 10.1038/s42003-022-03328-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Franklin R., Niverty S., Harpur B. A., Chawla N., Unraveling the mechanisms of the Apis mellifera honeycomb construction by 4D X-ray microscopy. Adv. Mater. 34, e2202361 (2022), 10.1002/adma.202202361. [DOI] [PubMed] [Google Scholar]
  • 16.Zhang Q., et al. , Bioinspired engineering of honeycomb structure–Using nature to inspire human innovation. Prog. Mater. Sci. 74, 332–400 (2015), 10.1016/j.pmatsci.2015.05.001. [DOI] [Google Scholar]
  • 17.Su I., Buehler M. J., Spider silk: Dynamic mechanics. Nat. Mater. 15, 1054–1055 (2016), 10.1038/nmat4721. [DOI] [PubMed] [Google Scholar]
  • 18.Qin Z., Buehler M. J., Spider silk: Webs measure up. Nat. Mater. 12, 185–187 (2013), 10.1038/nmat3578. [DOI] [PubMed] [Google Scholar]
  • 19.Lepore E., Marchioro A., Isaia M., Buehler M. J., Pugno N. M., Evidence of the most stretchable egg sac silk stalk, of the European spider of the year Meta menardi. PLoS One 7, e30500 (2012), 10.1371/journal.pone.0030500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Keten S., Buehler M. J., Atomistic model of the spider silk nanostructure. Appl. Phys. Lett. 96, 153701 (2010), 10.1063/1.3385388. [DOI] [Google Scholar]
  • 21.Cranford S. W., Tarakanova A., Pugno N. M., Buehler M. J., Nonlinear material behaviour of spider silk yields robust webs. Nature 482, 72–76 (2012), 10.1038/nature10739. [DOI] [PubMed] [Google Scholar]
  • 22.Liu D., et al. , Spider dragline silk as torsional actuator driven by humidity. Sci. Adv. 5, eaau9183 (2019), 10.1126/sciadv.aau9183. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Tarakanova A., Buehler M. J., A materiomics approach to spider silk: Protein molecules to webs. JOM 64, 214–225 (2012), 10.1007/s11837-012-0250-3. [DOI] [Google Scholar]
  • 24.Su I., et al. , Interactive exploration of a hierarchical spider web structure with sound. J. Multimodal User Interfaces 16, 71–85 (2022), 10.1007/s12193-021-00375-x. [DOI] [Google Scholar]
  • 25.Su I., et al. , Imaging and analysis of a three-dimensional spider web architecture. J. R. Soc. Interface 15, 20180193 (2018), 10.1098/rsif.2018.0193. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Su I., et al. , Sonification of a 3-D spider web and reconstitution for musical composition using granular synthesis. Computer Music J. 44, 43–59 (2020), 10.1162/COMJ_a_00580. [DOI] [Google Scholar]
  • 27.Su I., Buehler M. J., Mesomechanics of a three-dimensional spider web. J. Mech. Phys. Solids 144, 104096 (2020). [Google Scholar]
  • 28.Su I., et al. , In situ three-dimensional spider web construction and mechanics. Proc. Natl. Acad. Sci. U.S.A. 118, e2101296118 (2021), 10.1073/pnas.2101296118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Su I., Buehler M. J., Mesomechanics of a three-dimensional spider web. J. Mech. Phys. Solids 144, 104096 (2020), 10.1016/j.jmps.2020.104096. [DOI] [Google Scholar]
  • 30.Garrison N. L., et al. , Spider phylogenomics: Untangling the Spider Tree of Life. PeerJ 2016, e1719 (2016), 10.7717/peerj.1719. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Gu G. X., et al. , Three-dimensional-printing of bio-inspired composites. J. Biomech. Eng. 138, 021006 (2016), 10.1115/1.4032423. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Su I., et al. , In situ three-dimensional spider web construction and mechanics. Proc. Natl. Acad. Sci. U.S.A. 118, e2101296118 (2021), 10.1073/pnas.2101296118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Milazzo M., et al. , Additive manufacturing approaches for hydroxyapatite-reinforced composites. Adv. Funct. Mater. 29, 1903055 (2019), 10.1002/ADFM.201903055. [DOI] [Google Scholar]
  • 34.Parandoush P., Lin D., A review on additive manufacturing of polymer-fiber composites. Compos. Struct. 182, 36–53 (2017), 10.1016/j.compstruct.2017.08.088. [DOI] [Google Scholar]
  • 35.Ngo T. D., Kashani A., Imbalzano G., Nguyen K. T. Q., Hui D., Additive manufacturing (3D printing): A review of materials, methods, applications and challenges. Compos. B. Eng. 143, 172–196 (2018), 10.1016/J.COMPOSITESB.2018.02.012. [DOI] [Google Scholar]
  • 36.Dimas L. S., Buehler M. J., Modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties. Soft Matter 10, 4436–4442 (2014), 10.1039/C3SM52890A. [DOI] [PubMed] [Google Scholar]
  • 37.Nepal D., et al. , Hierarchically structured bioinspired nanocomposites. Nat. Mater. 2022, 1–18 (2022), 10.1038/s41563-022-01384-1. [DOI] [PubMed] [Google Scholar]
  • 38.Mirzaeifar R., Dimas L. S., Qin Z., Buehler M. J., Defect-tolerant bioinspired hierarchical composites: Simulation and experiment. ACS Biomater. Sci. Eng. 1, 295–304 (2015), 10.1021/ab500120f. [DOI] [PubMed] [Google Scholar]
  • 39.Milazzo M., Anderson G. I., Buehler M. J., Bioinspired translation of classical music into de novo protein structures using deep learning and molecular modeling. Bioinspir Biomim. 17, 015001 (2022), 10.1088/1748-3190/ac338a. [DOI] [PubMed] [Google Scholar]
  • 40.Lin S., et al. , Predictive modelling-based design and experiments for synthesis and spinning of bioinspired silk fibres. Nat. Commun. 6, 6892 (2015), 10.1038/ncomms7892. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Benjamin S. P., Zschokke S., Webs of theridiid spiders: Construction, structure and evolution. Biol. J. Linn. Soc. 78, 293–305 (2003). [Google Scholar]
  • 42.Buehler E. L., Su I., Buehler M. J., WebNet: A biomateriomic three-dimensional spider web neural net. Extreme Mech. Lett. 42, 101034 (2021), 10.1016/j.eml.2020.101034. [DOI] [Google Scholar]
  • 43.Yu H., Yang J., Sun Y., Energy absorption of spider orb webs during prey capture: A mechanical analysis. J. Bionic. Eng. 12, 453–463 (2015), 10.1016/S1672-6529(14)60136-0. [DOI] [Google Scholar]
  • 44.Harmer A. M. T., Blackledge T. A., Madin J. S., Herberstein M. E., High-performance spider webs: Integrating biomechanics, ecology and behaviour. J. R. Soc. Interface 8, 457–471 (2011), 10.1098/rsif.2010.0454. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Blamires S. J., Hou C., Chen L. F., Liao C. P., Tso I. M., Three-dimensional barricading of a predatory trap reduces predation and enhances prey capture. Behav. Ecol. Sociobiol. 67, 709–714 (2013), 10.1007/s00265-013-1493-x. [DOI] [Google Scholar]
  • 46.Mortimer B., Soler A., Siviour C. R., Zaera R., Vollrath F., Tuning the instrument: Sonic properties in the spider’s web. J. R. Soc. Interface 13, 20160341 (2016), 10.1098/rsif.2016.0341. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Saramäki J., Kivelä M., Onnela J. P., Kaski K., Kertész J., Generalizations of the clustering coefficient to weighted complex networks. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 75, 027105 (2007), 10.1103/PhysRevE.75.027105. [DOI] [PubMed] [Google Scholar]
  • 48.Hamilton W. L., Ying R., Leskovec J., Inductive representation learning on large graphs. arXiv [Preprint] (2017). 10.48550/arXiv.1706.02216 (Accessed 17 February 2023). [DOI]
  • 49.López Barreiro D., Yeo J., Tarakanova A., Martin-Martinez F. J., Buehler M. J., Multiscale modeling of silk and silk-based biomaterials—A review. Macromol. Biosci. 19, e1800253 (2019), 10.1002/mabi.201800253. [DOI] [PubMed] [Google Scholar]
  • 50.Bosia F., Buehler M. J., Pugno N. M., Hierarchical simulations for the design of supertough nanofibers inspired by spider silk. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 82, 056103 (2010), 10.1103/PhysRevE.82.056103. [DOI] [PubMed] [Google Scholar]
  • 51.Qin Z., Compton B. G., Lewis J. A., Buehler M. J., Structural optimization of 3D-printed synthetic spider webs for high strength. Nat. Commun. 6, 7038 (2015), 10.1038/ncomms8038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Lu W., Yang Z., Buehler M. J., Rapid mechanical property prediction and de novo design of three-dimensional spider webs through graph and GraphPerceiver neural networks. J. Appl. Phys. 132, 074703 (2022), 10.1063/5.0097589. [DOI] [Google Scholar]
  • 53.Jin Y., et al. , Bio-inspired spider-web-like membranes with a hierarchical structure for high performance lithium/sodium ion battery electrodes: The case of 3D freestanding and binder-free bismuth/CNF anodes. Nanoscale 9, 13298–13304 (2017), 10.1039/c7nr04912a. [DOI] [PubMed] [Google Scholar]
  • 54.Bai Q., et al. , Application advances of deep learning methods for de novo drug design and molecular dynamics simulation. Wiley Interdiscip Rev. Comput. Mol. Sci. 12, e1581 (2022), 10.1002/wcms.1581. [DOI] [Google Scholar]
  • 55.Elton D. C., Boukouvalas Z., Fuge M. D., Chung P. W., Deep learning for molecular design–A review of the state of the art. Mol. Syst. Des. Eng. 4, 828–849 (2019), 10.1039/c9me00039a. [DOI] [Google Scholar]
  • 56.Sanchez-Lengeling B., Aspuru-Guzik A., Inverse molecular design using machine learning: Generative models for matter engineering. Science 361, 360–365 (2018), 10.1126/science.aat2663. [DOI] [PubMed] [Google Scholar]
  • 57.Guo X., Zhao L., A systematic survey on deep generative models for graph generation. IEEE Trans. Pattern Anal. Mach. Intell. 45, 5370–5390 (2023), 10.1109/TPAMI.2022.3214832. [DOI] [PubMed] [Google Scholar]
  • 58.Faez F., Ommi Y., Baghshah M. S., Rabiee H. R., Deep graph generators: A survey. IEEE Access 9, 106675–106702 (2021), 10.1109/ACCESS.2021.3098417. [DOI] [Google Scholar]
  • 59.Hamilton W. L., Brachman R. J., Rossi F., Stone P., Graph Representation Learning (Morgan & Claypool Publishers, 2020). [Google Scholar]
  • 60.Wu L., Cui P., Pei J., Zhao L., Eds., Graph Neural Networks: Foundations, Frontiers, and Applications (Springer Singapore, 2022). [Google Scholar]
  • 61.Zhang X. M., Liang L., Liu L., Tang M. J., Graph neural networks and their current applications in bioinformatics. Front. Genet. 12, 690049 (2021), 10.3389/fgene.2021.690049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Zitnik M., Agrawal M., Leskovec J., Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 34, i457–i466 (2018), 10.1093/bioinformatics/bty294. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Gopinath K., Desrosiers C., Lombaert H., “Adaptive graph convolution pooling for brain surface analysis” in Information Processing in Medical Imaging: 26th International Conference, IPMI 2019, Hong Kong, China, June 2–7, 2019, Proceedings 26 (Springer International Publishing, 2019), pp. 86–98. [Google Scholar]
  • 64.Wu L., Graph Neural Networks for Natural Language Processing: A Survey. arXiv [Preprint] (2022). 10.48550/arXiv.2106.06090 (Accessed 17 February 2023). [DOI]
  • 65.Yao L., Mao C., Luo Y., “Graph convolutional networks for text classification” in Proceedings of the AAAI Conference on Artificial Intelligence (2019), vol. 33, pp. 7370–7377. [Google Scholar]
  • 66.Whalen E., Mueller C., Toward reusable surrogate models: Graph-based transfer learning on trusses. J. Mechanical Design, Trans. ASME 144, 021704 (2022), 10.1115/1.4052298. [DOI] [Google Scholar]
  • 67.Song W., “Session-based social recommendation via dynamic graph attention networks” in WSDM 2019–Proceedings of the 12th ACM International Conference on Web Search and Data Mining (Association for Computing Machinery, Inc., 2019), pp. 555–563, 10.1145/3289600.3290989. [DOI] [Google Scholar]
  • 68.He X., “LightGCN: Simplifying and powering graph convolution network for recommendation” in SIGIR 2020–Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (Association for Computing Machinery, Inc., 2020), pp. 639–648, 10.1145/3397271.3401063. [DOI] [Google Scholar]
  • 69.Li Y., Vinyals O., Dyer C., Pascanu R., Battaglia P., Learning deep generative models of graphs. arXiv [Preprint] (2018). 10.48550/arXiv.1803.03324 (Accessed 17 February 2023). [DOI]
  • 70.Hu Y., Buehler M. J., Deep language models for interpretative and predictive materials science. APL Mach. Learn. 1, 010901 (2023), 10.1063/5.0134317. [DOI] [Google Scholar]
  • 71.Hoogeboom E., Satorras V. G., Vignac C., Welling M., Equivariant diffusion for molecule generation in 3D. arXiv [Preprint] (2022). 10.48550/arXiv.2203.17003 (Accessed 17 February 2023). [DOI]
  • 72.Huang H., Sun L., Du B., Fu Y., Lv W., GraphGDP: Generative diffusion processes for permutation invariant graph generation. arXiv [Preprint] (2022). 10.48550/arXiv.2212.01842 (Accessed 17 February 2023). [DOI]
  • 73.Liu C., et al., Generative diffusion models on graphs: Methods and applications. arXiv [Preprint] (2023). 10.48550/arXiv.2302.02591 (Accessed 17 February 2023). [DOI]
  • 74.Crane K., Weischedel C., Wardetzky M., Geodesics in heat: A new approach to computing distance based on heat flow. ACM Trans. Graph. 32, 1–11 (2013), 10.1145/2516971.2516977. [DOI] [Google Scholar]
  • 75.Mejia D., Ruiz-Salguero O., Cadavid C. A., Spectral-based mesh segmentation. Int. J. Interact. Des. Manuf. 11, 503–514 (2017), 10.1007/s12008-016-0300-0. [DOI] [Google Scholar]
  • 76.Burden R. L., Faires J. D., Numerical Analysis (CENGAGE Learning, 2015). [Google Scholar]
  • 77.Lu W., Lee N., Buehler M. J., STL files: Modeling and design of heterogeneous hierarchical bioinspired spider web structures using deep learning and additive manufacturing. (June 5, 2023). 10.5281/zenodo.8003903. [DOI] [PMC free article] [PubMed]
  • 78.Bubeck S., et al., Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv [Preprint] (2023). 10.48550/arXiv.2303.12712 (Accessed 23 March 2023). [DOI]
  • 79.Hu Y., Buehler M. J., Deep language models for interpretative and predictive materials science. APL Mach. Learn. 1, 010901 (2023), 10.1063/5.0134317. [DOI] [Google Scholar]
  • 80.Yu J., Vector-quantized image modeling with improved VQGAN. arXiv [Preprint] (2021). 10.48550/arXiv.2110.04627 (Accessed 17 February 2023). [DOI]
  • 81.Rombach R., Blattmann A., Lorenz D., Esser P., Ommer B., High-resolution image synthesis with latent diffusion models. arXiv [Preprint] (2021). 10.48550/arxiv.2112.10752 (Accessed 17 February 2023). [DOI]
  • 82.Buehler M. J., Modeling atomistic dynamic fracture mechanisms using a progressive transformer diffusion model. J. Appl. Mech. 89, 121009 (2022), 10.1115/1.4055730. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Buehler M. J., Predicting mechanical fields near cracks using a progressive transformer diffusion model and exploration of generalization capacity. J. Mater. Res. 38, 1317–1331 (2023), 10.1557/s43578-023-00892-3. [DOI] [Google Scholar]
  • 84.Kool W., van Hoof H., Welling M., Ancestral Gumbel-Top-k sampling for sampling without replacement. J. Mach. Learn. Res. 21, 1–36 (2020). [Google Scholar]
  • 85.Ho J., Salimans T., Classifier-free diffusion guidance. arXiv [Preprint] (2022). 10.48550/arxiv.2207.12598 (Accessed 17 February 2023). [DOI]
  • 86.Hu Y., Buehler M. J., End-to-end protein normal mode frequency predictions using language and graph models and application to sonification. ACS Nano 16, 20656–20670 (2022), 10.1021/acsnano.2c07681. [DOI] [PubMed] [Google Scholar]
  • 87.Micheli V., Alonso E., Fleuret F., Transformers are sample-efficient world models. arXiv [Preprint] (2022). 10.48550/arXiv.2209.00588 (Accessed 17 February 2023). [DOI]
  • 88.Vaswani A., “Attention is all you need” in Advances in Neural Information Processing Systems (Neural Information Processing Systems Foundation, 2017), pp. 5999–6009. [Google Scholar]
  • 89.Lu W., Lee N., Buehler M. J., Modeling and design of heterogeneous hierarchical bioinspired spider web structures using deep learning and additive manufacturing. GitHub (2023). https://github.com/lamm-mit/GraphGeneration. [DOI] [PMC free article] [PubMed]
  • 90.Paszke A., et al. , “PyTorch: An imperative style, high-performance deep learning library” in Advances in Neural Information Processing Systems (2019). [Google Scholar]
  • 91.Kingma D. P., Ba J., Adam: A method for stochastic optimization. arXiv [Preprint] (2014). 10.48550/arXiv.1412.6980 (Accessed 17 February 2023). [DOI]

Associated Data


Supplementary Materials

Appendix 01 (PDF)

Movie S1.

Tensile deformation experiment of H1, 50x speedup.

Download video file (35.3MB, mp4)
Movie S2.

Tensile deformation experiment of H2, 50x speedup.

Download video file (29.9MB, mp4)
Movie S3.

Tensile deformation experiment of H3, 50x speedup.

Download video file (16.9MB, mp4)
Movie S4.

Rendering of various 3D geometry files generated in this paper.

Download video file (40.9MB, mp4)

Data Availability Statement

Code, datasets, and STL files have been deposited on GitHub and Zenodo (https://github.com/lamm-mit/GraphGeneration; https://doi.org/10.5281/zenodo.8003903) (77, 89).

