3D Printing and Additive Manufacturing
. 2025 Feb 13;12(1):1–10. doi: 10.1089/3dp.2023.0309

From Words to Worlds: Exploring Generative 3D Models in Design and Fabrication

Valdemar Danry 1, Cenk Guzelis 2, Lingdong Huang 1, Neil Gershenfeld 3, Pattie Maes 1
PMCID: PMC11937759  PMID: 40151679

Abstract

The integration of artificial intelligence (AI) into the design and fabrication process has opened up novel pathways for producing custom objects and altered the traditional creative workflow. In this article, we present Depthfusion, a novel text-to-3D model generation system that empowers users to rapidly create detailed 3D models from textual or 2D image inputs, and explore the application of text-to-3D models within different fabrication techniques. Depthfusion leverages current text-to-image AI technologies such as Midjourney, Stable Diffusion, and DALL-E and integrates them with advanced mesh inflation and depth mapping techniques. This approach yields a high degree of artistic control and facilitates the production of high-resolution models that are compatible with various 3D printing methods. Our results include a biomimetic tableware set that merges intricate design with functionality, a large-scale ceramic vase illustrating the potential for additive manufacturing in ceramics, and even a sneaker-shaped bread product achieved by converting AI design into a baked form. These projects showcase the diverse possibilities for AI in the design and crafting of objects across mediums, pushing the boundaries of what is traditionally considered feasible in bespoke manufacturing.

Keywords: generative AI, artificial intelligence, fabrication, ceramic 3D printing, PolyJet 3D-printing

Introduction

Imagine a future where making anything is as simple as articulating it with words. A teenager, looking to complete her retro wardrobe, describes her ideal sneakers to an artificial intelligence (AI), which produces a 3D model that is sent to her printer and printed at home. Similarly, a chef conceives a unique set of tableware to elevate a culinary creation, and an AI assistant promptly brings this vision into the physical world through a ceramic 3D printer in his kitchen. Furthermore, a couple looking to build their dream home can simply describe it to an AI assistant and have it printed with a large-scale 3D printer.

The advent of generative AI models has brought revolutionary possibilities to the world of design and manufacturing. Over the past few years, AI-based design assistance has made significant strides, with 2D image generation models like DALL-E,1,2 Midjourney, and Stable Diffusion3 already being employed by increasing numbers of architects in designing building layouts, structural optimization, and facade design from textual descriptions.4

While 2D generative models have been rapidly adopted for tasks like architecture design, the transition to 3D presents considerable challenges. Present models struggle with fine detail, efficient runtime, and giving designers sufficient creative control.5–8 Research shows that designers iterate an average of four times per design query when using 2D generative models.4 Given the hour-long generation time of current models, this significantly limits the design workflow. Moreover, most text-to-3D model approaches have only been applied in contexts like video games, with limited physical fabrication applications.4

To address these challenges, we introduce Depthfusion, a novel text-to-3D model system, which elevates artistic control, accelerates the production process, and enhances the resolution of outputs. Depthfusion leverages existing 2D generative model capabilities for initial design, combined with advanced mesh inflation and depth map techniques, to produce fine-detail 3D models optimal for fabrication. The code for Depthfusion is available on GitHub.1

We present the results of applying Depthfusion across various fabrication methods including PolyJet 3D printing and ceramic 3D printing, capable of producing everything from intricate tableware to a sneaker shoe made out of bread. Our applications extend to unconventional materials and methods, showcasing the system’s versatility in translating AI-generated designs into the physical realm.

Generative AI in Design

Text-to-image

Text-to-image generation, a task in computer vision involving generating synthetic images from textual descriptions, has garnered significant attention for its potential in various design applications.4 Deep learning models, especially generative adversarial networks (GANs) and diffusion models, are typically used for this task. Recently, diffusion-based generative models such as Dall-E, Midjourney, and Stable Diffusion have gained popularity owing to their capabilities in creating realistic 2D images from textual prompts.1–3

Diffusion-based models generate images by undergoing a diffusion process, starting from a noise distribution and progressively transforming into the target data distribution, guided by a trained neural network at each step.2,9–12 They can produce high-quality images as they gradually refine the image at each step of the process.
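The diffusion process described above can be written compactly in the standard denoising-diffusion notation from the cited literature (this is the generic formulation, not specific to any one of the models named here):

```latex
% Forward process: Gaussian noise is added step by step with schedule \beta_t
q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}\!\left(\mathbf{x}_t;\ \sqrt{1-\beta_t}\,\mathbf{x}_{t-1},\ \beta_t \mathbf{I}\right)

% Reverse process: a trained network parameterizes each denoising step
p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = \mathcal{N}\!\left(\mathbf{x}_{t-1};\ \boldsymbol{\mu}_\theta(\mathbf{x}_t, t),\ \boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t)\right)
```

Sampling starts from pure noise $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and applies the learned reverse step repeatedly, which is why image quality improves gradually over the course of generation.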

These models can also achieve image inpainting, filling in parts of an image based on the textual description and thereby enabling text-guided image editing, such as removing unwanted objects or altering parts of an image.10 In the context of design, models like these can facilitate ideation, improvement of designs, creation of design variations, or execution of intricate design tasks.4,13 Furthermore, they enable iteration in design, offering a more interactive process.

Text-to-3D

The field of text-to-3D generation research primarily explores the creation of 3D models and scenes from textual descriptions, with applications in gaming and virtual reality. A significant early work in this domain is DreamFields,5 which utilizes neural radiance fields to generate 3D scenes from text prompts. Following this, more sophisticated systems such as DreamFusion,6 Point-E,14 Magic3D,8 Zero-1-to-3,15 and ProlificDreamer7 have been developed, each aiming to improve aspects such as speed, quality, and diversity of the generated 3D models. Gaussian splatting–based approaches16 significantly reduce the generation time to just a few minutes, enabling quicker iterations for designers. However, challenges such as maintaining high resolution, ensuring physical feasibility, and enabling user-friendly control and editing of the generated models still exist within this field.4

Research on text-to-image models for design, for instance, shows that designers typically refine their prompts four times before settling on a design4 and even then spend a considerable amount of time editing the image in tools like Photoshop. With most text-to-3D model approaches taking more than an hour to generate a 3D model, designers are not able to work iteratively.

Materials and Methods

Text-to-3D using Depthfusion

In order to generate 3D objects suitable for fabrication while also maximizing user control over design parameters, we have developed the open-source tool Depthfusion.2 This tool seamlessly integrates text-to-image generator technologies such as Stable Diffusion, Midjourney, and Dall-E with advanced mesh inflation and depth map generation techniques (see Fig. 1). These features allow for the fast production of high-resolution meshes, which can be easily modified and prepared for fabrication. The system's steps are outlined in the following sections:

FIG. 1.

Depthfusion system overview. (A) First, the user either writes a text description of the object they want to generate, or they upload a design. (B) If a prompt is submitted, an image is generated. (C) A mask image is generated to segment the object from the background. (D) Simultaneously, a depth map is estimated based on the generated image. (E) A simple mesh is inflated based on the outline of the mask image. (F) A final 3D model is produced from the inflated mesh displaced by the depth map.

Step 1: Generating an image for the design

The first step in the Depthfusion process involves generating or uploading a 2D image of your desired design seen from the side. In the user interface, image generation is achieved using Dall-E 2. Since this step heavily determines the final 3D object, users have the option to iterate and refine the generated image until they are satisfied with it. Due to limitations in the Depthfusion method, the 2D image has to be mirrorable (the same front and back) and be seen directly from the side. An object can easily be made mirrorable by adding "side view" or "front view" to the beginning of the prompt used in image generation. Another limitation is that Depthfusion cannot generate parts that are occluded in the 2D image, for example, an armchair where the cushion is not visible. To overcome this, users will either have to limit their designs to non-occluded objects or add the occluded parts manually by modifying the mesh (see Step 6).
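The prompt-prefixing guideline above is straightforward to automate. The helper below is our own illustrative sketch (its name is not part of Depthfusion) showing how a front end might enforce the "side view" convention before sending a prompt to the image generator:

```python
def make_mirrorable_prompt(prompt: str, view: str = "side view") -> str:
    """Prepend a view keyword so the generated object is seen directly
    from the side (and is therefore mirrorable), per the guideline above.

    Illustrative helper only; Depthfusion's UI may handle this differently.
    """
    normalized = prompt.strip()
    # Avoid doubling the keyword if the user already included it.
    if normalized.lower().startswith(("side view", "front view")):
        return normalized
    return f"{view.capitalize()} of {normalized[0].lower() + normalized[1:]}"

print(make_mirrorable_prompt("a surreal teapot made out of body parts"))
# "Side view of a surreal teapot made out of body parts"
```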

Step 2: Segmenting the object from the background

Once the image for the design has been generated or uploaded, Depthfusion will automatically segment the object from the image's background. This process involves creating a binary mask, with the object represented as white pixels on a black background. The binary mask removes unnecessary background elements to ensure that only the main object is processed into a 3D model. To convert the original image into a binary mask, we first use the open-source Python library "Rembg."17 Since Rembg might have identified multiple objects, we then eliminate floating regions using the open-source Python library "OpenCV"3 so that only the biggest object remains visible in the mask.
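In practice this "keep only the biggest object" step is connected-component analysis (OpenCV's `connectedComponentsWithStats` is the usual tool). The sketch below reimplements the idea in plain NumPy with a 4-connectivity flood fill, so the logic is visible without depending on OpenCV; it is our illustration, not Depthfusion's actual code:

```python
import numpy as np
from collections import deque

def keep_largest_region(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest white (nonzero) connected component of a
    binary mask, zeroing out smaller 'floating' regions.

    Plain-NumPy stand-in for OpenCV's connectedComponentsWithStats;
    returns a mask with 255 for kept pixels, 0 elsewhere."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    sizes = {}
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                # Flood-fill a new component starting from this seed pixel.
                current += 1
                labels[sy, sx] = current
                queue = deque([(sy, sx)])
                size = 0
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                sizes[current] = size
    if not sizes:
        return np.zeros_like(mask)
    biggest = max(sizes, key=sizes.get)
    return (labels == biggest).astype(mask.dtype) * 255
```

In production code the OpenCV call is preferable for speed; the behavior is the same: label every white region, measure each one, and blank out all but the largest.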

Step 3: Generating a depth map image

The next step involves generating a depth map for the design using DeepBump.4 This pre-trained normal map and depth generator model translates 2D images into different layers of depth for the 3D model. Depth maps are grayscale images that represent the distance between the surface of an object in a scene and the viewpoint or camera from which the scene is observed. Depth maps are crucial for creating visually appealing and detailed 3D designs.
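Before the grayscale depth map is used for displacement, it is typically normalized and restricted to the object mask so that background pixels contribute no depth. The helper below is an illustrative pre-processing sketch of ours, not DeepBump's own code:

```python
import numpy as np

def prepare_depth(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Normalize a grayscale depth map to [0, 1] and zero it outside the
    object mask, so background pixels cause no displacement later on.

    Illustrative pre-processing only (assumed workflow, not DeepBump's)."""
    d = depth.astype(np.float64)
    d -= d.min()           # shift so the nearest value sits at 0
    if d.max() > 0:
        d /= d.max()       # scale so the farthest value sits at 1
    return d * (mask > 0)  # keep depth only where the object is
```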

Step 4: Inflating the mesh

The next step in the Depthfusion process involves inflating a mesh based on the mask of the object prepared earlier. The inflation process creates an initial 3D geometric description of the object based on the 2D outline provided by the image mask. Mesh inflation is reminiscent of inflating a 2D pattern into a 3D balloon-like structure. The mesh inflation method used here is based on the work by Baran et al.,18 which is used to calculate initial depth values for each pixel in the mask. These values are then used to displace vertices in the initial 3D mesh from the outline of the mask. The result is then copied and mirrored across the z-axis and merged with the original inflation. The full details for implementing the inflation algorithm can be found in Dvorožňák et al.18
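A crude way to see what inflation does is to set each interior pixel's height to the square root of its distance to the silhouette boundary, which gives the rounded, balloon-like cross-section. The two-pass chamfer distance transform below sketches this; it is our simplification for illustration, whereas the cited method solves a smoother problem (see ref. 18 for the real algorithm):

```python
import numpy as np

def inflate_heights(mask: np.ndarray) -> np.ndarray:
    """Crude balloon-style inflation: height ~ sqrt(distance to outline).

    Uses a two-pass Manhattan (chamfer) distance transform over the mask.
    Illustrative approximation only; not the variational solver of ref. 18."""
    h, w = mask.shape
    inf = h + w  # larger than any possible distance in the image
    dist = np.where(mask > 0, inf, 0).astype(np.float64)
    # Forward pass: propagate distances from the top-left.
    for y in range(h):
        for x in range(w):
            if dist[y, x]:
                if y > 0:
                    dist[y, x] = min(dist[y, x], dist[y - 1, x] + 1)
                if x > 0:
                    dist[y, x] = min(dist[y, x], dist[y, x - 1] + 1)
    # Backward pass: propagate distances from the bottom-right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if dist[y, x]:
                if y < h - 1:
                    dist[y, x] = min(dist[y, x], dist[y + 1, x] + 1)
                if x < w - 1:
                    dist[y, x] = min(dist[y, x], dist[y, x + 1] + 1)
    return np.sqrt(dist)
```

Mirroring this height field across the silhouette plane, as the text describes, yields the closed, symmetric base mesh.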

Step 5: Applying depth map and exporting the mesh

In the composition stage, an AI-generated depth map is applied to the inflated 3D mesh in the open-source 3D software tool Blender. This application process not only enhances the visual aesthetics of the model but also provides it with a realistic and intricate texture that mirrors the nuances of the original design. The depth map might in some cases, especially where the mesh is very thin such as for handles and slim nozzles, displace and warp the mesh unnaturally. To overcome this challenge, the depth map can be modified manually by decreasing depth map brightness on thin parts of the object to tone down the amount of displacement. The amount of overall displacement can also be modified in the displacement modifier in Blender (see Step 6). Next, the final model is exported.
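Conceptually, applying the depth map in Blender's Displace modifier moves each vertex along its normal by an amount sampled from the map, scaled by a strength factor. The vector form can be sketched in a few lines (our simplification: per-vertex depth values are assumed already sampled, and the modifier's midlevel is fixed at 0):

```python
import numpy as np

def displace(verts: np.ndarray, normals: np.ndarray,
             depth: np.ndarray, strength: float = 0.1) -> np.ndarray:
    """Move each vertex along its unit normal, scaled by its depth value.

    Simplified model of displacement mapping (no UV interpolation,
    midlevel fixed at 0): v' = v + n * strength * depth."""
    return verts + normals * (strength * depth)[:, None]
```

Lowering `strength`, like darkening thin regions of the depth map, reduces displacement and so avoids the warping of handles and slim nozzles mentioned above.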

Step 6: Refining the design

The final step of the Depthfusion process involves refining and fine-tuning the design. Unlike most existing 3D generation systems, Depthfusion allows users to easily modify the output design using standard 3D editing software or by revisiting earlier steps and altering the base image, adjusting the mask or depth map, or even by changing the inflation process parameters in order to generate different variations of the initial design. For instance, the base image in Step 1 can be altered using painting or inpainting techniques or manipulated using image editing software. The depth map can be adjusted to change the texture or shape of the 3D model, or the inflation process parameters can be adjusted to create a different mesh form.

In most 3D modeling software such as Blender, users have the ability to sculpt changes to the design. This can range from adding details to smoothing out a particular area, or even resizing the object to better fit the intended usage. Tools within Blender also allow for adjusting the design's orientation and scale, making it hollow, and changing its thickness using the solidify modifier, giving the artist additional control in achieving the final shape and size of the design.

3D fabrication methods

The transformative power of 3D printing technology in manufacturing and design is characterized by various techniques, each distinct in its approach and materials used. Among these, the most notable are stereolithography, PolyJet, and ceramic 3D printing; we also consider unconventional approaches such as baking.

PolyJet 3D printing

PolyJet printing, developed by Stratasys, is a sophisticated 3D printing technique that operates similarly to inkjet document printing, but instead of jetting drops of ink onto paper, PolyJet printers jet layers of curable liquid photopolymer onto a build tray.19 The process creates parts with a high level of detail and smooth surface finishes, and it also allows for the printing of parts in multiple materials and colors simultaneously.

In comparison to other 3D printing approaches, one of the standout features of PolyJet is the ability to create parts with varying material properties at millimeter precision, allowing for printing in a large spectrum of colors. The technology is therefore particularly useful for creating realistic models that are visually and tactilely similar to the initial design. Other 3D printing approaches such as fused deposition modeling or selective laser sintering (SLS) typically do not have millimeter precision and are limited in their ability to print in multiple materials and colors within a single part.

Ceramic 3D printing

Ceramic 3D printing is the layered positioning of ceramic materials to create objects that, after printing, undergo processes such as drying and firing to achieve their final properties. There are multiple ceramic 3D printing technologies, including binder jetting, direct ink writing, SLS, and liquid depositing modeling (LDM) specifically adapted for ceramics.20

LDM is a variation of extrusion-based 3D printing, specifically adapted for materials like clay or other fluid-dense materials. In LDM 3D printing, the material is kept in a liquid state and is extruded through a nozzle. The printer lays down layers of the material, which then dries and hardens to create solid ceramic objects.

LDM technology is particularly well suited for creating large-scale objects and architectural works. It is capable of printing with various types of clay and mixtures, allowing for experimentation and flexibility in creating ceramic items. One of the advantages of using LDM for clay printing is the ability to create complex geometries that would be difficult or impossible to achieve with traditional ceramic-making methods. However, the method is limited to geometries that do not have heavy overhangs or fine hanging structures whose weight could cause the print to break. One way of mitigating this is to place clay supports under the overhangs as they are printed by the machine.

Baking 3D structures

While not commonly associated with typical industrial 3D printing processes, baking can be viewed as an alternative, organic approach to fabrication, particularly for culinary applications. The concept of 3D baking involves creating food items that have been shaped or structured in three dimensions, which can often mean a combination of traditional cooking practices with molding or shaping techniques.

Innovative chefs and food technologists have experimented with creating positive molds using 3D printing technology, which are then used to create negative molds for food preparation. For instance, a chef might design a unique chocolate sculpture by first 3D printing a model of the desired shape. A food-safe material such as silicone can then be cast around the 3D-printed model to create a negative mold. The final step involves filling this negative mold with chocolate or dough and allowing it to set, which can sometimes involve baking or cooling rather than the heat treatment associated with traditional baking.

Integration of AI and machine learning in this area is relatively nascent but has large potential for automating design to fabrication workflows for specialty food items, optimizing recipes and baking parameters based on desired outcomes, and personalizing food experiences.

Results

AI-generated biomimetic tableware using PolyJet 3D printing

Using Depthfusion together with PolyJet 3D printing, we designed and fabricated a series of common tableware objects, including a vase, a teapot, teacups, and saucers. The collection of objects was exhibited at the MIT Media Lab5 and can be seen in Figure 2.

FIG. 2.

PolyJet 3D printing results of AI-generated biomimetic tableware. (A) The full collection of biomimetic tableware designed through Depthfusion. (B) Teapot design functionally in use. (C) Close-up of the teapot texture and deformation. (D) Close-up of the vase texture and deformation. (E) Close-up of the teacup.

To design these objects, we used Depthfusion in conjunction with the text-to-image service Midjourney. First, we experimented with different prompts using Midjourney's V2 and V3 models, such as "Side view of a teapot made out of bones" and "Side view of a body parts vase," with keywords such as "studio lighting," "design award," and "organic design." After some experimentation, we settled on the final prompts "Side view of a surreal teapot made out of body parts," "Side view of a surreal vase made out of body parts," "Side view of a surreal teacup made out of body parts," and "Side view of a surreal tea saucer made out of body parts." The image outputs can be seen in Figure 3. Next, using Depthfusion, a depth map and normal map were generated and applied to an inflation of the object in the images in Blender. After inspecting the models in Blender, the depth map values were touched up in Photoshop to make smaller bone-like structures pop out more. Since Depthfusion creates one solid mesh, both the teacup and the teapot needed to have holes cut in the top and be given a certain thickness. This was done by manually removing the top faces of the mesh around the openings and then using a solidify modifier in Blender.

FIG. 3.

AI-generated 2D images from design exploration using Midjourney. (A) Outputs from the prompt “Side view of a surreal teacup made out of body parts.” (B) Outputs from the prompt “Side view of a surreal teapot made out of body parts.” (C) Outputs from the prompt “Side view of a surreal vase made out of body parts.”

To fabricate the tableware, each model was loaded into Stratasys' 3D printing software, GrabCAD, and printed on the Stratasys J55 printer with standard settings. Each print took 1–3 days. After completion, each print was submerged in a bucket of water on an orbital shaker for 1–2 days to remove the water-soluble supports. The final structures came out highly detailed, with no visible loss of fidelity from the original designs, as seen in Figure 4.

FIG. 4.

Printing the AI-generated biomimetic tableware. (A) PolyJet 3D printing in progress. (B) Removing water soluble supports. (C) The vase with water soluble supports dissolved.

AI-generated vase using ceramic 3D printing

To further extend our exploration of utilizing AI in design and fabrication, we also focused on a different 3D printing material—ceramics. In a separate experiment, we used Depthfusion to generate a design for a large vase that was then fabricated using a LDM ceramic printer, as can be seen in Figure 5.

FIG. 5.

Ceramic 3D-printed vase made out of electrical bills. (A) The 2D image design generated with Midjourney and the prompt “Side view of vase made out of electrical bills.” (B) The vase printed with a ceramic 3D clay printer and left to dry.

For the design, we again employed the text-to-image service Midjourney, this time with the prompt "Vase made out of electricity bills, studio lighting." The unconventional and abstract nature of this prompt gave us a fascinating output, which can be seen in Figure 6. In adapting this design for 3D printing, we again employed Depthfusion to generate a 3D model. In Blender, the top of the final vase 3D model was removed manually to give the object an opening on the top, and a solidify modifier was applied to the design to give it thickness. The final mesh was exported to a file type compatible with the ceramic printer. As mentioned in the "Text-to-3D Using Depthfusion" section, due to limitations in the Depthfusion system, objects with occluded elements such as the opening and inside of the vase have to be manually edited.

FIG. 6.

The steps in making the vase. (A) A number of different designs are explored in Midjourney. (B) An image is selected and turned into a 3D model using Depthfusion. (C) A WASP ceramic clay 3D printer prints the vase layer by layer.

Fabrication of the ceramic vase was carried out using the WASP LDM Extruder on a WASP Delta 2040 3D printer. This particular additive manufacturing device is known for its ability to print with fluid-dense materials such as clay. The print settings as well as extrusion speed were adjusted according to the desired design and the characteristics of the ceramic material we were using. The 3D printing process was closely monitored due to the complex nature of clay printing, which can present issues during the actual print process. Once printed, the vase was slowly dried for approximately a week to prevent cracking and deformation. Afterward, it was fired in a kiln at a high temperature. The final output was a delicately detailed ceramic vase.

AI-generated loafer sneaker using 3D-printed baking molds

To further push the boundaries of how AI can be embedded into more traditional crafting processes, we used Depthfusion to design a bread shoe and baked it as can be seen in Figure 7.

FIG. 7.

Baked bread mold results of AI-generated loafer sneaker. (A) The design 2D image generated with Midjourney. (B) The bread loafer sneaker baked from a mold 3D model generated with Depthfusion.

As in the other experiments, we used the text-to-image service, Midjourney (V4), with the prompt “sneaker made out of bread.” The generated image can be seen in Figure 7. This design was then converted into a 3D model using Depthfusion, with minor adjustments made in Blender to enhance certain details. Next, the 3D model was used to print a positive mold using a standard PLA 3D printer. A positive mold is essentially a 3D print of the design, and it serves as a reference for the creation of the negative molds. Using this positive sneaker mold, we created two negative molds by pressing wet clay onto both sides of the mold, to capture its texture, pattern, and shape. We then waited 24 h for the clay to dry.

The dough was prepared following a traditional brioche recipe and then carefully packed into an opening on top of the two negative clay molds held together with a clamp. The filled molds were then baked in an oven at 200°C (392°F) for ∼60 min (Fig. 8). The resulting bread loaf, shaped like a loafer sneaker, maintained the high level of detail of the original AI-generated design, demonstrating the potential of synthesizing AI and traditional craft methods to create unique and unconventional designs.

FIG. 8.

The steps in baking the loafers. (A) A positive mold generated with Depthfusion and printed with a PLA printer. (B) One of the two negative clay molds that will contain the dough. (C) The two negative molds being clamped together and put into the oven.

Discussion

The integration of AI-generated designs and various 3D fabrication methods, as presented in this article, has uncovered both opportunities and challenges in the field of design and manufacturing. Our experiments with AI-generated biomimetic tableware, a ceramic vase, and a loafer sneaker baked out of bread illustrate not only the innovative applications of AI in creating intricate designs but also the versatility of fabrication methods to bring these designs to life.

The PolyJet 3D printing process proved to be particularly efficacious for producing the biomimetic tableware with a high level of detail and a smooth surface finish. The precise control offered by PolyJet technology aligns well with Depthfusion's capability to generate detailed designs and shows promise for producing complex, multi-material objects. The use of biomimetic designs, which can be hard to craft manually or algorithmically, opens up a conversation about the incorporation of organic and natural forms into everyday objects, potentially elevating the aesthetics of utilitarian items and creating a new design language steeped in the imitation of biological structures. However, most current PolyJet materials are not food-safe, thus limiting the practical use of the printed items.

In contrast, the ceramic vase experiment highlights the potential of large-scale ceramic 3D printing that is food-safe and biodegradable. However, LDM ceramic 3D printing presents a unique set of challenges compared with traditional 3D printing techniques, due to the inherent properties of clay: the inability to print small details, the need for constant monitoring and manual support structures, and the drying and firing processes. Nonetheless, the successful creation of a 1-m-tall vase underscores the ability of AI to assist in the design of substantial, sculptural pieces that can be executed with a high degree of fidelity despite the potentially unwieldy nature of clay as a medium.

The process of baking a loafer sneaker from bread further pushes the boundaries of what can be considered “3D printing” by employing traditional culinary methods in combination with modern technology. The successful outcome of this experiment is a testament to the adaptability of AI-generated designs across various materials and processes, including those outside typical industrial fabrication techniques. This venture into the culinary arts suggests that AI-generated designs may find applications in personalized cuisine, custom bakery products, and perhaps even in the broader field of food technology.

The results of these experiments suggest that there is a promising synergy between AI-generated designs and 3D fabrication methods. However, there are also several limitations to consider. Although Depthfusion provides greater artistic control and the ability to quickly generate high-resolution 3D models, it is only capable of generating symmetrical designs that can be mirrored over the z-axis. Moreover, it does not account for internal or hidden structures and holes that may be important in certain designs. Additionally, the depth map technique used to create 3D details might collapse small or thin structures, resulting in a loss of intricacy in the final object. Furthermore, since Depthfusion relies on 2D images, elements such as reflections and translucency cannot be accurately translated into the 3D models.

Form and Ornamentation

Integrating AI-generated designs with 3D fabrication methods revisits the historical discourse on form versus ornament in design and architecture. Here, form typically refers to the spatial and structural aspects of a design, whereas ornament pertains to the aesthetic embellishments that do not contribute to the physical stability of a structure but enhance its visual and sensory appeal. Traditionally, the debate has oscillated between prioritizing form for its functional and structural integrity21 and valuing ornament for its symbolic, cultural, and aesthetic contributions.22,23 In a more modern context, where many designs are produced computationally, ornamentation is often neglected, as it requires meticulous planning and parameter setting, which can be both time-consuming and technically demanding, especially within parametric design frameworks.

The use of generative AI design systems such as Depthfusion in the design process introduces an innovative approach to this long-standing debate. While every detail of ornamentation must be meticulously defined and controlled by a set of parameters in parametric design, generative AI design models can autonomously generate ornamentation as an integral part of the design process “for free,” without requiring explicit instructions for each decorative element.

This ability of AI not only challenges the traditional dichotomy between form and ornament but also repositions ornamentation within the architectural discourse. It suggests that ornament no longer has to be a post-design consideration or an additional layer applied onto a form. Instead, form and ornament can emerge simultaneously and organically through the AI-driven design process, potentially leading to a more seamless integration of structural and aesthetic considerations, echoing and extending the modernist principle of “form follows function.”24

Ethical Considerations

Ethical considerations in the context of AI-driven design and fabrication are crucial to understand and address, especially as these technologies become more pervasive.

One significant issue is the potential for bias to be encoded in AI models. If a model is trained predominantly on data that reflect certain aesthetics or cultural biases, it may reproduce and reinforce these biases in its outputs. For instance, a generative design model may preferentially produce objects that conform to a particular cultural or historical aesthetic, consequently excluding or misrepresenting minority groups,25 including people with disabilities who may have different design needs. As an example, prompting a model to generate images of "Americans at work" yields people shuffling papers in offices, while prompting for images of "Native Americans at work" yields people in residential or boarding schools.26

Ownership and intellectual property rights pose another challenge. With the ability to easily replicate physical objects using AI and 3D printing, the boundaries of copyright may be rendered increasingly porous. The ease of replication has the potential to disrupt traditional notions of ownership, similar to what has been observed in the digital realm with media files. This could have significant implications for designers, inventors, and companies that rely on proprietary designs as a cornerstone of their business model.27

Moreover, there is a potential concern regarding creativity and the homogenization of design. Research indicates that as AI takes on more of the design process and ambiguity is reduced, the final designs tend to become less creative.28 If this trend continues, there might be a drive toward the mass production of designs with diminishing diversity, thus impacting the richness and variety of cultural artifacts.

Lastly, the implications of democratizing design and manufacturing through AI and 3D printing bring forth both opportunities and challenges in terms of resource use, sustainability, and environmental impact. The ability for anyone to generate and fabricate objects on demand could potentially lead to overconsumption and wastefulness: as generating objects becomes easier, single-use objects are more likely to be produced. For instance, while a sneaker made out of bread might demonstrate the technical capabilities of generative models like Depthfusion, it could at the same time be criticized as a superfluous use of resources, since it has no proper function or intent to be used beyond its production. In this sense, AI-driven design and generation could lead to excessive production and consequent wastefulness.

On the other hand, AI-driven design and generation could also improve resource use, sustainability, and environmental impact. For instance, in 2010, global plastics production totaled 265 million tons, a large percentage of which became waste.29 Common reasons for this waste include overstocking, product obsolescence, customer returns, product damage, and recalled or defective products.30 In this context, integrating AI-generated designs with sustainable fabrication methods presents a promising avenue for mitigating the environmental impact traditionally associated with mass production in factories. Sustainable printing techniques, such as ceramic 3D printing with clay and the reuse of plastics from previous prints in 3D printers,31 exemplify how innovative approaches can substantially reduce resource consumption and waste. By leveraging locally sourced materials, whether from previous prints or from nature, and producing goods on demand, reliance on large-scale manufacturing and its associated logistics, which contribute significantly to carbon emissions, can be decreased. This shift toward more sustainable manufacturing processes underscores the potential of AI and 3D printing to contribute positively to environmental stewardship and the circular economy rather than to overconsumption and waste.

As we move toward a future where the barriers to creating physical objects are significantly lowered, it becomes crucial to foster a culture in which creators consider the ecological footprint of their creations, opting for sustainable materials and minimizing waste wherever possible, and critically engage with the limitations of these models and the ways they might come to dictate design.

Conclusions

In conclusion, the experiments and insights provided in this article highlight the immense potential and practicality of generative 3D models in design and fabrication. Results from using our 3D model generation system, Depthfusion, demonstrate the feasibility of seamlessly transitioning from AI-generated concepts to physical objects across a variety of materials and scales, from delicate PolyJet 3D-printed tableware and large ceramic vases to unconventional bread sneakers. As we advance in this field, it is essential to address the ethical, legal, and societal challenges accompanying these technologies. Ensuring inclusive training data to prevent bias, rethinking intellectual property law in light of easily replicable designs, and safeguarding human creativity and diversity against homogeneous machine-generated solutions are just a few of the areas that require careful consideration and proactive measures.

Acknowledgments

The authors would like to acknowledge the Council for the Arts at MIT for their generous funding that supported part of this work. They also extend their gratitude to Timea Tihanyi, Rich Miner, Hideo Mabuchi, and Nelly-Charlott Schneider for their invaluable assistance and contributions to the experimentation with Depthfusion.

Authors’ Contributions

V.D.: Conceptualization (lead), software (lead), writing—original draft (lead), methodology (lead), and writing—review and editing (equal). C.G.: Conceptualization (supporting), writing—original draft (supporting), and writing—review and editing (equal). L.H.: Software (supporting). N.G.: Writing—review and editing (equal) and supervision (equal). P.M.: Writing—review and editing (equal) and supervision (equal).

Author Disclosure Statement

The authors declare that there are no competing interests.

Funding Information

This work was funded in part by the Council for the Arts at MIT (CAMIT), the MIT Center for Bits and Atoms, and the MIT Media Lab.


References

  • 1. Ramesh A, Pavlov M, Goh G, et al. Zero-shot text-to-image generation. In: International Conference on Machine Learning. PMLR; 2021; pp. 8821–8831.
  • 2. Ramesh A, Dhariwal P, Nichol A, et al. Hierarchical text-conditional image generation with CLIP latents. arXiv Preprint 2022. arXiv:220406125.
  • 3. Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 10684–10695.
  • 4. Ploennigs J, Berger M. AI art in architecture. AI Civ Eng 2023;2(1):8.
  • 5. Jain A, Mildenhall B, Barron JT, et al. Zero-shot text-guided object generation with dream fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 867–876.
  • 6. Poole B, Jain A, Barron JT, et al. DreamFusion: Text-to-3D using 2D diffusion. arXiv Preprint 2022. arXiv:220914988.
  • 7. Wang Z, Lu C, Wang Y, et al. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. arXiv Preprint 2023. arXiv:230516213.
  • 8. Lin C-H, Gao J, Tang L, et al. Magic3D: High-resolution text-to-3D content creation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2023; pp. 300–309.
  • 9. Ho J, Saharia C, Chan W, et al. Cascaded diffusion models for high fidelity image generation. J Mach Learn Res 2022;23(1):2249–2281.
  • 10. Nichol A, Dhariwal P, Ramesh A, et al. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv Preprint 2021. arXiv:211210741.
  • 11. Kim G, Kwon T, Ye JC. DiffusionCLIP: Text-guided diffusion models for robust image manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 2426–2435.
  • 12. Saharia C, Chan W, Saxena S, et al. Photorealistic text-to-image diffusion models with deep language understanding. Adv Neural Inf Process Syst 2022;35:36479–36494.
  • 13. Hakimshafaei M. Survey of Generative AI in Architecture and Design. University of California, Santa Cruz; 2023.
  • 14. Nichol A, Jun H, Dhariwal P, et al. Point-E: A system for generating 3D point clouds from complex prompts. arXiv Preprint 2022. arXiv:221208751.
  • 15. Liu R, Wu R, Van Hoorick B, et al. Zero-1-to-3: Zero-shot one image to 3D object. In: Proceedings of the IEEE/CVF International Conference on Computer Vision; 2023; pp. 9298–9309.
  • 16. Yi T, Fang J, Wu G, et al. GaussianDreamer: Fast generation from text to 3D Gaussian splatting with point cloud priors. arXiv Preprint 2023. arXiv:231008529.
  • 17. Gatis D. RemBG. GitHub repository; 2024.
  • 18. Dvorožňák M, Sýkora D, Curtis C, et al. Monster Mash: A single-view approach to casual 3D modeling and animation. ACM Trans Graph 2020;39(6):1–12.
  • 19. Patpatiya P, Chaudhary K, Shastri A, et al. A review on PolyJet 3D printing of polymers and multi-material structures. Proc Inst Mech Eng C 2022;236(14):7899–7926.
  • 20. Zocca A, Colombo P, Gomes CM, et al. Additive manufacturing of ceramics: Issues, potentialities, and opportunities. J Am Ceram Soc 2015;98(7):1983–2001.
  • 21. Loos A. Ornament and Crime. Gato Negro Ediciones; 2014.
  • 22. Semper G. Style in the Technical and Tectonic Arts, or, Practical Aesthetics. Getty Publications; 2004.
  • 23. Riegl A, Castriota D. Problems of Style: Foundations for a History of Ornament. Princeton University Press; 2018.
  • 24. Sullivan L. Form Follows Function. De la tour de bureaux artistiquement; 2010.
  • 25. Stap D, Araabi A. ChatGPT is not a good indigenous translator. In: Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP); 2023; pp. 163–167.
  • 26. Palmer A, Arppe A, Hermes M, et al. Computational linguistics, language technologies and the international decade of indigenous languages: Academic and community interactions. Colo Nat Resources Energy Env’t L Rev 2023;34:77.
  • 27. Lipson H, Kurman M. Fabricated: The New World of 3D Printing. John Wiley & Sons; 2013.
  • 28. Epstein Z, Schroeder H, Newman D. When happy accidents spark creativity: Bringing collaborative speculation to life with generative AI. arXiv Preprint 2022. arXiv:220600533.
  • 29. Dormer A, Finn DP, Ward P, et al. Carbon footprint analysis in plastics manufacturing. J Clean Prod 2013;51:133–141.
  • 30. Roberts H, Milios L, Mont O, et al. Product destruction: Exploring unsustainable production-consumption systems and appropriate policy responses. Sustain Prod Consum 2023;35:300–312.
  • 31. Mikula K, Skrzypczak D, Izydorczyk G, et al. 3D printing filament as a second life of waste plastics—a review. Environ Sci Pollut Res Int 2021;28(10):12321–12333.