Patterns. 2024 Jun 24;5(8):101011. doi: 10.1016/j.patter.2024.101011

Why brain organoids are not conscious yet

Kenneth S. Kosik
PMCID: PMC11368692  PMID: 39233695

Summary

Rapid advances in human brain organoid technologies have prompted the question of their consciousness. Although brain organoids resemble many facets of the brain, their shortcomings strongly suggest that they do not fit any of the operational definitions of consciousness. As organoids gain internal processing systems through statistical learning and closed loop algorithms, interact with the external world, and become embodied through fusion with other organ systems, questions of biosynthetic consciousness will arise.

The bigger picture

Brain organoids bear an uncanny resemblance to a miniaturized brain. They can be prepared from anyone by taking a few skin or blood cells, converting them to stem cells, and differentiating them to neurons with simple methods that extend their growth into three dimensions. Although only a few millimeters in size, they make a large diversity of brain cell types arranged in patterns that partially capture the developmental anatomy of the brain and emit a steady stream of electrical signals. Inevitably, the laboratory growth of brain-like tissue will engender questions about sentience and consciousness, from which point questions about the moral status of brain organoids will arise. Therefore, while acknowledging that brain organoid research may eventually lead to the creation of consciousness in the laboratory, that possibility lies beyond both current technology and the technologies of the near future.



Main text

I’m in the middle. I’m the partition. I’ve two surfaces and no thickness. Perhaps that’s what I feel, myself vibrating. I’m the tympanum. On the one hand the mind, on the other the word. I don’t belong to either.—Samuel Beckett (Three Novels: Molloy, Malone Dies, The Unnamable)

It may seem an ill-posed question to opine on whether or not brain organoids are conscious when scant agreement exists on what consciousness is. Little has changed since William James called consciousness “the most difficult of philosophical tasks” (1890); even a recent consensus glossary simply defaulted to the tautology of replacing the term consciousness with experience.1 What has changed is that the search is no longer strictly philosophical; it has become scientific, adorned with numerous theories and a sense of confidence that its elusive biological basis can be revealed.2 Do brain organoids fit any scientific theory or framework of consciousness? These three-dimensional cultures of neural cells derived from pluripotent stem cells can uncannily mimic some features of the brain’s organization, development, cellular diversity, and neurodevelopmental pathology.3,4,5 The use of assembloids that fuse two region-specific organoids, such as the developing thalamus and cortex, can increase the complexity of organoid connectivity.6 Brain organoids spontaneously exhibit complex spiking activity and local field potentials with features such as a log-normal distribution of short-latency coupling strengths and regional differences in interspike interval probability distributions.7

Despite these impressive achievements, any discussion of brain organoids and consciousness must first recognize their severe shortcomings and the barriers involved in overcoming them. Brain organoids are not a single thing: their differentiation can be directed toward different brain structures and cell types, their cell maturation is stunted, and their anatomy relative to the brain is distorted. The evolutionary expansion of the human sub-ventricular zone during development, which resulted in the enlargement of the human cortical upper layers, is not replicated in human brain organoids. The architectonic destination of precursors in the sub-ventricular zone is cortical layers 2–3, within which a highly complex intralaminar microcircuitry of hierarchical graphs mediates cortical computation.8 More generally, structural connectivity alone is insufficient to predict the dynamical behavior of neural circuits, a problem that relates to the small size of brain organoids and the areal scale at which the brain operates.9 The claim that brain organoids correspond to 20 weeks’ in vivo brain development or more obscures the fact that brain development is controlled by heterochronic genes, which makes assigning a small piece of organoid neural tissue to a developmental stage highly problematic.10 Their derivation from ectodermal origins leaves them without microglia and vasculature, in the absence of which there is often an acellular center. In addition, even under the most controlled conditions, some variation occurs from organoid to organoid. While rapid progress is being made toward solving many of these limitations, organoids as we now know them fall far short of consciousness, as outlined below. A great deal of work remains to replicate brain anatomy in organoids. Whether a more perfect organoid will achieve consciousness by some definition remains an open question. So, how well do brain organoids as currently configured fit theories of consciousness?

Of historic interest in the trend toward conferring legitimacy upon the study of consciousness, Francis Crick’s final paper, written on his deathbed and published in 2005, identified the claustrum as the seat of consciousness. This region is densely connected to widely separated sensory inputs throughout the cortex that presumably bind together the multi-faceted aspects of an experience, and it fits within a theme referred to as the “neural correlates of consciousness.” These explanations rely on the identification of a discrete brain anatomy that mediates consciousness. This degree of anatomical precision and connectivity has not been achieved in brain organoids, nor does discrete localization by itself explain consciousness.

A more recent approach is to find neural mechanisms that can explain aspects of consciousness. These theories of consciousness have been grouped into four categories: higher-order theories (HOTs), global neuronal workspace theories (GNWTs), integrated information theory (IIT), and re-entry and predictive processing theories. HOTs are rooted in the work of John Hughlings Jackson (1835–1911) concerning the hierarchical organization of the brain and later descriptions of brain organization known historically as the association cortices that include regions not involved in primary sensory or motor processing. In what Jackson called “dissolution,” higher centers disinhibited lower centers to bring about an action. In modern versions of this notion, vast cerebral cortical territories collaborate to form the “meta-representations” of consciousness according to HOTs. While the pre-frontal cortex is emphasized in this model, the theory depends not so much upon a neural correlate of consciousness but upon an integrative function within so-called “higher” centers.

GNWTs directly access consciousness by positing some subset of sensory input as being “broadcast” in a widespread workspace where the input is “ignited,” amplified, and sustained as a neuronal representation through network hubs in the fronto-parietal lobes.11 The global workspace can collaborate with centers of attention or working memory to achieve consciousness. Functional magnetic resonance imaging is consistent with a global workspace in that widespread cortical activity, mostly in the fronto-parietal and medial temporal lobes, is observed when a subject is conscious of a visual stimulus, whereas activity remains locally confined to the visual cortex when the stimulus is not consciously perceived.11 In neurodegenerative conditions such as Alzheimer’s disease and fronto-temporal dementia, consciousness can more insidiously erode due to spreading pathology through network hubs.12

IIT differs quite significantly from HOTs and GNWTs by not using the brain or, more specifically, the neural correlates of consciousness as a starting point. Rather, it begins with a set of axioms believed to be universal features of consciousness as well as physical mechanisms and deploys these axioms as a cause-and-effect structure operating through a grid of logic gates to derive claims about the properties of a physical substrate of consciousness.13 The method conveniently computes a single value known as Φ, a measure of the global state of consciousness. Such substrates need not even be biological material. However, in humans, a posterior brain “hot zone” appears critical as measured by the perturbational complexity index (PCI), an algorithm that uses brain responses to transcranial magnetic stimulation (TMS) measured as the degree of compressibility of an input string according to Lempel-Ziv complexity.14,15
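The compression step behind PCI can be illustrated with a short sketch. The snippet below counts phrases in an LZ78-style dictionary parse of a binary string, a simple cousin of the LZ76 complexity measure cited above; it is illustrative only, since the actual PCI pipeline also requires TMS-evoked, source-localized, binarized cortical responses and a normalization step.

```python
def lz_phrases(s: str) -> int:
    """Count phrases in an LZ78-style parse of a binary string.

    Illustrative stand-in for the Lempel-Ziv complexity used in PCI:
    more compressible (less complex) strings yield fewer phrases.
    """
    seen = set()
    phrase = ""
    count = 0
    for ch in s:
        phrase += ch
        if phrase not in seen:  # shortest not-yet-seen phrase ends here
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

# A monotonous "response" compresses well; an irregular one does not.
flat = "0" * 20
varied = "01101000110111100101"
assert lz_phrases(flat) < lz_phrases(varied)
```

The intuition PCI formalizes is exactly this contrast: a stereotyped, locally confined evoked response yields a low phrase count, while a differentiated yet integrated response spread across the cortex yields a high one.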

Predictive processing covers a great deal of conceptual territory before it even begins to wrangle with consciousness. Predictions, drawn from learning and intrinsic encoding of statistical regularities in the outside world, generate inferential hypotheses, or priors, that differentially align with sensory input. A mismatch, or prediction error, is propagated up the hierarchy to adjust higher-level hypotheses in a process of prediction error minimization, updated at various timescales and stages within the perceptual hierarchy of the brain regions where an internal model is embedded.16 This alignment is enhanced when the organism acts upon the environment, or imagines an action upon the environment, to create its own experience and generate meaning. This view is known as active inference, and its embodiment is known as enactivism.17,18 As an explanation of consciousness, predictive processing relies on inferential updating of perceptual hypotheses about the world and their probabilities, assessed through interactions between top-down and bottom-up information flow. The foundational work for these concepts was described by Stephen Grossberg and Gail Carpenter as adaptive resonance theory.19 Top-down expectations of bottom-up inputs, consistent with distinct oscillatory signatures observed mainly in the gamma band for feedforward processing and mainly in the alpha and beta bands for feedback processing, create resonant states of bidirectional information flow in which internally considered hypotheses emerge as the best probabilistic fit to expectations.20 These resonant states trigger learning of cognitive representations that serve as a type of consciousness. The detection of bidirectional phase coherency in an organoid is very challenging, and an anatomical substrate for top-down expectations of bottom-up inputs, possibly in the granular and supra-granular cortical layers, is ill defined in the organoid.
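A minimal numerical sketch may help fix the idea of prediction error minimization. Assuming a toy single-level Gaussian model (an illustration, not a claim about the brain's actual algorithm), an internal estimate is nudged by two precision-weighted prediction errors, one against the prior and one against the sensory sample, and settles at their precision-weighted compromise:

```python
def update_estimate(prior_mean: float, sensory: float,
                    pi_prior: float, pi_sensory: float,
                    lr: float = 0.1, steps: int = 200) -> float:
    """Toy prediction-error minimization for one Gaussian level.

    The estimate mu descends the gradient of the sum of squared
    prediction errors, each weighted by its precision (inverse variance).
    """
    mu = prior_mean  # start from the prior hypothesis
    for _ in range(steps):
        eps_prior = pi_prior * (mu - prior_mean)    # top-down error
        eps_sensory = pi_sensory * (mu - sensory)   # bottom-up error
        mu -= lr * (eps_prior + eps_sensory)
    return mu

# With sensory precision 3x the prior's, the estimate lands 3/4 of the
# way toward the sensory sample: the precision-weighted average.
mu = update_estimate(prior_mean=0.0, sensory=1.0,
                     pi_prior=1.0, pi_sensory=3.0)
```

The fixed point, (pi_prior * prior_mean + pi_sensory * sensory) / (pi_prior + pi_sensory), is the Bayes-optimal posterior mean for this toy model, which is why precision weighting is central to predictive processing accounts.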

Given the strong reliance on a well-organized brain anatomy, it is difficult to see how any of these theories of consciousness would be satisfied by brain organoids in their current state, with the possible, albeit dubious, exception of IIT. However, the TMS necessary to generate the data to compute PCI is challenging to apply to an organoid, and no such data are currently available. Reformulating the PCI computation, nicknamed “zap and zip,” for an electronic grid array capable of recording and stimulation may obviate the need for TMS.21

As brain content increases through experience, cumulative experience becomes learning, which may include a representation of a learned experience, and so at some level of learning a link to consciousness emerges. Some investigators have claimed that cultured neurons and organoids can learn. In one study, cultured neurons “learned” to play Pong, and in another, a brain organoid “learned” speech recognition using a Japanese vowel database.22,23 The only possible input from the external world to an organoid is via electrode stimulation. Learning was based upon the delivery of stimulus patterns in sequences that can shape the firing pattern of the organoid. The concepts of learning in the two experiments differed. In the Pong experiment, which was performed in a planar culture, not an organoid, a closed-loop system updated the culture based on the success of its play by delivering an electrical stimulus from the multi-electrode array. How a neuronal culture interprets the stimulus in the context of reinforcement is difficult to fathom. All that can be concluded, then, is that stimulation patterns can alter neuronal connectivity in a manner tenuously related to the stimulus, with weak statistical support. In the other study, learning of Japanese vowels utilized unsupervised statistical learning from training data to shape the functional connectivity within the organoid, which served as “an adaptive living reservoir” for higher-dimensional computation. Whether the altered connectivity corresponds to learning, defined as learning the statistical regularities in an actively sampled environment, again had weak statistical support. If learning can be convincingly demonstrated, learning a task might provide some content for a conscious experience, if one accepts that learning can, but does not necessarily, instantiate consciousness.
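The closed-loop logic common to both experiments can be caricatured in a few lines. The sketch below is a schematic, not either study's actual protocol: the "culture" is just a weight vector, its "plasticity" a crude error-driven nudge, and "stimulation" a random binary pattern with a constant bias channel.

```python
import random

class ToyCulture:
    """Stand-in for a neuronal culture: responds to a stimulus pattern
    and adjusts its connectivity ('weights') from feedback."""

    def __init__(self, n_channels: int = 8, seed: int = 0) -> None:
        self.rng = random.Random(seed)
        self.w = [self.rng.uniform(-1.0, 1.0) for _ in range(n_channels)]

    def respond(self, stim: list) -> float:
        return sum(wi * si for wi, si in zip(self.w, stim))

    def feedback(self, stim: list, error: float) -> None:
        # Crude plasticity rule: nudge active channels against the error.
        for i, si in enumerate(stim):
            self.w[i] -= 0.05 * error * si

def closed_loop(culture: ToyCulture, target: float,
                trials: int = 300) -> list:
    """Stimulate, read out, feed the error back; return per-trial errors."""
    errors = []
    for _ in range(trials):
        # Random binary stimulus; channel 0 is a constant bias input.
        stim = [1.0] + [culture.rng.choice([0.0, 1.0])
                        for _ in range(len(culture.w) - 1)]
        error = culture.respond(stim) - target
        culture.feedback(stim, error)
        errors.append(abs(error))
    return errors
```

In this caricature the loop provably converges, since it is just least-mean-squares adaptation, which underlines the point made above: shaping a firing pattern with feedback is easy to engineer, and on its own it says little about learning in any sense relevant to consciousness.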

Among the most salient features of a brain organoid is its complete detachment from a body, unlike a fetal brain, which is integrally connected to the body from the moment its heartbeat delivers a blood supply, the moment its gut sends signals of hunger, and the moment of quickening, when motor activity acts upon the environment in a feedback system. A brain organoid obviously has no history of any somatic experience, neither motor nor sensory. For proponents of embodied cognition, the disembodied organoid presents a fatal flaw to the presence of consciousness in an organoid. This view emphasizes the significance of an interaction between the agent’s physical body and the environment to compute the concepts behind the potpourri of terms attributed to consciousness, including representation, sentience, abstraction, perception, agency, and feeling.24 A direct connection between visceral inputs, known as interoception, and conscious vision has been reviewed, and the case has been made that visceral inputs offer a first-person perspective on consciousness and may serve to filter or facilitate information flow to consciousness.25 In the case of the brain organoid, not even a historical memory of a body exists, unlike the “brain-in-a-vat.”26 And yet, an organoid’s neural activity resembles the patterns we associate with encoding experience, such as phase locking of neuronal oscillations to spiking activity.7 Brain organoids spontaneously develop structured activity in which low-dimensional units form a backbone structure that marks population bursts, while a second, highly variable set of units may be available for plastic associations with the more rigid units.27 This pre-configured state, present in the absence of any experience, suggests a framework prepared to encode experience when it arises. We can then pose the question of whether a framework capable of encoding experience but devoid of experience is conscious. Can consciousness exist without content?

Like a brain, fundamentally, an organoid is a firing pattern. Experience and the implementation of our will are instantiated in waveforms whose still-opaque properties have not yet been sufficiently disentangled to discriminate the specifics of brain activity that confer consciousness. The brain organoid offers a useful heuristic for this problem. Patterns of stimulus delivery must do more than create a mirror of the input trained to perform some action; otherwise, attributing perception to a neuronal firing pattern is no different from saying a camera has vision. In brain-computer interface experiments, paralyzed individuals learn to control an armature with their thoughts.28 Thinking about moving a paralyzed limb activates many neurons, including motor neurons that fire with an arm movement. A recording is obtained from a subset of those neurons that happen to be activated by nearby electrodes, and that activation signal is directly linked to the movement of an armature. These activated neurons may not be the ones that endogenously implement a motor action. Rather, activating those neurons is a learned response: the thought of the arm movement, a complex and widely distributed brain activity, is trained to activate the neurons from which a recording was obtained, and activation of those neurons will move the armature. The trained neurons simply reflect an action instantiated in the brain as arm movement, setting up an enactivism pathway. The learning lies in the connectivity between the neurons that generate the thought and the small set of neurons linked to the armature. In the brain, complex processing between an input/output loop allows the brain to wire a representational network of the world based on the actions of the body. Updating the internal circuitry in a closed-loop design introduces an internal neuronal space for modifying an output along the lines of predictive coding.
György Buzsáki has referred to this concept as the brain from the inside out.29 In the absence of any motor output circuitry, a brain organoid cannot project an action upon the world as an agent of that action. The brain organoid lies in a representational limbo, not as an “island of awareness,” for there is nothing to be aware of, but as a cipher or computational package ready for the trappings of embodiment that could create the abstraction of an inside and an outside required for consciousness.30

As engineered functionality comes to enhance brain organoid capabilities through robotic attachments operating within closed-loop circuitries and in association with a more brain-like circuitry, the possibility that consciousness could emerge might be entertained more seriously. Such cyborg innovations might integrate AI systems; however, current AI recursion methods used in large language models and diffusion models remain far from replicating brain function and are currently less relevant to organoid consciousness. Nevertheless, AI has engendered its own vigorous discussion of in silico consciousness, a topic nicely reviewed in Butlin et al.31

However, the challenge of rigorously detecting consciousness remains. One view on organoid consciousness, termed the precautionary principle, argues that the question of whether organoids possess a substrate that fits any credible theory of human consciousness should be answered.32 A counter-argument is that function-based consciousness, i.e., fit with theory, should not serve exclusively as the basis for adopting precautions but rather requires an assessment of moral status. The judgment of moral status is related to the ontological principle, i.e., the potential for development into a human or an entity with a consciousness that resembles a human.33 The current state of brain organoid technology makes consciousness by either criterion highly implausible.34

The ethics of human brain organoid transplantation into animals is often predicated on the question of organoid consciousness or the acquisition of consciousness as the organoid integrates into animal brain circuitry.35 However, the implausibility of consciousness in organoids as we now know them raises the reverse question of how the human brain organoid affects the animal’s consciousness. When transplanted into the somatosensory cortex of newborn rats, human stem cell-derived cortical organoids integrated into sensory and motivation-related circuits.36 Transplanted organoids received thalamocortical and corticocortical inputs that produced sensory responses in human cells, and optogenetic activation of the organoid could drive reward-seeking behavior. This observation suggests that the processing of a sensory input to a rat brain can partially occur in human neurons and that the encoding structure within the human organoid can be loaded with rat experience. Presumably, the experience that can be loaded into a human organoid will differ greatly depending on the recipient species. For better or worse, the commercial entertainment sector is already there, with species blending of brains appearing in recent cinema such as The Lobster (2015) and Poor Things (2023). Such portrayals, which venture far into the absurd, well beyond anything remotely scientific, will influence public opinion on research. Being armed with knowledge of the underlying limitations of organoids is therefore a responsibility of scientists.

Acknowledgments

This article was not funded by any private or public funding sources.

Declaration of interests

K.S.K. is an employee and holds appointments as the Harriman Professor in Neuroscience and co-director of the Neuroscience Research Institute at University of California, Santa Barbara; is a member of the Alzheimer’s Disease Cooperative Study Epstein Family Research Scientific Advisory Board at University of California, San Diego; is a co-founder and director of the Tau Consortium at the Rainwater Charitable Foundation; has equity in and is a board member of Minerva Therapeutics; and receives consultancy fees from Expansion Therapeutics.

Biography

About the author


Kenneth S. Kosik, MA, MD, is a neuroscientist who served as professor at the Harvard Medical School from 1996 to 2004, when he became the Harriman Professor of Neuroscience and Co-Director of the Neuroscience Research Institute at the University of California, Santa Barbara. He co-authored “Outsmarting Alzheimer’s Disease,” co-founded the Learning and the Brain Conference for educators, and conducted seminal research in Alzheimer's disease genetics and cell biology. His work in Colombia on familial Alzheimer’s disease has appeared in The New York Times, BBC, CNN, PBS, and CBS 60 Minutes. His 2016 University of California Santa Barbara commencement address is archived here https://www.youtube.com/watch?v=Z8OR81ucHyY.

References

  • 1.Storm J.F., Klink P.C., Aru J., Senn W., Goebel R., Pigorini A., Avanzini P., Vanduffel W., Roelfsema P.R., Massimini M., et al. An integrative, multiscale view on neural theories of consciousness. Neuron. 2024;112:1531–1552. doi: 10.1016/j.neuron.2024.02.004. [DOI] [PubMed] [Google Scholar]
  • 2.Seth A.K., Bayne T. Theories of consciousness. Nat. Rev. Neurosci. 2022;23:439–452. doi: 10.1038/s41583-022-00587-4. [DOI] [PubMed] [Google Scholar]
  • 3.Berg J., Sorensen S.A., Ting J.T., Miller J.A., Chartrand T., Buchin A., Bakken T.E., Budzillo A., Dee N., Ding S.L., et al. Human neocortical expansion involves glutamatergic neuron diversification. Nature. 2021;598:151–158. doi: 10.1038/s41586-021-03813-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Lancaster M.A., Knoblich J.A. Organogenesis in a dish: modeling development and disease using organoid technologies. Science. 2014;345 doi: 10.1126/science.1247125. [DOI] [PubMed] [Google Scholar]
  • 5.Paulsen B., Velasco S., Kedaigle A.J., Pigoni M., Quadrato G., Deo A.J., Adiconis X., Uzquiano A., Sartore R., Yang S.M., et al. Autism genes converge on asynchronous development of shared neuron classes. Nature. 2022;602:268–273. doi: 10.1038/s41586-021-04358-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Andersen J., Revah O., Miura Y., Thom N., Amin N.D., Kelley K.W., Singh M., Chen X., Thete M.V., Walczak E.M., et al. Generation of Functional Human 3D Cortico-Motor Assembloids. Cell. 2020;183:1913–1929.e26. doi: 10.1016/j.cell.2020.11.017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Sharf T., van der Molen T., Glasauer S.M.K., Guzman E., Buccino A.P., Luna G., Cheng Z., Audouard M., Ranasinghe K.G., Kudo K., et al. Functional neuronal circuitry and oscillatory dynamics in human brain organoids. Nat. Commun. 2022;13:4403. doi: 10.1038/s41467-022-32115-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Peng Y., Bjelde A., Aceituno P.V., Mittermaier F.X., Planert H., Grosser S., Onken J., Faust K., Kalbhenn T., Simon M., et al. Directed and acyclic synaptic connectivity in the human layer 2-3 cortical microcircuit. Science. 2024;384:338–343. doi: 10.1126/science.adg8828. [DOI] [PubMed] [Google Scholar]
  • 9.Wang X.J., Kennedy H. Brain structure and dynamics across scales: in search of rules. Curr. Opin. Neurobiol. 2016;37:92–98. doi: 10.1016/j.conb.2015.12.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Lavazza A. Human cerebral organoids and consciousness: a double-edged sword. Monash Bioeth. Rev. 2020;38:105–128. doi: 10.1007/s40592-020-00116-y. [DOI] [PubMed] [Google Scholar]
  • 11.Dehaene S., Changeux J.P. Experimental and theoretical approaches to conscious processing. Neuron. 2011;70:200–227. doi: 10.1016/j.neuron.2011.03.018. [DOI] [PubMed] [Google Scholar]
  • 12.Huntley J.D., Fleming S.M., Mograbi D.C., Bor D., Naci L., Owen A.M., Howard R. Understanding Alzheimer's disease as a disorder of consciousness. Alzheimers Dement. 2021;7 doi: 10.1002/trc2.12203. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Oizumi M., Albantakis L., Tononi G. From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLoS Comput. Biol. 2014;10 doi: 10.1371/journal.pcbi.1003588. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Casali A.G., Gosseries O., Rosanova M., Boly M., Sarasso S., Casali K.R., Casarotto S., Bruno M.A., Laureys S., Tononi G., Massimini M. A theoretically based index of consciousness independent of sensory processing and behavior. Sci. Transl. Med. 2013;5 doi: 10.1126/scitranslmed.3006294. [DOI] [PubMed] [Google Scholar]
  • 15.Lempel A., Ziv J. On the complexity of finite sequences. IEEE Trans. Inf. Theor. 1976;22:75–81. doi: 10.1109/TIT.1976.1055501. [DOI] [Google Scholar]
  • 16.Millidge B., Seth A.K., Buckley C. Predictive Coding: a Theoretical and Experimental Review. arXiv. 2022 doi: 10.48550/arXiv.2107.12979. Preprint at. [DOI] [Google Scholar]
  • 17.Friston K. The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 2010;11:127–138. doi: 10.1038/nrn2787. [DOI] [PubMed] [Google Scholar]
  • 18.Seth A.K. A predictive processing theory of sensorimotor contingencies: Explaining the puzzle of perceptual presence and its absence in synesthesia. Cognit. Neurosci. 2014;5:97–118. doi: 10.1080/17588928.2013.877880. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Grossberg S. The link between brain learning, attention, and consciousness. Conscious. Cognit. 1999;8:1–44. doi: 10.1006/ccog.1998.0372. [DOI] [PubMed] [Google Scholar]
  • 20.van Kerkoerle T., Self M.W., Dagnino B., Gariel-Mathis M.A., Poort J., van der Togt C., Roelfsema P.R. Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proc. Natl. Acad. Sci. USA. 2014;111:14332–14341. doi: 10.1073/pnas.1402773111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Massimini M., Boly M., Casali A., Rosanova M., Tononi G. A perturbational approach for evaluating the brain's capacity for consciousness. Prog. Brain Res. 2009;177:201–214. doi: 10.1016/S0079-6123(09)17714-2. [DOI] [PubMed] [Google Scholar]
  • 22.Putnam H. Reason, Truth and History. Cambridge University Press; 1981. Brains in a vat; pp. 1–21. [Google Scholar]
  • 23.van der Molen T., Spaeth A., Chini M., Bartram J., Dendukuri A., Zhang Z., Bhaskaran-Nair K., Blauvelt L.J., Petzold L.R., Hansma P.K., et al. Protosequences in human cortical organoids model intrinsic states in the developing cortex. bioRxiv. 2023 doi: 10.1101/2023.12.29.573646. Preprint at. [DOI] [Google Scholar]
  • 24.Shapiro L., Spaulding S. In: The Stanford Encyclopedia of Philosophy (Summer 2024 Edition) Zalta E.N., Nodelman U., editors. 2021. Embodied Cognition.https://plato.stanford.edu/archives/sum2024/entries/embodied-cognition/ [Google Scholar]
  • 25.Tallon-Baudry C., Campana F., Park H.D., Babo-Rebelo M. The neural monitoring of visceral inputs, rather than attention, accounts for first-person perspective in conscious vision. Cortex. 2018;102:139–149. doi: 10.1016/j.cortex.2017.05.019. [DOI] [PubMed] [Google Scholar]
  • 26.Kagan B.J., Kitchen A.C., Tran N.T., Habibollahi F., Khajehnejad M., Parker B.J., Bhat A., Rollo B., Razi A., Friston K.J. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron. 2022;110:3952–3969.e8. doi: 10.1016/j.neuron.2022.09.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Smirnova L., Caffo B., Johnson E.C. Reservoir computing with brain organoids. Nat. Electron. 2023;6:943–944. doi: 10.1038/s41928-023-01096-7. [DOI] [Google Scholar]
  • 28.Chadwick E.K., Blana D., Simeral J.D., Lambrecht J., Kim S.P., Cornwell A.S., Taylor D.M., Hochberg L.R., Donoghue J.P., Kirsch R.F. Continuous neuronal ensemble control of simulated arm reaching by a human with tetraplegia. J. Neural. Eng. 2011;8 doi: 10.1088/1741-2560/8/3/034003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Buzsáki G. Oxford University Press; 2019. The Brain from inside Out. [Google Scholar]
  • 30.Bayne T., Seth A.K., Massimini M. Are There Islands of Awareness? Trends Neurosci. 2020;43:6–16. doi: 10.1016/j.tins.2019.11.003. [DOI] [PubMed] [Google Scholar]
  • 31.Butlin P., Long R., Elmoznino E., Bengio Y., Birch J., Constant A., Deane G., Fleming S.M., Frith C., Ji X., et al. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv. 2023 doi: 10.48550/arXiv.2308.08708. Preprint at. [DOI] [Google Scholar]
  • 32.Birch J., Browning H. Neural Organoids and the Precautionary Principle. Am. J. Bioeth. 2021;21:56–58. doi: 10.1080/15265161.2020.1845858. [DOI] [PubMed] [Google Scholar]
  • 33.Zilio F., Lavazza A. Consciousness in a Rotor? Science and Ethics of Potentially Conscious Human Cerebral Organoids. AJOB Neurosci. 2023;14:178–196. doi: 10.1080/21507740.2023.2173329. [DOI] [PubMed] [Google Scholar]
  • 34.Croxford J., Bayne T. The Case Against Organoid Consciousness. Neuroethics. 2024;17:13. doi: 10.1007/s12152-024-09548-3. [DOI] [Google Scholar]
  • 35.Kataoka M., Gyngell C., Savulescu J., Sawai T. The Ethics of Human Brain Organoid Transplantation in Animals. Neuroethics. 2023;16:27. doi: 10.1007/s12152-023-09532-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Revah O., Gore F., Kelley K.W., Andersen J., Sakai N., Chen X., Li M.Y., Birey F., Yang X., Saw N.L., et al. Maturation and circuit integration of transplanted human cortical organoids. Nature. 2022;610:319–326. doi: 10.1038/s41586-022-05277-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
