Abstract
Recently, consciousness research has gained much attention. Indeed, the question at stake is significant: why does the brain not merely compute, but generate a perception from within? Ambitious endeavors trying to simulate the entire human brain assume that the algorithm will do the trick: as soon as we assemble the brain in a computer and increase the number of operations per unit time, consciousness will emerge by itself. I disagree with this simplistic representation. My argument emerges from the “atomism paradox”: the irreducible space of the consciously perceived world, the endospace, is incompatible with the reducible and decomposable architecture of the brain or a computer. I will first discuss the fundamental challenges in current consciousness models and then propose a new model based on the fractality principle: “the whole is in each of its parts”. This new model copes with the atomism paradox by implementing an iterative mapping of information from higher-order brain structures onto smaller scales at the cellular and molecular level, which I will refer to as “fractalization”. This information fractalization gives rise to a new form of matter that is conscious (“bright matter”). Bright matter is composed of conscious particles or units named “sentyons”. The internal fractality of these sentyons closes a loop (the “psychic loop”) in a recurrent fractal neural network (RFNN) that allows for continuous and complete information transformation and sharing between higher-order brain structures and the endpoint substrate of consciousness at the molecular level.
Keywords: fractal, information sharing, recurrent, network, global workspace
This illusion is real: fallacies trying to undo the paradox of consciousness
In recent years, consciousness research, or better phrased, the set of scientific models explaining consciousness, has advanced tremendously. The task of finding these models is not trivial since they have to be grounded in the well-documented theories of cognitive neuroscience and neural network research. For the most part, adding consciousness to these conventional models is not considered necessary or useful in neuroscience. Currently, two of these models are well suited to build on: Baars’ global workspace and Tononi and Edelman’s dynamic core hypothesis (1–7). In both models, an integrative mechanism transforms a neural input into a conscious moment that resides at a particular locus in the brain. However, at some point of integration, one has to define the cellular or even molecular substrate and locus of this conscious moment. The definition of this endpoint substrate of consciousness is one of the most difficult tasks in consciousness research.
When it comes to models that cope with the mystery, or even paradox, of consciousness, I will go right to the issue at stake: there are no paradoxes in any proposed model if questions are answered within a terminology that presupposes consciousness. Such terminology rests on circular reasoning that evades any paradox and compromises a thorough discussion of consciousness. Therefore, I will not discuss models that consider consciousness an illusion. The illusion is only there because our consciousness determines what is perceived as reality. Without consciousness there is no perceived reality, and hence there is no illusion. I will also not embark on models that replace one feature of consciousness with another. This includes simplified field theories assuming the existence of some sort of physical field as the substrate of consciousness merely because our consciousness can imagine what a “field continuum” looks or “feels” like. Not to be misunderstood: there are sophisticated models within the realm of field theories. However, I will not resort to models that “smear” consciousness over a network of neurons without providing any kind of medium for remote information sharing among the neurons participating in this network. All of these models openly or tacitly assume that consciousness emerges from a physical field that behaves like a continuum filling the gaps between neurons, placating us with a “gut feeling” of distributiveness in unity. Many of these models risk explaining away the paradox of consciousness with fallacies that originate in the very paradox itself. Explaining it away through simplification is not helpful if we strive to truly understand consciousness and to develop new network models that cope with its paradoxical nature.
Why is consciousness intrinsically paradoxical? The paradox is what is commonly referred to as the “mind-body” or “binding” problem (8–9). Elements such as visual objects and sounds perceived in consciousness are entities distributed in a space that is also consciously perceived. This space looks as if it is in front of our eyes, but it is actually created behind them. The question at stake is: where behind our eyes is this space in our brains? I will refer to this space as “mind space” or “endospace”. The endospace will be defined as a space embedding a set of elements perceived in consciousness. The “binding” problem arises when one tries to map this set of elements onto brain units such as neurons or synapses, or any kind of potential molecular substrate of consciousness. The mapping operation of objects perceived in mind onto brain units down to the molecular level is inevitable if one seeks to identify the physical substrate of consciousness.
The binding problem is not trivial because neuroscience has attempted for more than a century to understand how exactly this mapping operation works: which neuronal activity underpins the contents of our memories, thoughts, and emotions, everything that makes us conscious. But why is it paradoxical? Let us take an object that pops up in our consciousness, for example a car in traffic on a street. To understand its neuronal computation we may start simple and propose that the car is there because a particular neuron is active. If so, then there is nothing wrong with proposing that there are many other neurons computing cars. And of course, there are also neurons computing streets, houses, pedestrians and so on. All the neurons need to do is to be active, or “to fire action potentials”. Each action potential, however, is a separate event. In the moment of firing, there is no direct or remote effect on any other action potential. Therefore, there is nothing to be shared; there is no medium for consciousness that unifies the action potentials into an undivided experience. Or simply put, the incompleteness of information in each neuron does not allow us to consciously perceive two or more things, such as cars and houses, at the same time because the computational processes (active neurons) are separated in space.
Of course, I am not the first to acknowledge this paradox. Philosophers have struggled for centuries with the attempt to accommodate the perceived unity of consciousness with a physical world that falls apart into molecules and atoms. Neuroscience, in particular, has done a great job in offering a solution to the paradox of consciousness by showing that on the level of neurons, information is actually somehow combined. The output action potentials of many neurons are combined as input potentials in the dendritic trees of the recipient neuron. If the combined potential is high enough, the recipient neuron fires, conducts its action potential to another neuron, and the process starts again (10). The McCulloch-Pitts threshold function and the Hebbian summation rule wonderfully explain how a neural network integrates information and is able to learn. However, they cannot explain what happens to all the information once contained in the dendritic inputs before being summed up in one all-or-nothing signal. It is simply gone in an action potential crunch. All the subtle diversity of the dendritic input is good for is getting the timing right for reaching the threshold of firing. Trying to guess the dendritic input from the action potential output is like trying to guess how many batteries of unknown voltage are linked in series to produce a particular output voltage: it is not possible. On the neuronal level (not on the level of the network or the immediate dendritic input) there is no information integration and sharing; there is only summation and the irreversible reduction of many bits of information to just one.
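The irreversibility of threshold summation can be made concrete in a short sketch (the amplitudes and threshold are illustrative assumptions, not physiological values): many distinct dendritic input patterns collapse onto the same all-or-nothing output, so the output spike cannot be inverted to recover its inputs.

```python
from itertools import product

THRESHOLD = 3.0  # hypothetical firing threshold (arbitrary units)

def fires(psp_amplitudes):
    """McCulloch-Pitts style rule: fire iff the summed potentials reach threshold."""
    return sum(psp_amplitudes) >= THRESHOLD

# Enumerate all 3-synapse input patterns with amplitudes 0, 1, or 2.
patterns = list(product([0, 1, 2], repeat=3))
firing = [p for p in patterns if fires(p)]

# Many distinct input patterns yield the identical output "spike":
print(len(firing))  # 17 of the 27 patterns become indistinguishable
```

Like the battery analogy above, the mapping is many-to-one: none of the 17 firing patterns can be told apart from the single output bit.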
Once again, this problem has been recognized before and solutions have been offered. The dendritic tree has been suggested as the seat of consciousness because the pre- and post-synaptic input formation is still there, and its summation may actually be modulated by an act of will, as the result of a conscious operation, be it reasoning or emotion (11–16). Unfortunately, that does not solve the core problem of an atomistic view of consciousness: each pre- or post-synaptic potential above or underneath a dendritic spine is as isolated as action potentials are. How is information integration and sharing achieved without destruction by irreversible summation?
Most recent models speculate that the electromagnetic field does the trick (2, 11, 17–26). It is caused by any post-synaptic potential and extends over the dendritic tree, over neurons, and finally, over the whole brain. However, in any physical field, a local field potential is like an atomistic event. It is influenced by other field potentials, but this is no more than a summation at a point in space. Hence, when translated into a chemical event that may affect a neuron, such as the opening of an ion channel, the summation of electromagnetic field potentials is indistinguishable from the summation of post-synaptic potentials. We gain nothing; rather, we now have to explain how summed-up but weak fields can affect the opening characteristics of remote ion channels. What is needed is an integration rule that does not just sum up signals in the form of post-synaptic dendritic potentials and axonal action potentials. If we want unity in consciousness, a mapping operation is needed that preserves the wholeness of information while integrating its parts.
Dendritic fractal antennas: a nest for integrated information in consciousness
Dendritic trees are the antennas of neurons. With thousands of input synapses coming from other neurons, the dendritic tree is where information is processed and integrated. This information is encoded in a map, with each activated synapse being a “bit” in a string of incoming action potentials. Many researchers have thought of the dendritic tree as the site where consciousness happens: Sir John Eccles, Nancy Woolf, Roger Orpwood, Steven Sevush, and Stephen LaBerge, to name but a few (11–12, 14–16). However, little is known about how the activation pattern in the dendritic tree is encoded by the network activity of input neurons. In other words, what is the transformation operation mapping the input pattern of a neural network onto the dendritic pattern of a neuron within the network? One rule we have learned so far is that if information is to be perceived in consciousness it has to be complete: we can only be aware of perceptions that are integrated in one physical unit such as a cell or parts of it. This completeness implies that the transformation that maps the neural network activity onto the dendritic tree is all-inclusive and convergent. The complete network activity is, in effect, downscaled onto the dendritic tree of single cells.
To understand this downscaling operation we have to discuss how incoming action potentials are actually mapped onto and integrated in the dendritic tree of the recipient or client neuron. The over-the-threshold integration rule of post-synaptic potentials is a timing function that allows for the generation of a new action potential by the recipient neuron if the post-synaptic potentials arrive at the trigger zone at the same time. Therefore, the trigger zone is a coincidence detector, which is activated when the dendritic delay or transport times (dt) of two post-synaptic signals are identical (Fig. 1A). This is easy when the injection sites of two synaptic inputs have the same distance to the trigger zone: the action potential is generated when the two incoming signals arrive at the same time; they are coincident. Signals that arrive later (such as signal 3 in Fig. 1A) simply do not contribute to the over-the-threshold summation. However, in recurrent networks, the total looping time (lt) is the sum of the dendritic signal delay (dt) and the signal traveling time in the recurrent network (recurrent signal delay time, or rt) (Fig. 1A). Only if the total looping times of two or more recurrent signals are identical does over-the-threshold summation happen and the client neuron fire. The client neuron then distributes its action potential to many other neurons in the network and receives recurrent input from these network neurons until coincidence by identical looping times stabilizes an orbit of recurrent back-projections, which I will term “psychic loops” (Fig. 1B).
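A minimal sketch of this coincidence rule (the delay values are assumed for illustration): a recurrent signal is amplified only when its total looping time lt = dt + rt matches that of another loop, so that the two spikes coincide at the trigger zone.

```python
loops = {
    # loop name: (dendritic delay dt [ms], recurrent network delay rt [ms])
    "distant_network": (2.0, 8.0),  # proximal synapse (short dt), long rt
    "nearby_network":  (8.0, 2.0),  # distal synapse (long dt), short rt
    "unmatched":       (5.0, 3.0),  # lt = 8 ms, no coincidence partner
}

def phase_locked(loops, tol=0.1):
    """Group loops whose total looping times lt = dt + rt coincide within tol."""
    groups = {}
    for name, (dt, rt) in loops.items():
        key = round((dt + rt) / tol)  # quantize lt to the tolerance window
        groups.setdefault(key, []).append(name)
    return [g for g in groups.values() if len(g) > 1]

print(phase_locked(loops))  # [['distant_network', 'nearby_network']]
```

The distant network with the short dendritic delay phase-locks with the nearby network with the long dendritic delay (lt = 10 ms for both), while the loop with lt = 8 ms finds no partner and is not amplified.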
Figure 1. Algebra of recurrent fractal neural networks (RFNNs) and psychic loops.
A. Model neuron with identical signal delay times (dt) in the dendritic tree. The synaptic inputs are injected at identical distances to the trigger zone. B. Model neuron with distinct signal delay times in the dendritic tree. In recurrent networks, backprojections are amplified when the total looping time (lt), the sum of dt and the recurrent delay time (rt), is identical for each signal. In this case, the signal with the longer distance to the trigger zone is backprojected earlier and coincides with the one injected at a shorter distance but at a later time point. Phase-locking of the recurrent signals by coincidence at the trigger zone (rt2 + dt2 = rt1 + dt1) is possible because the signal velocity in the dendritic tree is 100 times slower than that of axonal signal transport between neurons. The connectivity of these neurons is reciprocal and inverse (more distal neurons backproject onto more proximal synapses). C. Model neuron with reciprocal, inverse, and fractal connectivity. It is fractal because the distance function between dendritic spines and networked neurons is self-similar. Recurrent backprojections with identical looping times amplify themselves. They are called “psychic loops”.
Because signals propagating down the dendritic tree are 100 times slower than axonal transport within the network, the total signal looping times of a close and a distant network can be identical, depending on where on the dendritic tree the recurrent signal is injected into the client neuron (13, 27–28). This leads to an inverse relationship (or connectivity) between the signal delay times in the dendritic tree and the recurrent signal delay times of the psychic loops. In recurrent networks with inverse connectivity, a downscaling operation implies that a more distant neuron plugs into a synapse proximal to the cell body or trigger zone of the client neuron (Fig. 1B). Or simply put: the farther the signal travels outside of the client neuron, the shorter it travels inside the neuron when the signal returns. The branching architecture of the network and that of the dendritic tree of the client neuron are congruent: if you transform (expand and rotate) the map of active synapses in the dendritic tree of the client neuron, it will be similar to the map of active neurons in the neural network connected to it. This intrinsic self-similarity of a contraction mapping operation is found in particular geometrical objects: fractals. Recurrent neural networks with fractal branching architectures are termed “recurrent fractal neural networks” or RFNNs, a network model I invented about ten years ago (13).
“Fractal” is a term introduced by Benoit Mandelbrot in 1975 for recursive objects with self-similarity on every scale, which therefore consist of parts that are similar to the whole. Fractals originate in the ideas of many mathematicians of the 17th–19th centuries, although the use of fractal patterns goes back to ancient Middle Eastern and African art. Fig. 2A shows a selection of fractal objects that can be transformed into each other using the equations given beneath these objects. Objects 1 (Sierpinski triangle) and 3 (dendritic fractal) are shapes that have found widespread application in devices everyone uses: cell phones. Since their invention by Nathan Cohen in 1985, fractal antennas have found their way into many cell phones. They make long antennas obsolete because of their excellent resonance performance over a broad range of frequencies. All natural objects, whether inanimate or animate, are fractal on some scale. Similar to fractal antennas in cell phones, dendritic trees in neurons are dendritic fractal antennas receiving a wide spectrum of frequencies of synaptic activity (Object 3 in Fig. 2A).
Figure 2. Fractal transformation and RFNNs.
A. Transformation of three self-similar geometric objects. Object 1, Sierpinski triangle; object 2, hexagonal lattice; and object 3, a dendritic tree. It should be noted that object 3 also transforms directly into object 1. The Sierpinski triangle illustrates the contraction mapping operation that lays the foundation of fractal logic: signals in the downscaled triangles never merge, but converge in a Cauchy-type sequence. Therefore, the whole is always preserved in each of its parts, a prerequisite for the reversibility of the downscaling operation. The equations underneath the geometric objects describe transformation functions to convert self-similar or fractal objects into each other. B. Psychic loop between neurons in an RFNN and the fractal dendritic tree of a single neuron within this network. The fractal geometry of the RFNN is similar to that of a Sierpinski triangle (Object 1 in A) with clusters of neurons (here a number of 3) on various scales. The psychic loop transforms this geometry into that of fractal dendritic trees (Object 3 in A) with clusters of branches (also a number of 3 in our example) in single neurons within the RFNN.
To describe the geometric principles of fractals it is sufficient to understand the following fractal scaling function, which is based on a particular relationship between the number of substructures (or pieces) m and their size reduction r:
m = (1/r)^D, or equivalently, D = log(m)/log(1/r) (Equation 1)

where D is the self-similarity or fractal (Hausdorff) dimension. This formula can be translated into dendritic and neural network branching structures (13): D = log(a)/log(1/r), with a = number of branches per node.
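Equation 1 can be checked with two worked examples (a small sketch; the branching values are illustrative):

```python
import math

def fractal_dimension(m, r):
    """Self-similarity (Hausdorff) dimension D = log(m)/log(1/r)
    for m pieces, each reduced in size by the factor r."""
    return math.log(m) / math.log(1 / r)

# Sierpinski triangle: m = 3 pieces at half size.
print(round(fractal_dimension(3, 1 / 2), 3))  # 1.585

# Binary tree: a = 2 branches per node at half size.
print(round(fractal_dimension(2, 1 / 2), 3))  # 1.0
```

The first value matches the D range of 1.5–1.8 reported for neural networks and dendritic trees projected onto 2-D planes.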
From the scaling function it can be readily seen that equivalent fractal structures, such as the Sierpinski triangle (Object 1 in Fig. 2A) and a dendritic network (Object 3 in Fig. 2A), can be transformed into each other. This is achieved by linear affine transformation as calculated from the equations underneath the fractal objects. The reason for this transformability is the construction principle based on contraction mappings: the whole is repeatedly mapped onto each of its parts. Each mapping operation is a transformation achieved by downscaling (size reduction factor r) and a rotation (rotation angle θ). The simplest form of this transformation function (f) is given for the dendritic fractal (Object 3) in Fig. 2A. If we embed this function into an iterated function system, which repeats the contraction mapping operation (linear affine transformation) at smaller and smaller sizes, we obtain the Hutchinson operator. The Hutchinson operator (H(S)) copies downscaled versions of the image Sn onto itself so that the resulting image Sn+1 is the union of the fi(Sn):
Sn+1 = H(Sn) = f1(Sn) ∪ f2(Sn) ∪ … ∪ fN(Sn) (Equation 2)
In fractal image compression, this Hutchinson operator is used to achieve compression ratios of 50:1 in image storage, which is similar to JPEG compression (29). Higher ratios (>150:1) can be achieved when images, such as natural sceneries, contain intrinsic self-similarity. If we consider each neuron in a network as a “pixel” in an image (neuron fires = black pixel = [1], neuron does not fire = white pixel = [0]), then the Hutchinson operator will map or copy the distribution of this “neuronal activity pattern” onto the synapses in the dendritic tree of each neuron (the “dendritic activity pattern”).
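As a hedged illustration (the grid size and the three copy offsets are my assumptions, not taken from the compression literature), the Hutchinson operator can be sketched on a binary point set: each iteration unions three half-scale copies of the pattern, and the iteration converges to a Sierpinski-like attractor regardless of the starting image.

```python
def hutchinson(points, size):
    """S_{n+1} = union of f_i(S_n): three copies downscaled by r = 1/2."""
    half = size // 2
    shrunk = {(x // 2, y // 2) for (x, y) in points}  # contraction mapping
    return (shrunk |                                   # f1: copy at the origin
            {(x + half, y) for (x, y) in shrunk} |     # f2: copy shifted in x
            {(x, y + half) for (x, y) in shrunk})      # f3: copy shifted in y

size = 64
S = {(x, y) for x in range(size) for y in range(size)}  # start from a full square
for _ in range(6):
    S = hutchinson(S, size)

# The attractor approximates a Sierpinski triangle with 3^6 occupied cells.
print(len(S))  # 729
```

Reading the starting square as a “neuronal activity pattern”, each iteration is one downscaled copy of the whole onto a part, which is exactly the contraction mapping the text proposes for network-to-dendrite downscaling.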
If the brain is intrinsically fractal, we should find similar geometric features on various scales of brain morphology and neuronal architecture. Indeed, the fractal dimension, the scaling rule relating the number of parts to the length or area on each scale, is very similar for neural networks and dendritic trees (D = 1.5–1.8 in projections onto 2-D planes) (13, 30–31). Since fractal architecture can be found on several scales of brain morphology and connectivity, two questions arise. First, what would fractal connectivity imply for brain functions such as memory storage and retrieval? And second, is it possible to define fractal connectivity also on the molecular level in individual neurons? In other words, will the downscaling operation entail information compression onto a molecular substrate that is able to generate consciousness?
Return to sender brain mail: fractal networks and memory retrieval
One of the gigantic tasks in cognitive neuroscience is to explain how long-term memory storage and retrieval work. We have a pretty good idea of how the hippocampus deals with short-term memory, both in terms of neural network models and the underlying biological circuitry. However, how do we suddenly recall an event that happened years ago? How does the brain know where to find and retrieve the stored information? Let us compare a brain to a computer. Computers are simple. Each operating system has some sort of file allocation table (FAT). This table allocates a file name to a location on a storage device such as the computer hard drive. All the operating system has to do is use the FAT to identify the source address when a file is requested. Then the head of the hard drive moves to the position on the platter where the file is stored.
Finding the precise source address is the tough part when it comes to long-term memory in brains. Let us take the simplest case: local storage in specific memory neurons, so-called “grandmother cells”. This “localist” model assumes that bits of memory, like the face of my grandmother, are stored as a package in one cell. If I want to retrieve the face of my grandmother, all I need to do is to make this cell fire (32–36). The localist model is attractive because information can be compartmentalized, tagged, and retrieved as a whole. This is supported by activity recordings clearly showing that single neurons can hold quite a bit of information. On the other hand, episodic and autobiographic memory appears not to be just a sequence of pictures, which makes it difficult to imagine how a memory stream is retrieved from individual grandmother cells. Moreover, since most of the time these cells should be silent, how is a memory trace established that allows for easy access and retrieval? Or put in simpler words: how is the address of a particular grandmother cell found when memory is to be retrieved?
In contrast to localist models, “distributionist” or “connectionist” models propose that memory traces are globally distributed throughout the brain. This allows for local access to the entirety of memory regardless of where the focal spot of consciousness rests. One of the most beautiful descriptions comes from Dennis Gabor who described memory storage and retrieval as a hologram with the frequency spectrum as convoluted memory trace (37–38). Neuroscientific approaches locating memory traces seem to support and refute distributionist models at the same time. Surgical removal of cortical areas gradually diminishes memory regardless of the incision site, which supports the idea of distributed memory. Nevertheless, local responses of single neurons suggest that a memory trace is stored in discrete units allocated to distinct neurons. How can these models be reconciled?
One possibility is that any kind of target neuron in a neural network can be identified and activated without knowing its address. The client neuron attempting to retrieve a memory trace is literally blindfolded and will spam a variety of routes with activity until the one is found that gates the client to its targeted memory. How does the blind neuron know when the correct route is found? Let us switch gears and discuss an example illustrating the dilemma and its potential solution. Imagine the client neuron is like Robinson Crusoe stranded on an island named “neuronesia” (Fig. 3). Robinson tries to find his way out of neuronesia. He knows that in three directions off shore, there are other islands that can get him home. To reach one by boat, Robinson wants to find the island with the shortest distance to neuronesia. He has three monkeys who can help him. Each monkey receives a bottle with the message “Return to sender”. The monkeys are asked to run straight to the shore, throw the bottle into the ocean, and wait until it returns. Then they pick the bottle up and bring it back to Robinson. To Robinson’s surprise, all of the monkeys return their bottles to him at the same time. How will Robinson decide which direction to sail to reach the island with the shortest distance? He will take the direction of the monkey with the longest running distance to the shore. Why? Because the forward and backward signal delays in all three directions added up to the same time: the longer the bottle was carried on land, the shorter it traveled on the sea. Or in other words: there is an inverse distance relationship between the signal traveling times outside and inside of neuronesia.
Figure 3. The Robinson Crusoe problem of finding the shortest distance to escape from neuronesia.
The messages in the bottle are returned at the same time. Since the total signal delays are equal, there is an inverse distance relationship between the traveling time within neuronesia and outside on the ocean. A similar relationship is proposed for phase-locked reentrant loops between single (client) neurons and secondary neural networks connected to them (see Fig. 1). The signal delay on neuronesia is that of action potentials traveling down the dendritic tree of the client neuron, while that on the ocean is the axonal traveling time and the signal delay for looping through secondary neural networks until they re-enter the dendritic tree of the client neuron.
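The Robinson Crusoe problem can be put in numbers (all times are illustrative assumptions): every bottle returns after the same total time, so the sea-travel time is the total minus the land-travel time, and the monkey with the longest land run marks the shortest sea crossing.

```python
T_TOTAL = 60.0  # minutes until every bottle returns (assumed)

# Each monkey's round-trip running time on land, per direction (assumed).
land_times = {"north": 10.0, "east": 25.0, "south": 40.0}

# Inverse distance relationship: sea time = total time - land time.
sea_times = {d: T_TOTAL - t for d, t in land_times.items()}

nearest = min(sea_times, key=sea_times.get)
print(nearest)  # 'south': longest land run, hence shortest sea crossing
```

In the network reading, the land times play the role of dendritic delays (dt) and the sea times that of recurrent delays (rt), with the fixed total corresponding to the looping time lt.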
How does this story translate into a neural network model? Imagine that neuronesia is actually the client neuron that wants to connect to a peripheral network holding the desired memory trace. The traveling time of the bottle on the ocean is that of the signal sent by the client neuron to secondary neurons in a peripheral memory network and back. In Fig. 1A, this corresponds to the recurrent signal delay time (rt). The traveling time of the bottle carried by the monkey back to Robinson Crusoe is that of a post-synaptic action potential propagating down the dendritic tree, which corresponds to the dendritic signal delay time (dt) in Fig. 1A. Robinson is waiting at the integration sites where the post-synaptic signals coincide, e.g., the soma or trigger zone where the action potential is generated and the client neuron fires. A memory trace is identified when it partakes in an “integrate and fire” mechanism with other synaptic input on the dendritic tree of the client neuron. In the previous section, we discussed these features as characteristics of RFNNs, except that we have now extended the concept of psychic loops to memory traces. The inverse distance relationship between signal delay times within the dendritic tree of the client neuron and those of recurrent signals looping through the connected networks has another interesting consequence: the longer and more complex the apical dendritic tree, the farther away the remote networks that phase-lock with the proximal networks. In other words: more complex dendritic trees can connect to more complex memory networks and therefore retrieve richer memories. It is thus not surprising that the dendritic trees of neurons with recurrent connections are extended and complexly branched (39–40). It is also not surprising that the reduction in length and complexity of apical dendritic trees in the hippocampus during aging is concurrent with the loss of memory formation (41–42).
At this point, it should be noted that the term “recurrent” is sometimes used in a more narrow sense: axonal branches looping back into the dendritic tree of the neuron from which they originated are called “recurrent collaterals” (43). Typically, neurons with recurrent collaterals are pyramidal neurons with an extensively branched apical dendritic tree. Examples for these neurons can be found in the olfactory system and the hippocampus. In accordance with the original description by Hopfield, I use “recurrent” in a broader sense: Any feedback loop coming from secondary neurons, even if part of a remote memory network, can create a recurrent network. Often, this sort of network connectivity is called “reciprocal associative” and establishes signal back-projection in mutually connected feed-forward networks (44–45). Examples are circuits connecting the anterior olfactory cortex with the piriform cortex, and the dentate gyrus with the CA3 field.
Are these networks examples of brain circuits supportive of the inverse connectivity proposed for RFNNs? Piriform cortex as well as CA3 pyramidal neurons have recurrent collaterals looping action potentials back into their apical dendritic trees, and they receive feedback from secondary neurons in the olfactory cortex or dentate gyrus, respectively. Recently, the traditional trisynaptic circuit of the hippocampus, dentate gyrus (DG)-to-CA3-to-CA1, has been suggested to include a CA3-to-DG recurrent loop (39, 46). Fig. 4A shows that pyramidal CA3 neurons are special in that they send recurrent branches of their axon back onto their own apical dendritic tree, so-called recurrent Schaffer collaterals (RSCs) (47). The RSCs send input to the “middle part” of the dendritic tree, while the recurrent or back-projection input from the more distant DG is received close to the soma of the CA3 neuron (39, 47). Hypothetically, these two recurrent inputs, the RSCs and the CA3-to-DG loop, would be able to integrate and form phase-locked signal loops. In real tissue, however, the situation is more complicated since many back-projected inputs are inhibitory (46). The real tissue situation is also more intricate because input from the entorhinal cortex (EC in Fig. 4A) is injected twice: into the DG and into the CA3 neurons. Although the selection of “psychic loops” as depicted in Fig. 4A and B is quite possible with excitatory as well as inhibitory back-projections, the architectural model of RFNNs for the olfactory cortex and the hippocampus remains hypothetical.
Figure 4. Recurrent network in the hippocampus.

A. In the hippocampus, a trisynaptic circuit directs input signals from the entorhinal cortex (EC) to the dentate gyrus (DG), CA3, and eventually CA1. Recently, CA3-to-DG backprojections have been suggested. These backprojections could phase-lock with the feedforward (FF) DG-to-CA3 connections and form a psychic loop, a recurrent connection that retrieves a memory trace fed into consciousness. The DG would work as a router network for the CA3 client neuron. B. Canonical microcircuit in the striate cortex. In this model, the inhibitory connections (grey, interneuron) are weaker than the excitatory backprojections (black, pyramidal neurons of layers 2/3 and 5/6). C. The router network is hierarchically organized in that coarse features get refined the deeper the psychic loop connects to secondary networks holding a memory trace. This will increase the complexity of the reciprocal connections.
Other back-projection circuits that are interesting with respect to the modeling of RFNNs in real brain tissue are the thalamo-cortical and cortico-cortical circuits based on canonical elements, as shown in Fig. 4B (21, 48–49). As with the olfactory cortex, it is not known whether the proposed mechanism of inverse connectivity is active in these networks. It should be noted, though, that the olfactory cortex as well as the hippocampus are conserved memory networks that are vital for our survival. The olfactory system is necessary to tell friend from enemy (or bait) using a memorized odorant signature when the visual system is not yet developed or fails. The hippocampus is crucial for our spatial orientation and for remembering the location of food or shelter. Therefore, it is not surprising that memory consolidation is critical and that synaptic plasticity and strengthening rely on robust phase-locking of input signals.
Since the “return to sender” signals carry no recipient address, how will the client neuron ever find and phase-lock the right memory network to connect with? As mentioned earlier, the client neuron is like a “blindfolded spammer”: if it receives sufficient external input, it sends action potentials to all neurons in a “router network” (Fig. 4C). Importantly, the router network does not hold the memory trace itself, only the route leading to it. If the current input pattern on the dendritic tree of the client neuron is in phase with the recurrent signals from the router network, it connects with other peripheral networks to retrieve the actual memory trace. In his fascinating book “Proust was a neuroscientist” (50), Jonah Lehrer uses a neat example of how current input is almost immediately connected to a pertinent memory trace. The protagonist in Marcel Proust’s novel “In Search of Lost Time” suddenly experiences his entire childhood memory because he bites into a butter cookie that tastes exactly like the ones his grandmother served him when he was a child. One could see this as the unique work of a “grandmother neuron” and its connection to a “butter cookie neuron”. However, it makes more sense to assume that the synaptic input signature of the butter cookie smell and taste fits the recurrent input signature of a router network and, in turn, is phase-locked with one particular route, which retrieves the memory trace representing the childhood experiences.
In neural network theory, this is commonly referred to as “pattern completion” (51). Haberly has proposed an intriguing model for reconstructing a complete signal pattern from an incomplete input of overlapping odorant signatures. These overlapping signatures emanate from the activation of G-protein coupled receptors that respond to a particular odor in a combinatorial manner rather than on a one odor-to-one receptor basis (52). In the Haberly model, recurrent but incomplete odorant signatures repeatedly integrate with input layer neurons to strengthen a signal pattern that represents the complete odorant information of an object (51).
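The principle of pattern completion can be illustrated with a classic associative memory. The sketch below implements a minimal Hopfield-style network, a standard textbook mechanism for pattern completion; it illustrates the principle only and is not an implementation of the Haberly circuit (the pattern values and sizes are arbitrary):

```python
# Minimal Hopfield-style associative memory illustrating "pattern completion":
# a partial/corrupted input is iteratively driven back to the stored pattern.
# Illustrative sketch only, not a model of the olfactory circuit itself.

def train(patterns):
    """Hebbian outer-product weights for +/-1 patterns (zero diagonal)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronously update all units until the state settles."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# A stored "odorant signature" (+1 = active input, -1 = silent).
stored = [1, 1, -1, -1, 1, -1, 1, -1]
w = train([stored])

# Present an incomplete version: two entries corrupted.
partial = [1, -1, -1, -1, 1, -1, -1, -1]
completed = recall(w, partial)
print(completed)  # [1, 1, -1, -1, 1, -1, 1, -1] -- the stored signature
```

Presenting the corrupted signature drives the network state back to the stored pattern, which is the sense in which an incomplete input can retrieve complete information.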
Our model is similar in that an externally generated signal input pattern to the client neuron is repeatedly integrated with the network activity of the router network by recurrent signal loops. For example, a face seen in a crowd generates an external input that is fed forward into the router network (Fig. 4C). This network does not hold the memory trace of a particular face, but it phase-locks into a sub-network recognizing shapes with repeatedly enclosed and symmetric contours. For example, seeing a particular shape for the first time, the brain would not know whether it looks at a face with two eyes or a house with two windows (Fig. 4C). Because this recognition pattern is fuzzy and incomplete, the brain will recruit the next hierarchy of networks (e.g., houses, faces, cars, etc.) until the network has found or been routed into a particular memory trace. In this case, input and recurrent signal patterns are identical and the object is recognized. In a way, the router network is like the file allocation table (FAT) in your computer, except that memory traces, the stored files of the “past”, have no addresses. Instead, they are “chosen” when recurrent input of the previous moment phase-locks with input of the current moment, the actual “present” of the client neuron. Of course, as memories are collected throughout life, the router network will evolve and its architecture will become more complex. The rapidly growing complexity of our router networks raises the question of how client neurons stay connected with their memory traces and, at the same time, remain the locus at which consciousness unfolds. Or in other words: how do these cells manage to lose neither their memories nor their ability to become conscious?
About the clout of a crowd in the cloud: single neurons rally in a conscious flash mob
Three researchers currently discuss the paradox of consciousness, the impossibility of its distributed representation in the brain, and the idea of single neuron consciousness as a possible solution: Steven Sevush, Jonathan Edwards, and myself (12–13, 29). Related to consciousness in individual neurons are also hypotheses that propose a continuous physical substrate between adjacent neurons, such as Stuart Hameroff’s dendritic gap junction model creating a “conscious syncytium” or hyperneuron (53). Although these models of single neuron or hyperneuron consciousness may differ or even disagree in technical details, their solution of the paradox is to assume that consciousness resides within a single cell or a syncytium of single cells. In these models, all of the information distributed over a neural network is, at some point in space and time, mapped onto a single neuronal entity. Even Cajal called pyramidal neurons “psychic corpuscles”, implying a role of single neurons in consciousness. Convergent information integration on the level of individual neurons solves the problem of information dissipation and provides a substrate for information sharing. Nevertheless, it simply seems unbelievable to most of us that the wealth of information perceived in our consciousness could possibly fit into a single cell.
In a recent study, I have attempted to ease the discomfort arising from a model that compresses the entire information perceived in consciousness into a single neuron (13). If we assume that in the dendritic tree of an individual neuron, 5,000 synaptic spines are simultaneously activated, and each activation event has a switch rate of 20 msec (50 Hz, equivalent to the gamma frequency range in the electrical activity of the brain), then the input information received and processed in one second in an individual neuron amounts to 5 megabits. This is comparable to the bit transfer rate of a slow wireless network or a movie shown in DVD quality. Others have calculated the transfer rates of neural network information into single cells on the basis of larger numbers of synapses to account for the diversity of conscious experience, including qualia other than just audiovisual experience (12). Considering that the response rates in consciousness are rather slow (>100 msec) and the moment-to-moment experience is comparable to an ongoing updating process, it appears not unreasonable that the information transfer through dendritic spines is fast and “rich” enough to feed the entire information required for the generation of a moment in consciousness into a single neuron.
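This back-of-envelope estimate can be cross-checked. Counting 5,000 spines each switching every 20 msec gives 250,000 events per second; reaching the quoted 5 megabits per second then requires each synaptic event to carry on the order of 20 bits (e.g., a graded amplitude rather than a single binary spike). That per-event payload is an assumption added here for illustration, not a figure from the cited study:

```python
# Back-of-envelope dendritic input bandwidth, following the estimate in the
# text. A purely binary event count gives 0.25 Mbit/s; the quoted ~5 Mbit/s
# requires roughly 20 bits per synaptic event (an assumption for illustration).

spines = 5_000           # simultaneously active dendritic spines
switch_ms = 20           # switch rate per spine (20 ms -> 50 Hz, gamma band)
events_per_s = spines * (1000 // switch_ms)   # synaptic events per second
print(events_per_s)      # 250000

binary_rate_mbit = events_per_s * 1 / 1e6     # 1 bit per event
graded_rate_mbit = events_per_s * 20 / 1e6    # assumed 20 bits per event
print(binary_rate_mbit, graded_rate_mbit)     # 0.25 5.0
```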
What makes a crowd of neurons so bright that it becomes self-aware? What computes the fractal connectivity that makes the client neuron conscious? RFNNs are self-organized, so the client neuron selects the right fractal input pattern by first spamming the router network and then selecting reentrant back-projections, the psychic loops with the right spike timing. Since all of the neurons in an RFNN receive the same psychic loops, they all become conscious in an instant. Consciousness is not the privilege of single pontifical or grandmother neurons; many neurons suddenly show up sharing the same information, a conscious “flash mob” if you will. They are called for by phase-locking with neurons in the router network. And in the next moment, another group of neurons is conscious; the “bright spot” in the “global workspace” moves on. From this analogy it becomes clear that the RFNN model builds on previous neural network models: Edelman and Tononi published several landmark studies on the necessity of “information sharing” (Tononi) in a “reentrant network” (Edelman) as being crucial for conscious perception (1, 3, 21, 54–55). And it is also clear from Baars’ work that a “global workspace” needs a selection mechanism that integrates the scenery of this workspace (2). RFNNs are compatible with these ideas in that they provide a mapping operation by which dissipated information is re-focused onto single neurons as the “bright spot of consciousness”.
Fig. 5A shows how our brains use RFNNs to generate consciousness by a fractalization process. The fractality generator (the cloud) processes information coming from other centers in the brain (ultimately the outside world) in such a way that it is fractally encoded over service networks in the cloud. For example, the router (find memory trace “butter cookie”) encodes information from the “audio-visual-olfactory (AVO) interpreter” (looks, smells, and tastes like butter cookie) and embeds it into an emotional landscape (feels good like grandmother) from the “emotionator”. This multi-fractal is shared information of the whole in all of its parts. The client neurons select this information based on synaptic connections in their dendritic trees. Only if the reentrant connection is phase-locked by coincident summation at the firing threshold is the fractal distribution of input spines on the client neurons a downscaled version of the output activity of cloud neurons. At this very moment, the memory trace has been retrieved and the information is consciously perceived in the client neuron. This operation is a cloud-onto-client contraction mapping and enables the selection of psychic loops. Because the client neuron receives “the whole in its parts” information from the cloud, the shared information is complete. The neuron becomes conscious and creates the outside world in its own endospace. At this point, the RFNN model still distinguishes cloud from client networks. However, it is possible that client neurons are parts of the cloud network. They are only special because the shared information in one neuron is equivalent to the complete network activity.
Figure 5. Cloud computation, fractality, and molecular consciousness (bright matter).
A. The router network is interconnected with AVO (auditory-visual-olfactory) interpreters and emotionators in a cloud computational setting of service programs. Each client neuron connects to the cloud individually, but only the one phase locking a psychic loop is conscious. Phase-locking happens when the connectivity between the networks in the cloud and the dendritic tree of the client neuron is fractal. The cloud is a fractality generator embedding the router activity into the AVO interpreter (“sees and smells butter cookie”) and then into the emotionator (“butter cookie feels good like grandma”). The client neuron probes (“spams”) the router for fractal connections that are able to phase-lock with its dendritic tree activity and can retrieve a memory trace (“childhood memory”). B. Molecular consciousness of bright matter. Within the dendritic tree, membrane domains of lipid rafts and ion channels (here calcium) form a fractal lattice (hexagonal lattice as shown in Fig. 1B), which represents the molecular transform of the downscaled neural network and dendritic tree fractal. This fractal lattice constitutes a sentyon, a conscious “particle”. Opening of ion channels within this sentyon creates a calcium wave. Sentyons or the corresponding calcium wave may represent the molecular endpoint substrate (bright matter) of consciousness.
Candidate neurons for the seat of consciousness that could also be part of the cloud are pyramidal neurons in the hippocampus or the olfactory system. Of course, this is a tentative assessment based on the technical requirements for generating an RFNN (fractal dendritic tree, reentrant connections) and a cloud service (router network encoding or identifying memory traces). Other candidate neurons for consciousness could be in brain areas processing emotions (amygdala) or AVO-related information (auditory, visual, and olfactory centers), as has been proposed by Crick, Koch, and others (21). Further, overarching recurrent connections could prepare specific brain areas for their function as client neurons distinct from service centers in the cloud. The list is endless, but with the exception of counting numbers of input and output channels and reentrant connections, there is not much of a rationale for naming one and excluding another center of the brain as a candidate substrate of consciousness.
In a recent analysis, the olfactory system has been favored as a potential “minimal” substrate for consciousness (56). This analysis was based on a reductionist approach concluding that most of the other candidates for conscious brain tissue can be severely damaged or dysfunctional while still allowing for the experience of some sort of consciousness (although possibly lacking visual or other types of information). This assumption is consistent with our discussion in the preceding section: the olfactory system is extensively “wired” by re-entry connections, thus potentially generating an RFNN as the agent network for consciousness. Nonetheless, it would be wrong to assume that the olfactory system is the only potential locus of consciousness. Instead, one may settle on the assumption that it constitutes a system with the minimum requirements for the generation of consciousness.
As said at the end of the first section: what is needed is a mapping operation that integrates the whole in each part of the network and a substrate that can implement this operation. If a particular cell or tissue in the brain can do exactly this, it may be the seat of consciousness. We have already suggested RFNNs in the hippocampus or olfactory system and pyramidal neurons in the CA3 field or olfactory/piriform cortex as candidate operation and neuronal substrate, respectively; further speculation on where consciousness is generated in the brain does not get us much beyond this. Instead, our next focus will be how consciousness can be generated in a molecular substrate and how RFNNs can implement information compression on this substrate.
From recurrent networks to molecular substrates of consciousness: bright spots from bright matter
The quest for molecular substrates of consciousness has been going on for decades. Many substrates have been suggested, including electromagnetic fields, ion channels, and microtubules, just to mention a few. However, recent results from neuroscience research on the regulation of dendritic information integration suggest that one of the molecular candidates for consciousness is calcium (57–59). This may come as a surprise since we have assumed so far that the molecular substrate of consciousness should be something with a richer and more ordered structure. As we will see in this discussion, calcium is able to integrate a propagating neural computation with the programming of a macromolecular substrate with intrinsic fractality. Further, exciting new models suggested by Pereira have shown that calcium is ideally suited to integrate information processing in neurons and glia, thus allowing for a molecular substrate of consciousness that extends beyond single cells (58–59).
Calcium has been implicated in the neurochemistry of synaptic signal transduction and memory formation for some time (10, 60–63). This may turn out to be helpful when one attempts to link mental processes to actual molecular processing. However, we are more interested in the unique features of calcium for a physics of consciousness. Calcium influx into cells in the brain is triggered by action potentials that are generated in the post-synaptic membrane (58–59). The synchronous and cooperative opening of ion channels generates calcium waves that travel down the dendritic tree and superimpose at the neuronal cell body, where they participate in the over-the-threshold summation, thus triggering a new action potential. This cooperativity is governed by system properties that spatially extend over the membrane surface area surrounding the calcium ion channels. System properties may include electromagnetic fields or inherent characteristics of the membrane. One of these characteristics, which I suggested several years ago to be important for the generation of consciousness, is the ensemble behavior of membrane lipids (13).
The cell membrane is a so-called bilayer of lipid molecules, which means that the membrane consists of two leaflets of lipids that point toward each other with their hydrophobic molecular portions. The hydrophobic interaction force between these lipid portions determines the packing density of the membrane lipids. It has been found that cholesterol binds tightly to other membrane lipids, which leads to the formation of membrane areas that are more densely and orderly packed than others (64–66). These so-called lipid microdomains or rafts are known to interact with particular membrane proteins such as calcium channels (67). Ion channels and the surrounding raft lipids form a molecular matrix or lattice, in which synchronous changes in the conformation of lipids (e.g., rotation around their molecular axis) may regulate the cooperative opening of ion channels and vice versa. The inner structure of this molecular lattice is suggested to show a fractal geometry equivalent to the hexagonal object 2 depicted in Figs. 2B and 4B. Ion channels in the middle of each hexagon will conduct calcium into the cell, which will lead to a calcium wave shaped by the fractal geometry of the molecular lattice generating it (Fig. 4B). When this wave is chopped into wave packets, it seems that on each scale, the wave packets preserve a self-similar 1/f power spectrum. In a simplified way, this equates to a (calcium) “wave-to-particle dualism” that may unfold the three-dimensional world as we perceive it in consciousness. Moreover, since calcium waves originating from different cells, in particular astrocytes, can overlap and superimpose their frequency spectra, they may allow for reversible up- and downscaling between the network and the cellular or molecular level.
The fractal behavior of the ion channel-membrane lipid lattice resembles Ising model networks, in which critical clusters of interaction emerge on each scale. This gives rise to the 1/f power law of activity frequencies. Traces of these activities can be observed for individual ion channel openings on the molecular scale, while on a brain-wide scale, they may be detectable as power law frequencies in electroencephalograms (EEG) (68). In an Ising model, the collective behavior of interacting parts such as magnetic particles (spin up or down) or neurons (on or off) is based on simple neighboring rules (ferromagnetic/activating, anti-ferromagnetic/inhibiting, or non-interacting). Curiously, the Ising model is applicable to the opening state of ion channels (60, 69–70), which implies that the fractal neural network activity can be mapped onto the collective opening of ion channels in the dendritic tree. Therefore, fractality is preserved when the pattern of activity nodes in dendritic trees is mapped onto the opening/closing distribution of ion channels in a hexagonal lattice (Figs. 2B and 4B).
At this point, it appears timely to clarify that fractalization is not a one-way process going from the neural network down to the molecular level. Since fractals are scale-invariant, fractalization can be understood as a reversible up- or down-scaling process. Hence, it is equally important to discuss the possibility that fractalization starts at the molecular level and then scales up to the dendritic tree and eventually the neural network. In this context, it is interesting to view the Ising network of ion channels and surrounding lipid rafts as cellular automata. Originally formulated by Konrad Zuse, Stanislaw Ulam, and John von Neumann, cellular automata experienced a renaissance in Stephen Wolfram’s seminal work. Cellular automata are discrete units (cells) that behave according to a set of rules that determine the state of a cell by particular parameters of its immediate neighbors (e.g., neighbors being in the states 0 or 1). Survival, motion, and self-replication of cellular automata are behavioral outcomes governed by these rules. Interestingly, some of these rules actually generate fractals. One could imagine that the cell membrane is a two-dimensional grid on which ion channels and lipid microdomains evolve according to the rules of cellular automata generating fractals. Based on this view, fractals are at the bottom scale and fractal neural networks are just up-scaled transformations and variations of fundamental cellular automata. This intriguing thought is certainly compatible with many studies discussing cellular automata at various scales from the molecular to the neural network level (71–74).
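That simple neighbor rules can generate fractals is easy to demonstrate. The sketch below runs elementary Rule 90, one of Wolfram’s one-dimensional cellular automata, whose space-time diagram from a single seed cell is the Sierpinski triangle; it illustrates the principle only and is not a model of the proposed membrane lattice:

```python
# Elementary cellular automaton Rule 90: each cell's next state is the XOR
# of its two neighbors. From a single seed, the space-time diagram is the
# Sierpinski triangle, a self-similar fractal.

def rule90_step(row):
    """Next state of each cell = XOR of its two neighbors (fixed 0 boundary)."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

width, steps = 33, 16
row = [0] * width
row[width // 2] = 1          # single seed cell in the middle
history = [row]
for _ in range(steps):
    row = rule90_step(row)
    history.append(row)

# Print the space-time diagram; '#' marks active cells.
for r in history:
    print("".join("#" if c else "." for c in r))
```

A characteristic signature of the fractal: at every step that is a power of two, the row collapses to exactly two active cells, the corners of the next larger self-similar triangle.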
With respect to the paradox of consciousness, RFNNs stepped it up a notch by providing a contraction mapping operation that takes the fractal network activity to the molecular level. Like bird nests in branches of real trees, consciousness may be located where the branches of the dendritic tree meet (Fig. 4B). At the branching points, the dendritic input gets successively integrated onto a molecular lattice. However, while RFNNs provide a mechanism for data compression from the neural network to the dendritic tree scale there is still another compression required to reach the molecular level. There is also the necessity of a timing mechanism to phase lock each compression step in a recurrent psychic loop (network-to-dendrite-to-molecular substrate of consciousness and back) (Fig. 4B). It is quite possible that RFNNs will do the trick, for example, by mapping the post-synaptic spine activity in a fractal dendritic tree onto an ensemble of ion channels in a fractal hexagonal Ising lattice (Fig. 4B). After all, viable systems for fractal data compression require a stepwise transformation from a larger to a smaller scale. To achieve this, it is necessary that on each scale, the substrate itself is fractal.
The paradox of consciousness as discussed in the beginning of this review, and its potential solution by RFNN-mediated fractalization down to the molecular scale, leads to a truly astonishing hypothesis: the molecular substrate itself creates a new form of matter. This matter is composed of biological material, membrane lipids, calcium, and ion channels, and it is arranged in fractal geometry (Fig. 4B). When this happens, matter becomes conscious. It is important to realize that there is no point in looking at the molecular or ionic components of this “bright matter” in a reductionist view by decomposing it into individual molecules and atoms. Only in a particular arrangement do these molecules and atoms form a fractal superstructure, which I will term “sentyon”. The sentyon is the “particle” of a conscious moment, be it a sound, a picture, or a smell. It is not a particle in the classical point-like sense, but rather a multidimensional particle comparable to two-dimensional anyons. It may be provocative, but only if we realize that we have to describe consciousness in terms of physical matter will it be possible to embed consciousness into a world of physically grounded laws and rules. This is reminiscent of the standard model in particle physics, only that the interaction rules of bright matter determine what we hear, see, or feel. It will certainly take some time to define these interaction rules. However, one fundamental characteristic of physical matter will have to be accounted for in any standard model of consciousness: energy conservation. Whenever non-conscious matter interacts with bright matter, or various sentyons of bright matter interact with each other, energy will be transformed but not generated from nowhere or lost in nothingness.
This thought immediately takes us to the key question: how much energy is in a conscious moment, and where does it come from or go to? The quantification of energy in bright matter is not just an academic question; it will allow us to actually attach energy and mass to sentyons, a prerequisite for the analysis of their physical properties. The calculation of this energy will be based on an attempt to link the fractal model of information compression to the theory of information sharing. The Hutchinson operator described in equation 2 clearly shows that a larger amount of information (e.g., a set of pixels or addresses of active neurons) can be compressed into a smaller part of it (e.g., a subset of pixels or addresses of active dendritic spines). The difference between non-compressed and compressed information is measured by the Kullback-Leibler divergence:
L = Σₓ p(x) · log2[p(x)/q(x)]    (Equation 3)
If each sub-set of pixels (x) in an image Sn (in the Hutchinson operator) encodes a part embedded in the whole (Sn+1), then L is the quantity of information necessary to reconstruct (or de-compress) the whole from each of its parts. Here, p and q are probability functions for finding x at a certain state (e.g., relative position) in an image. If the compression is loss-less (only redundant information has been removed during the compression process), L is the maximum amount of gained or shared information between p(x) and q(x). In a simplified version of this theorem, one would assume that for all x(n): p(x) = 1/n, and then L = log2 p(x)/q(x). If we randomly pick a pixel x in images Sn and Sn+1, then the ratio of the probabilities (p(x)/q(x)) of finding x at the same relative position in the two images is equivalent to the compression factor between the two images. With a compression factor of 50–100, the information gain L equals 5–7 bits. According to the Landauer erasure principle (75), this translates into an amount of energy calculated by:
E = L · k · T · ln 2    (Equation 4)
With k = 1.38 · 10⁻²³ J/K and T = 310 K (37 °C), this translates into 1.5–2.1 · 10⁻²⁰ J of energy.
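These numbers are straightforward to verify. The sketch below evaluates the simplified Equation 3 (for a uniform p, the information gain reduces to L = log2 of the compression factor) and Equation 4 (Landauer’s k·T·ln 2 per bit) at body temperature, and compares the result to the roughly 5 · 10⁻²⁰ J per hydrolyzed ATP quoted in the text:

```python
# Numerical check of Equations 3 and 4 as simplified in the text:
# L = log2(compression factor) bits, priced at k*T*ln(2) joules per bit
# (Landauer erasure principle), then compared to one ATP hydrolysis.
import math

k_B = 1.38e-23                        # Boltzmann constant, J/K
T = 310.0                             # body temperature (37 degC), K
bit_energy = k_B * T * math.log(2)    # Landauer cost per bit, ~2.97e-21 J

for factor in (50, 100):
    L_bits = math.log2(factor)        # Equation 3, uniform-p simplification
    E = L_bits * bit_energy           # Equation 4
    print(f"factor {factor}: L = {L_bits:.1f} bits, E = {E:.2e} J")
# factor 50: L = 5.6 bits, E = 1.67e-20 J
# factor 100: L = 6.6 bits, E = 1.97e-20 J

atp = 5e-20                           # free energy of one ATP hydrolysis, J
print(atp / bit_energy)               # one ATP pays for ~17 bits of erasure
```

Both values fall inside the 1.5–2.1 · 10⁻²⁰ J range stated above, and a single ATP hydrolysis indeed sits in the same ballpark.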
This amount of energy is gained when a distribution of active neurons is compressed onto the distribution of active dendritic spines within a neuron, or when this distribution is further compressed to that of open calcium channels within the cell membrane of the dendritic tree. Hence, if we assume that calcium is the endpoint substrate of consciousness, the amount of energy required for creating sentyons would be equivalent to that of opening calcium channels. A good approximation is given by the metabolic burden required for the removal of calcium from the inside of a cell via the respective ion pump, which is roughly the free energy derived from the hydrolysis of one molecule of ATP, or 5 · 10⁻²⁰ J per calcium ion. According to this calculation, transport of calcium is in the ballpark of the energy required for generating sentyons. Of course, the consciously perceived information will become richer when more sentyons are created, which implies the participation of more than one calcium ion in the creation of bright matter.
One may feel estranged by these calculations since they imply that all of our feelings and conscious perceptions are material things. If taken to the extreme, one may ask whether a conscious moment of 10⁻²⁰ J weighs 10⁻³⁵ g, and whether a sad moment may then be somewhat heavier than a happy one. One may also find a different way of calculating the properties of bright matter and the energy equivalent of creating sentyons. However, one will not escape the confrontation of fundamental physics with information integration in consciousness. This has previously been realized by the founders of the information integration theory of consciousness, Edelman, Tononi, and Balduzzi (3, 21, 54–55, 76). Their approach is similar, although more sophisticated in calculating the actually shared information in neural networks. In my model, simplification served the purpose of translating shared information into energy equivalents. Regardless of which approach the reader may prefer, the idea of integrated or shared information is certainly the starting point for a discussion of consciousness using physical parameters. We will see in the next section that this may also be the starting point for creating consciousness synthetically.
Consciousness under the microscope: synthetic bright matter and future perspectives
The previous sections have clearly shown that an explanation of consciousness will go far beyond current methods in neuroscience. In fact, our discussion leaves little room for hope that we will ever find the underpinnings of consciousness by just studying the working brain and dissecting its tissue. If we aspire to a complete understanding of consciousness, we will have to make it observable by experimentation. One way of achieving this may be its artificial creation. Henry Markram, director of the Blue Brain Project, recently announced that the synthesis of an artificial human brain will be completed within the next ten years. And he implies that this includes artificial or synthetic consciousness. What is the Blue Brain Project? It is an ambitious effort to reverse engineer the human brain using IBM Blue Gene supercomputers. At the moment, these computers achieve 22.8 trillion (22.8 × 10¹²) operations per second. For comparison: the human brain with 100 billion (100 × 10⁹) neurons achieves quadrillions (10¹⁵) of operations per second, quite a way to go for Blue Gene. Nevertheless, considering that the operational speed of computers doubles every two years, it seems not unreasonable to estimate that in ten years, the Blue Brain or an equivalent computing device will have reached the computational capacity of the human brain.
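The ten-year estimate can be sanity-checked with the rough figures quoted above: starting at 22.8 × 10¹² operations per second and doubling every two years, the assumed brain-level 10¹⁵ operations per second is crossed after six doublings, i.e., about twelve years, in the same ballpark as Markram’s projection:

```python
# Sanity check of the projection in the text: starting from 22.8e12 ops/s
# and doubling every two years, when does a Blue Gene-class machine reach
# the brain's assumed 1e15 ops/s? (All figures are the rough ones quoted in
# the text, not current benchmarks.)

start_ops = 22.8e12      # Blue Gene throughput quoted in the text
target_ops = 1e15        # assumed human-brain throughput
years, ops = 0, start_ops
while ops < target_ops:
    years += 2           # one doubling every two years
    ops *= 2
print(years, f"{ops:.2e}")   # 12 1.46e+15
```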
However, does it even matter how fast Blue Brain is? If anything comes out of our discussion of neural networks that cope with the paradox of consciousness it is that the network has to implement a particular logical operation on a bona fide molecular substrate. The reader may have a different opinion on the nature of the network model and the substrate, but certainly, a microchip cannot replace the intricate operations on a dendritic tree. Simulate, maybe, but reconstruct, no. Of course, there are philosophical considerations that the nature of the substrate does not matter as long as the algorithm used for the computation is identical. There is no way to argue against this view, but it is merely a matter of belief.
Fortunately, there is a way of implementing the natural molecular substrate of consciousness in a neural network linked to a computer. Hybrid computational elements termed “neurochips” have recently been developed, and artificial chemical synapses have been built to construct an interface between a computer and neurons cultivated in a dish (77–79). The astonishing consequence of the RFNN model and single neuron consciousness is that 1) you only need one neuron grown in a dish to generate consciousness; and 2) you can replace the rest of the brain by a computer implementing a cloud that generates a fractal pattern of connectivity. The computer establishes the re-entry loop between the neuron’s output and the synaptic input in its dendritic tree. Although the computer itself is not based on an intrinsically fractal architecture, it can simulate a cloud that offers a large number of re-entry loops with different signal delay times. The biological neuron in the dish can then probe the cloud for routes of psychic loops that establish a stable back-projection network. If the RFNN model is correct, the looping times (signal delays measured by the computer) should represent the fractal branching architecture of the dendritic tree. Or in other words, the computer simulates a neural network that will be an up-scaled version of the distribution of active synapses on the neuron’s dendritic tree. The computation is not different from that in the human brain, only that you can actually eavesdrop on each operation performed by the computer to generate the psychic loops with the cultured neuron. At the moment, researchers have fabricated about ten artificial synapses per neuron in a dish (80–81). A reasonable estimate is that this number has to be increased by a factor of 1,000 to become interesting for igniting consciousness in a single neuron. This is technically ambitious, but not impossible considering the rapid progress in nanotechnology.
In summary, RFNNs are neural networks that implement a contraction mapping operation on a global network, compressing its activity information onto that of a local dendritic tree in a single neuron. The pattern of this activity is fractal, and so is the connectivity to the neural network. As a consequence of this connectivity, RFNNs are able to implement a logic that takes the information of the whole into each of its parts, from the network to the neuronal and then to the molecular level. A set of four equations is necessary to understand this downscaling transformation: 1. the scaling function (equation 1), which describes the geometry of the fractal on each scale; 2. the Hutchinson operator (equation 2), which implements the downscaling transformation and information compression; 3. the Kullback-Leibler theorem (equation 3), which calculates the shared information resulting from embedding the whole into each of its parts; and 4. the Landauer erasure principle (equation 4), which lays the foundation for calculating the amount of energy gained by embedding information into fractals. It seems that this downscaling operation preserves information at each scale of contraction mapping and therefore is an algorithm that may cope with undivided consciousness.
Perhaps most astonishing, and yet somehow to be expected, is that the endpoint substrate of consciousness has to be intrinsically fractal: a new form of bright matter composed of sentyons. Potential candidates for sentyons are fractal lattices made of membrane components such as lipid rafts associated with ion channels. Sentyons may create a calcium wave that translates a fractal into holographic information sharing. After all, if downscaling to the molecular level is finite and does not proceed to the subatomic and perhaps quantum-physical realm, then the endospace, and the world perceived in it, has to unfold at the molecular scale. Perhaps even more fascinating, RFNNs applied to single neurons cultivated in a dish and hooked up to a computer may actually create synthetic consciousness. Therefore, an understanding of consciousness through its artificial generation may not be that far off.
Acknowledgments
My special thanks go to Dr. Chris Nunn, who has been a great discussion partner for more than ten years. In addition, I am thankful for the discussions with my friend Bruce Klassen and many members of the NPG consciousness forum, who sharpened my ideas to the point that I felt confident to write them down in a paper. My gratitude also goes to my post-doctoral mentor and current colleague, Dr. Robert K. Yu, who stated that “your consciousness research is probably more important than your biochemistry” upon reading my first article published at cogprints. However, I felt it was time to bring the biochemistry back into the equation.
References
- 1. Edelman G. Consciousness: the remembered present. Ann N Y Acad Sci. 2001 Apr;929:111–22. doi: 10.1111/j.1749-6632.2001.tb05711.x.
- 2. Baars BJ. The brain basis of a “consciousness monitor”: scientific and medical significance. Conscious Cogn. 2001 Jun;10(2):159–64; discussion 246–58. doi: 10.1006/ccog.2001.0510.
- 3. Tononi G, Edelman GM. Consciousness and complexity. Science. 1998 Dec 4;282(5395):1846–51. doi: 10.1126/science.282.5395.1846.
- 4. Baars BJ, Franklin S. An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA. Neural Netw. 2007 Nov;20(9):955–61. doi: 10.1016/j.neunet.2007.09.013.
- 5. Baars BJ. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Prog Brain Res. 2005;150:45–53. doi: 10.1016/S0079-6123(05)50004-9.
- 6. Baars BJ. The conscious access hypothesis: origins and recent evidence. Trends Cogn Sci. 2002 Jan 1;6(1):47–52. doi: 10.1016/s1364-6613(00)01819-2.
- 7. Edelman GM, Gally JA, Baars BJ. Biology of consciousness. Front Psychol. 2011;2:4. doi: 10.3389/fpsyg.2011.00004.
- 8. Singer W. Consciousness and the binding problem. Ann N Y Acad Sci. 2001 Apr;929:123–46. doi: 10.1111/j.1749-6632.2001.tb05712.x.
- 9. Revonsuo A. Binding and the phenomenal unity of consciousness. Conscious Cogn. 1999 Jun;8(2):173–85. doi: 10.1006/ccog.1999.0384.
- 10. Golding NL, Staff NP, Spruston N. Dendritic spikes as a mechanism for cooperative long-term potentiation. Nature. 2002 Jul 18;418(6895):326–31. doi: 10.1038/nature00854.
- 11. Laberge D, Kasevich R. The apical dendrite theory of consciousness. Neural Netw. 2007 Nov;20(9):1004–20. doi: 10.1016/j.neunet.2007.09.006.
- 12. Sevush S. Single-neuron theory of consciousness. J Theor Biol. 2006 Feb 7;238(3):704–25. doi: 10.1016/j.jtbi.2005.06.018.
- 13. Bieberich E. Recurrent fractal neural networks: a strategy for the exchange of local and global information processing in the brain. Biosystems. 2002 Aug-Sep;66(3):145–64. doi: 10.1016/s0303-2647(02)00040-0.
- 14. Woolf NJ. Dendritic encoding: an alternative to temporal synaptic coding of conscious experience. Conscious Cogn. 1999 Dec;8(4):447–54. doi: 10.1006/ccog.1999.0385.
- 15. Orpwood RD. A possible neural mechanism underlying consciousness based on the pattern processing capabilities of pyramidal neurons in the cerebral cortex. J Theor Biol. 1994 Aug 21;169(4):403–18. doi: 10.1006/jtbi.1994.1162.
- 16. Eccles JC. Evolution of consciousness. Proc Natl Acad Sci U S A. 1992 Aug 15;89(16):7320–4. doi: 10.1073/pnas.89.16.7320.
- 17. Libet B. Conscious mind as a field. J Theor Biol. 1996 Jan 21;178(2):223–6. doi: 10.1006/jtbi.1996.0019.
- 18. Lindahl BI, Arhem P. Mind as a force field: comments on a new interactionistic hypothesis. J Theor Biol. 1994 Nov 7;171(1):111–22. doi: 10.1006/jtbi.1994.1217.
- 19. Riscalla LM. An electromagnetic field theory of consciousness. J Am Soc Psychosom Dent Med. 1974;21(2):40–51.
- 20. Pockett S, Bold GE, Freeman WJ. EEG synchrony during a perceptual-cognitive task: widespread phase synchrony at all frequencies. Clin Neurophysiol. 2009 Apr;120(4):695–708. doi: 10.1016/j.clinph.2008.12.044.
- 21. Tononi G, Koch C. The neural correlates of consciousness: an update. Ann N Y Acad Sci. 2008 Mar;1124:239–61. doi: 10.1196/annals.1440.004.
- 22. Travis FT, Orme-Johnson DW. Field model of consciousness: EEG coherence changes as indicators of field effects. Int J Neurosci. 1989 Dec;49(3–4):203–11. doi: 10.3109/00207458909084826.
- 23. Jibu M, Hagan S, Hameroff SR, Pribram KH, Yasue K. Quantum optical coherence in cytoskeletal microtubules: implications for brain function. Biosystems. 1994;32(3):195–209. doi: 10.1016/0303-2647(94)90043-4.
- 24. Searle JR. Consciousness. Annu Rev Neurosci. 2000;23:557–78. doi: 10.1146/annurev.neuro.23.1.557.
- 25. John ER. A field theory of consciousness. Conscious Cogn. 2001 Jun;10(2):184–213. doi: 10.1006/ccog.2001.0508.
- 26. John ER. The neurophysics of consciousness. Brain Res Brain Res Rev. 2002 Jun;39(1):1–28. doi: 10.1016/s0165-0173(02)00142-x.
- 27. Agmon-Snir H, Segev I. Signal delay and input synchronization in passive dendritic structures. J Neurophysiol. 1993 Nov;70(5):2066–85. doi: 10.1152/jn.1993.70.5.2066.
- 28. Stuart G, Schiller J, Sakmann B. Action potential initiation and propagation in rat neocortical pyramidal neurons. J Physiol. 1997 Dec 15;505(Pt 3):617–32. doi: 10.1111/j.1469-7793.1997.617ba.x.
- 29. Wang G, Krishnamurthy K, Bieberich E. Regulation of primary cilia formation by ceramide. J Lipid Res. 2009 Apr 16. doi: 10.1194/jlr.M900097-JLR200.
- 30. Milosevic NT, Ristanovic D. Fractality of dendritic arborization of spinal cord neurons. Neurosci Lett. 2006 Apr 3;396(3):172–6. doi: 10.1016/j.neulet.2005.11.031.
- 31. Gutierrez RC, Hung J, Zhang Y, Kertesz AC, Espina FJ, Colicos MA. Altered synchrony and connectivity in neuronal networks expressing an autism-related mutation of neuroligin 3. Neuroscience. 2009 Aug 4;162(1):208–21. doi: 10.1016/j.neuroscience.2009.04.062.
- 32. Sun R. Accounting for the computational basis of consciousness: a connectionist approach. Conscious Cogn. 1999 Dec;8(4):529–65. doi: 10.1006/ccog.1999.0405.
- 33. Gross CG. Representation of visual stimuli in inferior temporal cortex. Philos Trans R Soc Lond B Biol Sci. 1992 Jan 29;335(1273):3–10. doi: 10.1098/rstb.1992.0001.
- 34. Wallich E. Wave-function and the concept of a nano-mental element of representation. Acta Biotheor. 1993 Jun;41(1–2):119–25. doi: 10.1007/BF00712780.
- 35. Gross CG. Genealogy of the “grandmother cell”. Neuroscientist. 2002 Oct;8(5):512–8. doi: 10.1177/107385802237175.
- 36. Bowers JS. On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychol Rev. 2009 Jan;116(1):220–51. doi: 10.1037/a0014462.
- 37. Gabor D. Holographic model of temporal recall. Nature. 1968 Feb 10;217(5128):584. doi: 10.1038/217584a0.
- 38. Gabor D. Improved holographic model of temporal recall. Nature. 1968 Mar 30;217(5135):1288–9. doi: 10.1038/2171288a0.
- 39. Scharfman HE. The CA3 “backprojection” to the dentate gyrus. Prog Brain Res. 2007;163:627–37. doi: 10.1016/S0079-6123(07)63034-9.
- 40. Cannon RC, Wheal HV, Turner DA. Dendrites of classes of hippocampal neurons differ in structural complexity and branching patterns. J Comp Neurol. 1999 Nov 1;413(4):619–33.
- 41. Darmopil S, Petanjek Z, Mohammed AH, Bogdanovic N. Environmental enrichment alters dentate granule cell morphology in oldest-old rat. J Cell Mol Med. 2008 Oct 23. doi: 10.1111/j.1582-4934.2008.00560.x.
- 42. Buell SJ, Coleman PD. Dendritic growth in the aged human brain and failure of growth in senile dementia. Science. 1979 Nov 16;206(4420):854–6. doi: 10.1126/science.493989.
- 43. Treves A, Tashiro A, Witter ME, Moser EI. What is the mammalian dentate gyrus good for? Neuroscience. 2008 Jul 17;154(4):1155–72. doi: 10.1016/j.neuroscience.2008.04.073.
- 44. Renart A, Parga N, Rolls ET. Associative memory properties of multiple cortical modules. Network. 1999 Aug;10(3):237–55.
- 45. Vogel DD. A neural network model of memory and higher cognitive functions. Int J Psychophysiol. 2005 Jan;55(1):3–21. doi: 10.1016/j.ijpsycho.2004.05.007.
- 46. Myers CE, Scharfman HE. Pattern separation in the dentate gyrus: A role for the CA3 backprojection. Hippocampus. 2010 Aug 3. doi: 10.1002/hipo.20828.
- 47. Ascoli GA. Passive dendritic integration heavily affects spiking dynamics of recurrent networks. Neural Netw. 2003 Jun-Jul;16(5–6):657–63. doi: 10.1016/S0893-6080(03)00090-X.
- 48. Douglas RJ, Martin KA. Mapping the matrix: the ways of neocortex. Neuron. 2007 Oct 25;56(2):226–38. doi: 10.1016/j.neuron.2007.10.017.
- 49. Douglas RJ, Martin KA. A functional microcircuit for cat visual cortex. J Physiol. 1991;440:735–69. doi: 10.1113/jphysiol.1991.sp018733.
- 50. Wang G, Bieberich E. Prenatal alcohol exposure triggers ceramide-induced apoptosis in neural crest-derived tissues concurrent with defective cranial development. Cell Death Dis. 2010 May;1(5):e46. doi: 10.1038/cddis.2010.22.
- 51. Haberly LB. Parallel-distributed processing in olfactory cortex: new insights from morphological and physiological analysis of neuronal circuitry. Chem Senses. 2001 Jun;26(5):551–76. doi: 10.1093/chemse/26.5.551.
- 52. Malnic B, Hirono J, Sato T, Buck LB. Combinatorial receptor codes for odors. Cell. 1999 Mar 5;96(5):713–23. doi: 10.1016/s0092-8674(00)80581-4.
- 53. Hameroff S. The “conscious pilot”-dendritic synchrony moves through the brain to mediate consciousness. J Biol Phys. 2009 Apr 2. doi: 10.1007/s10867-009-9148-x.
- 54. Tononi G. Consciousness as integrated information: a provisional manifesto. Biol Bull. 2008 Dec;215(3):216–42. doi: 10.2307/25470707.
- 55. Tononi G. Consciousness, information integration, and the brain. Prog Brain Res. 2005;150:109–26. doi: 10.1016/S0079-6123(05)50009-8.
- 56. Morsella E, Krieger SC, Bargh JA. Minimal neuroanatomy for a conscious brain: Homing in on the networks constituting consciousness. Neural Netw. 2009 Aug 20. doi: 10.1016/j.neunet.2009.08.004.
- 57. Benfenati F. Synaptic plasticity and the neurobiology of learning and memory. Acta Biomed. 2007;78(Suppl 1):58–66.
- 58. Pereira A Jr, Furlan FA. On the role of synchrony for neuron-astrocyte interactions and perceptual conscious processing. J Biol Phys. 2009 Oct;35(4):465–80. doi: 10.1007/s10867-009-9147-y.
- 59. Pereira A Jr, Furlan FA. Astrocytes and human cognition: modeling information integration and modulation of neuronal activity. Prog Neurobiol. 2010 Nov;92(3):405–20. doi: 10.1016/j.pneurobio.2010.07.001.
- 60. McGeoch MW, McGeoch JE. Power spectra and cooperativity of a calcium-regulated cation channel. Biophys J. 1994 Jan;66(1):161–8. doi: 10.1016/S0006-3495(94)80747-7.
- 61. Branco T, Clark BA, Hausser M. Dendritic discrimination of temporal input sequences in cortical neurons. Science. 2010 Sep 24;329(5999):1671–5. doi: 10.1126/science.1189664.
- 62. Jia H, Rochefort NL, Chen X, Konnerth A. Dendritic organization of sensory input to cortical neurons in vivo. Nature. 2010 Apr 29;464(7293):1307–12. doi: 10.1038/nature08947.
- 63. Errington AC, Renger JJ, Uebele VN, Crunelli V. State-dependent firing determines intrinsic dendritic Ca2+ signaling in thalamocortical neurons. J Neurosci. 2010 Nov 3;30(44):14843–53. doi: 10.1523/JNEUROSCI.2968-10.2010.
- 64. Bieberich E. There is More to a Lipid than just Being a Fat: Sphingolipid-Guided Differentiation of Oligodendroglial Lineage from Embryonic Stem Cells. Neurochem Res. 2010 Dec 7. doi: 10.1007/s11064-010-0338-5.
- 65. Dart C. Lipid microdomains and the regulation of ion channel function. J Physiol. 2010 Sep 1;588(Pt 17):3169–78. doi: 10.1113/jphysiol.2010.191585.
- 66. Lingwood D, Simons K. Lipid rafts as a membrane-organizing principle. Science. 2010 Jan 1;327(5961):46–50. doi: 10.1126/science.1174621.
- 67. Weerth SH, Holtzclaw LA, Russell JT. Signaling proteins in raft-like microdomains are essential for Ca2+ wave propagation in glial cells. Cell Calcium. 2007 Feb;41(2):155–67. doi: 10.1016/j.ceca.2006.06.006.
- 68. Kitzbichler MG, Smith ML, Christensen SR, Bullmore E. Broadband criticality of human brain network synchronization. PLoS Comput Biol. 2009 Mar;5(3):e1000314. doi: 10.1371/journal.pcbi.1000314.
- 69. Liu Y, Dilger JP. Application of the one- and two-dimensional Ising models to studies of cooperativity between ion channels. Biophys J. 1993 Jan;64(1):26–35. doi: 10.1016/S0006-3495(93)81337-7.
- 70. Chen YD, Hill TL. On the theory of ion transport across the nerve membrane, VII. Cooperativity between channels of a large square lattice. Proc Natl Acad Sci U S A. 1973 Jan;70(1):62–5. doi: 10.1073/pnas.70.1.62.
- 71. Bilotta E, Pantano P. Emergent patterning phenomena in 2D cellular automata. Artif Life. 2005 Summer;11(3):339–62. doi: 10.1162/1064546054407167.
- 72. Caudle RM. Memory in astrocytes: a hypothesis. Theor Biol Med Model. 2006;3:2. doi: 10.1186/1742-4682-3-2.
- 73. Rothemund PW, Papadakis N, Winfree E. Algorithmic self-assembly of DNA Sierpinski triangles. PLoS Biol. 2004 Dec;2(12):e424. doi: 10.1371/journal.pbio.0020424.
- 74. Burke BC, Freeman WJ, Chang HJ. Optimization of olfactory model in software to give 1/f power spectra reveals numerical instabilities in solutions governed by aperiodic (chaotic) attractors. Neural Netw. 1998 Apr;11(3):449–66. doi: 10.1016/s0893-6080(97)00116-0.
- 75. Allahverdyan AE, Nieuwenhuizen TM. Breakdown of the Landauer bound for information erasure in the quantum regime. Phys Rev E Stat Nonlin Soft Matter Phys. 2001 Nov;64(5 Pt 2):056117. doi: 10.1103/PhysRevE.64.056117.
- 76. Balduzzi D, Tononi G. Qualia: the geometry of integrated information. PLoS Comput Biol. 2009 Aug;5(8):e1000462. doi: 10.1371/journal.pcbi.1000462.
- 77. Maher MP, Pine J, Wright J, Tai YC. The neurochip: a new multielectrode device for stimulating and recording from cultured neurons. J Neurosci Methods. 1999 Feb 1;87(1):45–56. doi: 10.1016/s0165-0270(98)00156-3.
- 78. Erickson J, Tooker A, Tai YC, Pine J. Caged neuron MEA: a system for long-term investigation of cultured neural network connectivity. J Neurosci Methods. 2008 Oct 30;175(1):1–16. doi: 10.1016/j.jneumeth.2008.07.023.
- 79. Bieberich E, Anthony GE. Neuronal differentiation and synapse formation of PC12 and embryonic stem cells on interdigitated microelectrode arrays: contact structures for neuron-to-electrode signal transmission (NEST). Biosens Bioelectron. 2004 Mar 15;19(8):923–31. doi: 10.1016/j.bios.2003.08.016.
- 80. Peterman MC, Mehenti NZ, Bilbao KV, Lee CJ, Leng T, Noolandi J, et al. The Artificial Synapse Chip: a flexible retinal interface based on directed retinal cell growth and neurotransmitter stimulation. Artif Organs. 2003 Nov;27(11):975–85. doi: 10.1046/j.1525-1594.2003.07307.x.
- 81. Peterman MC, Noolandi J, Blumenkranz MS, Fishman HA. Localized chemical release from an artificial synapse chip. Proc Natl Acad Sci U S A. 2004 Jul 6;101(27):9951–4. doi: 10.1073/pnas.0402089101.




