PLOS ONE. 2014 Apr 11;9(4):e94204. doi: 10.1371/journal.pone.0094204

An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

Jérémie Cabessa 1,2,*, Alessandro E P Villa 1,3,*
Editor: Sergio Gómez
PMCID: PMC3984152  PMID: 24727866

Abstract

We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results provide new foundational elements for the understanding of the complexity of real brain circuits.

Introduction

In neural computation, understanding the computational and dynamical properties of biological neural networks is an issue of central importance. In this context, much interest has been focused on comparing the computational power of diverse theoretical neural models with that of abstract computing devices. Nowadays, the computational capabilities of neural models are known to be tightly related to the nature of the activation function of the neurons, to the nature of their synaptic connections, to the possible presence of noise in the model, to the possibility for the networks to evolve over time, and to the computational paradigm performed by the networks.

The first and seminal results in this direction were provided by McCulloch and Pitts, Kleene, and Minsky, who proved that first-order Boolean recurrent neural networks were computationally equivalent to classical finite state automata [1]–[3]. Kremer extended these results to the class of Elman-style recurrent neural nets [4], and Sperduti discussed the computational power of other architecturally constrained classes of networks [5].

Later, Siegelmann and Sontag proved that by considering rational synaptic weights and by extending the activation functions of the cells from Boolean to linear-sigmoid, the corresponding neural networks have their computational power drastically increased from finite state automata up to Turing machines [6]–[8]. Kilian and Siegelmann then generalised the Turing universality of neural networks to a broader class of sigmoidal activation functions [9]. The computational equivalence between so-called “rational recurrent neural networks” and Turing machines has now become a standard result in the field.

Following von Neumann's considerations [10], Siegelmann and Sontag further assumed that the variables appearing in the underlying chemical and physical phenomena could be modelled by continuous rather than discrete (rational) numbers, and therefore proposed a study of the computational capabilities of recurrent neural networks equipped with real instead of rational synaptic weights [11]. They proved that the so-called “analog recurrent neural networks” are computationally equivalent to Turing machines with advice, hence capable of super-Turing computational power already from polynomial computation time [11]. In this context, a proper internal hierarchical classification of analog recurrent neural networks according to the Kolmogorov complexity of their underlying real synaptic weights was described [12].

It was also shown that the presence of an arbitrarily small amount of analog noise seriously reduces the computational capability of both rational- and real-weighted recurrent neural networks to that of finite automata [13]. In the presence of Gaussian or other common analog noise distributions with sufficiently large support, the computational power of recurrent neural networks is reduced to even less than that of finite automata, namely to the recognition of definite languages [14].

Besides, the concept of evolvability has also turned out to be essential in the study of the computational power of circuits closer to the biological world. Research in this context initially focused almost exclusively on the application of genetic algorithms aimed at allowing networks with a fully-connected topology that satisfy selected fitness functions (e.g., performing well on specific tasks) to reproduce and multiply [15]–[18]. This approach aimed to optimise the connection weights that determine the functionality of a network with a fixed topology. However, the topology of neural networks, i.e. their structure and connectivity patterns, greatly affects their functionality. The evolution of both topologies and connection weights following bioinspired rules, which may also include features derived from the study of neural development, differentiation, genetically programmed cell death and synaptic plasticity, has been increasingly studied in recent years [19]–[26]. Along this line, Cabessa and Siegelmann provided a theoretical study proving that both rational-weighted and analog evolving recurrent neural networks exhibit super-Turing computational capabilities, equivalent to those of static analog neural networks [27].

Finally, from a general perspective, it has been argued that the classical computational approach from Turing [28] “no longer fully corresponds to the current notion of computing in modern systems” [29] – especially when it refers to bio-inspired complex information processing systems. In the brain (or in organic life in general), information is rather processed in an interactive way [30], where previous experience must affect the perception of future inputs, and where older memories may themselves change in response to new inputs. Following this perspective, Cabessa and Villa described the super-Turing computational power of analog recurrent neural networks involved in a reactive computational framework [31]. Cabessa and Siegelmann provided a characterisation of the Turing and super-Turing capabilities of rational and analog recurrent neural networks involved in a basic interactive computational paradigm, respectively [32]. Moreover, Cabessa and Villa proved that neural models combining the two crucial features of evolvability and interactivity exhibit super-Turing computational capabilities [33].

In this paper, we pursue the study of the computational power of neural models and provide two novel refined attractor-based complexity measurements for Boolean recurrent neural networks. More precisely, as a first step we provide a generalisation to the precise infinite input stream context of the classical equivalence result between Boolean neural networks and finite state automata [1]–[3]. Under some natural condition on the type specification of their attractors, we show that Boolean recurrent neural networks disclose the very same expressive power as deterministic Büchi automata [34]. This equivalence allows us to establish a hierarchical classification of Boolean neural networks by translating the Wagner classification theory from the Büchi automaton to the neural network context [35]. The obtained classification consists of a pre-well ordering of width 2 and height ω + 1 (where ω denotes the first infinite ordinal). As a second step, we show that by totally relaxing the restrictions on the type specification of their attractors, the Boolean neural networks significantly increase their expressive power from deterministic Büchi automata up to Muller automata. Hence, another more refined hierarchical classification of Boolean neural networks is obtained by translating the Wagner classification theory from the Muller automaton to the neural network context. This classification consists of a pre-well ordering of width 2 and height ω^ω. The complexity measurements induced by these two hierarchical classifications refer to the ability of the networks' dynamics to maximally alternate between attractors of different types along their evolutions. They represent an assessment of the computational power of Boolean neural networks in terms of the significance of their attractor dynamics. Finally, an application of this approach to a Boolean model of the basal ganglia-thalamocortical network is provided. This practical example shows that our automata-theoretical approach might provide new foundational elements for the understanding of the complexity of real brain circuits.

Materials and Methods

Network Model

In this work, we focus on synchronous discrete-time first-order recurrent neural networks made up of classical McCulloch and Pitts cells. Such a neural network is modelled by a general labelled directed graph. The nodes and labelled edges of the graph respectively represent the cells and synaptic connections of the network. At each time step, the status of each activation cell can be of only two kinds: firing or quiet. When firing, a cell instantaneously transmits an action potential throughout all its outgoing connections, whose intensity is equal to the label of the underlying connection. Then, a given cell is firing at time t+1 whenever the summed intensity of all the incoming action potentials transmitted at time t by both its afferent cells and background activity reaches its threshold (which we suppose, without loss of generality, to be equal to 1). The definition of such a network can be formalised as follows:

Definition 1. A first-order Boolean recurrent neural network (RNN) consists of a tuple N = (X, U, a, b, c), where X = {x_1, …, x_n} is a finite set of n activation cells, U = {u_1, …, u_m} is a finite set of m input units, and a ∈ Q^{n×n}, b ∈ Q^{n×m}, and c ∈ Q^{n×1} are rational matrices describing the weighted synaptic connections between cells, the weighted connections from the input units to the activation cells, and the background activity, respectively.

The activation value of cell x_i and input unit u_j at time t, respectively denoted by x_i(t) and u_j(t), is a Boolean value equal to 1 if the corresponding cell is firing at time t and equal to 0 otherwise. Given the activation values x_j(t) and u_j(t), the value x_i(t+1) is then updated by the following equation

$$x_i(t+1) = \sigma\Big(\sum_{j=1}^{n} a_{ij}\, x_j(t) + \sum_{j=1}^{m} b_{ij}\, u_j(t) + c_i\Big), \qquad i = 1, \dots, n \tag{1}$$

where σ is the classical Heaviside step function, i.e. a hard-threshold activation function defined by σ(x) = 1 if x ≥ 1 and σ(x) = 0 otherwise.

According to Equation (1), the dynamics of the whole network is described by the following governing equation

$$x(t+1) = \sigma\big(a \cdot x(t) + b \cdot u(t) + c\big) \tag{2}$$

where x(t) = (x_1(t), …, x_n(t)) and u(t) = (u_1(t), …, u_m(t)) are Boolean vectors describing the spiking configuration of the activation cells and input units, and σ denotes the Heaviside step function applied component by component.
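
As a concrete illustration of Equation (2), the synchronous update can be sketched in a few lines of Python. The matrices a, b, c and the threshold-1 Heaviside convention follow the definitions above; the particular two-cell instance at the bottom is an arbitrary toy example (it is not the network of Figure 1).

```python
import numpy as np

def heaviside(v):
    """Hard-threshold activation: 1 where the summed input reaches 1, else 0."""
    return (v >= 1).astype(int)

def step(a, b, c, x, u):
    """One synchronous update x(t+1) = sigma(a.x(t) + b.u(t) + c), cf. Equation (2)."""
    return heaviside(a @ x + b @ u + c)

# Toy instance (weights are illustrative, not those of Figure 1):
# two cells, one input unit; cell 0 fires when the input unit fires,
# cell 1 fires when cell 0 fired at the previous time step.
a = np.array([[0.0, 0.0],
              [1.0, 0.0]])
b = np.array([[1.0],
              [0.0]])
c = np.zeros(2)

x = np.zeros(2, dtype=int)            # quiet initial state
for u in ([1], [0], [1], [0]):        # a short input stream
    x = step(a, b, c, x, np.array(u))
    print(x)
```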

Such Boolean neural networks have already been proven to have the same computational capabilities as finite state automata [1]–[3]. Furthermore, it can be observed that rational- and real-weighted Boolean neural networks are actually computationally equivalent.

Example 1. Consider the network depicted in Figure 1. The dynamics of this network is then governed by the corresponding system of equations of the form (1), one for each of its three activation cells.

Figure 1. A simple neural network.


The network is formed by two input units (u_1, u_2) and three activation cells (x_1, x_2, x_3). In this example the synaptic weights are all equal to 1/2, with a positive sign corresponding to an excitatory input and a negative sign corresponding to an inhibitory input. Notice that two of the cells receive an excitatory background activity of weight 1/2.

Attractors

Neurophysiological Meaningfulness

In bio-inspired complex systems, the concept of an attractor has been shown to carry strong biological and computational implications. According to Kauffman: “Because many complex systems harbour attractors to which the system settle down, the attractors literally are most of what the systems do” [36, p. 191]. The central hypothesis for brain attractors is that, once activated by appropriate activity, network behaviour is maintained by continuous reentry of activity [37], [38]. This involves strong correlations between neuronal activities in the network and a high incidence of repeating firing patterns therein, generated by the underlying attractors. Alternative attractors are commonly interpreted as alternative memories [39]–[46].

Certain pathways through the network may be favoured by preferred synaptic interactions between the neurons following developmental and learning processes [47]–[49]. The plasticity of these phenomena is likely to play a crucial role in shaping the meaningfulness of an attractor, and attractors must be stable at short time scales. Whenever the same information is presented to a network, the same pattern of activity is evoked in a circuit of functionally interconnected neurons, referred to as a “cell assembly”. In cell assemblies interconnected in this way, some ordered and precise neurophysiological activity, referred to as preferred firing sequences or spatio-temporal patterns of discharges, may recur above chance levels whenever the same information is presented [50]–[52]. Recurring firing patterns may be detected without a specific association to a stimulus in large networks of spiking neurons or during spontaneous activity in electrophysiological recordings [53]–[55]. These patterns may be viewed as spurious patterns generated by spurious attractors that are associated with the underlying topology of the network rather than with a specific signal [56]. On the other hand, several examples exist of spatiotemporal firing patterns in behaving animals, from rats to primates [57]–[61], where preferred firing sequences can be associated with specific types of stimuli or behaviours. These can be viewed as meaningful patterns associated with meaningful attractors. However, meaningfulness cannot be reduced to the detection of a behavioural correlate [62]–[64]. The repeating activity in a network may also be considered meaningful if it allows the activation of neural elements that can be associated to other attractors, thus allowing the build-up of higher order dynamics by means of itinerancy between attractor basins and opening the way to chaotic neural dynamics [51], [65]–[70].

The dynamics of rather simple Boolean recurrent neural networks can implement an associative memory with bioinspired features [71], [72]. In the Hopfield framework, stable equilibria of the network that do not represent any valid configuration of the optimisation problem are referred to as spurious attractors. Spurious modes can disappear by “unlearning” [71], but rational successive memory recall can actually be implemented by triggering spurious modes and achieving meaningful memory storage [66], [73]–[77]. In this paper, the notions of attractors, meaningful attractors, and spurious attractors are reformulated in our precise Boolean network context. Networks will then be classified according to their ability to alternate between different types of attractive behaviours. For this purpose, the following definitions need to be introduced.

Formal Definitions

As preliminary notations, for any n > 0, the space of n-dimensional Boolean vectors is denoted by B^n. For any vector v ∈ B^n and any i ≤ n, the i-th component of v is denoted by v(i). Moreover, the spaces of finite and infinite sequences of n-dimensional Boolean vectors are denoted by (B^n)^* and (B^n)^ω, respectively. Any finite sequence of length k of (B^n)^* will be denoted by an expression of the form s = s_0 s_1 ⋯ s_{k−1}, and any infinite sequence of (B^n)^ω by an expression of the form s = s_0 s_1 s_2 ⋯, where each s_i ∈ B^n. For any finite sequence of Boolean vectors s, we let the expression s^ω denote the infinite sequence obtained by infinitely many consecutive concatenations of s, i.e. s^ω = s s s ⋯.

Now, let N be some network with n activation cells and m input units. For each time step t, the Boolean vectors x(t) ∈ B^n and u(t) ∈ B^m describing the spiking configurations of both the activation cells and the input units of N at time t are called the state of N at time t and the input submitted to N at time t, respectively. An infinite input stream of N is then defined as an infinite sequence of consecutive inputs s = s_0 s_1 s_2 ⋯ ∈ (B^m)^ω, where s_t = u(t) for each t ≥ 0. Now, assuming the initial state of the network to be x(0) = (0, …, 0), any infinite input stream s induces via Equation (2) an infinite sequence of consecutive states e_s = x(0) x(1) x(2) ⋯ called the evolution of N induced by the input stream s.

Note that the set of all possible distinct states of a given Boolean network is always finite; indeed, if the network possesses n activation cells, then there are at most 2^n distinct possible states. Hence, any infinite evolution e of the network consists of an infinite sequence of only finitely many distinct states. Therefore, in any evolution e, there necessarily exists at least one state that recurs infinitely many times in the infinite sequence e, irrespective of whether e is periodic or not. The non-empty set of all states that recur infinitely often in the evolution e will be denoted by inf(e).

By definition, every state that is visited only finitely often in e will no longer occur in e after some time step. By taking the maximum of these time steps, we obtain a global time step T such that all states of e occurring after time T necessarily repeat infinitely often in e. Formally, there necessarily exists an index T such that, for all t ≥ T, one has x(t) ∈ inf(e). It is important to note that the reoccurrence of the states belonging to inf(e) after time step T does not necessarily happen in a periodic manner during the evolution e. Therefore, any evolution e consists of a possibly empty prefix of successive states that repeat only finitely many times, followed by an infinite suffix of successive states that repeat infinitely often, yet not necessarily in a periodic way. A set of states of the form inf(e) for some evolution e is commonly called an attractor of the network [36]. A precise definition can be given as follows:

Definition 2. Let N be some Boolean neural network with n activation cells. A set A ⊆ B^n is called an attractor for N if there exists an input stream s such that the corresponding evolution e_s satisfies inf(e_s) = A.

In other words, an attractor of a Boolean neural network is a set of states such that the behaviour of the network could eventually become forever confined to that set of states. In this sense, the definition of an attractor requires the infinite input stream context to be properly formulated.
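
Although evolutions are infinite objects, the set inf(e_s) can be computed by direct simulation whenever the input stream is eventually periodic: once the pair (current state, position within the input period) repeats, the states visited along the repeating segment are exactly the states that occur infinitely often. The sketch below assumes an update step as in Equation (2); the prefix/period decomposition of the input stream and the toy weights are illustrative choices.

```python
import numpy as np

def heaviside(v):
    return (v >= 1).astype(int)

def step(a, b, c, x, u):
    return heaviside(a @ x + b @ u + c)

def inf_states(a, b, c, prefix, period):
    """States occurring infinitely often in the evolution induced by the
    eventually periodic input stream prefix . period^omega (initial state 0)."""
    n = a.shape[0]
    x = np.zeros(n, dtype=int)
    for u in prefix:                              # consume the finite prefix
        x = step(a, b, c, x, np.asarray(u))
    seen, trace, t = {}, [], 0
    while (tuple(x), t % len(period)) not in seen:
        seen[(tuple(x), t % len(period))] = t
        trace.append(tuple(x))
        x = step(a, b, c, x, np.asarray(period[t % len(period)]))
        t += 1
    first = seen[(tuple(x), t % len(period))]     # start of the repeating segment
    return set(trace[first:])

# usage with the toy network of the previous snippet
a = np.array([[0.0, 0.0], [1.0, 0.0]]); b = np.array([[1.0], [0.0]]); c = np.zeros(2)
print(inf_states(a, b, c, prefix=[[1]], period=[[1], [0]]))
```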

In this work, we suppose that attractors can only be of two distinct types, namely either meaningful or spurious. For instance, the type of each attractor could be determined by its topological features or by its neurophysiological significance with respect to measurable observations associated with certain behaviours or sensory discriminations (see Section “Neurophysiological Meaningfulness” above). From this point onwards, any given network is assumed to be provided with a corresponding classification of all of its attractors into meaningful and spurious types. Further discussion about the attribution of the attractors to either type will be addressed in the forthcoming sections.

An infinite input stream s of N is called meaningful if inf(e_s) is a meaningful attractor, and it is called spurious if inf(e_s) is a spurious attractor. In other words, an input stream is called meaningful (respectively spurious) if the network dynamics induced by this input stream eventually becomes confined into some meaningful (respectively spurious) attractor. Then, the set of all meaningful input streams of N is called the neural language of N and is denoted by L(N). Finally, an arbitrary set of input streams L ⊆ (B^m)^ω is said to be recognisable by some Boolean neural network if there exists a network N such that L(N) = L.

Besides, if N denotes some Boolean neural network provided with an additional specification of the type of each of its attractors, then the complementary network N^c is defined to be the same network as N yet with a completely opposite type specification of its attractors. Then, an attractor A is meaningful for N iff A is a spurious attractor for N^c, and one has L(N^c) = (B^m)^ω \ L(N). All preceding definitions are illustrated by the next Example 2.

Example 2. Let us consider the network described in Example 1 and illustrated in Figure 1. Let us further assume that the network state where the three cells x_1, x_2, x_3 simultaneously fire determines the meaningfulness of the attractors of the network. In other words, the meaningful attractors of the network are precisely those containing the state (1, 1, 1); all other attractors are assumed to be spurious.

Let us consider the periodic input stream Inline graphic and its corresponding evolution

[Evolution not reproduced.]

From some time step onwards, the evolution of the network remains confined in a cyclic visit of a fixed set of states. Hence, this set of states is an attractor of the network. Moreover, since the state (1, 1, 1) does not belong to this set, the attractor is spurious. Therefore, the input stream is also spurious, and hence does not belong to the neural language of the network.

Let us consider another periodic input stream Inline graphic and its corresponding evolution

[Evolution not reproduced.]

This set of states is an attractor, and the evolution of the network is confined in it already from the very first time step. Yet in this case, since the Boolean vector (1, 1, 1) belongs to the attractor, the attractor is meaningful. It follows that the input stream is also meaningful, and thus belongs to the neural language of the network.

ω-Automata

Büchi Automata

A finite deterministic Büchi automaton [34] is a 5-tuple A = (Q, Σ, q_0, δ, F), where Q is a finite set called the set of states, Σ is a finite alphabet, q_0 is an element of Q called the initial state, δ is a partial function from Q × Σ into Q called the transition function, and F is a subset of Q called the set of final states. A finite deterministic Büchi automaton is generally represented as a directed labelled graph whose nodes and labelled edges correspond to the states and transitions of the automaton, respectively.

Given some finite deterministic Büchi automaton A, every triple (q, a, q′) such that δ(q, a) = q′ is called a transition of A. Then, a path in A is a sequence of consecutive transitions ρ = ((p_0, a_0, p_1), (p_1, a_1, p_2), (p_2, a_2, p_3), …). The path ρ is said to successively visit the states p_0, p_1, p_2, …, and the word a_0 a_1 a_2 ⋯ is the label of ρ. The state p_0 is called the origin of the path ρ, and ρ is said to be initial if its origin is the initial state, i.e. if p_0 = q_0. If ρ is an infinite path, the set of states visited infinitely many times by ρ is denoted by inf(ρ).

An infinite initial path ρ of A is said to be successful if it visits at least one of the final states infinitely often, i.e. if inf(ρ) ∩ F ≠ ∅. An infinite word is then said to be recognised by A if it is the label of a successful infinite path in A. The language recognised by A, denoted by L(A), is the set of all infinite words recognised by A.
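
As an illustration of the Büchi acceptance condition, recognition of an eventually periodic infinite word u·v^ω by a deterministic Büchi automaton can be decided by running the automaton until the pair (state, position within v) repeats and checking whether a final state occurs along the repeating segment. The dictionary encoding of δ and the toy automaton below are illustrative choices, not taken from the paper.

```python
def buchi_accepts(delta, q0, final, prefix, period):
    """Decide whether the deterministic Buchi automaton (delta, q0, final)
    recognises the infinite word prefix . period^omega.
    delta: dict mapping (state, letter) -> state (partial: missing = run dies)."""
    q = q0
    for a in prefix:
        if (q, a) not in delta:
            return False
        q = delta[(q, a)]
    seen, visited, t = {}, [], 0
    while (q, t % len(period)) not in seen:
        seen[(q, t % len(period))] = t
        visited.append(q)
        key = (q, period[t % len(period)])
        if key not in delta:
            return False
        q = delta[key]
        t += 1
    first = seen[(q, t % len(period))]
    return any(s in final for s in visited[first:])   # does inf(run) meet F?

# toy automaton over {0, 1}: state 'A' initial, 'B' final; letter 1 leads to B, letter 0 to A
delta = {('A', 0): 'A', ('A', 1): 'B', ('B', 0): 'A', ('B', 1): 'B'}
print(buchi_accepts(delta, 'A', {'B'}, prefix=[0], period=[1]))      # True: 0 1^omega
print(buchi_accepts(delta, 'A', {'B'}, prefix=[], period=[1, 0]))    # True: B visited infinitely often
print(buchi_accepts(delta, 'A', {'B'}, prefix=[1], period=[0]))      # False: eventually stays in A
```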

A cycle in A consists of a finite set of states C such that there exists a finite path in A with the same initial and final state that visits precisely all the states of C. A cycle C′ is said to be accessible from a cycle C if there exists a path from some state of C to some state of C′. Furthermore, a cycle is called successful if it contains a state belonging to F, and non-successful otherwise.

An alternating chain of length k (respectively a co-alternating chain of length k) is a finite sequence of k distinct cycles C_1, …, C_k such that C_1 is successful (resp. C_1 is non-successful), C_{i+1} is successful iff C_i is non-successful, C_{i+1} is accessible from C_i, and C_i is not accessible from C_{i+1}, for all i < k. An alternating chain of length ω is a sequence of two cycles C, C′ such that C is successful, C′ is non-successful, C′ is accessible from C, and C is also accessible from C′ (we recall that ω denotes the least infinite ordinal). In this case, the cycles C and C′ are said to communicate. For any k ≤ ω, an alternating chain of length k is said to be maximal in A if there is no alternating chain and no co-alternating chain in A with a length strictly larger than k. A co-alternating chain of length k is said to be maximal in A if exactly the same condition holds.

The above definitions are illustrated by the Example S1 and Figure S1 in File S1.

Muller Automata

A finite deterministic Muller automaton is a 5-tuple A = (Q, Σ, q_0, δ, T), where Q, Σ, q_0, and δ are defined exactly as for deterministic Büchi automata, and T is a set of sets of states called the table of the automaton. The notions of transition and path are defined as for deterministic Büchi automata. An infinite initial path ρ of A is now called successful if inf(ρ) ∈ T. Given a finite deterministic Muller automaton A, a cycle in A is called successful if it belongs to T, and non-successful otherwise. An infinite word is then said to be recognised by A if it is the label of a successful infinite path in A, and the ω-language recognised by A, denoted by L(A), is defined as the set of all infinite words recognised by A. The class of all ω-languages recognisable by some deterministic Muller automaton is precisely the class of ω-rational languages [79].
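
The Muller acceptance condition differs from the Büchi one only in the final test: instead of requiring inf(ρ) ∩ F ≠ ∅, the whole set inf(ρ) must belong to the table. A minimal sketch for eventually periodic words, using the same encoding as the previous snippet (the automaton and table are again illustrative):

```python
def muller_accepts(delta, q0, table, prefix, period):
    """Decide whether the deterministic Muller automaton (delta, q0, table)
    recognises prefix . period^omega; table is a collection of state sets."""
    q = q0
    for a in prefix:
        q = delta[(q, a)]                      # assume a complete automaton
    seen, visited, t = {}, [], 0
    while (q, t % len(period)) not in seen:
        seen[(q, t % len(period))] = t
        visited.append(q)
        q = delta[(q, period[t % len(period)])]
        t += 1
    inf_run = frozenset(visited[seen[(q, t % len(period))]:])
    return inf_run in {frozenset(s) for s in table}

# same toy transition graph as before, but with a Muller table:
# accept exactly the words whose run visits both A and B infinitely often
delta = {('A', 0): 'A', ('A', 1): 'B', ('B', 0): 'A', ('B', 1): 'B'}
table = [{'A', 'B'}]
print(muller_accepts(delta, 'A', table, prefix=[], period=[1, 0]))   # True
print(muller_accepts(delta, 'A', table, prefix=[], period=[1]))      # False: inf = {B}
```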

It can be shown that deterministic Muller automata are strictly more powerful than deterministic Büchi automata, but have the same expressive power as non-deterministic Büchi automata, Rabin automata, Streett automata, parity automata, and non-deterministic Muller automata [81].

For each ordinal Inline graphic such that Inline graphic, we introduce the concept of an alternating tree of length Inline graphic in a deterministic Muller automaton Inline graphic, which consists of a tree-like disposition of the successful and non-successful cycles of Inline graphic induced by the ordinal Inline graphic, as illustrated in Figure 2. In order to describe this tree-like disposition, we first recall that any ordinal Inline graphic can be uniquely written in the form Inline graphic, for some Inline graphic, Inline graphic, and Inline graphic. Then, given some deterministic Muller automaton Inline graphic and some strictly positive ordinal Inline graphic, an alternating tree (respectively co-alternating tree) of length Inline graphic is a sequence of cycles of Inline graphic Inline graphic such that:

Figure 2. An alternating tree of length Inline graphic, for some ordinal Inline graphic.


Illustration of the inclusion and accessibility relations between cycles forming an alternating tree of length Inline graphic.

  1. Inline graphic is successful (respectively non-successful);

  2. Inline graphic, and Inline graphic is successful iff Inline graphic is non-successful;

  3. Inline graphic is accessible from Inline graphic, and Inline graphic is successful iff Inline graphic is non-successful;

  4. Inline graphic and Inline graphic are both accessible from Inline graphic, and each Inline graphic is successful whereas each Inline graphic is non-successful.

An alternating tree of length Inline graphic is said to be maximal in Inline graphic if there is no alternating or co-alternating tree in Inline graphic of length Inline graphic. A co-alternating tree of length Inline graphic is said to be maximal in Inline graphic if exactly the same condition holds. An alternating tree of length Inline graphic is illustrated in Figure 2.

The above definitions are illustrated by the Example S2 and Figure S2 in File S2.

Results

Hierarchical Classification of Neural Networks

Our notion of an attractor refers to a set of states such that the behaviour of the network could forever be confined into that set of states. In other words, an attractor corresponds to a cyclic behaviour of the network produced by an infinite input stream. According to these considerations, we provide a generalisation to this precise infinite input stream context of the classical equivalence result between Boolean neural networks and finite state automata [1]–[3]. More precisely, we show that, under some natural specific conditions on the specification of the type of their attractors, Boolean recurrent neural networks have the very same expressive power as deterministic Büchi automata. This equivalence result enables us to establish a hierarchical classification of neural networks by translating the Wagner classification theory from the Büchi automaton to the neural network context [35]. The obtained classification is intimately related to the attractive properties of the neural networks, and hence provides a new refined measurement of the computational power of Boolean neural networks in terms of their attractive behaviours.

Boolean Recurrent Neural Networks and Büchi Automata

We now prove that, under some natural conditions, Boolean recurrent neural networks are computationally equivalent to deterministic Büchi automata. For this purpose, we consider neural networks that include selected cells forming an output layer. The activation of the output layer communicates the output of the system to the environment.

Formally, let us consider a recurrent neural network N, as described in Definition 1, with n activation cells and m input units. In addition, let us assume that some cells chosen among the n activation cells form the output layer of the neural network, denoted by O. For graphical purposes, the activation cells of the output layer are represented as double-circled nodes in the next figures. Thus, a recurrent neural network is now defined by a tuple N = (X, U, a, b, c, O). Let us assume also that the type specification of the attractors of a network N is naturally related to its output layer as follows: an attractor A of N is considered meaningful if it contains at least one state where some output cell is spiking, i.e. if there exist a state x ∈ A and a cell x_i ∈ O such that x(i) = 1; the attractor A is considered spurious otherwise. According to these assumptions, meaningful attractors refer to the cyclic behaviours of the network that induce some response activity of the system via its output layer, whereas spurious attractors refer to the cyclic behaviours of the system that do not evoke any response at all of the output layer.
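
Under this output-layer convention, deciding whether a given attractor is meaningful reduces to a simple membership test: scan its states for at least one spiking output cell. A small sketch (the example attractors and the choice of output indices are arbitrary):

```python
def is_meaningful(attractor, output_cells):
    """An attractor (a set of Boolean state tuples) is meaningful iff some state
    in it has at least one spiking cell of the output layer."""
    return any(any(state[i] == 1 for i in output_cells) for state in attractor)

# example: a 3-cell network whose output layer is the third cell (index 2)
A1 = {(1, 0, 0), (0, 1, 0)}          # no output cell ever spikes -> spurious
A2 = {(1, 0, 0), (1, 0, 1)}          # the output cell spikes in one state -> meaningful
print(is_meaningful(A1, output_cells=[2]))   # False
print(is_meaningful(A2, output_cells=[2]))   # True
```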

It can be stated that the expressive powers of Boolean recurrent neural networks and deterministic Büchi automata are equivalent. As a first step towards this result, the following proposition shows that any Boolean recurrent neural network can be simulated by some deterministic Büchi automaton.

Proposition 1. Let N be some Boolean recurrent neural network provided with an output layer. Then there exists a deterministic Büchi automaton A_N such that L(N) = L(A_N).

Proof. Let N be some neural network given by the tuple (X, U, a, b, c, O), with |X| = n and |U| = m. Consider the deterministic Büchi automaton A_N = (Q, Σ, q_0, δ, F), where Q = B^n, Σ = B^m, q_0 is the n-dimensional zero vector, F = {x ∈ B^n : x(i) = 1 for some x_i ∈ O}, and δ is the function defined by δ(x, u) = x′ iff x′ = σ(a·x + b·u + c). Note that the complexity of the transformation is exponential, since |Q| = 2^n and |Σ| = 2^m.

According to this construction, any infinite evolution e_s of N naturally induces a corresponding infinite initial path ρ_s in A_N. Moreover, by the definitions of meaningful and spurious attractors of N, an infinite input stream s is meaningful for N iff s is recognised by A_N. In other words, s ∈ L(N) iff s ∈ L(A_N), and therefore L(N) = L(A_N).

According to the construction given in the proof of Proposition 1, any infinite evolution of the network N is naturally associated with a corresponding infinite initial path in the automaton A_N, and conversely, any infinite initial path in A_N corresponds to some possible infinite evolution of N. Consequently, there is a one-to-one correspondence between the attractors of the network N and the cycles in the graph of the corresponding Büchi automaton A_N. As a result, a procedure to compute all possible attractors of a given network N is obtained by firstly constructing the corresponding deterministic Büchi automaton A_N and secondly listing all cycles in the graph of A_N.
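
The construction of Proposition 1 can be carried out mechanically: enumerate the Boolean states reachable from the zero state as automaton states, label the transitions by the 2^m possible input vectors, and mark as final the states in which some output cell spikes. The sketch below follows this recipe; the step function and the two-cell toy network are the same illustrative ones used earlier.

```python
import numpy as np
from itertools import product

def heaviside(v):
    return (v >= 1).astype(int)

def step(a, b, c, x, u):
    return heaviside(a @ x + b @ u + c)

def network_to_buchi(a, b, c, output_cells):
    """Deterministic Buchi automaton A_N of Proposition 1: states = Boolean state
    vectors reachable from the zero state, alphabet = Boolean input vectors,
    final states = states in which some output cell spikes."""
    n, m = a.shape[0], b.shape[1]
    alphabet = [np.array(u) for u in product([0, 1], repeat=m)]
    init = tuple([0] * n)
    states, frontier, delta = {init}, [init], {}
    while frontier:
        x = frontier.pop()
        for u in alphabet:
            nxt = tuple(int(v) for v in step(a, b, c, np.array(x), u))
            delta[(x, tuple(u))] = nxt
            if nxt not in states:
                states.add(nxt)
                frontier.append(nxt)
    final = {x for x in states if any(x[i] == 1 for i in output_cells)}
    return states, delta, init, final

# toy network from the earlier snippets, output layer = cell 1
a = np.array([[0.0, 0.0], [1.0, 0.0]]); b = np.array([[1.0], [0.0]]); c = np.zeros(2)
states, delta, init, final = network_to_buchi(a, b, c, output_cells=[1])
print(len(states), "reachable states,", len(final), "final")
```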

As a second step towards the equivalence result, we prove now that any deterministic Büchi automaton can be simulated by some Boolean recurrent neural network.

Proposition 2. Let A be some deterministic Büchi automaton over the alphabet B^m, with m > 0. Then there exists a Boolean recurrent neural network N_A provided with an output layer such that L(A) = L(N_A).

Proof. Let A = (Q, B^m, q_0, δ, F) be some deterministic Büchi automaton over the alphabet B^m, with Q = {q_0, …, q_{k−1}}. Consider the network N_A whose cells are given as follows: firstly, the set of activation cells is decomposed into a set of 2^m “letter cells” (one per letter of the alphabet), a “delay-cell”, and a set of k “state cells” (one per state of A); secondly, the set of m input units is U = {u_1, …, u_m}; and thirdly, the output layer consists of the “state cells” associated with the final states of F. The idea of the simulation is that the “letter cells” and “state cells” of the network N_A simulate the letters and states currently read and entered by the automaton A, respectively.

Towards this purpose, the weight matrices Inline graphic, Inline graphic, and Inline graphic are described as follows. Concerning the matrix Inline graphic: for any Inline graphic, we consider the binary decomposition of Inline graphic, namely Inline graphic, with Inline graphic, and for any Inline graphic, we set the weight Inline graphic; for all other Inline graphic, we set Inline graphic, for any Inline graphic. Concerning the matrix Inline graphic: for any Inline graphic, we set Inline graphic; we also set Inline graphic; for all other Inline graphic, we set Inline graphic. Concerning the matrix Inline graphic: we set Inline graphic, and for any Inline graphic and any Inline graphic, we set Inline graphic iff Inline graphic is a transition of Inline graphic; otherwise, for any pair of indices Inline graphic such that Inline graphic has not been set to Inline graphic or Inline graphic, we set Inline graphic. This construction is illustrated in Figure 3.

Figure 3. The network N_A described in the proof of Proposition 2.


The network is characterised by a set of Inline graphic input cells Inline graphic reading the alphabet Inline graphic, Inline graphic “letter cells” Inline graphic, a “delay-cell” Inline graphic, and a set of Inline graphic “state cells” Inline graphic. The idea of the simulation is that the “letter cells” and “state cells” of the network Inline graphic simulate the letters and states currently read and entered by the automaton Inline graphic, respectively. In this illustration, we assume that the binary decomposition of Inline graphic is given by Inline graphic, so that the “letter cell” Inline graphic receives synaptic connections of intensities Inline graphic and Inline graphic from input cells Inline graphic and Inline graphic, respectively, and it receives synaptic connections of intensities Inline graphic from any other input cells. Consequently, the “letter cell” Inline graphic becomes active at time Inline graphic iff the sole input cells Inline graphic and Inline graphic are active at time Inline graphic. The synaptic connections to other “letter cells” are not illustrated. Moreover, the synaptic connections Inline graphic model the transition Inline graphic of automaton Inline graphic. The synaptic connections modelling other transitions are not illustrated.

According to this construction, if we let Inline graphic denote the boolean vector whose components are the Inline graphic's (for Inline graphic), one has that the “letter cell” Inline graphic will spike at time Inline graphic iff the input vector Inline graphic is received at time Inline graphic. Moreover, at every time step Inline graphic, a unique “letter cell” Inline graphic and “state cell” Inline graphic are spiking, and, if Inline graphic performs the transition Inline graphic at time Inline graphic, then network Inline graphic evokes the spiking pattern Inline graphic. The relation between the final states Inline graphic of Inline graphic and the output layer Inline graphic of Inline graphic ensures that any infinite input stream Inline graphic is recognised by Inline graphic if and only if Inline graphic is meaningful for Inline graphic. Therefore, Inline graphic.

The proof of Proposition 2 can be generalised to any network dynamics driven by unate local transition functions, rather than by the threshold local transition functions defined by Equation (1). Since unate functions are a generalisation of threshold functions, this construction may be of interest in the broader context of switching theory.
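
To make the simulation direction of Proposition 2 concrete, the following sketch builds explicit rational weight matrices for a Boolean network that tracks a complete deterministic Büchi automaton over the alphabet B^m. It is a simplified variant using one “transition cell” per (state, letter) pair plus a latch cell, rather than the letter-cell/state-cell/delay-cell layout and the exact weights of the proof above; the cell names and the toy automaton are illustrative.

```python
from fractions import Fraction as Frac
from itertools import product

def heaviside(v):
    return [1 if s >= 1 else 0 for s in v]

def matvec(mat, vec):
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in mat]

def step(a, b, c, x, u):
    ax, bu = matvec(a, x), matvec(b, u)
    return heaviside([ax[i] + bu[i] + c[i] for i in range(len(c))])

def buchi_to_network(Q, m, delta, q0, F_states):
    """Weight matrices (a, b, c) of a Boolean network simulating a complete
    deterministic Buchi automaton over B^m, with exact rational weights.
    Simplified variant: one "transition cell" per (state, letter) pair plus a latch."""
    letters = list(product((0, 1), repeat=m))
    pairs = [(q, l) for q in Q for l in letters]
    idx = {p: i for i, p in enumerate(pairs)}
    D = len(pairs)                          # latch cell: fires from time 1 onwards
    n = D + 1
    a = [[Frac(0)] * n for _ in range(n)]
    b = [[Frac(0)] * m for _ in range(n)]
    c = [Frac(0)] * n
    c[D] = Frac(1)
    for (q, l), i in idx.items():
        ones = sum(l)
        for j in range(m):                  # the cell matches the letter l exactly
            b[i][j] = Frac(1, 2 * ones) if l[j] == 1 else Frac(-1)
        if ones == 0:
            c[i] += Frac(1, 2)              # the all-zero letter is matched via background
        if q == q0:                         # initial state: background, cancelled by the latch
            c[i] += Frac(1, 2)
            a[i][D] = Frac(-1, 2)
        for (q2, l2), i2 in idx.items():    # weight 1/2 from every transition cell targeting q
            if delta[(q2, l2)] == q:
                a[i][i2] = Frac(1, 2)
    output = [i for (q, l), i in idx.items() if delta[(q, l)] in F_states]
    return a, b, c, output

# toy automaton over B^1: state B is final, letter (1,) leads to B, letter (0,) to A
delta = {('A', (0,)): 'A', ('A', (1,)): 'B', ('B', (0,)): 'A', ('B', (1,)): 'B'}
a, b, c, out = buchi_to_network(['A', 'B'], 1, delta, 'A', {'B'})

x = [0] * len(c)
for t, u in enumerate([(1,), (0,), (1,), (0,)]):     # feed the input stream 1 0 1 0 ...
    x = step(a, b, c, x, u)
    print("t =", t + 1, "firing transition cells:", [i for i in range(len(c) - 1) if x[i]])
# cell 1 encodes the transition (A, 1) -> B and cell 2 the transition (B, 0) -> A,
# so the printout mirrors the run of the automaton on the word (10)^omega.
```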

Propositions 1 and 2 yield the following equivalence between recurrent neural networks and deterministic Büchi automata.

Theorem 1. Let L ⊆ (B^m)^ω for some m > 0. Then L is recognisable by some Boolean recurrent neural network provided with an output layer iff L is recognisable by some deterministic Büchi automaton.

Proof. Proposition 1 shows that every language recognisable by some Boolean recurrent neural network is also recognisable by some deterministic Büchi automaton. Conversely, Proposition 2 shows that every language recognisable by some deterministic Büchi automaton is also recognisable by some Boolean recurrent neural network.

The two procedures given in the proofs of Propositions 1 and 2 are illustrated by the Example S3 and Figure S3 in File S3.

RNN Hierarchy

In the theory of infinite word reading machines, abstract devices are commonly classified according to the topological complexity of their underlying ω-languages (i.e., the languages of infinite words that they recognise). Such classifications provide an interesting measurement of the expressive power of various kinds of infinite word reading machines. In this context, the most refined hierarchical classification of ω-automata – or equivalently, of ω-rational languages – is the so-called Wagner hierarchy [35].

Here, this classification approach is translated from the ω-automaton to the neural network context. More precisely, according to the equivalence given by Theorem 1, the Wagner hierarchy can naturally be translated from Büchi automata to Boolean neural networks. As a result, a hierarchical classification of first-order Boolean recurrent neural networks is obtained. Interestingly, the obtained classification is tightly related to the attractive properties of the networks, and, more precisely, refers to the ability of the networks to switch between meaningful and spurious attractive behaviours along their evolutions. Hence, the obtained hierarchical classification provides a new measurement of complexity of neural networks associated with their abilities to switch between different types of attractors along their evolutions.

As a first step, the following facts and definitions need to be introduced. To begin with, for any Inline graphic, the space Inline graphic can naturally be equipped with the product topology of the discrete topology over Inline graphic. Accordingly, one can show that the basic open sets of Inline graphic are the sets of infinite sequences of Inline graphic-dimensional Boolean vectors which all begin with a same prefix, or formally, the sets of the form Inline graphic, where Inline graphic. An open set is then defined as a union of basic open sets. Moreover, as usual, a function Inline graphic is said to be continuous iff the inverse image by Inline graphic of every open set of Inline graphic is an open set of Inline graphic. Now, given two Boolean recurrent neural networks Inline graphic and Inline graphic with Inline graphic and Inline graphic input units respectively, we say that Inline graphic reduces (or Wadge reduces or continuously reduces) to Inline graphic, denoted by Inline graphic, iff there exists a continuous function Inline graphic such that, for any input stream Inline graphic, one has Inline graphic, or equivalently, such that Inline graphic [78]. Intuitively, Inline graphic iff the problem of determining whether some input stream Inline graphic belongs to the neural language of Inline graphic (i.e. whether Inline graphic is meaningful for Inline graphic) reduces via some simple function Inline graphic to the problem of knowing whether Inline graphic belongs to the neural language of Inline graphic (i.e. whether Inline graphic is meaningful for Inline graphic). The corresponding strict reduction, equivalence relation, and incomparability relation are then naturally defined by Inline graphic iff Inline graphic, as well as Inline graphic iff Inline graphic, and Inline graphic iff Inline graphic. Moreover, a network Inline graphic is called self-dual if Inline graphic; it is called non-self-dual if Inline graphic, which can be proved to be equivalent to saying that Inline graphic [78]. We recall that the network Inline graphic, as defined in Section “Formal Definitions”, corresponds to the network Inline graphic whose type specification of its attractors has been inverted. Consequently, Inline graphic does not correspond a priori to some neural network provided with an output layer. By extension, an Inline graphic-equivalence class of networks is called self-dual if all its elements are self-dual, and non-self-dual if all its elements are non-self-dual.
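
One convenient way to produce continuous functions between spaces of input streams is to compute the image letter by letter: if the first k output vectors depend only on the first k input vectors, then the preimage of every basic open set is a union of basic open sets, hence open. The sketch below applies such a letterwise map to an eventually periodic stream; the particular map, which pads 1-dimensional input vectors into 2-dimensional ones, is a purely illustrative choice.

```python
def letterwise_image(f_letter, prefix, period):
    """Image of the eventually periodic stream prefix . period^omega under a map
    computed letter by letter; the image is f(prefix) . f(period)^omega.
    Any such map is continuous: its first k output letters depend only on the
    first k input letters, so preimages of basic open sets are open."""
    return [f_letter(a) for a in prefix], [f_letter(a) for a in period]

# illustrative letterwise map from 1-dimensional to 2-dimensional input vectors
pad = lambda a: (a[0], 0)
print(letterwise_image(pad, prefix=[(1,)], period=[(0,), (1,)]))
# ([(1, 0)], [(0, 0), (1, 0)])
```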

The continuous reduction relation over the class of Boolean recurrent neural networks naturally induces a hierarchical classification of networks formally defined as follows:

Definition 3. The collection of all Boolean recurrent neural networks ordered by the reduction “Inline graphic” is called the RNN hierarchy.

We now provide a precise description of the RNN hierarchy. The result is obtained by drawing a parallel between the RNN hierarchy and the restriction of the Wagner hierarchy to Büchi automata. For this purpose, let us define the DBA hierarchy to be the collection of all deterministic Büchi automata over multidimensional Boolean alphabets Inline graphic ordered by the continuous reduction relation “Inline graphic”. More precisely, given two deterministic Büchi automata Inline graphic and Inline graphic, we set Inline graphic iff there exists a continuous function Inline graphic such that, for any input stream Inline graphic, one has Inline graphic. The following result shows that the RNN hierarchy and the DBA hierarchy are actually isomorphic. Moreover, a possible isomorphism is given by the mapping described in Proposition 1 which associates to every network Inline graphic a corresponding deterministic Büchi automaton Inline graphic.

Proposition 3. The RNN hierarchy and the DBA hierarchy are isomorphic.

Proof. Consider the mapping described in Proposition 1 which associates to every network Inline graphic a corresponding deterministic automaton Inline graphic. We prove that this mapping is an embedding from the RNN hierarchy into the DBA hierarchy. Let Inline graphic and Inline graphic be any two networks, and let Inline graphic and Inline graphic be their corresponding deterministic Büchi automata. Proposition 1 ensures that Inline graphic and Inline graphic. Hence, one has Inline graphic iff by definition there exists a continuous function Inline graphic such that Inline graphic iff there exists a continuous function Inline graphic such that Inline graphic, iff by definition Inline graphic. Therefore Inline graphic iff Inline graphic. It follows that Inline graphic iff Inline graphic, proving that the considered mapping is an embedding. We now show that, up to the continuous equivalence relation “Inline graphic”, this mapping is also onto. Let Inline graphic be some deterministic Büchi automaton. By Proposition 2, there exists a network Inline graphic such that Inline graphic. Moreover, by Proposition 1, the automaton Inline graphic satisfies Inline graphic. It follows that for any infinite input stream Inline graphic, one has Inline graphic iff Inline graphic, meaning that both Inline graphic and Inline graphic hold, and thus Inline graphic. Therefore, for any deterministic Büchi automaton Inline graphic, there exists a neural network Inline graphic such that Inline graphic, meaning precisely that up to the continuous equivalence relation “Inline graphic”, the mapping Inline graphic described in Proposition 1 is onto. This concludes the proof.

By Proposition 3 and the usual results on the DBA hierarchy, a precise description of the RNN hierarchy can be given. First of all, the RNN hierarchy is well-founded, i.e. there is no infinite strictly descending sequence of networks Inline graphic. Moreover, the maximal strict chains in the RNN hierarchy have length ω + 1, meaning that the RNN hierarchy has a height of ω + 1. (A strict chain of length Inline graphic in the RNN hierarchy is a sequence of neural networks Inline graphic such that Inline graphic iff Inline graphic; a strict chain is said to be maximal if its length is at least as large as the length of every other strict chain.) Furthermore, the maximal antichains of the RNN hierarchy have length 2, meaning that the RNN hierarchy has a width of 2. (An antichain of length Inline graphic in the RNN hierarchy is a sequence of neural networks Inline graphic such that Inline graphic for all Inline graphic with Inline graphic; an antichain is said to be maximal if its length is at least as large as the length of every other antichain.) More precisely, it can be shown that incomparable networks are equivalent (for the relation Inline graphic) up to complementation, i.e., for any two networks Inline graphic and Inline graphic, one has Inline graphic iff Inline graphic and Inline graphic are non-self-dual and Inline graphic. These properties imply that, up to equivalence and complementation, the RNN hierarchy is actually a well-ordering. In fact, the RNN hierarchy consists of an infinite alternating succession of pairs of non-self-dual and single self-dual classes, overhung by an additional single non-self-dual class at the first limit level ω, as illustrated in Figure 4.

Figure 4. The RNN hierarchy.


An infinite alternating succession of pairs of non-self-dual classes of networks followed by single self-dual classes of networks, all of them overhung by an additional single non-self-dual class at the first limit level. Circles represent the equivalence classes of networks (with respect to the relation “Inline graphic”) and arrows between circles represent the strict reduction “Inline graphic” between all elements of the corresponding classes.

For convenience reasons, the degree of a network Inline graphic in the RNN hierarchy is defined such that the same degree is shared by both non-self-dual networks at some level and self-dual networks located just one level higher, as illustrated in Figure 4:

[Formal definition of the degree not reproduced.]

Moreover, the equivalence between the DBA and RNN hierarchies ensures that the RNN hierarchy is actually decidable, in the sense that there exists an algorithmic procedure which is able to compute the degree of any network in the RNN hierarchy. All the above properties of the RNN hierarchy are summarised in the following result.

Theorem 2. The RNN hierarchy is a decidable pre-well-ordering of width 2 and height ω + 1.

Proof. The DBA hierarchy consists of a decidable pre-well-ordering of width 2 and height ω + 1 [79]. Proposition 3 ensures that the RNN hierarchy and the DBA hierarchy are isomorphic.

The following result provides a detailed description of the decidability procedure of the RNN hierarchy. More precisely, it is shown that the degree of a network Inline graphic in the RNN hierarchy corresponds precisely to the maximal number of times that this network might switch between visits of meaningful and spurious attractors along some evolution.

Theorem 3. Let N be some network provided with an additional specification of an output layer, let A_N be the corresponding deterministic Büchi automaton of N, and let k > 0.

  • If there exists in A_N a maximal alternating chain of length k and no maximal co-alternating chain of length k, then the degree of N equals k and N is non-self-dual.

  • Symmetrically, if there exists in A_N a maximal co-alternating chain of length k but no maximal alternating chain of length k, then also the degree of N equals k and N is non-self-dual.

  • If there exist in A_N both a maximal alternating chain of length k and a maximal co-alternating chain of length k, then the degree of N equals k and N is self-dual.

  • If there exists in A_N a maximal alternating chain of length ω, then the degree of N equals ω and N is non-self-dual.

Proof. By Proposition 3, the degree of a network Inline graphic in the RNN hierarchy is equal to the degree of its corresponding deterministic Büchi automaton Inline graphic in the DBA hierarchy. Moreover, the degree of a deterministic Büchi automaton in the DBA hierarchy corresponds precisely to the length of a maximal alternating or co-alternating chain contained in this automaton [79].

By Theorem 3, the decidability procedure for the degree of a neural network N in the RNN hierarchy consists firstly in translating the network N into its corresponding deterministic Büchi automaton A_N, as described in Proposition 1, and secondly in returning the ordinal corresponding to the length of a maximal alternating or co-alternating chain contained in A_N. Note that this procedure can clearly be achieved by some graph analysis of the automaton A_N, since the graph of A_N is always finite. Furthermore, since alternating and co-alternating chains are defined in terms of cycles in the graph of the automaton, then according to the one-to-one correspondence between cycles in A_N and attractors of N, it can be deduced that the complexity of a network in the RNN hierarchy is in fact directly related to the attractive properties of this network.

More precisely, it can be observed that the complexity measurement provided by the RNN hierarchy actually corresponds to the maximal number of times that a network might alternate between visits of meaningful and spurious attractors along some evolution. Indeed, the existence of a maximal alternating or co-alternating chain of length k in A_N means that every infinite initial path in A_N might alternate at most k times between visits of successful and non-successful cycles. Yet this is precisely equivalent to saying that every evolution of N can only alternate at most k times between visits of meaningful and spurious attractors before eventually becoming trapped forever by a last attractor. In this case, Theorem 3 ensures that the degree of N is equal to k. Moreover, the existence of an alternating chain of length ω in A_N is equivalent to the existence of an infinite initial path in A_N that might alternate infinitely many times between visits of two communicating cycles of opposite types. This is equivalent to saying that there exists an evolution of N that might alternate infinitely many times between visits of a meaningful and a spurious attractor. By Theorem 3, the degree of N is equal to ω in this case. Therefore, the RNN hierarchy provides a new measurement of complexity of neural networks according to their ability to maximally alternate between different types of attractors along their evolutions.
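
For small automata, the degree can be computed by brute force directly from the graph of A_N (represented here simply as a successor dictionary, ignoring edge labels, together with its set of final states): enumerate the cycles, determine which are successful, and search for the longest alternation. The function and variable names below are illustrative, the subset-based cycle enumeration is exponential and only meant for tiny examples, and the sketch returns the degree of Theorem 3 without distinguishing the self-dual from the non-self-dual case.

```python
from itertools import combinations
from functools import lru_cache

def reachable(graph, p):
    """States reachable from p (including p) in the automaton graph."""
    seen, stack = {p}, [p]
    while stack:
        v = stack.pop()
        for s in graph[v]:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

def reach_within(graph, C, p):
    """States of C reachable from p in at least one step without leaving C."""
    seen, stack = set(), [s for s in graph[p] if s in C]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(s for s in graph[v] if s in C)
    return seen

def cycles_of(graph):
    """Cycles in the sense of the text: state sets visited by some closed path."""
    cycles = []
    for r in range(1, len(graph) + 1):
        for C in map(set, combinations(graph, r)):
            if all(C <= reach_within(graph, C, p) for p in C):
                cycles.append(frozenset(C))
    return cycles

def degree(graph, final):
    """Length of a maximal (co-)alternating chain of cycles, i.e. the degree of
    Theorem 3; returns 'omega' when opposite-kind cycles communicate."""
    reach = {p: reachable(graph, p) for p in graph}
    cyc = cycles_of(graph)
    succ = [any(q in final for q in C) for C in cyc]
    acc = [[any(q in reach[p] for p in cyc[i] for q in cyc[j])
            for j in range(len(cyc))] for i in range(len(cyc))]
    for i in range(len(cyc)):
        for j in range(len(cyc)):
            if succ[i] and not succ[j] and acc[i][j] and acc[j][i]:
                return 'omega'

    @lru_cache(maxsize=None)
    def longest(i):                      # longest alternating chain starting at cycle i
        nxt = [longest(j) for j in range(len(cyc))
               if succ[j] != succ[i] and acc[i][j] and not acc[j][i]]
        return 1 + max(nxt, default=0)

    return max(longest(i) for i in range(len(cyc)))

# toy automaton graph: 1 -> 1, 2;  2 -> 2, 3;  3 -> 3; final states {1, 3}
graph = {1: [1, 2], 2: [2, 3], 3: [3]}
print(degree(graph, final={1, 3}))       # 3: successful -> non-successful -> successful cycles
```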

Finally, the decidability procedure of the RNN hierarchy is illustrated by the Example S4 in File S4.

Refined Hierarchical Classification of Neural Networks

In this section, we show that by relaxing the restrictions on the specification of the type of their attractors, the networks significantly increase their expressive power from deterministic Büchi automata up to Muller automata [80]. Hence, by translating once again the Wagner classification theory from the Muller automaton to the neural network context, another more refined hierarchical classification of Boolean neural networks can be obtained. The obtained classification is also tightly related to the attractive properties of the networks, and hence provides once again a new refined measurement of complexity of Boolean recurrent neural networks in terms of their attractive behaviours.

Boolean Recurrent Neural Networks and Muller Automata

From this point onwards, we no longer assume that the networks are provided with an additional description of an output layer, which would subsequently influence the type specifications (meaningful/spurious) of their attractors. Instead, let us assume that, for any network, the precise classification of its attractors into meaningful and spurious types is known. How the meaningfulness of the attractors is determined is not considered here. For instance, the specification of the type of each attractor might have been determined by its neurophysiological significance with respect to measurable observations associated with certain behaviours or sensory discriminations. Formally, in this section, a recurrent neural network consists of a tuple Inline graphic, as described in Definition 1, but additionally provided with a specification of each one of its attractors as meaningful or spurious.

We now prove that, by totally relaxing the restrictions on the specification of the type of their attractors, the Boolean neural networks significantly increase their expressive power from deterministic Büchi automata up to Muller automata. The following straightforward generalisation of Proposition 1 states that any such Boolean recurrent neural network can be simulated by some deterministic Muller automaton.

Proposition 4. Let Inline graphic be some Boolean recurrent neural network provided with a type specification of each of its attractors. Then there exists a deterministic Muller automaton Inline graphic such that Inline graphic .

Proof. Let Inline graphic be given by the tuple Inline graphic, with Inline graphic, Inline graphic, and let the meaningful attractors of Inline graphic be given by Inline graphic, all others being spurious. Now, consider the deterministic Muller automaton Inline graphic, where Inline graphic, Inline graphic, Inline graphic is the Inline graphic-dimensional zero vector, Inline graphic is defined by Inline graphic iff Inline graphic, and Inline graphic. According to this construction, any input stream Inline graphic is meaningful for Inline graphic iff Inline graphic is recognised by Inline graphic. In other words, Inline graphic iff Inline graphic, and therefore Inline graphic.
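As an aside, the construction above can be sketched in a few lines of Python (hypothetical names, not the authors' code). The sketch assumes a synchronous hard-threshold update rule with threshold 1 for the cells; the exact update rule of Definition 1, as well as the choice of the table (the cycles corresponding to the meaningful attractors), are as specified in the paper.

from itertools import product
import numpy as np

def step(state, inp, W, W_in, theta=1.0):
    """One synchronous Boolean update: the next activation vector."""
    drive = W.T @ np.asarray(state) + W_in.T @ np.asarray(inp)
    return tuple(int(d >= theta) for d in drive)

def muller_automaton(W, W_in):
    """States, transition function and initial state of the induced automaton."""
    n, m = W.shape[0], W_in.shape[0]
    states = list(product((0, 1), repeat=n))            # all Boolean activation vectors
    delta = {(s, u): step(s, u, W, W_in)
             for s in states for u in product((0, 1), repeat=m)}
    initial = tuple(0 for _ in range(n))                # the n-dimensional zero vector
    return states, delta, initial

# Usage on a hypothetical 2-cell network driven by a single input line.
W = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # cell 1 excites cell 2
W_in = np.array([[1.0, 0.0]])   # the input line excites cell 1
states, delta, q0 = muller_automaton(W, W_in)
print(delta[((0, 0), (1,))])    # -> (1, 0): the input drives cell 1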

Conversely, as a generalisation of Proposition 2, we can prove that any deterministic Muller automaton can be simulated by some Boolean recurrent neural network provided with a suitable type specification of its attractors.

Proposition 5. Let Inline graphic and let Inline graphic be some deterministic Muller automaton over the alphabet Inline graphic . Then there exists a Boolean recurrent neural network Inline graphic provided with a type specification of each of its attractors such that Inline graphic .

Proof. Let Inline graphic be given by the tuple Inline graphic, with Inline graphic, Inline graphic and Inline graphic. Now, consider the network Inline graphic described in the proof of Proposition 2. It remains to specify which attractors of Inline graphic are meaningful and which are spurious. As mentioned in the proof of Proposition 2, at every time step Inline graphic, only one among the “state cells” Inline graphic is spiking. Hence, for any state Inline graphic of Inline graphic that might occur at some time step Inline graphic, let Inline graphic be the index such that Inline graphic is the unique “state cell” which is spiking during state Inline graphic. An attractor Inline graphic of Inline graphic is then said to be meaningful iff Inline graphic.

Consequently, for any infinite sequence Inline graphic, the infinite path Inline graphic in Inline graphic satisfies Inline graphic iff the evolution Inline graphic in Inline graphic is such that Inline graphic is a meaningful attractor. Therefore, Inline graphic is recognised by Inline graphic iff Inline graphic is meaningful for Inline graphic, showing that Inline graphic.

Propositions 4 and 5 yield the following equivalence between Boolean recurrent neural networks and deterministic Muller automata.

Theorem 4. Let Inline graphic for some Inline graphic . Then the following conditions are equivalent:

  1. Inline graphic is recognisable by some Boolean recurrent neural network provided with a type specification of its attractors;

  2. Inline graphic is recognisable by some deterministic Muller automaton;

  3. Inline graphic is Inline graphic -rational.

Proof. The equivalence between conditions 1 and 2 is given by Propositions 4 and 5. The equivalence between conditions 2 and 3 is a well-known result of automata theory [79].

The two procedures described in the proofs of Propositions 4 and 5 are illustrated by Example S5 and Figure S4 in File S5.

Complete RNN Hierarchy

In this section, we prove that the collection of Boolean recurrent neural networks ordered by the continuous reduction corresponds to a refined hierarchical classification of height Inline graphic. This classification is directly related to the attractive properties of the networks, and therefore provides a new refined measurement of complexity of neural networks according to their attractive behaviours. This hierarchical classification is formally defined as follows.

Definition 4. The collection of all Boolean recurrent neural networks provided with a type specification of their attractors ordered by the continuous reduction “ Inline graphic ” is called the complete RNN hierarchy.

As in Section “RNN Hierarchy”, a precise characterisation of the complete RNN hierarchy can be obtained by translating the Wagner classification theory from the Muller automaton to the neural network context. For this purpose, we shall consider the collection of all deterministic Muller automata over multidimensional Boolean alphabets Inline graphic ordered by the continuous reduction “Inline graphic”. This hierarchy is commonly referred to as the Wagner hierarchy [35]. A generalisation of Proposition 3 shows that the complete RNN hierarchy and the Wagner hierarchy are isomorphic, and a possible isomorphism is also given by the mapping described in Proposition 4 which associates to every network Inline graphic a corresponding deterministic Muller automaton Inline graphic.

Proposition 6. The complete RNN hierarchy and the Wagner hierarchy are isomorphic.

Proof. Consider the mapping described in Proposition 4 which associates to every network Inline graphic a corresponding deterministic Muller automaton Inline graphic. A similar reasoning as the one presented in the proof of Proposition 3 shows that this mapping is an isomorphism between the complete RNN hierarchy and the Wagner hierarchy.

By Proposition 6 and the usual results on the Wagner hierarchy [35], the following precise description of the complete RNN hierarchy can be given. First of all, like the RNN hierarchy, the complete RNN hierarchy also consists of a pre-well-ordering of width Inline graphic, and any two networks Inline graphic and Inline graphic satisfy the incomparability relation Inline graphic iff Inline graphic and Inline graphic are non-self-dual networks such that Inline graphic. However, while the RNN hierarchy has only height Inline graphic, the complete RNN hierarchy shows a height of Inline graphic levels. In fact, the complete RNN hierarchy consists of an infinite alternating succession of pairs of non-self-dual and single self-dual classes, with non-self-dual classes at each limit level, as illustrated in Figure 5. For convenience, the degree Inline graphic of a network Inline graphic in the complete RNN hierarchy is also defined such that the same degree is shared by both non-self-dual networks at some level and self-dual networks located just one level higher.


Besides, the isomorphism between the Wagner hierarchy and the complete RNN hierarchy ensures that the complete RNN hierarchy is actually decidable, in the sense that there exists an algorithmic procedure for computing the degree of any network in the complete RNN hierarchy. The following theorem summarises the properties of the complete RNN hierarchy.

Figure 5. The complete RNN hierarchy.


A transfinite alternating succession of pairs of non-self-dual classes of networks followed by single self-dual classes of networks, with non-self-dual classes at each limit level.

Theorem 5. The complete RNN hierarchy is a decidable pre-well-ordering of width Inline graphic and height Inline graphic .

Proof. The Wagner hierarchy consists of a decidable pre-well-ordering of width Inline graphic and height Inline graphic [35]. Proposition 6 ensures that the complete RNN hierarchy and the Wagner hierarchy are isomorphic.

The following result provides a detailed description of the decidability procedure of the complete RNN hierarchy. More precisely, it is shown that the degree of a network Inline graphic in the complete RNN hierarchy corresponds precisely to the largest ordinal Inline graphic such that there exists an alternating tree or a co-alternating tree of length Inline graphic in the deterministic Muller automaton Inline graphic.

Theorem 6. Let Inline graphic be some Boolean recurrent network provided with a type specification of its attractors, Inline graphic be the corresponding deterministic Muller automaton of Inline graphic , and Inline graphic be an ordinal such that Inline graphic .

  • If there exists in Inline graphic a maximal alternating tree of length Inline graphic and no maximal co-alternating tree of length Inline graphic , then Inline graphic and Inline graphic is non-self-dual.

  • If there exists in Inline graphic a maximal co-alternating tree of length Inline graphic and no maximal alternating tree of length Inline graphic , then Inline graphic and Inline graphic is non-self-dual.

  • If there exist in Inline graphic both a maximal alternating tree as well as a maximal co-alternating tree of length Inline graphic , then Inline graphic and Inline graphic is self-dual.

Proof. By Proposition 6, the degree of a network Inline graphic in the complete RNN hierarchy is equal to the degree of its corresponding deterministic Muller automaton Inline graphic in the Wagner hierarchy. Moreover, the degree of a deterministic Muller automaton in the Wagner hierarchy corresponds precisely to the length of a maximal alternating or co-alternating tree contained in this automaton [35], [82].

The decidability procedure of the degree of a neural network Inline graphic in the complete RNN hierarchy thus consists in first translating the network Inline graphic into its corresponding deterministic Muller automaton Inline graphic, as described in Proposition 4, and then returning the ordinal Inline graphic corresponding to the length of a maximal alternating tree, or co-alternating tree, contained in Inline graphic. Note that this procedure can be achieved by some graph analysis of the automaton Inline graphic, since the graph of Inline graphic is always finite.

By Theorem 6, the degree of a neural network Inline graphic in the complete RNN hierarchy corresponds precisely to the length of a maximal alternating, or co-alternating, tree contained in Inline graphic. Since alternating and co-alternating trees are defined in terms of cycles in the graph of the Muller automaton, and according to the biunivocal correspondence between cycles in Inline graphic and attractors of Inline graphic, it can be deduced that, like for the RNN hierarchy, the complexity of a network in the complete RNN hierarchy is also directly related to the attractive properties of this network. In fact, the complexity measurement provided by the complete RNN hierarchy refers to the maximal number of times that a network might alternate between visits of meaningful and spurious attractors along some evolution.

More precisely, the Inline graphic first levels of the complete RNN hierarchy provide a classification of the collection of networks that might switch at most finitely many times between different types of attractors along their evolutions. Indeed, by Theorem 6, for any Inline graphic, a network Inline graphic satisfies Inline graphic iff Inline graphic contains a maximal alternating, or co-alternating, tree of length Inline graphic. In other words, for any Inline graphic, a network Inline graphic satisfies Inline graphic iff Inline graphic is able to switch at most Inline graphic times between visits of different types of attractors during all its possible evolutions.

The levels of transfinite degrees provide a refined classification of the collection of networks that are able to alternate infinitely many times between different types of attractors. Indeed, according to Theorem 6, for any ordinal Inline graphic such that Inline graphic, a network Inline graphic satisfies Inline graphic iff Inline graphic contains a maximal alternating or co-alternating tree of length Inline graphic. Since Inline graphic, this implies that the graph of Inline graphic necessarily contains at least two cycles Inline graphic and Inline graphic such that Inline graphic and Inline graphic is successful iff Inline graphic is non-successful. But since Inline graphic, it follows that Inline graphic and Inline graphic are both accessible one from the other in the graph of Inline graphic. By the biunivocal correspondence between cycles and attractors, the network Inline graphic contains at least the two attractors Inline graphic and Inline graphic, and the accessibility between those ensures that the network is capable of alternating infinitely often between visits of Inline graphic and Inline graphic along some evolution. In fact, the collection of levels of transfinite degrees of the complete RNN hierarchy provides a refined classification of these potentially infinitely switching networks based on the intricacy of their underlying attractive structures (tree-like representation induced by the inclusion and accessibility relations between the attractors, as illustrated in Figure 2).

It can be noticed, according to the definition of alternating and co-alternating trees, that if some given Muller automaton contains either an alternating or a co-alternating tree of length Inline graphic in its underlying graph, then this automaton also necessarily contains in its graph both an alternating and a co-alternating tree of length Inline graphic, for all Inline graphic. Therefore, any network of the complete RNN hierarchy is capable of disclosing an attractive behaviour analogous to that of any other network of strictly smaller degree. In this precise sense, a network of the complete RNN hierarchy potentially contains in its structure all the possible attractive behaviours of all other networks of strictly smaller degree. In this framework, the concept of alternation between different types of attractors corresponds to the transient trajectories between attractor basins, a concept referred to as “itinerancy” elsewhere in the literature [51], [65][67], [83], [84].

The decidability procedure of the complete RNN hierarchy is illustrated by Example S6 in File S6.

It is worth noting that the complete RNN hierarchy can actually be seen as a proper extension of the RNN hierarchy. Indeed, the next result shows that the networks of the RNN hierarchy and the networks of the specific initial segment of length Inline graphic of the complete RNN hierarchy recognise precisely the same languages. In this precise sense, the RNN hierarchy consists of an initial segment of length Inline graphic of the complete RNN hierarchy.

Proposition 7. Let Inline graphic . Then Inline graphic is recognisable by some network Inline graphic of the RNN hierarchy iff Inline graphic is also recognisable by some network Inline graphic of the complete RNN hierarchy such that either Inline graphic or Inline graphic and Inline graphic contains a maximal co-alternating tree of length Inline graphic but no maximal alternating tree of length Inline graphic .

Proof. Given any deterministic Muller automaton Inline graphic, let the degree of Inline graphic in the Wagner hierarchy be denoted by Inline graphic. Then, the relationship between the DBA and the Wagner hierarchies ensures that Inline graphic is recognisable by some deterministic Büchi automaton iff Inline graphic is also recognisable by some deterministic Muller automaton Inline graphic such that either Inline graphic or Inline graphic and Inline graphic contains a maximal co-alternating tree of length Inline graphic but no maximal alternating tree of length Inline graphic [79]. Theorems 1 and 4 together with Proposition 6 make it possible to translate these results to the neural network context, and therefore lead to the conclusion.

We recall that the RNN hierarchy consists of the classification of networks whose attractors' type specifications are induced by the existence of an output layer, whereas the complete RNN hierarchy consists of the classification of networks whose attractors' type specifications are a priori given without any restriction at all. For any ordinal Inline graphic, the two notions of alternating chain and alternating tree of length Inline graphic coincide. Hence, by Theorem 3 and Theorem 6, the two decidability procedures of the RNN hierarchy and the complete RNN hierarchy reduce to one and the same procedure, which simply consists in computing the length of a maximal alternating or co-alternating tree contained in the underlying automata.

However, it is important to clarify the difference between the RNN hierarchy and the complete RNN hierarchy, illustrated in Figure 6. The restriction on the type specification of the attractors imposed by the existence of an output layer ensures that the networks of the RNN hierarchy will never be able to contain a maximal alternating or co-alternating tree of length strictly larger than Inline graphic in their underlying Büchi automata. Indeed, if Inline graphic and Inline graphic are two cycles in a deterministic Büchi automaton such that Inline graphic is successful and Inline graphic, then Inline graphic is necessarily also successful (since it visits the same final states as Inline graphic plus potentially some other ones), meaning that no meaningful cycle could ever be included in some spurious cycle in a deterministic Büchi automaton; consequently, the maximal number of alternations between different types of cycles that can be found in a deterministic Büchi automaton is bounded by one (a spurious cycle included in a meaningful cycle), and therefore no alternating or co-alternating trees of length strictly larger than Inline graphic can ever exist in a deterministic Büchi automaton. From this observation, it follows that the degree of any network of the RNN hierarchy is bounded by Inline graphic, meaning that the length of the RNN hierarchy cannot exceed Inline graphic, whereas the length of the complete RNN hierarchy climbs up to Inline graphic, as illustrated by Figure 6.

Figure 6. Comparison between the RNN and the complete RNN hierarchies.


The RNN hierarchy, depicted by the sequence of black classes, consists of an initial segment of length Inline graphic of the complete RNN hierarchy, which has height Inline graphic.

The “basal ganglia-thalamocortical network”

Neurobiological description

In order to illustrate the application of our method to a case study, we have considered one of the main systems of the brain involved in information processing, the basal ganglia-thalamocortical network. This network is formed by several parallel and segregated circuits involving different areas of the cerebral cortex, striatum, pallidum, thalamus, subthalamic nucleus and midbrain [85][94]. This network has been investigated for many years, in particular in relation to disorders of the motor system and of the sleep-waking cycle. The simulations were generally performed by considering the basal ganglia-thalamocortical network as a circuit composed of several interconnected areas, each area being modeled by a network of spiking neurons, and were analysed using statistical approaches based on mean-field theory [95][106].

The basal ganglia-thalamocortical network comprises several types of connections and transmitters. Based on the observation that almost all neurons of the central nervous system can be subdivided into projection neurons and interneurons, we consider the connections mediated by projection neurons, both glutamatergic excitatory projections and GABAergic inhibitory projections, as part of an information transmitting system. The local connections established by the interneurons, i.e. the connections remaining confined within a small distance from the projection neurons, are considered to form part of a regulatory system. The other connections, mainly produced by different types of projection neurons, i.e. the dopaminergic (including those from the substantia nigra pars compacta like the nigrostriatal and those from the ventromedial tegmental area), cholinergic (including those from the basal forebrain), the noradrenergic (including those from locus coeruleus), serotoninergic (including those from the dorsal raphe), histaminergic (from the tuberomamillary nucleus) and orexinergic projections (from the lateral and posterior hypothalamus) are considered to form part of a modulatory system. The three systems (information transmitting, regulatory and modulatory) have an extensive pattern of reciprocal interconnectivity at various levels that is not addressed in this paper.

A characteristic of all the circuits of the basal ganglia-thalamocortical network is a combination of “open” and “closed” loops with ascending sensory afferences reaching the thalamus and the midbrain, and with descending motor efferences from the midbrain (the tectospinal tract) and the cortex (the corticospinal tract). We assume that the encoding of a large amount of the information treated by the brain is performed by recurrent patterns of activity circulating in the information transmitting system. For this reason, we focus our attention on the complexity of the dynamics that may emerge from that system. To this purpose, we present a Boolean recurrent neural network model of the information transmitting system of the basal ganglia-thalamocortical network, illustrated by Figure 7.

Figure 7. Model of the basal ganglia-thalamocortical network.


The network consists of 9 different interconnected brain areas, each one represented by a single node in the Boolean neural network model: superior colliculus (SC), Thalamus, thalamic reticular nucleus (NRT), Cerebral Cortex, the striatopallidal and the striatonigral components of the striatum (Str), the subthalamic nucleus (STN), the external part of the pallidum (GPe), and the output nuclei of the basal ganglia formed by the GABAergic projection neurons of the intermediate part of the pallidum and of the substantia nigra pars reticulata (GPi/SNR). We also consider the inputs (IN) from the ascending sensory pathway and the motor outputs (OUT). The excitatory pathways are labeled in blue and the inhibitory ones in orange.

The pattern of connectivity corresponds to the wealth of data reported in the literature [85][94]. We assume that each brain area is formed by a neural network and that the network of brain areas corresponding to the basal ganglia-thalamocortical network can be modeled by a Boolean neural network formed by 9 nodes: superior colliculus (SC), Thalamus, thalamic reticular nucleus (NRT), Cerebral Cortex, the two functional parts (striatopallidal and the striatonigral components) of the striatum (Str), the subthalamic nucleus (STN), the external part of the pallidum (GPe), and the output nuclei of the basal ganglia formed by the GABAergic projection neurons of the intermediate part of the pallidum and of the substantia nigra pars reticulata (GPi/SNR).

We consider the ascending sensory pathway (IN), which reaches SC and the Thalamus. SC does not send other projections to the system; it sends a projection outside of the system (OUT), to the motor system. The thalamus sends excitatory connections within the system via the thalamo-pallidal, thalamo-striatal and thalamo-cortical projections. Notice that STN also receives an excitatory projection from the Thalamus. NRT receives excitatory collateral projections from both the thalamo-cortical and cortico-thalamic projections. In turn, NRT sends an inhibitory projection to the Thalamus. The Cerebral Cortex also receives an excitatory input from STN and sends corticofugal projections to the basal ganglia (striatum and STN), to the thalamus, to the midbrain and to the motor system (OUT). The only excitatory nucleus of the basal ganglia is STN, which sends projections to the Cerebral Cortex, to GPe and to GPi/SNR. In the striatum (Str), the striatopallidal neurons send inhibitory projections to GPe and the striatonigral neurons send inhibitory projections to GPi/SNR, via the so-called “direct” pathway. The pallidum (GPe) plays a paramount role because it is an inhibitory nucleus, with reciprocal connections back to the striatum and to STN and a downstream inhibitory projection to GPi/SNR via the so-called “indirect” pathway. It is interesting to notice the presence of inhibitory projections that inhibit the inhibitory nuclei within the basal ganglia, thus leading to a kind of “facilitation”, but also inhibitory projections that inhibit RTN, which is a major nucleus in regulating the activity of the thalamus. The connectivity of the Boolean model of the basal ganglia-thalamocortical network is described by the adjacency matrix of the network in Table 1.

Table 1. The adjacency matrix of the Boolean model of the basal ganglia-thalamocortical network.
Source node (rows) / Target node (columns)
Node Name SC Thalamus RTN GPi/SNr STN GPe Str-D2 Str-D1 CCortex
1 SC Inline graphic 1 Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
2 Thalamus Inline graphic Inline graphic 1 Inline graphic 1 1 1 1 1
3 RTN Inline graphic −1 Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
4 GPi/SNr −1 −1 −1 Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic
5 STN Inline graphic Inline graphic Inline graphic 2 Inline graphic 2 Inline graphic Inline graphic 2
6 GPe Inline graphic Inline graphic −1/2 −1/2 −1/2 Inline graphic −1/2 −1/2 Inline graphic
7 Str-D2 Inline graphic Inline graphic Inline graphic Inline graphic Inline graphic −1 Inline graphic Inline graphic Inline graphic
8 Str-D1 Inline graphic Inline graphic Inline graphic −1/2 Inline graphic −1/2 Inline graphic Inline graphic Inline graphic
9 Cer. Cortex 1/2 1/2 1/2 Inline graphic 1/2 Inline graphic 1/2 1/2 Inline graphic

Computation of attractor-based complexity

For the sake of simplicity, we consider that the two inputs to the basal ganglia-thalamocortical network (Figure 7) are reduced to a single input node sending projections to Thalamus and SC with synaptic weight equal to 1. We reduce our neurobiological model to a Boolean recurrent neural network Inline graphic that contains 9 activation nodes and 1 input node. Every activation node can be either active or quiet, which means 512 possible states for the network Inline graphic. Every state of Inline graphic is represented by a 9-dimensional Boolean vector describing the sequence of active and quiet activation nodes. For example, the network state Inline graphic means that the nodes #1 (SC), #3 (RTN) and #4 (GPi/SNR) are quiet, whereas every other activation node is active.

In this section, we provide a practical illustration of our new attractor-based complexity measurement applied to the simplified model of the basal ganglia-thalamocortical network. Since the behaviour of network Inline graphic is not determined by any designated output layer, the attractor-based complexity of Inline graphic will be measured with respect to the complete RNN hierarchy rather than with respect to the RNN hierarchy, as described in Section “Complete RNN Hierarchy”. According to these considerations, as mentioned in Theorem 6, the attractor-based complexity of network Inline graphic relies on the graphical structure of its corresponding deterministic Muller automaton Inline graphic. Hence, we shall now describe the structure of the deterministic Muller automaton Inline graphic associated to network Inline graphic.

Firstly, as mentioned in the proof of Proposition 4, the states of the Muller automaton Inline graphic correspond precisely to the states of network Inline graphic. Hence, the deterministic Muller automaton associated to the basal ganglia-thalamocortical network contains 512 states, numbered from 0 to 511. The numbering of the states is chosen such that state Inline graphic is numbered by Inline graphic, where Inline graphic is the decimal representation of the 9-digit binary number Inline graphic. For instance, state Inline graphic is referred to as number Inline graphic, since Inline graphic is the decimal representation of the binary number Inline graphic. Secondly, also as mentioned in the proof of Proposition 4, the transitions of the Muller automaton Inline graphic are constructed as follows: there is a transition labelled by Inline graphic (resp. by Inline graphic) from state Inline graphic to state Inline graphic if and only if network Inline graphic transits from state Inline graphic to state Inline graphic when it receives input Inline graphic (resp. Inline graphic). Hence, the deterministic Muller automaton Inline graphic contains 1024 transitions (one 0-labelled and one 1-labelled outgoing transition from each of the 512 states), among which 512 are labelled by 0 and 512 are labelled by 1. For instance, in the Muller automaton Inline graphic there is a transition labelled by Inline graphic (drawn in red in Figure 8) from state Inline graphic to state Inline graphic because network Inline graphic transits from state Inline graphic to state Inline graphic when it receives input Inline graphic. Figure 8a illustrates the graph of the deterministic Muller automaton associated to the basal ganglia-thalamocortical network.
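The bookkeeping just described can be sketched as follows (Python, hypothetical helper names, not the authors' code). The sketch assumes that node #1 (SC) is taken as the most significant bit of the 9-digit binary numbering, and that a function step9(state, inp) implementing the Boolean update of the model of Table 1 is available; that function is not reproduced here.

from itertools import product

def vec_to_num(state):
    """Decimal number of a 9-dimensional Boolean state vector."""
    return int("".join(str(b) for b in state), 2)

def num_to_vec(n):
    """Inverse of vec_to_num."""
    return tuple(int(b) for b in format(n, "09b"))

def transition_edges(step9):
    """Labelled transitions of the 512-state deterministic Muller automaton."""
    edges = []
    for state in product((0, 1), repeat=9):
        for inp in (0, 1):
            edges.append((vec_to_num(state), inp, vec_to_num(step9(state, inp))))
    return edges    # 1024 labelled edges: one per (state, input) pair

print(vec_to_num((1, 1, 0, 0, 0, 0, 0, 0, 0)))   # -> 384
print(num_to_vec(159))   # -> (0, 1, 0, 0, 1, 1, 1, 1, 1): SC, RTN, GPi/SNR quiet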

Figure 8. Deterministic Muller automaton based on the “basal ganglia-thalamocortical” network of Figure 7.


a. The graph of the automaton Inline graphic associated to network Inline graphic contains 512 states and 1024 directed transitions. The colours of the transitions represent their labels: green for label Inline graphic and red for label Inline graphic. For the sake of readability, the directions of the transitions have been removed. The states and transitions of the strongly connected component Inline graphic of Inline graphic have been pulled out of the central graph and drawn in larger font. b. The graph of the strongly connected component Inline graphic of Inline graphic. Every state and transition of Inline graphic that does not belong to Inline graphic has been erased. The directions of the transitions are indicated by the arrowheads.

An analysis of the graph of the automaton Inline graphic reveals that it contains only one strongly connected component Inline graphic given by the states Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic and the transitions between those states, as illustrated in Figure 8b (we recall that a directed graph is called strongly connected if there is a path from every vertex of the graph to every other vertex). This strongly connected component Inline graphic corresponds to the subgraph of Inline graphic constituted by all states reachable from the initial state Inline graphic. In other words, any state of Inline graphic outside the strongly connected component Inline graphic cannot be reached from the initial state Inline graphic, meaning that it can never occur in the dynamics of network Inline graphic starting from initial state Inline graphic, and hence plays no role in the attractor-based complexity of network Inline graphic. In fact, the attractor-based complexity of network Inline graphic will be precisely determined by the cyclic structure of the strongly connected component Inline graphic of automaton Inline graphic.
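This reachability and strong-connectivity analysis can be carried out with standard graph routines; the sketch below (hypothetical names, not the authors' code) takes the labelled transitions of the automaton, for instance as produced by the transition_edges sketch above, and returns the states reachable from the initial state 0 together with the strongly connected components they form. For the present model, the text above reports that this reachable part coincides with the single strongly connected component discussed above, whose states are listed in Table 2.

import networkx as nx

def reachable_part(edges, initial=0):
    """States reachable from 'initial' and their strongly connected components."""
    g = nx.DiGraph()
    g.add_edges_from((src, dst) for src, _label, dst in edges)
    reachable = nx.descendants(g, initial) | {initial}
    sub = g.subgraph(reachable)
    return reachable, list(nx.strongly_connected_components(sub))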

In order to complete the description of the Muller automaton Inline graphic, it is necessary to specify its table, or in other words, to determine among all possible cycles of Inline graphic which ones are successful and which ones are non-successful. Since every cycle of Inline graphic is by definition contained in a strongly connected component of Inline graphic and since Inline graphic is the only strongly connected component of Inline graphic, it follows that all cycles of Inline graphic are necessarily contained in Inline graphic. Therefore, the specification of the table of Inline graphic amounts to the assignment of a type specification to every cycle of the strongly connected component Inline graphic. According to the biunivocal correspondence between cycles of Inline graphic and attractors of Inline graphic, this assignment procedure consists in determining the type specification (meaningful or spurious) of all possible attractors of the network Inline graphic.

In order to assign a type specification to every cycle of the strongly connected component Inline graphic, we have computed the list of all cycles starting from every state of Inline graphic, and for each cycle, we have further computed its decomposition into constitutive cycles (cycles which do not visit the same vertex twice). The results are summarised in Table 2.

Table 2. Number of cycles and constitutive cycles found for each starting state of the strongly connected component Inline graphic.
State # cycles # constitutive cycles
0 68 24
31 47 20
33 87 24
63 93 21
95 39 21
127 21 17
128 63 24
159 77 22
161 72 20
191 52 19
223 43 21
255 53 17
384 67 24
417 35 20
479 48 16
511 84 21
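The counts of constitutive cycles in Table 2 could, for instance, be reproduced along the following lines (a Python sketch with hypothetical names, not the authors' code): the constitutive cycles starting from a state are the simple cycles of the component that contain it. The full cycle counts of Table 2 also include closed paths that revisit states; the enumeration bound used for those is not specified here.

from collections import Counter
import networkx as nx

def constitutive_cycle_counts(component: nx.DiGraph):
    """Number of simple cycles (constitutive cycles) through each state."""
    counts = Counter()
    for cycle in nx.simple_cycles(component):
        for state in cycle:
            counts[state] += 1
    return counts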

Then, we have assigned a type specification to each cycle of Inline graphic according to the following neurobiological criteria. First, a constitutive cycle is considered to be spurious if it is characterised either by active SC and quiet Thalamus at the same time step or by a quiet GPi/SNR during the majority of the duration of the constitutive cycle. A constitutive cycle is meaningful otherwise. Second, a non-constitutive cycle is considered to be meaningful if it contains a majority of meaningful constitutive cycles, and spurious if it contains a majority of spurious constitutive cycles; if it contained as many meaningful as spurious constitutive cycles, its type specification was chosen to be meaningful. In order to illustrate this procedure, let us consider for example the cycles starting from state Inline graphic. Table 2 shows that there are overall Inline graphic cycles and Inline graphic constitutive cycles starting from state Inline graphic. We consider here the example of one out of the Inline graphic cycles, e.g. cycle Inline graphic (Figure 9a). This cycle can be decomposed into three constitutive cycles (Figure 9b), namely Inline graphic, Inline graphic, and Inline graphic. When state Inline graphic receives input Inline graphic, the network dynamics evolves into the constitutive cycle Inline graphic (Figure 9c), whereas if state Inline graphic receives input Inline graphic, the dynamics evolves into the constitutive cycle Inline graphic (Figure 9d). According to the aforementioned neurobiological criteria, the constitutive cycles Inline graphic and Inline graphic are spurious, whereas Inline graphic is meaningful, and therefore cycle Inline graphic is spurious.
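The classification rule just stated can be sketched as follows (hypothetical names, not the authors' code). It assumes the bit ordering introduced earlier (node #1 SC as the most significant bit, node #2 the Thalamus, node #4 GPi/SNR) and represents a constitutive cycle by the sequence of state numbers it traverses, without repeating the closing state.

def node_active(state_num, node):
    """Activity of node #node (1..9) in the state numbered state_num."""
    return (state_num >> (9 - node)) & 1

def constitutive_is_spurious(cycle):
    """Spurious iff SC is active while the Thalamus is quiet at some step,
    or GPi/SNR is quiet during the majority of the cycle's duration."""
    sc_on_thalamus_off = any(node_active(s, 1) and not node_active(s, 2) for s in cycle)
    gpi_quiet = sum(1 for s in cycle if not node_active(s, 4))
    return sc_on_thalamus_off or gpi_quiet > len(cycle) / 2

def cycle_is_meaningful(constitutive_cycles):
    """Majority vote over the constitutive cycles; ties count as meaningful."""
    spurious = sum(constitutive_is_spurious(c) for c in constitutive_cycles)
    return spurious <= len(constitutive_cycles) - spurious

# Example echoing the decomposition discussed above (closing states dropped):
parts = [(0,), (0, 384, 223, 511, 191, 63, 33), (33, 128, 95)]
print([constitutive_is_spurious(c) for c in parts])   # -> [True, False, True]
print(cycle_is_meaningful(parts))                     # -> False, i.e. spurious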

Figure 9. A cycle and its constitutive cycles.


a. Among all cycles that can be observed starting from state Inline graphic (indicated by the short arrow showing the entry point), we consider here an example, i.e. the cycle (0, 0, 384, 223, 511, 191, 63, 33, 128, 95, 33, 0). b. This cycle contains three constitutive cycles Inline graphic, (0, 384, 223, 511, 191, 63, 33, 0) and (33, 128, 95, 33), which were assigned the type specifications spurious (dotted line), meaningful (solid line), and spurious (dotted line), respectively. c. Sequence of states with graphical representation of the corresponding activated nodes of the basal ganglia-thalamocortical network for the spurious constitutive cycle Inline graphic. d. Sequence of states and activated network areas for the meaningful constitutive cycle (0, 384, 223, 511, 191, 63, 33, 0). e. Sequence of states and activated network areas for the spurious constitutive cycle (33, 128, 95, 33).

After the assignment of the type specification to every cycle, the attractor-based complexity of the network Inline graphic can be explicitly computed. More precisely, according to Theorem 6, the attractor-based degree of Inline graphic is given by the length of a maximal (co-)alternating tree contained in Inline graphic. Since Inline graphic contains only one strongly connected component Inline graphic, the maximal (co-)alternating tree of Inline graphic is necessarily contained in Inline graphic. Indeed, every cycle of Inline graphic is, being a cycle, necessarily contained in a strongly connected component of Inline graphic; hence in particular, every cycle of the maximal (co-)alternating tree is also contained in a strongly connected component of Inline graphic; yet since Inline graphic is the only strongly connected component of Inline graphic, every cycle of the maximal (co-)alternating tree is contained in Inline graphic, meaning that the maximal (co-)alternating tree itself is contained in Inline graphic.

After an exhaustive analysis of the strongly connected component Inline graphic and of all its cycles (Table 2), we observed no maximal alternating trees with length above Inline graphic. Conversely, we found Inline graphic maximal co-alternating trees of Inline graphic with length Inline graphic. For the sake of clarity, we describe one such maximal co-alternating tree: it consists of an alternating sequence of seven cycles, each included in the next, summarised in Table 3 below and illustrated in Figure 10. Notice that there is no alternation between Inline graphic and Inline graphic because both cycles C0 = (0, 0) and C1 = (0, 384, 223, 511, 63, 33, 0) are spurious. According to these results, it follows from Theorem 6 that the attractor-based complexity of network Inline graphic is Inline graphic and that Inline graphic is non-self-dual.

Table 3. A maximal co-alternating tree of Inline graphic of length Inline graphic, illustrated in Figure 10.
Name State sequence Specification type
Inline graphic Inline graphic spurious
Inline graphic Inline graphic meaningful
Inline graphic Inline graphic spurious
Inline graphic Inline graphic meaningful
Inline graphic Inline graphic spurious
Inline graphic Inline graphic meaningful
Inline graphic Inline graphic spurious
Figure 10. A maximal co-alternating tree of the deterministic Muller automaton Inline graphic.


Panels 0 to 7 illustrate the sequence of eight cycles Inline graphic, each included in the next. Cycles Inline graphic, Inline graphic, Inline graphic, Inline graphic, and Inline graphic are spurious whereas cycles Inline graphic, Inline graphic, and Inline graphic are meaningful. The sequence of cycles Inline graphic composes a maximal co-alternating tree of Inline graphic. This maximal co-alternating tree contains Inline graphic alternations between spurious and meaningful cycles, and thus has a length of Inline graphic. Therefore, the attractor-based degree of Inline graphic equals Inline graphic.
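The verification of such a nested alternating sequence, as reported in Table 3 and illustrated in Figure 10, can be sketched with a small helper (hypothetical names, not the authors' code) that takes the candidate cycles, ordered from the innermost to the outermost and each given with its assigned type, checks that they are indeed nested, and counts the type alternations (two consecutive spurious cycles, like the innermost ones discussed above, contribute no alternation). The length reported in Figure 10 is derived from this alternation count, following the paper's length convention.

def count_nested_alternations(cycles_with_types):
    """cycles_with_types: list of (frozenset of states, 'meaningful' or 'spurious'),
    ordered from the innermost to the outermost cycle."""
    alternations = 0
    for (small, t_small), (big, t_big) in zip(cycles_with_types, cycles_with_types[1:]):
        assert small <= big, "cycles must be nested"
        if t_small != t_big:
            alternations += 1
    return alternations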

Discussion

The present work revisits and extends, in the light of modern automata theory, the seminal studies by McCulloch and Pitts, Minsky and Kleene concerning the computational power of recurrent neural networks [1][3]. We present two novel attractor-based complexity measures for Boolean neural networks, and finally illustrate the application of our results to a model of the basal ganglia-thalamocortical network.

More precisely, we prove two computational equivalences between Boolean neural networks and Büchi and Muller automata, and deduce from these results two hierarchical classifications of Boolean recurrent neural networks based on their attractive behaviours. The hierarchical classifications are obtained by translating the Wagner classification theory from the Inline graphic-automaton to the neural network context. The first classification concerns the neural networks characterised by the specification of an output layer and the properties of the attractor dynamics associated with the activation of that output layer. In this case, the obtained hierarchical classification corresponds to a decidable pre-well-ordering of width Inline graphic and height Inline graphic. The second classification concerns the neural networks for which the conditions on the type specification of their attractors are totally relaxed. In this case, the resulting hierarchy is significantly richer and consists of a decidable pre-well-ordering of width Inline graphic and height Inline graphic. We prove that both hierarchical classifications are decidable and provide the decidability procedures aimed at computing the degrees of the networks in the respective hierarchies. We also show that the shorter hierarchy corresponds to an initial segment of the longer one in a precise sense. The notable result is the proof that the two hierarchical classifications are directly related to the attractive properties of the neural networks. More precisely, the degrees of the Boolean neural networks in the hierarchies correspond to the ability of the networks to maximally alternate between visits of meaningful and spurious attractors along their evolutions. The two hierarchies therefore provide two novel complexity measurements of Boolean recurrent neural networks according to their attractive potentialities. These complexity measurements represent an assessment of the computational power of Boolean neural networks in terms of the significance of their attractor dynamics.

Attractor-Based Complexity Measurement

The degree of a neural network Inline graphic in the RNN hierarchy or in the complete RNN hierarchy corresponds precisely to the length of a maximal alternating chain or alternating tree contained in the graph of its corresponding automaton Inline graphic, respectively. Since alternating chains and trees are described in terms of accessibility and inclusion relations between cycles of Inline graphic, and according to the biunivocal correspondence between cycles of Inline graphic and attractors of Inline graphic, it follows that the degree of a neural network Inline graphic corresponds precisely to some intricacy relation – accessibility and inclusion – between the set of its meaningful and spurious attractors.

In order to better explain the attractor-based complexity measurement, suppose that some network Inline graphic follows the periodic infinite evolution Inline graphic, where Inline graphic are states of Inline graphic. It follows that Inline graphic alternates infinitely often between the two cycles of states Inline graphic and Inline graphic, or equivalently, between the two attractors Inline graphic and Inline graphic. If we suppose that Inline graphic is meaningful and Inline graphic is spurious, then Inline graphic alternates infinitely often between a meaningful and a spurious attractor along the evolution Inline graphic. However, note that Inline graphic also visits infinitely often the composed attractor Inline graphic. Hence, if Inline graphic is meaningful (resp. spurious), then Inline graphic not only alternates infinitely often between a meaningful and a spurious attractor Inline graphic and Inline graphic respectively, but also visits infinitely often the third composed meaningful (resp. spurious) attractor Inline graphic.

In fact, for any infinite evolution Inline graphic, there always exists a unique such maximal attractor (maximal for the inclusion relation) that is visited infinitely often. Let us call this attractor the global attractor associated to Inline graphic. The attractor-based complexity measurement can now be understood as follows. A network Inline graphic is more complex than a network Inline graphic iff for any infinite evolution Inline graphic of Inline graphic, there exists a corresponding infinite evolution Inline graphic of Inline graphic that can be built “simultaneously” to Inline graphic (in a precise sense described below) and such that, after infinitely many time steps, the types of global attractors visited by Inline graphic and Inline graphic are the very same. In other words, a network Inline graphic is more complex than a network Inline graphic iff Inline graphic is able to mimic step by step every possible infinite evolution of Inline graphic in order to finally obtain a global attractor of the same type.
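As a small illustration (hypothetical names, not the authors' code): for a deterministic Boolean network driven by a constant input, every evolution is eventually periodic, and its global attractor in the sense just defined is simply the set of states of the periodic part. The helper below detects that set by iterating the one-step map until a state repeats; the general case of an arbitrary eventually periodic input stream is analogous.

def global_attractor(step, start, inp):
    """Set of states visited infinitely often by iterating step(., inp) from start."""
    seen, trajectory, state = {}, [], start
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, inp)
    return frozenset(trajectory[seen[state]:])   # the states of the recurring cycle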

This property can actually be precisely expressed in terms of game-theoretic considerations. Consider the game Inline graphic between networks Inline graphic and Inline graphic whose rules are the following. Both networks begin in the rest state. Network Inline graphic begins the game and Inline graphic and Inline graphic play in turn during infinitely many rounds. At every step, Inline graphic chooses a possible next state (accessible from its previous one), and Inline graphic answers by either also choosing a possible next state (accessible from its previous one), or by skipping its turn. However, Inline graphic is obliged to choose infinitely many next states during the game. After infinitely many time steps, Inline graphic and Inline graphic will have produced two infinite evolutions Inline graphic and Inline graphic, respectively. If the types of the global attractors of Inline graphic and Inline graphic are the same, Inline graphic wins the game. Otherwise, Inline graphic wins the game. One can prove that the attractor-based complexity measures of Inline graphic and Inline graphic can then be expressed as follows: the degree of Inline graphic is higher than that of Inline graphic iff Inline graphic has a winning strategy in the game Inline graphic.

In other words, a network Inline graphic is more complex than Inline graphic according to our attractor-based complexity iff Inline graphic is capable of mimicking Inline graphic in every one of its possible attractive behaviours. Two networks Inline graphic and Inline graphic are equivalent if each is capable of mimicking the other in every one of its possible attractive behaviours. Assuming that the set of all possible attractive behaviours of a network is related to its computational power, our attractor-based complexity degree therefore represents a measurement of the computational power of Boolean neural networks in terms of the significance of their attractor dynamics.

Finally, note that the degree of a neural network in the RNN hierarchy or in the complete RNN hierarchy is intimately related to the structure of this network, more precisely to its connectivity. Indeed, for any neural network Inline graphic given without any output layer or type specification of its attractors, it is possible to compute, by some graph analysis, the maximal alternating chains or alternating trees that could be contained in the graph of its corresponding automaton Inline graphic, and therefore, by Theorems 3 and 6, to know the maximal degree that this network could be able to achieve in the RNN or in the complete RNN hierarchy, if the type specification of its attractors were optimally distributed. In other words, any neural network, according to its connectivity structure, contains a potential maximal degree, which is achieved only if the set of its attractors is optimally discriminated into meaningful and spurious types. Hence, based on its connectivity, a certain network could be characterised by a high potential maximal degree, but in practice, due to a very limited discrimination (i.e., non-alternation) between its spurious and meaningful attractors, it will exhibit a low degree of network complexity.

Significance of measuring network complexity

In an application of our network complexity measurement to a model of a real brain circuit, we demonstrated that, under specific assumptions of connectivity and dynamics, the basal ganglia-thalamocortical network can be modeled by a network of degree Inline graphic. Why is it so interesting to know this degree? What kind of increased understanding of that network do we gain from that? Determining the degree of network complexity of a given network is important if we want to assess the computational power that can be achieved by that network. In other words, the degree of network complexity is a functional characteristic of a given network.

For example, a model of the basal ganglia-thalamocortical network with a complexity of degree Inline graphic is able to perform all possible computations made by a model of the same network with a complexity of degree Inline graphic and many more computations in addition. If we were able to associate certain functional states of cognitive relevance (or certain pathological conditions of clinical relevance, respectively) to an increase (or to a decrease, respectively) in network complexity, we would certainly gain a better insight into the role and the factors that modulate the operations executed by certain brain circuits.

How and why, then, could the network complexity of a model of the basal ganglia-thalamocortical network vary? The degree of complexity of a network is upper bounded by its potential maximal degree. In the next section, we discuss how control parameters can affect network dynamics and, ultimately, its complexity degree.

Control parameters of network dynamics

The central hypothesis for brain attractors is that, once activated by appropriate activity, the network behaviour is maintained by continuous reentry of activity, thus generating a high incidence of repeating firing patterns associated with underlying attractors [37], [38]. The question of whether the attractors revealed by certain patterns of activity are spurious or meaningful cannot be answered easily. Certain patterns may repeat above chance and occur transiently during the evolution of a network [23], [55] and during the transient inactivation of part of the network, as shown experimentally with thalamic firing patterns during reversible inactivation of the cerebral cortex [60], [107]. On the other hand, patterns and attractors per se may represent an epiphenomenon or a byproduct of the network dynamics, thus being classified as spurious. However, changing conditions and the association of attractors into higher-order attractors may turn a spurious into a meaningful type, and vice versa. For this reason, in the present paper, we have emphasised the importance of the specification types of the constitutive cycles and how these affect the specification type of a cycle.

The measurement of network complexity refers to the ability of a network's dynamics to maximally alternate between attractors of different types along its evolutions. This is interesting for an overall assessment of the properties of a network because it associates the computational power of that network with the significance of its attractor dynamics.

The excitatory/inhibitory balance in a neural network is the major factor affecting the dynamics of its activity [38], [109][111]. The activity of the basal ganglia-thalamocortical network is modulated by a complex set of brain structures, including the dopaminergic (including those from the substantia nigra pars compacta like the nigrostriatal and those from the ventromedial tegmental area), cholinergic (including those from the basal forebrain), the noradrenergic (including those from locus coeruleus), serotoninergic (including those from the dorsal raphe), histaminergic (from the tuberomamillary nucleus) and orexinergic nuclei (from the lateral and posterior hypothalamus) [103], [112][115]. These neuromodulators affect, among other parameters, the synaptic kinetics (i.e., the decay time of the synaptic interaction) and the cellular excitability, thus producing stable or unstable spatiotemporally organised modes of activity and rapid state switches [69], [111], [116][119]. The effect of cholinergic modulation exerted by the basal forebrain is particularly noticeable in this respect [120][122].

The possible different dynamics of a given network can be represented by an equilibrium surface where each point is determined by a network complexity associated with two (in the simplest abstraction) independent variables. Such a situation is illustrated in Figure 11 by the cusp catastrophe of catastrophe theory [108]. In our example, the two control parameters are the excitability and the synaptic kinetics. Depending on the ranges of the parameters that control the network dynamics, the network complexity may remain identical or be only slightly modified, in which case we refer to a “smooth” path on the network dynamics surface. In other cases, small changes in the parameter values may provoke rapid state switches corresponding to “sudden” changes in network complexity (e.g., see [111]).
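The paper does not spell out explicit equations for the surface of Figure 11. For orientation, the canonical cusp of catastrophe theory can be written as follows, with the two control parameters a and b playing the roles of the synaptic kinetics and the cell excitability (an illustrative identification, not one made explicitly in the text):

V_{a,b}(x) \;=\; \tfrac{1}{4}x^{4} + \tfrac{1}{2}\,a\,x^{2} + b\,x,
\qquad
\text{equilibrium surface: } \; x^{3} + a\,x + b = 0,
\qquad
\text{bifurcation set: } \; 4a^{3} + 27b^{2} = 0.

The bifurcation set is obtained by projecting the fold of the equilibrium surface onto the control plane, which is the construction referred to in the paragraph below Figure 11.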

Figure 11. Cusp catastrophe model.


We consider an example of network dynamics controlled by two independent parameters, the synaptic kinetics and the cell excitability. Divergent behaviour is accounted for since, as the dynamics moves out from the edge (point A) toward the fold, which is the starting point of separation between the upper and lower limbs, the network dynamics is forced to move towards one of the two opposing behaviours: either point B for network dynamics dominated by deterministic chaos and chaotic itinerancy, or point C for network dynamics dominated by stochastic activity.

The network dynamics surface has a singularity represented by a fold (or Riemann-Hugoniot cusp) in it. A bifurcation set is defined by the thresholds where sudden changes can occur, depending on the initial conditions, by projecting the cusp onto the control surface. The network complexity, as defined in this study, depends on the maximal (co-)alternation between spurious and meaningful attractors. In the network dynamics surface, the edge toward the fold (point A, in Figure 11) is the starting point of separation between two surfaces. One surface is the top sheet representing network dynamics with a high degree of complexity because of the presence of deterministic chaos enabling the possibility to increase the (co-)alternation by means of chaotic itinerancy (point B, in Figure 11) [66], [67], [69]. The other surface is the bottom sheet reflecting the dominance of stochastic dynamics, hence the absence of alternation (point C, in Figure 11). Hence, as the network dynamics moves out from the edge near the fold, the dynamics diverges and is forced to move toward one of the two opposing behaviours. The path that will be followed by the dynamics depends on the values of the control parameters defining the state of the neural network just prior to reaching the fold. Sudden transitions are accounted for at the edges of the fold. For example, as the stochastic dynamics moves along the surface toward the pleat, at some point a small change in control parameters may cause a sudden shift such that, after a long interval without cyclic activity, quasi-random activity develops into quasi-attractors and long cycles may suddenly appear, containing many constitutive cycles and many alternations between spurious and meaningful attractors, e.g. when thalamic activity is tuned by corticofugal activity [123], [124].

Conclusion

The present work can be extended in at least three directions. First, we expect to study the computational and dynamical complexity of neural networks induced by other bio-inspired mathematical criteria. Indeed, the approach followed in this paper provides a hierarchical classification of neural networks according to the topological complexity of their underlying neural languages, and subsequently, according to the complexity of their attractive behaviours. However, it remains to be clarified how this natural mathematical criterion could be translated into the real biological complexity of the networks. Other complexity measures might bring further insights to the global understanding of brain information processing.

Secondly, we envision describing the computational power of more biologically oriented neuronal models. For instance, first-order recurrent neural networks provided with a simple spike-timing-dependent plasticity (STDP) rule could be of interest [48], [125]–[128]. Neural networks equipped with more complex activation functions, or with dynamical equations governing the membrane dynamics, could also be relevant [129]. Important preliminary steps in this direction were made by describing the computational capabilities of static/evolving rational-weighted/analog recurrent neural networks involved in a classical as well as in a memory-active and interactive paradigm of computation [6], [11], [27], [31]–[33].
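As a purely illustrative sketch of the kind of plasticity rule mentioned above, the snippet below implements a standard pair-based STDP update with exponential time windows. The parameter values are arbitrary assumptions, and the code is not a construction taken from this paper or from references [125]–[128].

# Minimal pair-based STDP update (illustrative sketch, not the authors' model).
# A presynaptic spike preceding a postsynaptic one potentiates the synapse;
# the reverse order depresses it, with exponentially decaying windows.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # assumed learning rates
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # assumed time constants (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:      # post before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0), (60.0, 61.0)]:
    w += stdp_delta_w(t_pre, t_post)
    print(f"pair ({t_pre}, {t_post}) ms -> w = {w:.4f}")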

The third and perhaps most important extension of our study concerns the application of our new attractor-based complexity measurement to other examples of real neural networks, and the study of the effect of modulatory projections in controlling network complexity. Indeed, the parameters that control neural dynamics (e.g., excitability and synaptic kinetics) are driven by so-called modulatory projections, such as the cholinergic and serotoninergic projections.

Finally, we believe that the theoretical approach to the computational power of neural models might ultimately bring further insight into the intrinsic nature of both biological and artificial intelligence. On the one hand, the study of the computational and dynamical capabilities of brain-like models might improve the understanding of the biological features that are most relevant to brain information processing. On the other hand, foundational approaches to alternative models of computation might lead in the long term not only to relevant theoretical considerations [130], [131], but also to practical applications.

Supporting Information

File S1

Example S1, Description of a deterministic Büchi automaton, and illustration of the concept of an alternating chain. Figure S1, A deterministic Büchi automaton Inline graphic . The nodes and edges correspond to the states and transitions of Inline graphic, respectively. The node Inline graphic corresponds to the initial state, as indicated by the short input arrow. The double-circled red nodes correspond to the final states of Inline graphic. The Büchi automaton Inline graphic contains a maximal alternating chain of length Inline graphic, and a maximal co-alternating chain of length Inline graphic also.

(ZIP)

File S2

Example S2, Description of a deterministic Muller automaton, and illustration of the concept of an alternating tree. Figure S2, A Muller automaton Inline graphic . The underlying alphabet of Inline graphic is Inline graphic. The table Inline graphic represents the set of cycles of Inline graphic that are successful. All other cycles of Inline graphic are by definition non-successful. The successful and non-successful cycles are denoted in green and red, respectively. This Muller automaton Inline graphic contains a maximal alternating tree of length Inline graphic.

(ZIP)
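The Muller acceptance condition underlying Example S2 can be illustrated in a few lines of Python: a deterministic Muller automaton accepts an infinite word exactly when the set of states it visits infinitely often belongs to its table of successful cycles. The toy automaton, alphabet, and table below are our own illustrative choices and do not correspond to the automaton of Figure S2.

# Illustrative deterministic Muller automaton (not the automaton of Figure S2).
# Acceptance: the set of states visited infinitely often must belong to TABLE.
DELTA = {                       # transition function: (state, letter) -> state
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q0", ("q1", "1"): "q1",
}
TABLE = [{"q1"}, {"q0", "q1"}]  # successful cycles (sets of states)

def accepts_ultimately_periodic(prefix, period, start="q0"):
    """Run the automaton on the infinite word prefix . period^omega."""
    state = start
    for letter in prefix:
        state = DELTA[(state, letter)]
    # Iterate the period until a (state, position) pair repeats; the states
    # appended within that loop are exactly those visited infinitely often.
    seen, trace, pos = {}, [], 0
    while (state, pos) not in seen:
        seen[(state, pos)] = len(trace)
        state = DELTA[(state, period[pos])]
        trace.append(state)
        pos = (pos + 1) % len(period)
    inf_states = set(trace[seen[(state, pos)]:])
    return inf_states in TABLE

print(accepts_ultimately_periodic("0", "1"))  # infinitely many 1s: {q1} is in TABLE
print(accepts_ultimately_periodic("", "0"))   # only 0s: {q0} is not in TABLE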

File S3

Example S3, Illustration of the translation procedures described in Propositions 1 and 2. Figure S3, Panels a, b. Translation from a neural network to its corresponding deterministic Büchi automaton. a. The neural network Inline graphic of Figure 1 provided with an additional specification of an output layer Inline graphic denoted in red and double-circled. b. The deterministic Büchi automaton Inline graphic corresponding to the neural network Inline graphic of panel a. The final states are denoted in red and double-circled, and the active status of the output layer, namely cell Inline graphic, is emphasised by a bold red Inline graphic. Panels c, d. Translation from a deterministic Büchi automaton to its corresponding neural network. c. A deterministic Büchi automaton Inline graphic with three states. The initial state Inline graphic is denoted with an incoming edge. The final state Inline graphic is emphasised in red and double-circled. d. The network Inline graphic corresponding to the Büchi automaton Inline graphic. The output layer is represented by the cell Inline graphic, denoted in red and double-circled. The background activities are labeled in blue.

(ZIP)

File S4

Example S4, Illustration of the decidability procedure of the RNN hierarchy.

(ZIP)

File S5

Example S5, Illustration of the translation procedures described in Propositions 4 and 5. Figure S4, Panels a, b. Translation from a Boolean neural network provided with a type specification of its attractors to its corresponding deterministic Muller automaton. a. A neural network Inline graphic provided with an additional type specification of each of its attractors. In this case, Inline graphic contains only one meaningful attractor determined by the following set of states Inline graphic; all other ones are considered as spurious. b. The deterministic Muller automaton Inline graphic corresponding to the neural network Inline graphic of panel a. Automaton Inline graphic works over alphabet Inline graphic, contains six states, and possesses in its table Inline graphic the sole cycle Inline graphic, which corresponds to the sole meaningful attractor of Inline graphic. Panels c, d. Translation from a deterministic Muller automaton to its corresponding Boolean neural network provided with a type specification of its attractors. c. A deterministic Muller automaton Inline graphic. The automaton works over alphabet Inline graphic, has three states, and possesses the two successful cycles Inline graphic and Inline graphic, as mentioned by its table Inline graphic. d. The neural network Inline graphic corresponding to the Muller automaton Inline graphic of panel c. The network Inline graphic contains two letter cells, one delay cell, and three state cells to simulate the two possible inputs and three states of automaton Inline graphic. It has only two meaningful attractors corresponding to the two successful cycles of automaton Inline graphic.

(ZIP)
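The correspondence sketched in Example S5 between the attractors of a Boolean network and the cycles of a Muller automaton can be illustrated by enumerating, for a fixed input, the cycles into which a small Boolean network eventually settles and labelling each of them as meaningful or spurious. The two-cell network and the type specification used below are hypothetical and are not those of Figure S4.

# Illustrative enumeration of the attractors (cycles of the state graph) of a
# tiny two-cell Boolean network under a constant input; the network and the
# meaningful/spurious labelling are hypothetical, not those of Figure S4.
from itertools import product

def update(state, u):
    """Boolean update: cell 1 copies cell 2, cell 2 copies cell 1 gated by input u."""
    x1, x2 = state
    return (x2, int(u and x1))

def attractor_from(state, u):
    """Iterate the update until a state repeats; return the cycle reached."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state, u)
    return frozenset(trajectory[seen[state]:])   # the periodic part is the attractor

MEANINGFUL = {frozenset({(0, 1), (1, 0)})}       # hypothetical type specification

attractors = {attractor_from(s, u=1) for s in product((0, 1), repeat=2)}
for a in sorted(attractors, key=len):
    print(sorted(a), "->", "meaningful" if a in MEANINGFUL else "spurious")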

File S6

Example S6, Illustration of the decidability procedure of the complete RNN hierarchy.

(ZIP)

Acknowledgments

The authors thank all the reviewers for their comments, and in particular the last reviewer for his suggestions concerning the proof of Proposition 2.

Funding Statement

This work is funded by the University of Lausanne, Switzerland. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115–133. [PubMed] [Google Scholar]
  • 2.Kleene SC (1956) Representation of events in nerve nets and finite automata. In: Automata Studies, Princeton, N. J.: Princeton University Press, volume 34 of Annals of Mathematics Studies. pp. 3–42. [Google Scholar]
  • 3.Minsky ML (1967) Computation: finite and infinite machines. Upper Saddle River, NJ, USA: Prentice-Hall, Inc. [Google Scholar]
  • 4. Kremer SC (1995) On the computational power of Elman-style recurrent networks. Neural Networks, IEEE Transactions on 6: 1000–1004. [DOI] [PubMed] [Google Scholar]
  • 5. Sperduti A (1997) On the computational power of recurrent neural networks for structures. Neural Netw 10: 395–400. [Google Scholar]
  • 6. Siegelmann HT, Sontag ED (1995) On the computational power of neural nets. J Comput Syst Sci 50: 132–150. [Google Scholar]
  • 7.Hyötyniemi H (1996) Turing machines are recurrent neural networks. In: Proceedings of STeP'96. Finnish Artificial Intelligence Society, pp. 13–24. [Google Scholar]
  • 8.Neto JaPG, Siegelmann HT, Costa JF, Araujo CPS (1997) Turing universality of neural nets (revisited). In: EUROCAST '97: Proceedings of the A Selection of Papers from the 6th International Workshop on Computer Aided Systems Theory. London, UK: Springer-Verlag, pp. 361–366. [Google Scholar]
  • 9. Kilian J, Siegelmann HT (1996) The dynamic universality of sigmoidal neural networks. Inf Comput 128: 48–56. [Google Scholar]
  • 10.Neumann Jv (1958) The computer and the brain. New Haven, CT, USA: Yale University Press. [Google Scholar]
  • 11. Siegelmann HT, Sontag ED (1994) Analog computation via neural networks. Theor Comput Sci 131: 331–360. [Google Scholar]
  • 12. Balcázar JL, Gavaldà R, Siegelmann HT (1997) Computational power of neural networks: a characterization in terms of kolmogorov complexity. IEEE Transactions on Information Theory 43: 1175–1183. [Google Scholar]
  • 13. Maass W, Orponen P (1998) On the effect of analog noise in discrete-time analog computations. Neural Comput 10: 1071–1095. [Google Scholar]
  • 14. Maass W, Sontag ED (1999) Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages. Neural Comput 11: 771–782. [DOI] [PubMed] [Google Scholar]
  • 15. Fogel DB, Fogel LJ, Porto V (1990) Evolving neural networks. Biological Cybernetics 63: 487–493. [Google Scholar]
  • 16. Whitley D, Dominic S, Das R, Anderson CW (1993) Genetic reinforcement learning for neurocontrol problems. Machine Learning 13: 259–284. [Google Scholar]
  • 17. Moriarty DE, Miikkulainen R (1997) Forming neural networks through efficient and adaptive coevolution. Evolutionary Computation 5: 373–399. [DOI] [PubMed] [Google Scholar]
  • 18. Yao X, Liu Y (1997) A new evolutionary system for evolving artificial neural networks. Trans Neur Netw 8: 694–713. [DOI] [PubMed] [Google Scholar]
  • 19. Angeline PJ, Saunders GM, Pollack JB (1994) An evolutionary algorithm that constructs recurrent neural networks. Neural Networks, IEEE Transactions on 5: 54–65. [DOI] [PubMed] [Google Scholar]
  • 20. Chechik G, Meilijson I, Ruppin E (1999) Neuronal regulation: A mechanism for synaptic pruning during brain maturation. Neural Comput 11: 2061–2080. [DOI] [PubMed] [Google Scholar]
  • 21. Iglesias J, Eriksson J, Pardo B, Tomassini M, Villa A (2005) Emergence of oriented cell assemblies associated with spike-timing-dependent plasticity. Lecture Notes in Computer Science 3696: 127–132. [Google Scholar]
  • 22. Chao TC, Chen CM (2005) Learning-induced synchronization and plasticity of a developing neural network. Journal of Computational Neuroscience 19: 311–324. [DOI] [PubMed] [Google Scholar]
  • 23. Iglesias J, Villa AEP (2010) Recurrent spatiotemporal firing patterns in large spiking neural networks with ontogenetic and epigenetic processes. J Physiol Paris 104: 137–146. [DOI] [PubMed] [Google Scholar]
  • 24. Perrig S, Iglesias J, Shaposhnyk V, Chibirova O, Dutoit P, et al. (2010) Functional interactions in hierarchically organized neural networks studied with spatiotemporal firing patterns and phase-coupling frequencies. Chin J Physiol 53: 382–395. [DOI] [PubMed] [Google Scholar]
  • 25. Tetzlaff C, Okujeni S, Egert U, Wörgötter F, Butz M (2010) Self-organized criticality in developing neuronal networks. PLoS computational biology 6: e1001013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Shaposhnyk V, Villa AEP (2012) Reciprocal projections in hierarchically organized evolvable neural circuits affect EEG-like signals. Brain Res 1434: 266–276. [DOI] [PubMed] [Google Scholar]
  • 27.Cabessa J, Siegelmann HT (2011) Evolving recurrent neural networks are super-turing. In: IJCNN. IEEE, pp. 3200–3206. [Google Scholar]
  • 28. Turing AM (1936) On computable numbers, with an application to the Entscheidungsproblem. Proc London Math Soc 2: 230–265. [Google Scholar]
  • 29.van Leeuwen J, Wiedermann J (2008) How we think of computing today. In: Beckmann A, Dimitracopoulos C, Löwe B, editors, Logic and Theory of Algorithms, Springer Berlin/Heidelberg, volume 5028 of LNCS. pp. 579–593. [Google Scholar]
  • 30.Goldin D, Smolka SA, Wegner P (2006) Interactive Computation: The New Paradigm. Secaucus, NJ, USA: Springer-Verlag New York, Inc. [Google Scholar]
  • 31. Cabessa J, Villa AEP (2012) The expressive power of analog recurrent neural networks on infinite input streams. Theor Comput Sci 436: 23–34. [Google Scholar]
  • 32. Cabessa J, Siegelmann HT (2012) The computational power of interactive recurrent neural networks. Neural Computation 24: 996–1019. [DOI] [PubMed] [Google Scholar]
  • 33.Cabessa J, Villa AEP (2013) The super-turing computational power of interactive evolving recurrent neural networks. In: Mladenov V, Koprinkova-Hristova PD, Palm G, Villa AEP, Appollini B, et al., editors, ICANN. Springer, volume 8131 of Lecture Notes in Computer Science, pp. 58–65. [Google Scholar]
  • 34. Büchi JR (1966) Symposium on decision problems: On a decision method in restricted second order arithmetic. Studies in Logic and the Foundations of Mathematics 44: 1–11. [Google Scholar]
  • 35. Wagner K (1979) On ω-regular sets. Inform and Control 43: 123–177. [Google Scholar]
  • 36.Kauffman SA (1993) The origins of order: Self-organization and selection in evolution. New York: Oxford University Press. [Google Scholar]
  • 37.Abeles M (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press, first edition. [Google Scholar]
  • 38.Amit DJ (1992) Modeling brain function: The world of attractor neural networks. Cambridge University Press. [Google Scholar]
  • 39. Little WA (1974) The existence of persistent states in the brain. Mathematical biosciences 19: 101–120. [Google Scholar]
  • 40. Little WA, Shaw GL (1978) Analytical study of the memory storage capacity of a neural network. Mathematical biosciences 39: 281–290. [Google Scholar]
  • 41.Seung HS (1998) Learning continuous attractors in recurrent networks. In: Advances in Neural Information Processing Systems. MIT Press, pp. 654–660. [Google Scholar]
  • 42. Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79: 2554–2558. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Amit DJ, Tsodyks MV (1991) Quantitative study of attractor neural network retrieving at low spike rates: I. substrate–spikes, rates and neuronal gain. Network: Computation in Neural Systems 2: 259–273. [Google Scholar]
  • 44.Coolen T, Sherrington D (1993) Dynamics of Attractor Neural Networks. In: Taylor J, editor, Mathematical Approaches to Neural Networks, Elsevier, volume 51 of North-Holland Mathematical Library. pp. 293–306. doi:http://dx.doi.org/10.1016/S0924-6509(08)70041-2. Available: http://www.sciencedirect.com/science/article/pii/S0924650908700412. [Google Scholar]
  • 45. Eliasmith C (2005) A unified approach to building and controlling spiking attractor networks. Neural Comput 17: 1276–1314. [DOI] [PubMed] [Google Scholar]
  • 46. Knierim JJ, Zhang K (2012) Attractor dynamics of spatially correlated neural activity in the limbic system. Annu Rev Neurosci 35: 267–285. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Braitenberg V, Schüz A (1998) Cortex: Statistics and Geometry of Neuronal Connectivity. Berlin, Germany: Springer, 249 pp. ISBN: 3-540-63816-4. [Google Scholar]
  • 48. Iglesias J, Eriksson J, Grize F, Tomassini M, Villa AE (2005) Dynamics of pruning in simulated large-scale spiking neural networks. BioSystems 79: 11–20. [DOI] [PubMed] [Google Scholar]
  • 49. Iglesias J, Villa AEP (2008) Emergence of preferred firing sequences in large spiking neural networks during simulated neuronal development. Int J Neural Syst 18: 267–277. [DOI] [PubMed] [Google Scholar]
  • 50. Abeles M, Gerstein GL (1988) Detecting spatiotemporal firing patterns among simultaneously recorded single neurons. J Neurophysiol 60: 909–924. [DOI] [PubMed] [Google Scholar]
  • 51.Villa AEP (2000) Empirical Evidence about Temporal Structure in Multi-unit Recordings. In: Miller R, editor, Time and the brain, Amsterdam, The Netherlands: Harwood Academic, volume 3 of Conceptual Advances in Brain Research, chapter 1. pp. 1–51. [Google Scholar]
  • 52. Tetko IV, Villa AEP (2001) A pattern grouping algorithm for analysis of spatiotemporal patterns in neuronal spike trains. 1. Detection of repeated patterns. J Neurosci Meth 105: 1–14. [DOI] [PubMed] [Google Scholar]
  • 53. Villa AEP, Abeles M (1990) Evidence for spatiotemporal firing patterns within the auditory thalamus of the cat. Brain Res 509: 325–327. [DOI] [PubMed] [Google Scholar]
  • 54. Tetko IV, Villa AEP (1997) Fast combinatorial methods to estimate the probability of complex temporal patterns of spikes. Biol Cybern 76: 397–408. [Google Scholar]
  • 55. Iglesias J, Villa AEP (2007) Effect of stimulus-driven pruning on the detection of spatiotemporal patterns of activity in large neural networks. BioSystems 89: 287–293. [DOI] [PubMed] [Google Scholar]
  • 56. Iglesias J, Chibirova O, Villa A (2007) Nonlinear dynamics emerging in large scale neural networks with ontogenetic and epigenetic processes. Lecture Notes in Computer Science 4668: 579–588. [Google Scholar]
  • 57. Abeles M, Bergman H, Margalit E, Vaadia E (1993) Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. J Neurophysiol 70: 1629–1638. [DOI] [PubMed] [Google Scholar]
  • 58. Prut Y, Vaadia E, Bergman H, Slovin H, Abeles M (1998) Spatiotemporal structure of cortical activity: Properties and behavioral relevance. J Neurophysiol 79: 2857–2874. [DOI] [PubMed] [Google Scholar]
  • 59. Villa AEP, Tetko IV, Hyland B, Najem A (1999) Spatiotemporal activity patterns of rat cortical neurons predict responses in a conditioned task. Proc Natl Acad Sci U S A 96: 1106–1111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60. Tetko IV, Villa AE (2001) A pattern grouping algorithm for analysis of spatiotemporal patterns in neuronal spike trains. 2. application to simultaneous single unit recordings. J Neurosci Meth 105: 15–24. [DOI] [PubMed] [Google Scholar]
  • 61. Shmiel T, Drori R, Shmiel O, Ben-Shaul Y, Nadasdy Z, et al. (2005) Neurons of the cerebral cortex exhibit precise interspike timing in correspondence to behavior. Proc Natl Acad Sci U S A 102: 18655–18657. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Amari SI (1975) Homogeneous nets of neuron-like elements. Biol Cybern 17: 211–220. [DOI] [PubMed] [Google Scholar]
  • 63. Skarda CA, Freeman WJ (1987) How brains make chaos in order to make sense of the world. Behavioral and brain sciences 10: 161–195. [Google Scholar]
  • 64. Freeman WJ (2003) A neurobiological theory of meaning in perception Part I: Information and meaning in nonconvergent and nonlocal brain dynamics. International Journal of Bifurcation and Chaos 13: 2493–2511. [Google Scholar]
  • 65.Freeman W (1975) Mass action in the nervous system. Academic Press. [Google Scholar]
  • 66. Tsuda I, Koerner E, Shimizu H (1987) Memory dynamics in asynchronous neural networks. Prog Th Phys 78: 51–71. [Google Scholar]
  • 67.Freeman W (1990) On the problem of anomalous dispersion in chaoto-chaotic phase transitions of neural masses, and its significance for the management of perceptual information in brains. In: Haken H, Stadler M, editors, Synergetics of Cognition, Springer Berlin Heidelberg, volume 45 of Springer Series in Synergetics. pp. 126–143. [Google Scholar]
  • 68. Tsuda I (2001) Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav Brain Sci 24: 793–810. [DOI] [PubMed] [Google Scholar]
  • 69. Segundo JP (2003) Nonlinear dynamics of point process systems and data. International Journal of Bifurcation and Chaos 13: 2035–2116. [Google Scholar]
  • 70. Fujii H, Tsuda I (2004) Neocortical gap junction-coupled interneuron systems may induce chaotic behavior itinerant among quasi-attractors exhibiting transient synchrony. Neurocomputing 58: 151–157. [Google Scholar]
  • 71. Hopfield JJ, Feinstein DI, Palmer RG (1983) ‘Unlearning’ has a stabilizing effect in collective memories. Nature 304: 158–159. [DOI] [PubMed] [Google Scholar]
  • 72. Watta PB, Wang K, Hassoun MH (1997) Recurrent neural nets as dynamical boolean systems with application to associative memory. IEEE Trans Neural Netw 8: 1268–1280. [DOI] [PubMed] [Google Scholar]
  • 73. Amit DJ, Treves A (1989) Associative memory neural network with low temporal spiking rates. Proc Natl Acad Sci U S A 86: 7871–7875. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74. Griniasty M, Tsodyks MV, Amit DJ (1993) Conversion of temporal correlations between stimuli to spatial correlations between attractors. Neural Computation 5: 1–17. [Google Scholar]
  • 75. Nara S, Davis P, Totsuji H (1993) Memory search using complex dynamics in a recurrent neural network model. Neural Networks 6: 963–973. [Google Scholar]
  • 76. Sandberg A, Lansner A, Petersson KM, Ekeberg O (2002) A bayesian attractor network with incremental learning. Network 13: 179–194. [PubMed] [Google Scholar]
  • 77. Knoblauch A (2011) Neural associative memory with optimal bayesian learning. Neural Computation 23: 1393–1451. [DOI] [PubMed] [Google Scholar]
  • 78.Wadge WW (1983) Reducibility and determinateness on the Baire space. Ph.D. thesis, University of California, Berkeley. [Google Scholar]
  • 79.Perrin D, Pin JE (2004) Infinite Words, volume 141 of Pure and Applied Mathematics. Elsevier. ISBN 0-12-532111-2. [Google Scholar]
  • 80. McNaughton R (1966) Testing and generating infinite sequences by a finite automaton. Information and control 9: 521–530. [Google Scholar]
  • 81. Piterman N (2007) From nondeterministic Büchi and Streett automata to deterministic parity automata. Logical Methods in Computer Science 3: 1–21. [Google Scholar]
  • 82. Selivanov VL (1998) Fine hierarchy of regular omega-languages. Theor Comput Sci 191: 37–59. [Google Scholar]
  • 83. Tsuda I (1991) Chaotic itinerancy as a dynamical basis of hermeneutics of brain and mind. World Futures 32: 167–185. [Google Scholar]
  • 84. Kaneko K, Tsuda I (2003) Chaotic itinerancy. Chaos 13: 926–936. [DOI] [PubMed] [Google Scholar]
  • 85. Alexander GE, Crutcher MD (1990) Functional architecture of basal ganglia circuits: neural substrates of parallel processing. Trends Neurosci 13: 266–271. [DOI] [PubMed] [Google Scholar]
  • 86. Hoover JE, Strick PL (1993) Multiple output channels in the basal ganglia. Science 259: 819–821. [DOI] [PubMed] [Google Scholar]
  • 87. Asanuma C (1994) GABAergic and pallidal terminals in the thalamic reticular nucleus of squirrel monkeys. Exp Brain Res 101: 439–451. [DOI] [PubMed] [Google Scholar]
  • 88. Groenewegen HJ, Galis-de Graaf Y, Smeets WJ (1999) Integration and segregation of limbic cortico-striatal loops at the thalamic level: an experimental tracing study in rats. J Chem Neuroanat 16: 167–185. [DOI] [PubMed] [Google Scholar]
  • 89. Yasukawa T, Kita T, Xue Y, Kita H (2004) Rat intralaminar thalamic nuclei projections to the globus pallidus: a biotinylated dextran amine anterograde tracing study. J Comp Neurol 471: 153–167. [DOI] [PubMed] [Google Scholar]
  • 90. Cebrián C, Parent A, Prensa L (2005) Patterns of axonal branching of neurons of the substantia nigra pars reticulata and pars lateralis in the rat. J Comp Neurol 492: 349–369. [DOI] [PubMed] [Google Scholar]
  • 91. Degos B, Deniau JM, Le Cam J, Mailly P, Maurice N (2008) Evidence for a direct subthalamo-cortical loop circuit in the rat. Eur J Neurosci 27: 2599–2610. [DOI] [PubMed] [Google Scholar]
  • 92. Smith Y, Raju D, Nanda B, Pare JF, Galvan A, et al. (2009) The thalamostriatal systems: anatomical and functional organization in normal and parkinsonian states. Brain Res Bull 78: 60–68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93. Gandhi NJ, Katnani HA (2011) Motor functions of the superior colliculus. Annu Rev Neurosci 34: 205–231. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94. Krauzlis RJ, Lovejoy LP, Zénon A (2013) Superior colliculus and visual spatial attention. Annu Rev Neurosci 36: 165–182. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95. Terman D, Rubin JE, Yew AC, Wilson CJ (2002) Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J Neurosci 22: 2963–2976. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96. Nakahara H, Amari Si S, Hikosaka O (2002) Self-organization in the basal ganglia with modulation of reinforcement signals. Neural Comput 14: 819–844. [DOI] [PubMed] [Google Scholar]
  • 97. Rubin JE, Terman D (2004) High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. J Comput Neurosci 16: 211–235. [DOI] [PubMed] [Google Scholar]
  • 98. Jones BE (2005) From waking to sleeping: neuronal and chemical substrates. Trends Pharmacol Sci 26: 578–586. [DOI] [PubMed] [Google Scholar]
  • 99. Leblois A, Boraud T, Meissner W, Bergman H, Hansel D (2006) Competition between feedback loops underlies normal and pathological dynamics in the basal ganglia. J Neurosci 26: 3567–3583. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100. Silkis I (2007) A hypothetical role of cortico-basal ganglia-thalamocortical loops in visual processing. Biosystems 89: 227–235. [DOI] [PubMed] [Google Scholar]
  • 101. Tsujino N, Sakurai T (2009) Orexin/hypocretin: a neuropeptide at the interface of sleep, energy homeostasis, and reward system. Pharmacol Rev 61: 162–176. [DOI] [PubMed] [Google Scholar]
  • 102. van Albada SJ, Robinson PA (2009) Mean-field modeling of the basal ganglia-thalamocortical system. I Firing rates in healthy and parkinsonian states. J Theor Biol 257: 642–663. [DOI] [PubMed] [Google Scholar]
  • 103.Reinoso-Suárez F, De Andrés I, Garzón M (2011) The Sleep–Wakefulness Cycle, volume 208 of Advances in Anatomy, Embryology and Cell Biology. Berlin Heidelberg: Springer, 1–128 pp. [PubMed] [Google Scholar]
  • 104. Meijer HG, Krupa M, Cagnan H, Lourens MA, Heida T, et al. (2011) From Parkinsonian thalamic activity to restoring thalamic relay using deep brain stimulation: new insights from computational modeling. J Neural Eng 8: 066005–066005. [DOI] [PubMed] [Google Scholar]
  • 105. Kerr CC, Neymotin SA, Chadderdon GL, Fietkiewicz CT, Francis JT, et al. (2012) Electrostimulation as a prosthesis for repair of information flow in a computer model of neocortex. Neural Systems and Rehabilitation Engineering, IEEE Transactions on 20: 153–160. [DOI] [PubMed] [Google Scholar]
  • 106. Guthrie M, Leblois A, Garenne A, Boraud T (2013) Interaction between cognitive and motor cortico-basal ganglia loops during decision making: a computational study. J Neurophysiol 109: 3025–3040. [DOI] [PubMed] [Google Scholar]
  • 107. Villa AEP, Tetko IV, Dutoit P, De Ribaupierre Y, De Ribaupierre F (1999) Corticofugal modulation of functional connectivity within the auditory thalamus of rat, guinea pig and cat revealed by cooling deactivation. J Neurosci Methods 86: 161–178. [DOI] [PubMed] [Google Scholar]
  • 108.Thom R (1972) Stabilité structurelle et morphogenèse. Essai d'une théorie générale des modèles. Paris: InterÉditions. [Google Scholar]
  • 109. Douglas RJ, Martin KA, Whitteridge D (1989) A canonical microcircuit for neocortex. Neural computation 1: 480–488. [Google Scholar]
  • 110. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274: 1724–1726. [DOI] [PubMed] [Google Scholar]
  • 111. Hill S, Villa AEP (1997) Dynamic transitions in global network activity influenced by the balance of excitation and inhibition. Network: Computation in Neural Systems 8: 165–184. [Google Scholar]
  • 112. Phillis JW, Kirkpatrick JR (1980) The actions of motilin, luteinizing hormone releasing hormone, cholecystokinin, somatostatin, vasoactive intestinal peptide, and other peptides on rat cerebral cortical neurons. Can J Physiol Pharmacol 58: 612–623. [DOI] [PubMed] [Google Scholar]
  • 113.Steriade M, Jones EG, Llinás R (1990) Thalamic oscillations and signalling. New York: Wiley. [Google Scholar]
  • 114. Wright JJ (1990) Reticular activation and the dynamics of neuronal networks. Biol Cybern 62: 289–298. [DOI] [PubMed] [Google Scholar]
  • 115. Parent A, Hazrati LN (1995) Functional anatomy of the basal ganglia. i. the cortico-basal ganglia-thalamo-cortical loop. Brain Res Brain Res Rev 20: 91–127. [DOI] [PubMed] [Google Scholar]
  • 116. Fukai T, Shiino M (1990) Asymmetric neural networks incorporating the Dale hypothesis and noise-driven chaos. Phys Rev Lett 64: 1465–1468. [DOI] [PubMed] [Google Scholar]
  • 117. Tsodyks MV, Sejnowski T (1995) Rapid state switching in balanced cortical network models. Network: Computation in Neural Systems 6: 111–124. [Google Scholar]
  • 118.Taylor JG, Villa AEP (2001) The “Conscious I”: A Neuroheuristic Approach to the Mind. In: Baltimore D, Dulbecco R, Jacob F, Levi Montalcini R, editors, Frontiers of Life, Academic Press, volume III. pp. 349–270. ISBN: 0-12-077340-6. [Google Scholar]
  • 119. Kanamaru T, Sekine M (2005) Synchronized firings in the networks of class 1 excitable neurons with excitatory and inhibitory connections and their dependences on the forms of interactions. Neural Computation 17: 1315–1338. [DOI] [PubMed] [Google Scholar]
  • 120. Villa AEP, Bajo Lorenzana VM, Vantini G (1996) Nerve growth factor modulates information processing in the auditory thalamus. Brain Res Bull 39: 139–147. [DOI] [PubMed] [Google Scholar]
  • 121. Villa AEP, Tetko IV, Dutoit P, Vantini G (2000) Non-linear cortico-cortical interactions modulated by cholinergic afferences from the rat basal forebrain. Biosystems 58: 219–228. [DOI] [PubMed] [Google Scholar]
  • 122. Kanamaru T, Fujii H, Aihara K (2013) Deformation of attractor landscape via cholinergic presynaptic modulations: a computational study using a phase neuron model. PLoS One 8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123. Lien AD, Scanziani M (2013) Tuned thalamic excitation is amplified by visual cortical circuits. Nat Neurosci 16: 1315–1323. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124. Lintas A, Schwaller B, Villa AE (2013) Visual thalamocortical circuits in parvalbumin-deficient mice. Brain Res [DOI] [PubMed] [Google Scholar]
  • 125. Turova TS, Villa AEP (2007) On a phase diagram for random neural networks with embedded spike timing dependent plasticity. Biosystems 89: 280–286. [DOI] [PubMed] [Google Scholar]
  • 126. Kozloski J, Cecchi GA (2010) A theory of loop formation and elimination by spike timing-dependent plasticity. Front Neural Circuits 4: e7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 127. Waddington A, Appleby PA, De Kamps M, Cohen N (2012) Triphasic spike-timing-dependent plasticity organizes networks to produce robust sequences of neural activity. Front Comput Neurosci 6: e88. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128. Kerr RR, Burkitt AN, Thomas DA, Gilson M, Grayden DB (2013) Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs. PLoS Comput Biol 9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 129. Asai Y, Villa AEP (2012) Integration and transmission of distributed deterministic neural activity in feed-forward networks. Brain Res 1434: 17–33. [DOI] [PubMed] [Google Scholar]
  • 130. Copeland BJ (2002) Hypercomputation. Minds Mach 12: 461–502. [Google Scholar]
  • 131. Copeland BJ (2004) Hypercomputation: philosophical issues. Theor Comput Sci 317: 251–267. [Google Scholar]
