Abstract
After an apparent hiatus of roughly 30 years, we revisit a seemingly neglected subject in the theory of (one-dimensional) cellular automata: sublinear-time computation. The model considered is that of ACAs, which are language acceptors whose acceptance condition depends on the states of all cells in the automaton. We prove a time hierarchy theorem for sublinear-time ACA classes, analyze their intersection with the regular languages, and, finally, establish strict inclusions in the parallel computation classes SC and (uniform) AC. As an addendum, we introduce and investigate the concept of a decider ACA (DACA) as a candidate for a decider counterpart to (acceptor) ACAs. We show the class of languages decidable in constant time by DACAs equals the locally testable languages, and we also determine Θ(√n) as the (tight) time complexity threshold for DACAs up to which no advantage compared to constant time is possible.
Introduction
While there have been several works on linear- and real-time language recognition by cellular automata over the years (see, e.g., [14, 24] for an overview), interest in the sublinear-time case has been scanty at best. We can only speculate that this has been due to a certain obstinacy concerning what is now the established acceptance condition for cellular automata, namely that the first cell determines the automaton’s response, despite alternatives being long known [18]. Under this condition, only a constant-size prefix can ever influence the automaton’s decision, which effectively renders sublinear time trivial, just as it is, for example, for (classical) Turing machines. Nevertheless, at least in the realm of Turing machines, this shortcoming was readily circumvented by adding a random access mechanism to the model, thus sparking rich theories on parallel computation [5, 20], probabilistically checkable proofs [23], and property testing [8, 19].
In the case of cellular automata, the adaptation needed is an alternate (though by no means novel) acceptance condition, covered in Sect. 2. Interestingly, in the resulting model, called ACA, parallelism and local behavior seem to be more marked features, taking priority over cell communication and synchronization algorithms (which are the dominant themes in the linear- and real-time constructions). As mentioned above, the body of theory on sublinear-time ACAs is very small and, to the best of our knowledge, is limited to [10, 13, 21]. Ibarra et al. [10] show sublinear-time ACAs are capable of recognizing non-regular languages and also determine a threshold (namely
) up to which no advantage compared to constant time is possible. Meanwhile, Kim and McCloskey [13] and Sommerhalder and van Westrhenen [21] analyze the constant-time case subject to different acceptance conditions and characterize it in terms of the locally testable languages, a subclass of the regular languages.
Indeed, as covered in Sect. 3, the defining property of the locally testable languages, namely that words which locally appear to be the same are equivalent with respect to membership in the language at hand, effectively translates into an inherent property of acceptance by sublinear-time ACAs. In Sect. 4, we prove a time hierarchy theorem for sublinear-time ACAs and further relate the language classes they define to the regular languages and to the parallel computation classes SC and (uniform) AC. In the same section, we also obtain an improvement on a result of [10]. Finally, in Sect. 5 we consider a plausible model of ACAs as language deciders, that is, machines which must not only accept words in the target language but also explicitly reject those which are not. Section 6 concludes.
Definitions
We assume the reader is familiar with the theory of formal languages and cellular automata as well as with computational complexity theory (see, e.g., standard references [1, 6]). This section reviews basic concepts and introduces ACAs.
ℤ denotes the set of integers, ℕ₊ that of (strictly) positive integers, and ℕ₀ = ℕ₊ ∪ {0}.
is the set of functions
. For a word
over an alphabet
, w(i) is the i-th symbol of w (starting with the 0-th symbol), and
is the number of occurrences of
in w. For
,
,
, and
are the prefix, suffix and set of infixes of length k of w, respectively, where
and
for
.
is the set of words
for which
. Unless otherwise noted, n is the input length.
(Strictly) Locally Testable Languages
The class
of regular languages is defined in terms of (deterministic) automata with finite memory which read their input in a single direction (i.e., from left to right), one symbol at a time; once all symbols have been read, the machine outputs a single bit representing its decision. In contrast, a scanner is a memoryless machine which reads a span of
symbols at a time of an input provided with start and end markers (so it can handle prefixes and suffixes separately); the scanner validates every such substring it reads using the same predicate, and it accepts if and only if all these validations are successful. The languages accepted by these machines are the strictly locally testable languages.1
Definition 1
(strictly locally testable). Let
be an alphabet. A language
is strictly locally testable if there is some
and sets
and
such that, for every word
,
if and only if
,
, and
.
is the class of strictly locally testable languages.
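Definition 1 translates directly into a scanner-style membership test. The sketch below is ours; the function and set names, as well as the convention that words shorter than k act as their own prefix and suffix, are assumptions for illustration:

```python
def infixes(w, k):
    """All length-k infixes of w (just {w} itself if w is shorter than k)."""
    return {w[i:i + k] for i in range(len(w) - k + 1)} if len(w) >= k else {w}

def slt_accepts(w, k, pref, suff, inf):
    """Scanner-style membership test per Definition 1: the length-k prefix and
    suffix must be allowed, and every length-k infix must come from inf."""
    p = w[:k] if len(w) >= k else w
    s = w[-k:] if len(w) >= k else w
    return p in pref and s in suff and infixes(w, k) <= inf

# Example: the language 0^+ 1^+ is strictly locally testable with k = 2.
K, PREF, SUFF, INF = 2, {"00", "01"}, {"01", "11"}, {"00", "01", "11"}
```

For instance, `slt_accepts("0011", K, PREF, SUFF, INF)` holds, while `"0110"` is rejected because its suffix `"10"` is not among the allowed ones.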
A more general notion of locality is provided by the locally testable languages. Intuitively, L is locally testable if a word w being in L or not is entirely dependent on a property of the substrings of w of some constant length
(that depends only on L, not on w). Thus, if any two words have the same set of substrings of length k, then they are equivalent with respect to being in L:
Definition 2
(locally testable). Let
be an alphabet. A language
is locally testable if there is some
such that, for every
with
,
, and
we have that
if and only if
.
denotes the class of locally testable languages.
is the Boolean closure of
, that is, its closure under union, intersection, and complement [16]. In particular,
(i.e., the inclusion is proper [15]).
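To illustrate the equivalence in Definition 2, the data that membership in a k-locally-testable language may depend on can be collected into a "k-profile" (the name and function are ours); two words with equal profiles are then indistinguishable to any such language:

```python
def k_profile(w, k):
    """The data a k-locally-testable language may depend on: the length-k
    prefix, the length-k suffix, and the set of length-k infixes of w."""
    if len(w) < k:
        return (w, w, frozenset({w}))
    return (w[:k], w[-k:],
            frozenset(w[i:i + k] for i in range(len(w) - k + 1)))
```

For instance, `"abab"` and `"ababab"` share their 2-profile, so every locally testable language with parameter k = 2 contains either both words or neither.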
Cellular Automata
In this paper, we are strictly interested in one-dimensional cellular automata with the standard neighborhood. For
, let
denote the extended neighborhood of radius r of the cell
.
Definition 3
(cellular automaton). A cellular automaton (CA) C is a triple
where Q is a finite, non-empty set of states,
is the local transition function, and
is the input alphabet. An element of
(resp.,
) is called a local (resp., global) configuration of C.
induces the global transition function
on the configuration space
by
, where
is a cell and
.
Our interest in CAs is as machines which receive an input and process it until a final state is reached. The input is provided from left to right, with one cell for each input symbol. The surrounding cells are inactive and remain so for the entirety of the computation (i.e., the CA is bounded). It is customary for CAs to have a distinguished cell, usually cell zero, which communicates the machine’s output. As mentioned in the introduction, this convention is inadequate for computation in sublinear time; instead, we require the finality condition to depend on the entire (global) configuration (modulo inactive cells):
Definition 4
(CA computation). There is a distinguished state
, called the inactive state, which, for every
, satisfies
if and only if
. A cell not in state q is said to be active. For an input
, the initial configuration
of C for w is
for
and
otherwise. For
, a configuration
is F-final (for w) if there is a (minimal)
such that
and c contains only states in
. In this context, the sequence
is the trace of w, and
is the time complexity of C (with respect to F and w).
Because we effectively consider only bounded CAs, the computation of w involves exactly |w| active cells. The surrounding inactive cells are needed only as markers for the start and end of w. As a side effect, the initial configuration
for the empty word
is stationary (i.e.,
) regardless of the choice of
. Since this is the case only for
, we disregard it for the rest of the paper, that is, we assume it is not contained in any of the languages considered.
Finally, we relate final configurations and computation results. We adopt an acceptance condition as in [18, 21] and obtain a so-called ACA; here, the “A” of “ACA” refers to the property that all (active) cells are relevant for acceptance.
Definition 5
(ACA). An ACA is a CA C with a non-empty subset
of accept states. For
, if C reaches an A-final configuration, we say C
accepts
w. L(C) denotes the set of words accepted by C. For
, we write
for the class of languages accepted by an ACA with time complexity bounded by t, that is, for which the time complexity of accepting w is
.
is immediate for functions
with
for every
. Because Definition 5 allows multiple accept states, it is possible for each (non-accepting) state z to have a corresponding accept state
. In the rest of this paper, when we say a cell becomes (or marks itself as) accepting (without explicitly mentioning its state), we intend to say it changes from such a state z to
.
Figure 1 illustrates the computation of an ACA with input alphabet
and which accepts
with time complexity equal to one (step). The local transition function is such that
, a being the (only) accept state, and
for
and arbitrary
and
.
Fig. 1.
Computation of an ACA which recognizes
. The input words are
and
, respectively.
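The computation model of Definitions 3–5 can be simulated in a few lines. The sketch below is ours; the local rule is a toy example (not the rule from Fig. 1) under which, much as in the figure, every cell becomes accepting after one step exactly when the input belongs to 0⁺1⁺:

```python
INACTIVE = "#"  # the distinguished inactive state q

def step(config, delta):
    """One application of the global transition function: delta is applied
    to every radius-1 neighborhood; the inactive border never changes."""
    padded = [INACTIVE] + config + [INACTIVE]
    return [delta(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def aca_accepts(word, delta, accept, t_max):
    """Accept iff some configuration within t_max steps has every active
    cell in an accept state (an A-final configuration, Definition 5)."""
    config = list(word)
    for _ in range(t_max + 1):
        if config and all(c in accept for c in config):
            return True
        config = step(config, delta)
    return False

def strip_mark(s):
    return s[1:] if s.startswith("a") else s

# Toy rule for L = 0^+ 1^+ : a cell marks itself accepting ("a0"/"a1") iff
# its neighborhood could occur in a word of L ("#" acts as border marker).
OK = {("#", "0", "0"), ("#", "0", "1"), ("0", "0", "0"), ("0", "0", "1"),
      ("0", "1", "1"), ("0", "1", "#"), ("1", "1", "1"), ("1", "1", "#")}

def delta(l, s, r):
    l, s, r = strip_mark(l), strip_mark(s), strip_mark(r)
    return "a" + s if (l, s, r) in OK else s
```

With this rule, `aca_accepts("0011", delta, {"a0", "a1"}, 1)` holds, whereas `"0110"` is never accepted: the cell seeing the neighborhood `(1, 1, 0)` never enters an accept state.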
First Observations
This section recalls results on sublinear-time ACA computation (i.e.,
where
) from [10, 13, 21] and provides some additional remarks. We start with the constant-time case (i.e.,
). Here, the connection between scanners and ACAs is apparent: If an ACA accepts an input w in time
, then w can be verified by a scanner with an input span of
symbols and using the predicate induced by the local transition function of the ACA (i.e., the predicate is true if and only if the symbols read correspond to
for some cell z in the initial configuration and z is accepting after
steps).
Constant-time ACA computation has been studied in [13, 21]. Although in [13] we find a characterization based on a hierarchy over
, the acceptance condition there differs slightly from that in Definition 5; in particular, the automata there run for a number of steps which is fixed for each automaton, and the outcome is evaluated (only) in the final step. In contrast, in [21] we find the following, where
denotes the closure of
under union:
Theorem 6
([21]).
.
Thus,
is closed under union. In fact, more generally:
Proposition 7
For any
,
is closed under union.
is closed under intersection [21]. It is an open question whether
is also closed under intersection for every
.
Moving beyond constant time, in [10] we find the following:
Theorem 8
([10]). For
,
.
In [10] we find an example for a non-regular language in
which is essentially a variation of the language
{ bin_k(0) # bin_k(1) # … # bin_k(2^k − 1) : k ∈ ℕ₊ }
where bin_k(i) is the k-digit binary representation of i.
To illustrate the ideas involved, we present an example related to
(though with a different time complexity) which is also useful in later discussions in Sect. 5. Let
and consider the language
{ 1 0^{k−1} # 0 1 0^{k−2} # … # 0^{k−1} 1 : k ∈ ℕ₊ }
of all identity matrices in line-for-line representations, where the lines are separated by # symbols.2
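For reference, a direct sequential membership test for this language (the function name is ours) makes the intended set of words precise; the ACA construction described next performs the same check in parallel:

```python
def in_identity_language(w):
    """True iff w lists the rows of some k-by-k identity matrix, row by
    row, separated by '#' (e.g. "100#010#001" for k = 3)."""
    rows = w.split("#")
    k = len(rows)
    return all(r == "0" * i + "1" + "0" * (k - i - 1)
               for i, r in enumerate(rows))
```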
We now describe an ACA for
; the construction closely follows the aforementioned one for
found in [10] (and the difference in complexity is only due to the different number and size of blocks in the words of
and
). Denote each group of cells initially containing a (maximally long)
substring of
by a block. Each block of size b propagates its contents to the neighboring blocks (in separate registers); using a textbook CA technique, this requires exactly 2b steps. Once the strings align, a block initially containing
verifies it has received
and
from its left and right neighbor blocks (if either exists), respectively. The cells of a block and its delimiters become accepting if and only if the comparisons are successful and there is a single
between the block and its neighbors. This process takes linear time in b; since any
has
many blocks, each with
cells, it follows that
.
To show the above construction is time-optimal, we use the following observation, which is also central in proving several other results in this paper:
Lemma 9
Let C be an ACA, and let w be an input which C accepts in (exactly)
steps. Then, for every input
such that
,
, and
, C accepts
in at most
steps.
The lemma is intended to be used with
since otherwise
. It can be used, for instance, to show that
is not in
for any
(e.g., set
and
for large
). It follows
for
.
Since the complement of
(respective to
) is
and
(e.g., simply set 0 as the ACA’s accepting state),
is not closed under complement for any
. Also,
is a regular language and
is not, so we have:
Proposition 10
For
,
and
are incomparable.
If the inclusion of infixes in Lemma 9 is strengthened to an equality, one may apply it in both directions and obtain the following stronger statement:
Lemma 11
Let C be an ACA with time complexity bounded by
(i.e., C accepts any input of length n in at most t(n) steps). Then, for any two inputs w and
with
,
, and
where
, we have that
if and only if
.
Finally, we can show our ACA for
is time-optimal:
Proposition 12
For any
,
.
Main Results
In this section, we present various results regarding
where
. First, we obtain a time hierarchy theorem, that is, under plausible conditions,
for
. Next, we show
is (strictly) contained in
and also present an improvement to Theorem 8. Finally, we study inclusion relations between sublinear-time ACA classes and the SC and (uniform) AC hierarchies. Save for the material covered so far, the three subsections are independent of one another.
Time Hierarchy
For functions
, we say f is time-constructible by CAs in t(n) time if there is a CA C which, on input
, reaches a configuration containing the value f(n) (binary-encoded) in at most t(n) steps.3 Note that, since CAs can simulate (one-tape) Turing machines in real-time, any function constructible by Turing machines (in the corresponding sense) is also constructible by CAs.
Theorem 13
Let
with
,
, and let f and g be time-constructible (by CAs) in f(n) time. Furthermore, let
be such that
for some constant
and all but finitely many
. Then, for every
,
.
Given
, this can be used, for instance, with any time-constructible
(resp.,
, in which case
is also possible) and
(resp.,
). The proof idea is to construct a language L similar to
(see Sect. 3) in which every
has length exponential in the size of its blocks while the distance between any two blocks is
. Due to Lemma 9, the latter implies L is not recognizable in o(t(|w|)) time.
Proof
For simplicity, let
. Consider
where
and note
. Because
and
, given any
, setting
,
, and
and applying Lemma 9 for sufficiently large k yields
.
By assumption it suffices to show
is accepted by an ACA C in at most
steps for sufficiently large
. The cells of C perform two procedures
and
simultaneously:
is as in the ACA for
(see Sect. 3) and ensures that the blocks of w have the same length, that the respective binary encodings are valid, and that the last value is correct (i.e., equal to
). In
, each block computes f(k) as a function of its block length k. Subsequently, the value f(k) is decreased using a real-time counter (see, e.g., [12] for a construction). Every time the counter is decremented, a signal starts from the block’s leftmost cell and is propagated to the right. This allows every group of cells of the form bs with
and
to assert there are precisely f(k) symbols in total (i.e.,
). A cell is accepting if and only if it is accepting both in
and
. The proof is complete by noticing that each procedure takes at most 3f(k) steps (again, for sufficiently large k). 
Intersection with the Regular Languages
In light of Proposition 10, we now consider the intersection
for
(in the same spirit as a conjecture by Straubing [22]). For this section, we assume the reader is familiar with the theory of syntactic semigroups (see, e.g., [7] for an in-depth treatment).
Given a language L, let
denote the syntactic semigroup of L. It is well-known that
is finite if and only if L is regular. A semigroup S is a semilattice if
and
for every
. Additionally, S is locally semilattice if eSe is a semilattice for every idempotent
, that is,
. We use the following characterization of locally testable languages:
Theorem 14
([3, 15]).
if and only if
is finite and locally semilattice.
In conjunction with Lemma 9, this yields the following, where the strict inclusion is due to
(since
; see Sect. 3):
Theorem 15
For every
,
.
Proof
Let
be a language over the alphabet
and, in addition, let
, that is,
is finite. By Theorem 14, it suffices to show S is locally semilattice. To that end, let
be idempotent, and let
.
To show
, let
and consider the words
and
. For
, let
, and let
be such that
and also
. Since e is idempotent,
and u belong to the same class in S, that is,
if and only if
; the same is true for
and v. Furthermore,
,
, and
hold. Since
, Lemma 11 applies.
The proof of
is analogous. Simply consider the words
and
for sufficiently large
and use, again, Lemma 11 and the fact that e is idempotent. 
Using Theorems 8 and 15, we have
for
. We can improve this bound to
, which is a proper subset of
:
Theorem 16
For every
,
.
Proof
We prove every ACA C with time complexity at most
actually has O(1) time complexity. Let Q be the state set of C and assume
, and let
be such that
for
. Letting
and assuming
, we then have
(
). We shall use this to prove that, for any word
of length
, there is a word
of length
as well as
such that
,
, and
. By Lemma 9, C must have
time complexity on w and, since the set of all such
is finite, it follows that C has O(1) time complexity.
Now let w be as above and let C accept w in (exactly)
steps. We prove the claim by induction on |w|. The base case
is trivial, so let
and assume the claim holds for every word in L of length strictly less than n. Consider the De Bruijn graph G over the words in
where
. Then, from the infixes of w of length
(in order of appearance in w) one obtains a path P in G by starting at the leftmost infix and visiting every subsequent one, up to the rightmost one. Let
be the induced subgraph of G containing exactly the nodes visited by P, and notice P visits every node in
at least once. It is not hard to show that, for every such P and
, there is a path
in
with the same starting and ending points as P and that visits every node of
at least once while having length at most
, where m is the number of nodes in
.4 To this
corresponds a word
of length
for which, by construction of
and
,
,
, and
. Since
, using (
) we have
, and then either
and
(since otherwise
, which contradicts
), or we may apply the induction hypothesis; in either case, the claim follows. 
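The path-shortening step of this proof (detailed in the footnote) can be made concrete. The sketch below is ours and fixes one plausible convention: nodes are the length-p infixes of w and edges the (p − 1)-symbol overlaps between them; the contracted word keeps the same length-p prefix, suffix, and infix set, with length quadratic in the number of distinct infixes:

```python
from collections import deque

def infix_set(w, p):
    return {w[i:i + p] for i in range(len(w) - p + 1)}

def contract(w, p):
    """Shorten w while preserving its length-p prefix, suffix, and infix
    set, following the piecewise-shortest-path construction: visit the
    distinct infixes in first-visit order via shortest paths."""
    path = [w[i:i + p] for i in range(len(w) - p + 1)]
    nodes = set(path)

    def shortest(u, v):
        # BFS in the subgraph induced by the infixes of w; v is always
        # reachable here because the original path P visits it after u.
        queue, seen = deque([[u]]), {u}
        while True:
            cur = queue.popleft()
            if cur[-1] == v:
                return cur
            for x in nodes:
                if cur[-1][1:] == x[:-1] and x not in seen:
                    seen.add(x)
                    queue.append(cur + [x])

    first_visit = list(dict.fromkeys(path))  # nodes in first-visit order
    new_path = [path[0]]
    for target in first_visit[1:] + [path[-1]]:
        new_path += shortest(new_path[-1], target)[1:]
    # read the word back off the overlapping infixes
    return new_path[0] + "".join(v[-1] for v in new_path[1:])
```

For example, `contract("abababab", 2)` yields `"abab"`: both words have prefix `"ab"`, suffix `"ab"`, and infix set `{"ab", "ba"}`, so by Lemma 11 they are equivalent for any sufficiently fast ACA.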
Relation to Parallel Complexity Classes
In this section, we relate
to other classes which characterize parallel computation, namely the SC and (uniform) AC hierarchies. In this context, SC^k is the class of problems decidable by Turing machines in O(log^k n) space and polynomial time, whereas AC^k is that decidable by Boolean circuits with polynomial size, O(log^k n) depth, and gates with unbounded fan-in.
SC (resp., AC) is the union of all SC^k (resp., AC^k) over all k. Here, we consider only uniform versions of AC
; when relevant, we state the respective uniformity condition. Although
is known, it is unclear whether any other containment holds between
and
.
One should not expect to include SC or AC in ACA classes for any sublinear time bound. Conceptually speaking, whereas the models of SC and AC are capable of random access to their input, ACAs are inherently local (as evinced by Lemmas 9 and 11). Explicit counterexamples may be found among the unary languages: For any fixed
and
with
, trivially
,
, and
hold. Hence, by Lemma 9, if an ACA C accepts
in
time and |w| is large (e.g.,
), then C accepts any
with
. Thus, extending a result from [21]:
Proposition 17
If
and
is a unary language (i.e.,
and
), then L is either finite or co-finite.
In light of the above, the rest of this section is concerned with the converse type of inclusion (i.e., of
in the
or
hierarchies). For
with
, we say f is constructible (by a Turing machine) in s(n) space and t(n) time if there is a Turing machine T which, on input
, outputs f(n) in binary using at most s(n) space and t(n) time. Also, recall a Turing machine can simulate
steps of a CA with m (active) cells in O(m) space and
time.
Proposition 18
Let C be an ACA with time complexity bounded by
,
, and let t be constructible in t(n) space and
time. Then, there is a Turing machine which decides L(C) in O(t(n)) space and
time.
Thus, for polylogarithmic t (where the strict inclusion is due to Proposition 17):
Corollary 19
For
,
.
Moving on to the
classes, we employ some notions from descriptive complexity theory (see, e.g., [11] for an introduction). Let
be the class of languages describable by first-order formulas with numeric relations in
(i.e., logarithmic space) and quantifier block iterations bounded by
.
Theorem 20
Let
with
be constructible in logarithmic space (and arbitrary time). For any ACA C whose time complexity is bounded by t,
.
Corollary 21
For
,
.
Because
(regardless of non-uniformity) [9], this is an improvement on Corollary 19 at least for
. Nevertheless, note the usual uniformity condition for
is not
- but the more restrictive
-uniformity [25], and there is good evidence that these two versions of
are distinct [4]. Using methods from [2], Corollary 21 may be rephrased for
in terms of
- or even
-uniformity, but the
-uniformity case remains unclear.
Decider ACA
So far, we have considered ACAs strictly as language acceptors. As such, their time complexity for inputs not in the target language (i.e., those which are not accepted) is entirely disregarded. In this section, we investigate ACAs as deciders, that is, as machines which must also (explicitly) reject invalid inputs. We analyze the case in which these decider ACAs must reject under the same condition as acceptance (i.e., all cells are simultaneously in a final rejecting state):
Definition 22
(DACA). A decider ACA (DACA) is an ACA C which, in addition to its set A of accept states, has a non-empty subset
of reject states that is disjoint from A (i.e.,
). Every input
of C must lead to an A- or an R-final configuration (or both). C
accepts
w if it leads to an A-final configuration
and none of the configurations prior to
are R-final. Similarly, C
rejects
w if it leads to an R-final configuration
and none of the configurations prior to
are A-final. The time complexity of C (with respect to w) is the number of steps elapsed until C reaches an R- or A-final configuration (for the first time).
is the DACA analogue of
.
In contrast to Definition 5, here we must be careful that the accept and reject results do not overlap (i.e., a word cannot be both accepted and rejected). We opt for interpreting the first (chronologically speaking) of the final configurations as the machine’s response. Since the outcome of the computation is then fixed regardless of any subsequent configurations (whether they are final or not), this is equivalent to requiring, for instance, that the DACA halt once a final configuration is reached.
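Definition 22 together with this tie-breaking rule is easy to simulate. The sketch below is ours; the toy rule decides 0⁺1⁺ by letting each cell mark itself accepting or rejecting based on its neighborhood and letting reject marks flood outwards (so it is illustrative only, not time-optimal, and differs from the rule of Fig. 2):

```python
INACTIVE = "#"

def step(config, delta):
    """One synchronous update of all active cells (inactive border fixed)."""
    padded = [INACTIVE] + config + [INACTIVE]
    return [delta(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def daca_decides(word, delta, accept, reject, t_max):
    """Return True/False according to the first A-final or R-final
    configuration reached (Definition 22), or None if none occurs in time."""
    config = list(word)
    for _ in range(t_max + 1):
        if config and all(c in accept for c in config):
            return True
        if config and all(c in reject for c in config):
            return False
        config = step(config, delta)
    return None

# Toy rule deciding 0^+1^+ : locally consistent cells mark themselves "a0"/
# "a1"; inconsistent ones enter "r", and "r" marks flood the configuration.
OK = {("#", "0", "0"), ("#", "0", "1"), ("0", "0", "0"), ("0", "0", "1"),
      ("0", "1", "1"), ("0", "1", "#"), ("1", "1", "1"), ("1", "1", "#")}

def delta(l, s, r):
    if "r" in (l, s, r):
        return "r"  # reject marks spread to the neighbors
    strip = lambda x: x[1:] if x.startswith("a") else x
    l, s, r = strip(l), strip(s), strip(r)
    return "a" + s if (l, s, r) in OK else "r"
```

On `"0011"` the first A-final configuration appears after one step; on `"0110"` the two locally inconsistent cells enter the reject state, which then floods the configuration until it is R-final.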
One peculiar consequence of Definition 22 is the relation between languages which can be recognized by acceptor ACAs and DACAs (i.e., the classes
and
). As it turns out, the situation is quite different from what one usually expects when restricting an acceptor model to a decider one, namely that deciders yield a (possibly strictly) more restricted class of machines. In fact, one can show
holds for
since
(see discussion after Lemma 9); nevertheless,
. For example, the local transition function
of the DACA can be chosen as
and
for
, where
and
are arbitrary states, and a and r are the (only) accept and reject states, respectively; see Fig. 2. Choosing the same
for an (acceptor) ACA does not yield an ACA for
since then all words of the form
are accepted in the second step (as they are not rejected in the first one). We stress this rather counterintuitive phenomenon occurs only in the case of sublinear time (as
for
).
Fig. 2.
Computation of a DACA C which decides
. The input words are
and
, respectively.
Similar to (acceptor) ACAs (Lemma 9), sublinear-time DACAs operate locally:
Lemma 23
Let C be a DACA and let
be a word which C decides in exactly
steps. Then, for every word
with
,
, and
, C decides
in
steps, and
holds if and only if
.
One might be tempted to relax the requirements above to
(as in Lemma 9). We stress, however, the equality
is crucial; otherwise, it might be the case that C takes strictly less than
steps to decide
and, hence,
may not be equivalent to
.
We note that, in addition to Lemmas 9 and 11, the results from Sect. 4 are extendable to decider ACAs; a more systematic treatment is left as a topic for future work. The remainder of this section is concerned with characterizing
computation (as a parallel to Theorem 6) as well as establishing the time threshold for DACAs to decide languages other than those in
(as Theorem 16 and the result
do for acceptor ACAs).
The Constant-Time Case
First notice that, for any DACA C, swapping the accept and reject states yields a DACA with the same time complexity and which decides the complement of L(C). Hence, in contrast to ACAs (see discussion following Lemma 9):
Proposition 24
For any
,
is closed under complement.
Using this, we can prove the following, which characterizes constant-time DACA computation as a parallel to Theorem 6:
Theorem 25
.
Hence, we obtain the rather surprising inclusion
, that is, for constant time, DACAs constitute a strictly more powerful model than their acceptor counterparts.
Beyond Constant Time
Theorem 16 establishes a logarithmic time threshold for (acceptor) ACAs to recognize languages not in
. We now turn to obtaining a similar result for DACAs. As it turns out, in this case the bound is considerably larger:
Theorem 26
For any
,
.
One immediate implication is that
and
are incomparable for
(since, e.g.,
; see Sect. 3). The proof idea is that any DACA whose time complexity is not constant admits an infinite sequence of words with increasing time complexity; however, the time complexity of each such word can be traced back to a critical set of cells which prevent the automaton from either accepting or rejecting. By contracting the words while keeping the extended neighborhoods of these cells intact, we obtain a new infinite sequence of words which the DACA necessarily takes
time to decide:
Proof
Let C be a DACA with time complexity bounded by t and assume
; we show
. Since
, for every
there is a
such that C takes strictly more than i steps to decide
. In particular, when C receives
as input, there are cells
and
for
such that
(resp.,
) is not accepting (resp., rejecting) in step j. Let
be the set of all
for which
, that is,
for some j. Consider the restriction
of
to the symbols having index in
, that is,
for
and
, and notice
has the same property as
(i.e., C takes strictly more than i steps to decide
). Since
, C has
time complexity on the (infinite) set
. 
Using
(see Sect. 3), we show the bound in Theorem 26 is optimal:
Proposition 27
.
We have
(see Sect. 3); the non-trivial part is ensuring the DACA also rejects every
in
time. In particular, in such strings the
delimiters may be an arbitrary number of cells apart or even absent altogether; hence, naively comparing every pair of blocks is not an option. Rather, we check the existence of a particular set of substrings of increasing length which must be present if the input is in
. Every O(1) steps, the existence of a different substring is verified; as a result, the input length must be at least quadratic in the length of the last substring tested (and the input is rejected in time if it does not contain any one of the required substrings).
Conclusion and Open Problems
Following the definition of ACAs in Sect. 2, Sect. 3 reviewed existing results on
for sublinear t (i.e.,
); we also observed that sublinear-time ACAs operate in an inherently local manner (Lemmas 9 and 11). In Sect. 4, we proved a time hierarchy theorem (Theorem 13), narrowed down the languages in
(Theorem 15), improved Theorem 8 to
(Theorem 16), and, finally, obtained (strict) inclusions in the parallel computation classes SC and AC (Corollaries 19 and 21, respectively). The existence of a hierarchy theorem for ACAs is of interest because obtaining an equivalent result for SC and AC is an open problem in computational complexity theory. Also of note is that the proof of Theorem 13 does not rely on diagonalization (the prevalent technique for most computational models) but, rather, on a quintessential property of sublinear-time ACA computation (i.e., locality in the sense of Lemma 9).
In Sect. 5, we considered a plausible definition of ACAs as language deciders as opposed to simply acceptors, obtaining DACAs. The respective constant-time class is
(Theorem 25), which surprisingly is a (strict) superset of
. Meanwhile,
is the time complexity threshold for deciding languages other than those in
(Theorem 26 and Proposition 27).
As for future work, the primary concern is extending the results of Sect. 4 to DACAs.
is closed under union and intersection and we saw that
is closed under complement for any
; a further question would be whether
is also closed under union and intersection. Finally, we have
,
, and that
and
are incomparable for
; it remains open what the relation between the two classes is for
.
Acknowledgments
I would like to thank Thomas Worsch for the fruitful discussions and feedback during the development of this work. I would also like to thank the DLT 2020 reviewers for their valuable comments and suggestions and, in particular, one of the reviewers for pointing out a proof idea for Theorem 16, which was listed as an open problem in a preliminary version of the paper.
Footnotes
Alternatively, one can also think of
as a (natural) problem on graphs presented in the adjacency matrix representation.
Just as is the case for Turing machines, there is not a single definition for time-constructibility by CAs (see, e.g., [12] for an alternative). Here, we opt for a plausible variant which has the benefit of simplifying the ensuing line of argument.
Number the nodes of
from 1 to m according to the order in which they are first visited by P. Then, there is a path in
from i to
for every
, and a shortest such path has length at most m. Piecing these paths together along with a last (shortest) path from m to the ending point of P, we obtain a path of length at most
with the purported property.
Some proofs have been omitted due to page constraints. These may be found in the full version of the paper [17].
Contributor Information
Augusto Modanese, Email: modanese@kit.edu.
References
- 1.Arora S, Barak B. Computational Complexity - A Modern Approach. Cambridge: Cambridge University Press; 2009. [Google Scholar]
- 2.Mix Barrington DA. Extensions of an idea of McNaughton. Math. Syst. Theory. 1990;23(3):147–164. doi: 10.1007/BF02090772. [DOI] [Google Scholar]
- 3.Brzozowski JA, Simon I. Characterizations of locally testable events. Discrete Math. 1973;4(3):243–271. doi: 10.1016/S0012-365X(73)80005-6. [DOI] [Google Scholar]
- 4.Caussinus H, et al. Nondeterministic NC¹ computation. J. Comput. Syst. Sci. 1998;57(2):200–212. doi: 10.1006/jcss.1998.1588. [DOI] [Google Scholar]
- 5.Cook SA. A taxonomy of problems with fast parallel algorithms. Inf. Control. 1985;64(1–3):2–21. doi: 10.1016/S0019-9958(85)80041-3. [DOI] [Google Scholar]
- 6.Delorme M, Mazoyer J, editors. Cellular Automata. A Parallel Model. Netherlands: Springer; 1999. [Google Scholar]
- 7.Eilenberg S. Automata, Languages, and Machines. New York: Academic Press; 1976. [Google Scholar]
- 8.Fischer, E.: The art of uninformed decisions. In: Bulletin of the EATCS 75, p. 97 (2001)
- 9.Furst ML, et al. Parity, circuits, and the polynomial-time hierarchy. Math. Syst. Theory. 1984;17(1):13–27. doi: 10.1007/BF01744431. [DOI] [Google Scholar]
- 10.Ibarra OH, et al. Fast parallel language recognition by cellular automata. Theor. Comput. Sci. 1985;41:231–246. doi: 10.1016/0304-3975(85)90073-8. [DOI] [Google Scholar]
- 11.Immerman N. Descriptive Complexity. New York: Springer; 1999. [Google Scholar]
- 12.Iwamoto C, et al. Constructible functions in cellular automata and their applications to hierarchy results. Theor. Comput. Sci. 2002;270(1–2):797–809. doi: 10.1016/S0304-3975(01)00112-8. [DOI] [Google Scholar]
- 13.Kim S, McCloskey R. A characterization of constant-time cellular automata computation. Phys. D. 1990;45(1–3):404–419. doi: 10.1016/0167-2789(90)90198-X. [DOI] [Google Scholar]
- 14.Kutrib M. Cellular automata and language theory. In: Meyers R, editor. Encyclopedia of Complexity and Systems Science. New York: Springer; 2009. pp. 800–823. [Google Scholar]
- 15.McNaughton R. Algebraic decision procedures for local testability. Math. Syst. Theory. 1974;8(1):60–76. doi: 10.1007/BF01761708. [DOI] [Google Scholar]
- 16.McNaughton R, Papert S. Counter-Free Automata. Cambridge, MA: The MIT Press; 1971. [Google Scholar]
- 17.Modanese, A.: Sublinear-Time Language Recognition and Decision by One-Dimensional Cellular Automata. CoRR abs/1909.05828 (2019). arXiv: 1909.05828
- 18.Rosenfeld A. Picture Languages: Formal Models for Picture Recognition. New York: Academic Press; 1979. [Google Scholar]
- 19.Rubinfeld R, Shapira A. Sublinear time algorithms. SIAM J. Discrete Math. 2011;25(4):1562–1588. doi: 10.1137/100791075. [DOI] [Google Scholar]
- 20.Ruzzo WL. On uniform circuit complexity. J. Comput. Syst. Sci. 1981;22(3):365–383. doi: 10.1016/0022-0000(81)90038-6. [DOI] [Google Scholar]
- 21.Sommerhalder R, van Westrhenen SC. Parallel language recognition in constant time by cellular automata. Acta Inf. 1983;19:397–407. doi: 10.1007/BF00290736. [DOI] [Google Scholar]
- 22.Straubing H. Finite Automata, Formal Logic, and Circuit Complexity. Boston, MA: Birkhäuser; 1994. [Google Scholar]
- 23.Sudan M. Probabilistically checkable proofs. Commun. ACM. 2009;52(3):76–84. doi: 10.1145/1467247.1467267. [DOI] [Google Scholar]
- 24.Terrier V. Language recognition by cellular automata. In: Rozenberg G, Back T, Kok JN, editors. Handbook of Natural Computing. Heidelberg: Springer; 2012. pp. 123–158. [Google Scholar]
- 25.Vollmer H. Introduction to Circuit Complexity - A Uniform Approach. Heidelberg: Springer; 1999. [Google Scholar]