2020 Jan 7;12038:3–16. doi: 10.1007/978-3-030-40608-0_1

The New Complexity Landscape Around Circuit Minimization

Eric Allender
Editors: Alberto Leporati, Carlos Martín-Vide, Dana Shapira, Claudio Zandron
PMCID: PMC7206620

Abstract

We survey recent developments related to the Minimum Circuit Size Problem.

Keywords: Complexity theory, Kolmogorov complexity, Minimum Circuit Size Problem

Introduction

Over the past few years, there has been an explosion of interest in the Minimum Circuit Size Problem (MCSP) and related problems. Thus the time seemed right to provide a survey, describing the new landscape and offering a guidebook so that one can easily reach the new frontiers of research in this area.

It turns out that this landscape is extremely unstable, with new features arising at an alarming rate. Although this makes it a scientifically-exciting time, it also means that this survey is doomed to be obsolete before it appears. It also means that the survey is going to take the form of an “annotated bibliography” with the intent to provide many pointers to the relevant literature, along with a bit of context.

The title of this article is “The New Complexity Landscape around Circuit Minimization” (emphasis added). This means that I will try to avoid repeating too many of the observations that were made in an earlier survey I wrote on a related topic [1]. Although that article was written only three years ago, several of the open questions that were mentioned there have now been resolved (and some of the conjectures that were mentioned have been overturned).

Meta-complexity, MCSP, and Kolmogorov Complexity

The focus of complexity theory is to determine how hard problems are. The focus of meta-complexity is to determine how hard it is to determine how hard problems are. Some of the most exciting recent developments in complexity theory have been the result of meta-complexity-theoretic investigations.

The Minimum Circuit Size Problem (MCSP) is, quite simply, the problem of determining the circuit complexity of functions. The input consists of a pair (f, i), where f is a bit string of length 2^n representing the truth-table of an n-ary Boolean function, and i is a natural number, and the problem is to determine if f has a circuit of size at most i. The study of the complexity of MCSP is therefore the canonical meta-complexity-theoretic question. Complexity theoreticians are fond of complaining that the problems they confront (showing that computational problems are hard to compute) are notoriously difficult. But is this really true? Is it hard to show that a particular function is difficult to compute? This question can be made precise by asking about the computational complexity of MCSP. (See also [44] for a different approach.)
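As a toy illustration (code written for this survey-reading, not taken from any of the cited papers), MCSP can be decided by brute force: enumerate all straight-line programs with at most i AND/OR/NOT gates and check each against the truth table. The encoding of circuits as gate lists is an arbitrary choice made here, and the search is only feasible for tiny parameters.

```python
from itertools import product

GATES = ("AND", "OR", "NOT")

def eval_program(prog, assignment):
    """Evaluate a straight-line program on one assignment to the inputs.
    Wires 0..n-1 carry the inputs; each gate appends one new wire, and
    the value of the last wire is the output."""
    wires = list(assignment)
    for gate, a, b in prog:
        if gate == "NOT":
            wires.append(1 - wires[a])   # b is ignored for NOT gates
        elif gate == "AND":
            wires.append(wires[a] & wires[b])
        else:  # OR
            wires.append(wires[a] | wires[b])
    return wires[-1]

def mcsp(f, n, i):
    """Decide the MCSP instance (f, i): does the truth table f (a tuple
    of 2**n bits, indexed by the n-bit input read as a number) have a
    circuit with at most i gates?  Exhaustive search; exponential time."""
    assignments = list(product((0, 1), repeat=n))
    # A size-0 circuit is just a wire from one of the inputs.
    if any(all(f[idx] == a[j] for idx, a in enumerate(assignments))
           for j in range(n)):
        return True
    for k in range(1, i + 1):
        # The gate at position `pos` may read any of the n + pos earlier wires.
        per_gate = [[(g, a, b) for g in GATES
                     for a in range(n + pos) for b in range(n + pos)]
                    for pos in range(k)]
        for prog in product(*per_gate):
            if all(eval_program(prog, a) == f[idx]
                   for idx, a in enumerate(assignments)):
                return True
    return False
```

Note that the certificate (a small circuit) can be verified in polynomial time, which is exactly why MCSP is in NP.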

A small circuit is a short description of a large truth-table f; thus it is no surprise that investigations of MCSP have made use of the tools and terminology of Kolmogorov complexity. In order to discuss some of the recent developments, it will be necessary to review some of the different notions, and to establish the notation that will be used throughout the rest of the article.

Let U be a Turing machine. We define K_U(x) to be min{|d| : U(d) = x}. Those readers who are familiar with Kolmogorov complexity will notice that the definition here is for what is sometimes called "plain" Kolmogorov complexity, although the notation K_U(x) is more commonly used to denote what is called "prefix-free" Kolmogorov complexity. This is intentional. In this survey, the distinctions between these two notions will be blurred, in order to keep the discussion on a high level. Some of the theorems that will be mentioned below are only known to hold for the prefix-free variant, but the reader is encouraged to ignore these finer distinctions here, and seek the more detailed information in the cited references. For some Turing machines U, K_U(x) will not be defined for some x, and the values of K_U(x) and K_{U'}(x) can be very different for different machines U and U'. But the beauty of Kolmogorov complexity (and the applicability of the theory it gives rise to) derives from the fact that if U and U' are universal Turing machines, then K_U(x) and K_{U'}(x) differ by at most O(1). By convention, we select one particular universal machine U and define K(x) to be equal to K_U(x).

The function K is not computable. The simplest way to obtain a computable function that shares many of the properties of K is to simply impose a time bound, leading to the definition K^t(x) = min{|d| : U(d) = x in time t(|x|)}, where t is a computable function. Although it is useful in many contexts, K^t does not appear to be closely connected to the circuit size of x (where x is viewed as the truth-table of a function). Thus we will frequently refer to two additional resource-bounded Kolmogorov complexity measures, Kt and KT.

Levin defined Kt(x) to be min{|d| + log t : U(d) = x in time t} [32]; it has the nice property that it can be used to define the optimal search strategy to use, in searching for accepting computations on a nondeterministic Turing machine. Kt also corresponds to the circuit size of the function x, but not on "normal" circuits. As is shown in [2], Kt(x) is roughly the same as the size of the smallest oracle circuit that computes x, where the oracle is a complete set for E (deterministic exponential time). (An oracle circuit has "oracle gates" in addition to the usual AND, OR, and NOT gates; an oracle gate for oracle A has k wires leading into it, and if those k wires encode a bitstring y of length k where y is in A, then the gate outputs 1, otherwise it outputs 0.)
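The oracle-gate convention just described can be made concrete with a toy evaluator (an illustration written for this text, not code from the cited papers; the palindrome oracle is an arbitrary choice):

```python
def oracle_gate(A, input_bits):
    """An oracle gate for oracle A: its k input wires spell out a k-bit
    string y, and the gate outputs 1 iff y is in A."""
    y = "".join(str(b) for b in input_bits)
    return 1 if A(y) else 0

# Example oracle: the set of palindromes, given as a membership test.
def palindromes(y):
    return y == y[::-1]
```

Here oracle_gate(palindromes, [1, 0, 1]) outputs 1 while [1, 1, 0] yields 0; an A-oracle circuit may mix such gates freely with ordinary AND, OR, and NOT gates.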

It is clearly desirable to have a version of Kolmogorov complexity that is more closely related to "ordinary" circuit size, instead of oracle circuit size. This is accomplished by defining KT(x) to be min{|d| + t : U(d, i) outputs x_i in time t, for each i}. (More precise definitions can be found in [2, 10].)
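Collected for reference, the three resource-bounded measures just defined, alongside plain K (stated loosely; the precise definitions are in [2, 10]):

```latex
\begin{aligned}
K(x)   &= \min\{\,|d| : U(d) = x\,\}\\
K^t(x) &= \min\{\,|d| : U(d) = x \text{ in at most } t(|x|) \text{ steps}\,\}\\
Kt(x)  &= \min\{\,|d| + \log t : U(d) = x \text{ in at most } t \text{ steps}\,\}\\
KT(x)  &= \min\{\,|d| + t : U(d, i) \text{ outputs } x_i \text{ in at most } t \text{ steps, for each } i\,\}
\end{aligned}
```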

We have now presented a number of different measures: K^t, Kt, and KT. By analogy with MCSP, we can study each of these measures μ in place of the "circuit size" measure, and thus obtain various problems of the form MKμP, such as MK^tP, MKtP and MKTP. If t(n) = n^{O(1)}, then MK^tP is in NP, and several theorems about MKTP yield corollaries about MK^tP in this case. (See, e.g. [2]). Similarly, if t(n) = 2^{εn} for some ε > 0, then MK^tP is in E, and several theorems about MKtP yield corollaries about MK^tP for t in this range [2].

In order to highlight some of the recent developments, let us introduce some notation that is somewhat imprecise and which is not used anywhere else, but which will be convenient for our purposes. Let MK^{poly}P serve as a shorthand for MK^tP whenever t(n) = n^{O(1)}, and similarly let MK^{exp}P serve as a shorthand for MK^tP whenever t(n) = 2^{εn} for some ε > 0. We will thus be referring to MK^{poly}P and MK^{exp}P. Doing so will enable us to avoid some confusing notation surrounding the name MINKT, which was introduced by Ko [31] to denote the set {(x, 1^t, 1^i) : K^t(x) ≤ i}. That is, (x, 1^t, 1^i) ∈ MINKT iff K^t(x) ≤ i (where the time bound t is given in unary). Hence these sets have comparable complexity, and results about MINKT can be rephrased in terms of MK^{poly}P with only a small loss of accuracy. In particular, some recent important results [19, 20] are phrased in terms of MINKT, and as such they deal with K^{poly} complexity, and they are not really very closely connected with the KT measure; the name MINKT was devised more than a decade before KT was formulated. The reader who is interested in the details should refer to the original papers for the precise formulation of the theorems. However, the view presented here is "probably approximately correct".

Frequently, theorems about MCSP and the various MKμP problems are stated not in terms of exactly computing the circuit size or the complexity of a string, but in terms of approximating these values. This is usually presented in terms of two thresholds θ_1 < θ_2, where the desired solution is to say "yes" if the complexity of x is less than θ_1, and to say "no" if the complexity of x is greater than θ_2, and any answer is allowed if the complexity of x lies in the "gap" between θ_1 and θ_2. In the various theorems that have been proved in this setting, the choice of thresholds θ_1 and θ_2 is usually important, but in this article those details will be suppressed, and all of these approximation problems will be referred to as GapMCSP, GapMKTP, GapMK^tP, etc.
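In promise-problem form (with the caveat, as stated above, that the exact thresholds vary from theorem to theorem), the approximation version can be written as:

```latex
\mathrm{GapMCSP}[\theta_1, \theta_2]:\qquad
\Pi_{\mathrm{YES}} = \{\,x : \mathrm{size}(x) < \theta_1\,\},\qquad
\Pi_{\mathrm{NO}}  = \{\,x : \mathrm{size}(x) > \theta_2\,\},\qquad \theta_1 < \theta_2,
```

and analogously for each measure μ, with μ(x) in place of size(x).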

At this point, the reader's eyes may be starting to glaze over. It is natural to wonder if we really need to have all of these different related notions. In particular, there does not seem to be much difference between MKTP and MCSP. Most hardness results for MCSP also hold for MKTP, and if the "gap" is large enough, then a solution to GapMKTP is a solution to GapMCSP (and vice-versa). Furthermore it has frequently been the case that a theorem about MCSP was first proved for MKTP and then the result for MCSP was obtained as a corollary. However, there is no efficient reduction known (in either direction) between MCSP and MKTP, and there are some theorems that are currently known to hold only for MKTP, although they are suspected to hold also for MCSP (e.g., [4, 6, 23]). Similarly, some of the more intriguing recent developments can only be understood by paying attention to the distinction between different notions of resource-bounded Kolmogorov complexity. Thus it is worth making this investment in defining the various distinct notions.

Connections to Learning Theory

Certain connections between computational learning theory and Kolmogorov complexity were identified soon after computational learning theory emerged as a field. After all, the goal of computational learning theory is to find a satisfactory (and hence succinct) explanation of a large body of observed data. For instance, in the 1980s and 1990s there was work [40, 41] showing that it is NP-hard to find "succinct explanations" that have size somewhat close to the optimal size, if these "explanations" are required to be finite automata or various other restricted formalisms. Ko studied this in a more general setting, allowing "explanations" to be efficient programs (in the setting of time-bounded Kolmogorov complexity).

Thus Ko studied not only the problem of computing K^t(x) (where one can consider x to be a completely-specified Boolean function), but also the problem of finding the smallest description d such that U(d) agrees with a given list of "yes instances" Y and a list of "no instances" N (that is, x can be considered as a partial Boolean function, with many "don't care" instances). Thus, following [28], we can call this problem MINKT*. In the setting that is most relevant for computational learning theory, the partial function x is presented compactly as separate lists Y and N, rather than as a string of length 2^n over the alphabet {0, 1, *}.
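The two representations of a partial function mentioned above can be sketched as follows (a toy illustration, not code from the cited papers; the instance encoding is an arbitrary choice made here):

```python
def consistent(h, Y, N):
    """Does hypothesis h (a 0/1-valued function on instances) agree with
    the partial function given by yes-instances Y and no-instances N?"""
    return all(h(y) == 1 for y in Y) and all(h(z) == 0 for z in N)

def partial_truth_table(n, Y, N):
    """The same partial function written out as a length-2**n string over
    the alphabet {0, 1, *}, with * marking the "don't care" instances."""
    tt = ["*"] * (2 ** n)
    for y in Y:
        tt[y] = "1"
    for z in N:
        tt[z] = "0"
    return "".join(tt)
```

A learner in this setting searches for a short description d such that the program U(d) makes consistent(...) true, given only the (typically much shorter) lists Y and N rather than the full table.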

Ko showed in [31] that relativizing techniques would not suffice, in order to settle the question of whether MINKT and MINKT* are NP-complete. That is, by giving the universal Turing machine U that defines Kolmogorov complexity access to an oracle A, one obtains the problems MINKT^A and MINKT*^A, and these sets can either be NP^A-complete or not, depending on the choice of A.

Thus it is noteworthy that it has recently been shown that MCSP* (the analogous problem for circuit size, on partial functions) is NP-complete under ≤^P_m reductions [28]. I suspect (although I have not verified) that the proof also establishes that MKTP* is NP-complete under ≤^P_m reductions. One lesson to take from this is that K^t and KT complexity differ from each other in significant ways. There are other recent examples of related phenomena, which will be discussed below.

There are other strong connections between MCSP and learning theory that have come to light recently. Using MCSP as an oracle (or even using a set that shares certain characteristics with MCSP) one can efficiently learn small circuits that do a good job of explaining the data [11]. For certain restricted classes of circuits, there are sets in P that one can use in place of MCSP to obtain learning algorithms that don't require an oracle [11]. This connection has been explored further [12, 36].

Completeness, Hardness, Reducibility

The preceding section mentioned a result about a problem being NP-complete under ≤^P_m reductions. In order to discuss other results about the complexity of MCSP and related problems it is necessary to go into more detail about different notions of reducibility.

Let C be either a class of functions or a class of circuits. The classes that will concern us the most are the standard complexity classes P, ZPP, BPP, NP, PSPACE, and EXP, as well as the circuit classes (both uniform and nonuniform):

NC^0 ⊆ AC^0 ⊆ AC^0[p] ⊆ TC^0 ⊆ NC^1 ⊆ P/poly

We refer the reader to the text by Vollmer [46] for background and more complete definitions of these standard circuit complexity classes, as well as for a discussion of uniformity.

We say that A ≤^C_m B if there is a function f ∈ C (or f computed by a circuit family in C, respectively) such that x ∈ A iff f(x) ∈ B. We will make use of ≤^P_m and ≤^{AC^0}_m reducibility. The more powerful notion of Turing reducibility also plays an important role in this work. Here, C is a complexity class that admits a characterization in terms of Turing machines or circuits, which can be augmented with an "oracle" mechanism, either by providing a "query tape" or "oracle gates". We say that A ≤^C_T B if there is an oracle machine in C (or a family of oracle circuits in C) accepting A, when given oracle B. We make use of ≤^P_T, ≤^{BPP}_T, and ≤^{ZPP}_T reducibility; instead of writing A ≤^P_T B or A ≤^{ZPP}_T B, we will sometimes write A ∈ P^B or A ∈ ZPP^B. Turing reductions that are "nonadaptive" – in the sense that the list of queries that are posed on input x does not depend on the answers provided by the oracle – are called truth-table reductions. We make use of ≤^P_tt reducibility.

Not much has changed, regarding what is known about the "hardness" of MCSP, in the three years that have passed since my earlier survey [1]. Here is what I wrote at that time:

Table 1 presents information about the consequences that will follow if MCSP is NP-complete (or even if it is hard for certain subclasses of NP). The table is incomplete (since it does not mention the influential theorems of Kabanets and Cai [30] describing various consequences if MCSP were complete under a certain restricted type of ≤^P_m reduction). It also fails to adequately give credit to all of the papers that have contributed to this line of work, since – for example – some of the important contributions of [35] have subsequently been slightly improved [7, 25]. But one thing should jump out at the reader from Table 1: All of the conditions listed in Column 3 (with the exception of "FALSE") are widely believed to be true, although they all seem to be far beyond the reach of current proof techniques.

Table 1.

Summary of what is known about the consequences of MCSP being hard for a class C under different types of reducibility. If MCSP is hard for the class in Column 1 under the reducibility shown in Column 2, then the consequence in Column 3 follows.

Class | Reductions | Statement | Reference
[The entries of the table are not legible in this version of the article. The six rows cite [35], [7, 35], [7], [7], [35], and [25], respectively; for example, the first row records the unconditional result of [35] that hardness under very "local" reductions is simply FALSE, and the last row records that NP-hardness under ≤^P_tt reductions would imply EXP ≠ ZPP [25].]

LTH is the linear-time analog of the polynomial hierarchy. Problems in LTH are accepted by alternating Turing machines that make only O(1) alternations and run for linear time.

It is significant that neither MCSP nor MKTP is NP-complete under "local" reductions, since SAT and many other well-known problems are complete under this very restrictive notion of reducibility – but it would be more satisfying to know whether these problems can be complete under more widely-used reducibilities such as ≤^{AC^0}_m. These sublinear-time reductions are so restrictive, that even the PARITY problem is not locally reducible to MCSP or MKTP. In an attempt to prove that PARITY is not ≤^{AC^0}_m-reducible to MKTP, we actually ended up proving the opposite:

Theorem 1

[6]. MKTP is hard for DET under non-uniform ≤^{NC^0}_m reductions. This also holds for MKtP and MK^{poly}P.

Here, DET is the class of problems NC^1-Turing-reducible to computing the determinant. It includes the well-known complexity classes NL and Mod_k L. This remains the only theorem that shows hardness of MKμP problems under any kind of ≤_m reductions.

As a corollary of this theorem it follows that MKTP is not in AC^0[p] for any prime p. This was mentioned as an open question in [1] (see footnote 2 in [1]). (An alternate proof was given in [23].) It remained open whether MCSP is in AC^0[p] until a lower bound was proved in [18].

It is still open whether MCSP is hard for DET. The proof of the hardness result in [6] actually carries over to a version of GapMKTP where the "gap" is quite small. Thus one avenue for proving a hardness result for MCSP had seemed to be to improve the hardness result of [6], so that it worked for a much larger "gap". This avenue was subsequently blocked, when it was shown that PARITY is not ≤^{AC^0}_m-reducible to GapMKTP (or to GapMCSP) for a moderate-sized "gap" [8]. Thus, although it is still open whether MCSP is NP-complete under ≤^{AC^0}_m reductions, we now know that GapMCSP is not NP-complete under this notion of reducibility.

When a much larger "gap" is considered, it was shown in [6] that, if cryptographically-secure one-way functions exist, then GapMCSP and GapMKTP are NP-intermediate in the sense that neither problem is in P, and neither problem is complete for NP under P/poly-Turing reductions.

The strongest hardness results that are known for the MKμP problems in NP remain the results of [3], where it was shown that MCSP, MKTP, and MK^{poly}P are all hard for SZK under ≤^{BPP}_T reductions. SZK is the class of problems that have statistical zero knowledge interactive proofs; SZK contains most of the problems that are assumed to be intractable, in order to build public-key cryptosystems. Thus it is widely assumed that MCSP and related problems lie outside of P, and cryptographers hope that MCSP requires nearly exponential-sized circuits. SZK also contains the Graph Isomorphism problem, which is ≤^{BPP}_T-reducible to MCSP and MKTP. In [4], Graph Isomorphism (and several other problems) were shown to be ≤^{ZPP}_T reducible to MKTP; it remains unknown if this also holds for MCSP. In fact, there is no interesting example of a problem A that is not known to be in ZPP that has been shown to be ≤^{ZPP}_T reducible to MCSP.

We close this section with a discussion of a very powerful notion of reducibility: SNP reductions. (Informally, A is SNP reducible to B means that A is (NP ∩ coNP)-reducible to B.) Hitchcock and Pavan have shown that MCSP is indeed NP-complete under SNP reductions if NP contains problems that require large circuits (which seems very plausible) [25]. It is interesting to note that, back in the early 1990s, Ko explicitly considered the possibility that computing K^t might be NP-complete under SNP reductions [31].

Completeness in EXP and Other Classes

There are problems "similar" to MCSP that reside in many complexity classes. We can define MCSP^A to be the version of MCSP for oracle circuits with A-oracle gates. That is, MCSP^A = {(f, i) : f has an A-oracle circuit of size at most i}. When A is complete for EXP, then MCSP^A is thought of as being quite similar to MKtP. Both of these problems, along with MK^{exp}P, are complete for EXP under ≤^{P/poly}_tt and ≤^{NP}_T reductions [2].

It is still open whether either of MCSP or MKTP is in P, and it had been open if MK^tP is in P for "small" exponential functions t such as t(n) = 2^{εn} for small ε > 0. But there is recent progress:

Theorem 2

[20]. MK^{exp}P is complete for EXP under ≤^P_tt reductions.

This seems to go a long way toward addressing Open Question 3.6 in [1].

As a corollary, MK^{exp}P is not in P. In fact, a much stronger result holds. Let t be any superpolynomial function. Then the set of K^t-random strings R_{K^t} is immune to P (meaning: it has no infinite subset in P) [20]. The proof does not seem to carry over to Kt complexity, highlighting a significant difference between K^t and Kt.

Although it remains open whether MK^{poly}P is in P, Hirahara does show that MK^{poly}P is not in P-uniform ACC^0, and in fact the set of K^{poly}-random strings is immune to P-uniform ACC^0. Furthermore, improved immunity results for the K^{poly}-random strings are in some sense possible if and only if better algorithms for CircuitSAT can be devised for larger classes of circuits.

Oliveira has defined a randomized version of Kt complexity, denoted rKt, which is conjectured to be nearly the same as Kt, but for which he is able to prove unconditional intractability results [37].

MCSP^{QBF} was known to be complete for PSPACE under ≤^{ZPP}_T reductions [2]. In more recent work, for various subclasses C of PSPACE, when A is a suitable complete problem for C, then MCSP^A has been shown to be complete for C under ≤^{ZPP}_T reductions [29]. Crucially, the techniques used by [29] (and, indeed, by any of the authors who had proved hardness results for MCSP^A previously for various A) failed to work for any A in the polynomial hierarchy. We will return to this issue in the following section.

In related work, it was shown [6] that the question of whether MCSP^A is hard for NP under a type of uniform ≤^{AC^0}_m reductions is equivalent to the question of whether NP contains any sets that require exponential-size A-oracle circuits. Furthermore, this happens if and only if NP reduces to MCSP^A. Note that this condition is more likely to be true if A is easy, than if A is complex; it is false if A is complete for PSPACE, and it is probably true if A = ∅. Thus, although MCSP^{QBF} is almost certainly more complex than MCSP (the former is PSPACE-complete, and the latter is in NP), a reasonably-large subclass of NP probably reduces to MCSP via these uniform ≤^{AC^0}_m reductions, whereas hardly anything ≤^{AC^0}_m-reduces to MCSP^{QBF}. The explanation for this is that a uniform ≤^{AC^0}_m reduction cannot formulate any useful queries to a complex oracle, whereas it (probably) can do so for a simpler oracle.

NP-Hardness

Recall from the previous section that there were no NP-hardness results known for any problem of the form MCSP^A where A is in the polynomial hierarchy.

This is still true; however, there is some progress to report. Hirahara has shown that computing the "conditional" complexity K^{poly}(x|y) relative to SAT (i.e., given (x, y), finding the length of the shortest description d such that U^{SAT}(d, y) outputs x in time n^{O(1)}) is NP-hard under ≤^{ZPP}_T reductions [20].

It might be more satisfying to remove the SAT oracle, and have a hardness result for computing K^{poly}(x|y) – but Hirahara shows that this can't be shown to be hard for NP (or even hard for ZPP) under ≤^P_T reductions without first separating EXP from ZPP.

In a similar vein, if one were to show that MCSP or MKTP (or MCSP^A or MKTP^A for any set A in the polynomial hierarchy) is hard for NP under ≤^P_tt reductions, then one will have shown that ZPP ≠ EXP [20].

A different kind of NP-hardness result for conditional Kolmogorov complexity was proved recently by Ilango [27]. In [2], conditional KT complexity KT(x|y) was studied by making the string y available to the universal Turing machine U as an "oracle". Thus it makes sense to consider a "conditional complexity" version of MCSP by making a string y available to a circuit via oracle gates. This problem was formalized and shown to be NP-complete under ≤^P_m reductions [27].

Many of the functions that we compute daily produce more than one bit of output. Thus it makes sense to study the circuit size that is required in order to compute such functions. This problem is called Multi-MCSP in [28], where it is shown to be NP-complete under randomized many-one reductions. It will be interesting to see how the complexity of this problem varies, as the number of output bits of the functions under consideration shrinks toward one (at which point it becomes MCSP).

It has been known since the 1970s that computing the size of the smallest DNF expression for a given truth-table is NP-complete. (A simple proof, and a discussion of the history, can be found in [5].) However, it remains unknown what the complexity is of finding the smallest depth-three circuit for a given truth table. (Some very weak intractability results for minimizing constant-depth circuits can be found in [5], giving subexponential reductions from the problem of factoring Blum integers.) The first real progress on this front was reported in [22], giving an NP-completeness result (under ≤^P_m reductions) for a class of depth-three circuits (with MOD gates on the bottom level). Ilango proved that computing the size of the smallest depth-d formula for a truth-table lies outside of AC^0[p] for any prime p [27], and he has now followed that up with a proof that computing the size of the smallest depth-d formula is NP-complete under randomized reductions [26]. Note that a constant-depth circuit can be transformed into a formula with only a polynomial blow-up; thus in many situations we are able to ignore the distinction between circuits and formulas in the constant-depth realm. However, the techniques employed in [26, 27] are quite sensitive to small perturbations in the size, and hence the distinction between circuits and formulas is important here. Still, this is dramatic progress on a front where progress has been very slow.

Average Case Complexity, One-Way Functions

Kabanets and Cai gave birth to the modern study of MCSP in 2000 [30], in a paper that was motivated in part by the study of Natural Proofs [42], and which called attention to the fact that if MCSP is easy, then there are no cryptographically-secure one-way functions. In the succeeding decades, there has been speculation about whether the converse implication also holds. That is, can one base cryptography on assumptions about the complexity of MCSP?

First, it should be observed that, in some sense, MCSP is very easy "on average". For instance, the hardness results that we have (such as reducing Graph Isomorphism to MKTP) show that the "hard instances" of MCSP are the ones where we want to distinguish between n-ary functions that require circuits of size 2^{εn} (the "NO" instances) and those that have circuits of size at most 2^{δn} for some δ < ε (the "YES" instances). However, an algorithm that simply says "no" on all inputs will give the correct answer more than 99% of the time.
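A back-of-the-envelope count (an illustration added here, not a calculation from the survey) shows why the all-"no" heuristic is nearly always right at such thresholds: there are at most roughly (3(n+s)^2)^s circuits with s AND/OR/NOT gates, but 2^(2^n) truth tables on n inputs.

```python
def circuit_count_upper_bound(n, s):
    """Crude upper bound on the number of straight-line programs with at
    most s AND/OR/NOT gates on n inputs: each gate picks one of 3 types
    and two of the wires defined so far."""
    total = 0
    for k in range(s + 1):
        progs = 1
        for pos in range(k):
            progs *= 3 * (n + pos) ** 2
        total += progs
    return total

def fraction_with_small_circuits(n, s):
    """Upper bound on the fraction of the 2**(2**n) truth tables on n
    inputs that have a circuit of size at most s."""
    return circuit_count_upper_bound(n, s) / 2 ** (2 ** n)
```

For n = 10 and s = 50, for example, the bound is far below 10^-100, so an algorithm answering "no" everywhere errs on only a vanishing fraction of such instances.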

Thus Hirahara and Santhanam [23] chose to study a different notion of heuristics for MCSP, where algorithms must always give an answer in {Yes, No, I don't know}, where the algorithm never gives an incorrect answer, and the algorithm is said to perform well "on average" if it only seldom answers "I don't know". They were able to show unconditionally that MCSP is hard on average in this sense for AC^0[p] for any prime p, and to show that certain well-studied hypotheses imply that MCSP is hard on average.

More recently, Santhanam [43] has formulated a conjecture (which would involve too big of a digression to describe more carefully here), which – if true – would imply that a version of MCSP is hard on average in this sense if and only if cryptographically-secure one-way functions exist. That is, Santhanam's conjecture provides a framework for believing that one can base cryptography on the average-case complexity of MCSP.

But how does the average-case complexity of MCSP depend on its worst-case complexity? Hirahara [19] showed that GapMCSP has no solution in BPP if and only if a version of MCSP is hard on average. A related result stated in terms of MK^{poly}P appears in the same paper. These results attracted considerable attention, because prior work had indicated that such worst-case-to-average-case reductions would be impossible to prove using black-box techniques. Additional work has given further evidence that the techniques of [19] are inherently non-black-box [24].

Complexity Classes and Noncomputable Complexity Measures

The title of this section is the same as the title of Sect. 4 of the survey that I wrote three years ago [1]. In that section, I described the work that had been done, studying the classes of sets that are reducible to the (non-computable) set of Kolmogorov-random strings R_K, and to R_{Kt}, including the reasons why it seemed reasonable to conjecture that BPP and NEXP could be characterized in terms of different types of reductions to the Kolmogorov-random strings.

I won’t repeat that discussion here, because both of those conjectures have been disproved (barring some extremely unlikely complexity class collapses). Taken together, the papers [21, 24], and [20] give a much better understanding of the classes of languages reducible to the Kolmogorov-random strings.

Previously, it was known that Inline graphic, and Inline graphic. Hirahara [20] has now shown Inline graphic.

This same paper also gives a surprising answer to Open Question 4.6 of [1], in showing that quasipolynomial-time nonadaptive reductions to R_K suffice to capture NP (and also some other classes in the polynomial hierarchy).
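As a reminder of the central object in this section: R_K is the set of strings x with K(x) ≥ |x|, and since K is noncomputable, so is R_K. The sketch below is an illustration of these definitions, not of the reductions in the papers above; it uses zlib compression as a computable upper bound on description length. Compression can certify that a string lies *outside* R_K, but no algorithm can certify membership; the 16-byte slack for compression-format overhead is an ad hoc choice.

```python
import zlib

def description_length_upper_bound(x: bytes) -> int:
    # A compressed encoding is itself a description of x,
    # so K(x) <= len(compressed form) + O(1).
    return len(zlib.compress(x, 9))

def certainly_not_random(x: bytes, slack: int = 16) -> bool:
    """Return True only when x provably lies outside R_K: if x compresses
    to well below |x| (allowing some slack for zlib framing), then
    K(x) < |x|.  A False answer certifies nothing, since K is
    noncomputable and membership in R_K cannot be verified."""
    return description_length_upper_bound(x) + slack < len(x)
```

For example, a long repetitive string is provably compressible (hence not Kolmogorov-random), while for a short or incompressible-looking string this test is simply inconclusive.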

Magnification

Some of the most important and exciting developments relating to MCSP and related problems deal with the emerging study of "hardness magnification". This is the phenomenon whereby seemingly very modest lower bounds can be "amplified" or "magnified", and thereby shown to imply superpolynomial lower bounds. I was involved in some of the early work in this direction [9] (which did not involve MCSP), but much stronger work has subsequently appeared.

It is important to note, in this regard, that lower bounds have been proved for MCSP that essentially match the strongest lower bounds that we have for any problem in NP [16]. There is now a significant body of work showing that slight improvements to those bounds, or other seemingly-attainable lower bounds for MCSP or MKTP or related problems, would yield dramatic complexity class separations [12–15, 34, 38, 39, 45].

This would be a good place to survey this work, except that an excellent survey already appears in [12]. Igor Carboni Oliveira has also written some notes entitled "Advances in Hardness Magnification", related to a talk he gave at the Simons Institute in December 2019, available on his home page. These notes and [12] describe in detail the reasons why this approach seems to avoid the Natural Proofs barrier identified in the work of Razborov and Rudich [42]. But they also describe some potential obstacles that need to be overcome before this approach can truly be used to separate complexity classes.

Footnotes

1. If the reader is not familiar with Kolmogorov complexity, then we recommend some excellent books on this topic [17, 33].

Supported in part by NSF Grant CCF-1909216.

Contributor Information

Alberto Leporati, Email: alberto.leporati@unimib.it.

Carlos Martín-Vide, Email: carlos.martin@urv.cat.

Dana Shapira, Email: shapird@g.ariel.ac.il.

Claudio Zandron, Email: zandron@disco.unimib.it.

Eric Allender, Email: allender@cs.rutgers.edu, http://www.cs.rutgers.edu/~allender.

References

  • 1.Allender E. The complexity of complexity. In: Day A, Fellows M, Greenberg N, Khoussainov B, Melnikov A, Rosamond F, editors. Computability and Complexity; Cham: Springer; 2017. pp. 79–94. [Google Scholar]
  • 2.Allender E, Buhrman H, Koucký M, van Melkebeek D, Ronneburger D. Power from random strings. SIAM J. Comput. 2006;35:1467–1493. doi: 10.1137/050628994. [DOI] [Google Scholar]
  • 3.Allender E, Das B. Zero knowledge and circuit minimization. Inf. Comput. 2017;256:2–8. doi: 10.1016/j.ic.2017.04.004. [DOI] [Google Scholar]
  • 4.Allender E, Grochow J, van Melkebeek D, Morgan A, Moore C. Minimum circuit size, graph isomorphism and related problems. SIAM J. Comput. 2018;47:1339–1372. doi: 10.1137/17M1157970. [DOI] [Google Scholar]
  • 5.Allender E, Hellerstein L, McCabe P, Pitassi T, Saks ME. Minimizing disjunctive normal form formulas and AC^0_d circuits given a truth table. SIAM J. Comput. 2008;38(1):63–84. doi: 10.1137/060664537. [DOI] [Google Scholar]
  • 6.Allender E, Hirahara S. New insights on the (non)-hardness of circuit minimization and related problems. ACM Trans. Comput. Theory (ToCT) 2019;11(4):27:1–27:27. doi: 10.1145/3349616. [DOI] [Google Scholar]
  • 7.Allender E, Holden D, Kabanets V. The minimum oracle circuit size problem. Comput. Complex. 2017;26(2):469–496. doi: 10.1007/s00037-016-0124-0. [DOI] [Google Scholar]
  • 8.Allender E, Ilango R, Vafa N. The non-hardness of approximating circuit size. In: van Bevern R, Kucherov G, editors. Computer Science – Theory and Applications; Cham: Springer; 2019. pp. 13–24. [Google Scholar]
  • 9.Allender E, Koucký M. Amplifying lower bounds by means of self-reducibility. J. ACM. 2010;57:14:1–14:36. doi: 10.1145/1706591.1706594. [DOI] [Google Scholar]
  • 10.Allender E, Koucký M, Ronneburger D, Roy S. The pervasive reach of resource-bounded Kolmogorov complexity in computational complexity theory. J. Comput. Syst. Sci. 2010;77:14–40. doi: 10.1016/j.jcss.2010.06.004. [DOI] [Google Scholar]
  • 11.Carmosino, M., Impagliazzo, R., Kabanets, V., Kolokolova, A.: Learning algorithms from natural proofs. In: 31st Conference on Computational Complexity, CCC. LIPIcs, vol. 50, pp. 10:1–10:24. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2016). 10.4230/LIPIcs.CCC.2016.10
  • 12.Chen, L., Hirahara, S., Oliveira, I.C., Pich, J., Rajgopal, N., Santhanam, R.: Beyond natural proofs: hardness magnification and locality. In: 11th Innovations in Theoretical Computer Science Conference (ITCS). LIPIcs, vol. 151, pp. 70:1–70:48. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.70
  • 13.Chen, L., Jin, C., Williams, R.: Hardness magnification for all sparse NP languages. In: Symposium on Foundations of Computer Science (FOCS), pp. 1240–1255 (2019)
  • 14.Chen, L., Jin, C., Williams, R.: Sharp threshold results for computational complexity (2019). Manuscript
  • 15.Chen, L., McKay, D.M., Murray, C.D., Williams, R.R.: Relations and equivalences between circuit lower bounds and Karp-Lipton theorems. In: 34th Computational Complexity Conference (CCC). LIPIcs, vol. 137, pp. 30:1–30:21. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.CCC.2019.30
  • 16.Cheraghchi, M., Kabanets, V., Lu, Z., Myrisiotis, D.: Circuit lower bounds for MCSP from local pseudorandom generators. In: 46th International Colloquium on Automata, Languages, and Programming, (ICALP). LIPIcs, vol. 132, pp. 39:1–39:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.ICALP.2019.39
  • 17.Downey R, Hirschfeldt D. Algorithmic Randomness and Complexity. New York: Springer; 2010. [Google Scholar]
  • 18.Golovnev, A., Ilango, R., Impagliazzo, R., Kabanets, V., Kolokolova, A., Tal, A.: AC^0[p] lower bounds against MCSP via the coin problem. In: 46th International Colloquium on Automata, Languages, and Programming, (ICALP). LIPIcs, vol. 132, pp. 66:1–66:15. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.ICALP.2019.66
  • 19.Hirahara, S.: Non-black-box worst-case to average-case reductions within NP. In: 59th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 247–258 (2018). 10.1109/FOCS.2018.00032
  • 20.Hirahara, S.: Kolmogorov-randomness is harder than expected (2019). Manuscript
  • 21.Hirahara, S.: Unexpected power of random strings. In: 11th Innovations in Theoretical Computer Science Conference, ITCS. LIPIcs, vol. 151, pp. 41:1–41:13. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.41
  • 22.Hirahara, S., Oliveira, I.C., Santhanam, R.: NP-hardness of minimum circuit size problem for OR-AND-MOD circuits. In: 33rd Conference on Computational Complexity, CCC. LIPIcs, vol. 102, pp. 5:1–5:31. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2018). 10.4230/LIPIcs.CCC.2018.5
  • 23.Hirahara, S., Santhanam, R.: On the average-case complexity of MCSP and its variants. In: 32nd Conference on Computational Complexity, CCC. LIPIcs, vol. 79, pp. 7:1–7:20. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). 10.4230/LIPIcs.CCC.2017.7
  • 24.Hirahara, S., Watanabe, O.: On nonadaptive reductions to the set of random strings and its dense subsets. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 26, p. 25 (2019)
  • 25.Hitchcock, J.M., Pavan, A.: On the NP-completeness of the minimum circuit size problem. In: Conference on Foundations of Software Technology and Theoretical Computer Science (FST&TCS). LIPIcs, vol. 45, pp. 236–245. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2015). 10.4230/LIPIcs.FSTTCS.2015.236
  • 26.Ilango, R.: Personal communication (2019)
  • 27.Ilango, R.: Approaching MCSP from above and below: Hardness for a conditional variant and AC^0[2]. In: 11th Innovations in Theoretical Computer Science Conference, ITCS. LIPIcs, vol. 151, pp. 34:1–34:26. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.34
  • 28.Ilango, R., Loff, B., Oliveira, I.C.: NP-hardness of minimizing circuits and communication (2019). Manuscript
  • 29.Impagliazzo, R., Kabanets, V., Volkovich, I.: The power of natural properties as oracles. In: 33rd Conference on Computational Complexity, CCC. LIPIcs, vol. 102, pp. 7:1–7:20. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2018). 10.4230/LIPIcs.CCC.2018.7
  • 30.Kabanets, V., Cai, J.Y.: Circuit minimization problem. In: ACM Symposium on Theory of Computing (STOC), pp. 73–79 (2000). 10.1145/335305.335314
  • 31.Ko K. On the notion of infinite pseudorandom sequences. Theor. Comput. Sci. 1986;48(3):9–33. doi: 10.1016/0304-3975(86)90081-2. [DOI] [Google Scholar]
  • 32.Levin LA. Randomness conservation inequalities; information and independence in mathematical theories. Inf. Control. 1984;61(1):15–37. doi: 10.1016/S0019-9958(84)80060-1. [DOI] [Google Scholar]
  • 33.Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and Its Applications. Texts in Computer Science, 4th edn. Springer (2019). 10.1007/978-3-030-11298-1
  • 34.McKay, D.M., Murray, C.D., Williams, R.R.: Weak lower bounds on resource-bounded compression imply strong separations of complexity classes. In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pp. 1215–1225 (2019). 10.1145/3313276.3316396
  • 35.Murray C, Williams R. On the (non) NP-hardness of computing circuit complexity. Theory Comput. 2017;13(4):1–22. doi: 10.4086/toc.2017.v013a004. [DOI] [Google Scholar]
  • 36.Oliveira, I., Santhanam, R.: Conspiracies between learning algorithms, circuit lower bounds and pseudorandomness. In: 32nd Conference on Computational Complexity, CCC. LIPIcs, vol. 79, pp. 18:1–18:49. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). 10.4230/LIPIcs.CCC.2017.18
  • 37.Oliveira, I.C.: Randomness and intractability in Kolmogorov complexity. In: 46th International Colloquium on Automata, Languages, and Programming, (ICALP). LIPIcs, vol. 132, pp. 32:1–32:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.ICALP.2019.32
  • 38.Oliveira, I.C., Pich, J., Santhanam, R.: Hardness magnification near state-of-the-art lower bounds. In: 34th Computational Complexity Conference (CCC). LIPIcs, vol. 137, pp. 27:1–27:29. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.CCC.2019.27
  • 39.Oliveira, I.C., Santhanam, R.: Hardness magnification for natural problems. In: 59th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 65–76 (2018). 10.1109/FOCS.2018.00016
  • 40.Pitt L, Valiant LG. Computational limitations on learning from examples. J. ACM. 1988;35(4):965–984. doi: 10.1145/48014.63140. [DOI] [Google Scholar]
  • 41.Pitt L, Warmuth MK. The minimum consistent DFA problem cannot be approximated within any polynomial. J. ACM. 1993;40(1):95–142. doi: 10.1145/138027.138042. [DOI] [Google Scholar]
  • 42.Razborov A, Rudich S. Natural proofs. J. Comput. Syst. Sci. 1997;55:24–35. doi: 10.1006/jcss.1997.1494. [DOI] [Google Scholar]
  • 43.Santhanam, R.: Pseudorandomness and the minimum circuit size problem. In: 11th Innovations in Theoretical Computer Science Conference (ITCS), LIPIcs, vol. 151, pp. 68:1–68:26. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.68
  • 44.Santhanam, R.: Why are proof complexity lower bounds hard? In: Symposium on Foundations of Computer Science (FOCS), pp. 1305–1324 (2019)
  • 45.Tal, A.: The bipartite formula complexity of inner-product is quadratic. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 23, p. 181 (2016)
  • 46.Vollmer H. Introduction to Circuit Complexity: A Uniform Approach. New York Inc.: Springer; 1999. [Google Scholar]

Articles from Language and Automata Theory and Applications are provided here courtesy of Nature Publishing Group
