Abstract
We survey recent developments related to the Minimum Circuit Size Problem.
Keywords: Complexity theory, Kolmogorov complexity, Minimum Circuit Size Problem
Introduction
Over the past few years, there has been an explosion of interest in the Minimum Circuit Size Problem (MCSP) and related problems. Thus the time seemed right to provide a survey, describing the new landscape and offering a guidebook so that one can easily reach the new frontiers of research in this area.
It turns out that this landscape is extremely unstable, with new features arising at an alarming rate. Although this makes it a scientifically-exciting time, it also means that this survey is doomed to be obsolete before it appears. It also means that the survey is going to take the form of an “annotated bibliography” with the intent to provide many pointers to the relevant literature, along with a bit of context.
The title of this article is “The New Complexity Landscape around Circuit Minimization” (emphasis added). This means that I will try to avoid repeating too many of the observations that were made in an earlier survey I wrote on a related topic [1]. Although that article was written only three years ago, several of the open questions that were mentioned there have now been resolved (and some of the conjectures that were mentioned have been overturned).
Meta-complexity, MCSP, and Kolmogorov Complexity
The focus of complexity theory is to determine how hard problems are. The focus of meta-complexity is to determine how hard it is to determine how hard problems are. Some of the most exciting recent developments in complexity theory have been the result of meta-complexity-theoretic investigations.
The Minimum Circuit Size Problem (MCSP) is, quite simply, the problem of determining the circuit complexity of functions. The input consists of a pair (f, i), where f is a bit string of length 2^n representing the truth-table of an n-ary Boolean function, and i is a number, and the problem is to determine if f has a circuit of size at most i. The study of the complexity of MCSP is therefore the canonical meta-complexity-theoretic question. Complexity theoreticians are fond of complaining that the problems they confront (showing that computational problems are hard to compute) are notoriously difficult. But is this really true? Is it hard to show that a particular function is difficult to compute? This question can be made precise by asking about the computational complexity of MCSP. (See also [44] for a different approach.)
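To make the definition concrete, here is a toy brute-force decision procedure for MCSP over the AND/OR/NOT basis. This sketch is my own illustration, not an algorithm from the literature; the function names and the exact gate-counting convention are assumptions, and the search is exponential, as one would expect from brute force.

```python
from itertools import product

OPS = ("AND", "OR", "NOT")

def eval_circuit(n, gates, out, bits):
    """Evaluate a straight-line circuit on one input assignment.
    Wires 0..n-1 are the inputs; each gate (op, a, b) appends a wire
    (NOT ignores its second argument); `out` selects the output wire."""
    vals = list(bits)
    for op, a, b in gates:
        if op == "AND":
            vals.append(vals[a] & vals[b])
        elif op == "OR":
            vals.append(vals[a] | vals[b])
        else:  # NOT
            vals.append(1 - vals[a])
    return vals[out]

def circuits_of_size(n, size):
    """Yield every (gates, out) pair with exactly `size` gates
    (with some redundancy; fine for a brute-force sketch)."""
    def gen(prefix):
        if len(prefix) == size:
            for out in range(n + size):
                yield prefix, out
            return
        w = n + len(prefix)  # wires available to the next gate
        for op, a, b in product(OPS, range(w), range(w)):
            yield from gen(prefix + [(op, a, b)])
    yield from gen([])

def mcsp(f, i):
    """Decide the MCSP instance (f, i): does the truth table f,
    of length 2^n, have a circuit with at most i gates?"""
    n = (len(f) - 1).bit_length()
    assert len(f) == 2 ** n
    inputs = list(product([0, 1], repeat=n))
    for s in range(i + 1):
        for gates, out in circuits_of_size(n, s):
            if all(eval_circuit(n, gates, out, b) == f[k]
                   for k, b in enumerate(inputs)):
                return True
    return False

print(mcsp((0, 0, 0, 1), 1))  # True: AND(x0, x1) is a single gate
print(mcsp((0, 1, 1, 0), 2))  # False: XOR needs more than two AND/OR/NOT gates
```

The point of the sketch is only to pin down the decision problem; nothing better than exhaustive search is implied.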
A small circuit is a short description of a large truth-table f; thus it is no surprise that investigations of MCSP have made use of the tools and terminology of Kolmogorov complexity. In order to discuss some of the recent developments, it will be necessary to review some of the different notions, and to establish the notation that will be used throughout the rest of the article.
Let U be a Turing machine. We define K_U(x) to be min{|d| : U(d) = x}. Those readers who are familiar with Kolmogorov complexity will notice that the definition here is for what is sometimes called “plain” Kolmogorov complexity, although the notation K_U is more commonly used to denote what is called “prefix-free” Kolmogorov complexity. This is intentional. In this survey, the distinctions between these two notions will be blurred, in order to keep the discussion on a high level. Some of the theorems that will be mentioned below are only known to hold for the prefix-free variant, but the reader is encouraged to ignore these finer distinctions here, and seek the more detailed information in the cited references. For some Turing machines U, K_U(x) will not be defined for some x, and the values of K_U(x) and K_U'(x) can be very different, for different machines U and U'. But the beauty of Kolmogorov complexity (and the applicability of the theory it gives rise to) derives from the fact that if U and U' are universal Turing machines, then K_U(x) and K_U'(x) differ by at most O(1). By convention, we select one particular universal machine U and define K(x) to be equal to K_U(x).
The function K is not computable. The simplest way to obtain a computable function that shares many of the properties of K is to simply impose a time bound, leading to the definition K^t(x) = min{|d| : U(d) = x in time t(|x|)}, where t is a computable function. Although it is useful in many contexts, K^t does not appear to be closely connected to the circuit size of x (where x is viewed as the truth-table of a function). Thus we will frequently refer to two additional resource-bounded Kolmogorov complexity measures, Kt and KT.
Levin defined Kt(x) to be min{|d| + log t : U(d) = x in time t} [32]; it has the nice property that it can be used to define the optimal search strategy to use, in searching for accepting computations on a nondeterministic Turing machine. Kt also corresponds to the circuit size of the function x, but not on “normal” circuits. As is shown in [2], Kt(x) is roughly the same as the size of the smallest oracle circuit that computes x, where the oracle is a complete set for EXP. (An oracle circuit has “oracle gates” in addition to the usual AND, OR, and NOT gates; an oracle gate for oracle A has k wires leading into it, and if those k wires encode a bitstring y of length k where y is in A, then the gate outputs 1, otherwise it outputs 0.)
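In code, the behavior of a single oracle gate is just a set-membership test. The finite set A below is a hypothetical stand-in for an oracle, chosen only for illustration:

```python
def oracle_gate(A, wires):
    """An oracle gate for oracle A: the k input wires encode a bitstring y
    of length k; the gate outputs 1 if y is in A, and 0 otherwise."""
    y = "".join(str(b) for b in wires)
    return 1 if y in A else 0

A = {"01", "110"}  # a hypothetical finite oracle
print(oracle_gate(A, (0, 1)))     # 1: the wires encode "01", which is in A
print(oracle_gate(A, (1, 1, 1)))  # 0: "111" is not in A
```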
It is clearly desirable to have a version of Kolmogorov complexity that is more closely related to “ordinary” circuit size, instead of oracle circuit size. This is accomplished by defining KT(x) to be min{|d| + t : U(d) = x in time t}. (More precise definitions can be found in [2, 10].)
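The only difference between Levin's Kt and KT is how running time is charged: logarithmically versus linearly. The toy computation below uses made-up (length, time) pairs for hypothetical descriptions of the same string; the numbers come from nowhere in the cited papers and only illustrate how the two measures can prefer different descriptions:

```python
import math

def kt_levin(descriptions):
    """Levin's Kt: minimize |d| + log2(t) over (length, time) pairs
    for programs d with U(d) = x."""
    return min(l + math.log2(t) for l, t in descriptions)

def kt_measure(descriptions):
    """KT: minimize |d| + t, charging for running time linearly."""
    return min(l + t for l, t in descriptions)

# Two hypothetical descriptions of one string x:
# a 10-bit program running for 2^20 steps, and a 40-bit program running for 50.
ds = [(10, 2 ** 20), (40, 50)]
print(kt_levin(ds))    # 30.0: the short, slow program wins under Kt
print(kt_measure(ds))  # 90:   the longer, fast program wins under KT
```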
We have now presented a number of different measures: K, K^t, Kt, and KT. By analogy with MCSP, we can study the µ complexity of x in place of the “circuit size” measure, and thus obtain various problems of the form MµP, such as MKTP, MKtP, and MK^tP. If t = n^{O(1)}, then MK^tP is in NP, and several theorems about MKTP yield corollaries about MK^tP in this case. (See, e.g. [2]). Similarly, if t = 2^{εn} for some ε > 0, then MK^tP is in EXP, and several theorems about MKtP yield corollaries about MK^tP for t in this range [2].
In order to highlight some of the recent developments, let us introduce some notation that is somewhat imprecise and which is not used anywhere else, but which will be convenient for our purposes. Let MK^polyP serve as a shorthand for MK^tP whenever t = n^{O(1)}, and similarly let MK^expP serve as a shorthand for MK^tP whenever t = 2^{εn} for some ε > 0. We will thus be referring to MK^polyP and MK^expP. Doing so will enable us to avoid some confusing notation surrounding the name MINKT, which was introduced by Ko [31] to denote the set {(x, 1^t, 1^i) : K^t(x) ≤ i}. That is, (x, 1^t, 1^i) is in MINKT iff K^t(x) ≤ i (where the time bound t is given as part of the input). Hence these sets have comparable complexity and results about MINKT can be rephrased in terms of MK^polyP with only a small loss of accuracy. In particular, some recent important results [19, 20] are phrased in terms of MINKT, and as such they deal with K^poly complexity, and they are not really very closely connected with the KT measure; the name MINKT was devised more than a decade before KT was formulated. The reader who is interested in the details should refer to the original papers for the precise formulation of the theorems. However, the view presented here is “probably approximately correct”.
Frequently, theorems about MCSP and the various MµP problems are stated not in terms of exactly computing the circuit size or the complexity of a string, but in terms of approximating these values. This is usually presented in terms of two thresholds θ1 < θ2, where the desired solution is to say yes if the complexity of x is less than θ1, and to say no if the complexity of x is greater than θ2, and any answer is allowed if the complexity of x lies in the “gap” between θ1 and θ2. In the various theorems that have been proved in this setting, the choice of thresholds θ1 and θ2 is usually important, but in this article those details will be suppressed, and all of these approximation problems will be referred to as GapMCSP, GapMKTP, GapMKtP, etc.
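Concretely, a gap problem only constrains a solver outside the gap. The helper below is an illustrative formalization (not notation from the literature) of which answers are acceptable on an instance whose true complexity is `size`:

```python
def allowed_answers(size, theta1, theta2):
    """Answers accepted from a GapMCSP-style solver with thresholds
    theta1 < theta2, on an instance of true complexity `size`."""
    assert theta1 < theta2
    if size < theta1:
        return {"yes"}
    if size > theta2:
        return {"no"}
    return {"yes", "no"}  # inside the gap, either answer is accepted

print(allowed_answers(3, 10, 100))    # {'yes'}
print(allowed_answers(500, 10, 100))  # {'no'}
```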
At this point, the reader’s eyes may be starting to glaze over. It is natural to wonder if we really need to have all of these different related notions. In particular, there does not seem to be much difference between MCSP and MKTP. Most hardness results for MKTP actually hold for MCSP, and if the “gap” is large enough, then a solution to GapMCSP is a solution to GapMKTP (and vice-versa). Furthermore it has frequently been the case that a theorem about MCSP was first proved for MKTP and then the result for MCSP was obtained as a corollary. However, there is no efficient reduction known (in either direction) between MCSP and MKTP, and there are some theorems that are currently known to hold only for MKTP, although they are suspected to hold also for MCSP (e.g., [4, 6, 23]). Similarly, some of the more intriguing recent developments can only be understood by paying attention to the distinction between different notions of resource-bounded Kolmogorov complexity. Thus it is worth making this investment in defining the various distinct notions.
Connections to Learning Theory
Certain connections between computational learning theory and Kolmogorov complexity were identified soon after computational learning theory emerged as a field. After all, the goal of computational learning theory is to find a satisfactory (and hence succinct) explanation of a large body of observed data. For instance, in the 1980s and 1990s there was work [40, 41] showing that it is NP-hard to find “succinct explanations” that have size somewhat close to the optimal size, if these “explanations” are required to be finite automata or various other restricted formalisms. Ko studied this in a more general setting, allowing “explanations” to be efficient programs (in the setting of time-bounded Kolmogorov complexity).
Thus Ko studied not only the problem of computing K^t(x) (where one can consider x to be a completely-specified Boolean function), but also the problem of finding the smallest description d such that U(d) agrees with a given list of “yes instances” Y and a list of “no instances” N (that is, x can be considered as a partial Boolean function, with many “don’t care” instances). Thus, following [28], we can call this problem MK^tP*. In the setting that is most relevant for computational learning theory, the partial function x is presented compactly as separate lists Y and N, rather than as a string of length 2^n over the alphabet {0, 1, *}.
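The partial-function variant can be made concrete with the same kind of brute-force search as for total functions, except that a candidate circuit is only checked on the listed instances; everything else is a “don’t care”. The names and the AND/OR/NOT gate basis below are my own illustrative choices, not from the cited papers:

```python
from itertools import product

OPS = ("AND", "OR", "NOT")

def eval_circuit(n, gates, out, bits):
    """Evaluate a straight-line AND/OR/NOT circuit on one input tuple."""
    vals = list(bits)
    for op, a, b in gates:
        if op == "AND":
            vals.append(vals[a] & vals[b])
        elif op == "OR":
            vals.append(vals[a] | vals[b])
        else:  # NOT ignores its second argument
            vals.append(1 - vals[a])
    return vals[out]

def smallest_consistent_size(n, yes, no, max_size):
    """Smallest gate count of a circuit outputting 1 on every instance
    in `yes` and 0 on every instance in `no` (don't-cares elsewhere);
    returns None if no circuit with at most max_size gates works."""
    def circuits(size):
        def gen(prefix):
            if len(prefix) == size:
                for out in range(n + size):
                    yield prefix, out
                return
            w = n + len(prefix)
            for op, a, b in product(OPS, range(w), range(w)):
                yield from gen(prefix + [(op, a, b)])
        yield from gen([])
    for s in range(max_size + 1):
        for gates, out in circuits(s):
            if (all(eval_circuit(n, gates, out, y) == 1 for y in yes)
                    and all(eval_circuit(n, gates, out, x) == 0 for x in no)):
                return s
    return None

# Three-variable partial function: one yes instance, two no instances.
print(smallest_consistent_size(3, [(1, 1, 0)], [(1, 0, 0), (0, 1, 0)], 2))  # 1
```

Note how the don't-care instances can make the minimum much smaller than for any total extension of the partial function.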
Ko showed in [31] that relativizing techniques would not suffice, in order to settle the question of whether MK^polyP and MK^polyP* are NP-complete. That is, by giving the universal Turing machine U that defines Kolmogorov complexity access to an oracle A, one obtains the problems (MK^polyP)^A and (MK^polyP*)^A, and these sets can either be NP^A-complete or not, depending on the choice of A.
Thus it is noteworthy that it has recently been shown that MCSP* (the analogous problem of finding a small circuit consistent with Y and N) is NP-complete under randomized reductions [28]. I suspect (although I have not verified) that the proof also establishes that MKTP* is NP-complete under randomized reductions. One lesson to take from this is that KT and K^poly complexity differ from each other in significant ways. There are other recent examples of related phenomena, which will be discussed below.
There are other strong connections between MCSP and learning theory that have come to light recently. Using MCSP as an oracle (or even using a set that shares certain characteristics with MCSP) one can efficiently learn small circuits that do a good job of explaining the data [11]. For certain restricted classes of circuits, there are sets in P that one can use in place of MCSP to obtain learning algorithms that don’t require an oracle [11]. This connection has been explored further [12, 36].
Completeness, Hardness, Reducibility
The preceding section mentioned a result about a problem being NP-complete under randomized reductions. In order to discuss other results about the complexity of MCSP and related problems it is necessary to go into more detail about different notions of reducibility.
Let C be either a class of functions or a class of circuits. The classes that will concern us the most are the standard complexity classes P, ZPP, BPP, and NP, as well as the circuit classes (both uniform and nonuniform) NC^0, AC^0, AC^0[p], TC^0, and NC^1.
We refer the reader to the text by Vollmer [46] for background and more complete definitions of these standard circuit complexity classes, as well as for a discussion of uniformity.
We say that A ≤_m^C B if there is a function f in C (or f computed by a circuit family in C, respectively) such that x ∈ A iff f(x) ∈ B. We will make use of ≤_m^{AC^0} and ≤_m^{NC^0} reducibility. The more powerful notion of Turing reducibility also plays an important role in this work. Here, C is a complexity class that admits a characterization in terms of Turing machines or circuits, which can be augmented with an “oracle” mechanism, either by providing a “query tape” or “oracle gates”. We say that A ≤_T^C B if there is an oracle machine in C (or a family of oracle circuits in C) accepting A, when given oracle B. We make use of ≤_T^P, ≤_T^{ZPP}, and ≤_T^{BPP} reducibility; instead of writing A ≤_T^P B or A ≤_T^{ZPP} B, we will sometimes write A ∈ P^B or A ∈ ZPP^B. Turing reductions that are “nonadaptive” – in the sense that the list of queries that are posed on input x does not depend on the answers provided by the oracle – are called truth table reductions. We make use of ≤_tt^{P/poly} reducibility.
Not much has changed, regarding what is known about the “hardness” of MCSP, in the three years that have passed since my earlier survey [1]. Here is what I wrote at that time:
Table 1 presents information about the consequences that will follow if MCSP is NP-complete (or even if it is hard for certain subclasses of NP). The table is incomplete (since it does not mention the influential theorems of Kabanets and Cai [30] describing various consequences if MCSP were complete under a certain restricted type of ≤_m^P reduction). It also fails to adequately give credit to all of the papers that have contributed to this line of work, since – for example – some of the important contributions of [35] have subsequently been slightly improved [7, 25]. But one thing should jump out at the reader from Table 1: All of the conditions listed in Column 3 (with the exception of “FALSE”) are widely believed to be true, although they all seem to be far beyond the reach of current proof techniques.
Table 1.
Summary of what is known about the consequences of MCSP being hard for NP under different types of reducibility. If MCSP is hard for the class in Column 1 under the reducibility shown in Column 2, then the consequence in Column 3 follows. LTH is the linear-time analog of the polynomial hierarchy. Problems in LTH are accepted by alternating Turing machines that make only O(1) alternations and run for linear time.
It is significant that neither MCSP nor MKTP is NP-complete under ≤_m^{TIME(n^{0.49})} reductions, since SAT and many other well-known problems are complete under this very restrictive notion of reducibility – but it would be more satisfying to know whether these problems can be complete under more widely-used reducibilities such as ≤_m^{AC^0}. These sublinear-time reductions are so restrictive that even the PARITY problem is not ≤_m^{TIME(n^{0.49})}-reducible to MCSP or MKTP. In an attempt to prove that PARITY is not ≤_m^{AC^0}-reducible to MKTP, we actually ended up proving the opposite:
Theorem 1
[6]. MKTP is hard for DET under non-uniform NC^0 reductions. This also holds for MKtP and MK^polyP.
Here, DET is the class of problems NC^1-Turing-reducible to computing the determinant. It includes the well-known complexity classes NL and Mod_kL. This remains the only theorem that shows hardness of MµP problems under any kind of ≤_m reductions.
As a corollary of this theorem it follows that MKTP is not in AC^0[p] for any prime p. This was mentioned as an open question in [1] (see footnote 2 in [1]). (An alternate proof was given in [23].) It remained open whether MCSP was in AC^0[p] until a lower bound was proved in [18].
It is still open whether MCSP is hard for DET. The proof of the hardness result in [6] actually carries over to a version of GapMKTP where the “gap” is quite small. Thus one avenue for proving a hardness result for MCSP had seemed to be to improve the hardness result of [6], so that it worked for a much larger “gap”. This avenue was subsequently blocked, when it was shown that PARITY is not ≤_m^{NC^0}-reducible to GapMKTP (or to GapMCSP) for a moderate-sized “gap” [8]. Thus, although it is still open whether MKTP is NP-complete under ≤_m^{NC^0} reductions, we now know that GapMKTP is not NP-complete under this notion of reducibility.
When a much larger “gap” is considered, it was shown in [6] that, if cryptographically-secure one-way functions exist, then GapMCSP and GapMKTP are NP-intermediate in the sense that neither problem is in P/poly, and neither problem is complete for NP under P/poly-Turing reductions.
The strongest hardness results that are known for the MµP problems in NP remain the results of [3], where it was shown that MCSP, MKTP, and MK^polyP are all hard for SZK under ≤_T^{BPP} reductions. SZK is the class of problems that have statistical zero knowledge interactive proofs; SZK contains most of the problems that are assumed to be intractable, in order to build public-key cryptosystems. Thus it is widely assumed that MCSP and related problems lie outside of BPP, and cryptographers hope that it requires nearly exponential-sized circuits.
SZK also contains the Graph Isomorphism problem, which is thus ≤_T^{BPP}-reducible to MCSP and MKTP. In [4], Graph Isomorphism (and several other problems) were shown to be ≤_T^{ZPP} reducible to MKTP; it remains unknown if this also holds for MCSP. In fact, there is no interesting example of a problem A that is not known to be in ZPP that has been shown to be ≤_T^{ZPP} reducible to MCSP.
We close this section with a discussion of a very powerful notion of reducibility: SNP reductions. (Informally, A is SNP reducible to B means that A is ≤_m^{NP∩coNP}-reducible to B.) Hitchcock and Pavan have shown that MCSP is indeed NP-complete under SNP reductions if NP∩coNP contains problems that require large circuits (which seems very plausible) [25]. It is interesting to note that, back in the early 1990’s, Ko explicitly considered the possibility that computing K^poly might be NP-complete under SNP reductions [31].
Completeness in EXP and Other Classes
There are problems “similar” to MCSP that reside in many complexity classes. We can define MCSP^A to be MCSP for oracle circuits with A-oracle gates. That is, MCSP^A = {(f, i) : f has an A-oracle circuit of size at most i}. When A is complete for EXP, then MCSP^A is thought of as being quite similar to MKtP. Both of these problems, along with R_Kt (the set of Kt-random strings), are complete for EXP under ≤_tt^{P/poly} and ≤_T^{NP} reductions [2].
It is still open whether either of MKtP or MCSP^A is in P, and it had been open if MK^tP is in P for “small” exponential functions t such as t(m) = 2^{m/2}. But there is recent progress:
Theorem 2
[20]. For “small” exponential functions t such as t(m) = 2^{m/2}, MK^tP is complete for EXP under ≤_m^P reductions.
This seems to go a long way toward addressing Open Question 3.6 in [1].
As a corollary, MK^tP is not in P. In fact, a much stronger result holds. Let t be any superpolynomial function. Then the set of K^t-random strings R_{K^t} is immune to P (meaning: it has no infinite subset in P) [20]. The proof does not seem to carry over to Kt complexity, highlighting a significant difference between K^t and Kt.
Although it remains open whether MKtP is in P, Hirahara does show that MKtP is not in P-uniform ACC^0, and in fact the set of Kt-random strings is immune to P-uniform ACC^0. Furthermore, improved immunity results for the Kt-random strings are in some sense possible if and only if better algorithms for CircuitSAT can be devised for larger classes of circuits.
Oliveira has defined a randomized version of Kt complexity, which is conjectured to be nearly the same as Kt, but for which he is able to prove unconditional intractability results [37].
MCSP^{QBF} was known to be complete for PSPACE under ≤_T^{ZPP} reductions [2]. In more recent work, for various subclasses C of PSPACE, when A is a suitable complete problem for C, then MCSP^A has been shown to be complete for C under ≤_T^{BPP} reductions [29]. Crucially, the techniques used by [29] (and, indeed, by any of the authors who had proved hardness results for MCSP^A previously for various A) failed to work for any A in the polynomial hierarchy. We will return to this issue in the following section.
In related work, it was shown [6] that the question of whether MCSP^A is hard for P under a type of uniform AC^0 reductions is equivalent to the question of whether DSPACE(n) contains any sets that require exponential-size A-oracle circuits. Furthermore, this happens if and only if PARITY reduces to MCSP^A. Note that this condition is more likely to be true if A is easy, than if A is complex; it is false if A is complete for PSPACE, and it is probably true if A = ∅. Thus, although MCSP^{QBF} is almost certainly more complex than MCSP (the former is PSPACE-complete, and the latter is in NP), a reasonably-large subclass of P probably reduces to MCSP via these uniform AC^0 reductions, whereas hardly anything AC^0-reduces to MCSP^{QBF}. The explanation for this is that a uniform AC^0 reduction cannot formulate any useful queries to a complex oracle, whereas it (probably) can do so for a simpler oracle.
NP-Hardness
Recall from the previous section that there were no NP-hardness results known for any problem of the form MCSP^A where A is in the polynomial hierarchy.
This is still true; however, there is some progress to report. Hirahara has shown that computing the “conditional” complexity K^poly(x|y) relative to SAT (i.e., given (x, y), finding the length of the shortest description d such that U^{SAT}(d, y) outputs x in time n^{O(1)}) is NP-hard under ≤_T^{ZPP} reductions [20].
It might be more satisfying to remove the SAT oracle, and have a hardness result for computing K^poly(x|y) – but Hirahara shows that this can’t be shown to be hard for NP (or even hard for SZK) under ≤_m^P reductions without first separating EXP from BPP.
In a similar vein, if one were to show that MCSP or MKTP (or MCSP^A or MKTP^A for any set A) is hard for NP under ≤_m^P reductions, then one will have shown that EXP ≠ ZPP [20].
A different kind of NP-hardness result for conditional Kolmogorov complexity was proved recently by Ilango [27]. In [2], conditional KT complexity KT(x|y) was studied by making the string y available to the universal Turing machine U as an “oracle”. Thus it makes sense to consider a “conditional complexity” version of MCSP by making a string y available to a circuit via oracle gates. This problem was formalized and shown to be NP-complete under randomized reductions [27].
Many of the functions that we compute daily produce more than one bit of output. Thus it makes sense to study the circuit size that is required in order to compute such functions. This problem is called MOCSP in [28], where it is shown to be NP-complete under randomized reductions. It will be interesting to see how the complexity of this problem varies, as the number of output bits of the functions under consideration shrinks toward one (at which point it becomes MCSP).
It has been known since the 1970’s that computing the size of the smallest DNF expression for a given truth-table is NP-complete. (A simple proof, and a discussion of the history can be found in [5].) However, it remains unknown what the complexity is of finding the smallest depth-three circuit for a given truth table. (Some very weak intractability results for minimizing constant-depth circuits can be found in [5], giving subexponential reductions from the problem of factoring Blum integers.) The first real progress on this front was reported in [22], giving an NP-completeness result (under ≤_m^P reductions) for a class of depth three circuits (with MOD gates on the bottom level). Ilango proved that computing the size of the smallest depth-d AC^0 formula for a truth-table lies outside of AC^0[p] for any prime p [27], and he has now followed that up with a proof that computing the size of the smallest depth-d AC^0 formula is NP-complete under randomized reductions [26]. Note that a constant-depth circuit can be transformed into a formula with only a polynomial blow-up; thus in many situations we are able to ignore the distinction between circuits and formulas in the constant-depth realm. However, the techniques employed in [26, 27] are quite sensitive to small perturbations in the size, and hence the distinction between circuits and formulae is important here. Still, this is dramatic progress on a front where progress has been very slow.
Average Case Complexity, One-Way Functions
Kabanets and Cai gave birth to the modern study of MCSP in 2000 [30], in a paper that was motivated in part by the study of Natural Proofs [42], and which called attention to the fact that if MCSP is easy, then there are no cryptographically-secure one-way functions. In the succeeding decades, there has been speculation about whether the converse implication also holds. That is, can one base cryptography on assumptions about the complexity of MCSP?
First, it should be observed that, in some sense, MCSP is very easy “on average”. For instance the hardness results that we have (such as reducing SZK to MCSP) show that the “hard instances” of MCSP are the ones where we want to distinguish between n-ary functions that require circuits of size nearly 2^n/n (the “NO” instances) and those that have circuits of size at most 2^{εn} (the “YES” instances). However, an algorithm that simply says “no” on all inputs will give the correct answer more than 99% of the time.
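The 99% figure is backed by a standard counting argument: there are at most roughly (3(n+s)^2)^s · (n+s) circuits with s AND/OR/NOT gates, but 2^(2^n) truth tables, so for s well below 2^n/n almost every truth table is a NO instance. The crude bound below is my own back-of-the-envelope version of this argument, not a calculation from the survey:

```python
import math

def log2_fraction_easy(n, s):
    """Crude upper bound on log2 of the fraction of n-ary Boolean
    functions computable with at most s AND/OR/NOT gates: each gate
    picks an operation (3 ways) and two predecessor wires (< (n+s)^2
    ways), and an output wire is chosen among n+s wires."""
    log_circuits = s * math.log2(3 * (n + s) ** 2) + math.log2(n + s)
    return log_circuits - 2 ** n  # minus log2 of the number of functions

# Size threshold 2^n/(2n), comfortably inside the "hard" regime:
n = 10
s = 2 ** n // (2 * n)  # 51 gates
print(log2_fraction_easy(n, s))  # about -332: all but a 2^-332 fraction are NO instances
```

So an always-“no” heuristic errs on only a vanishing fraction of inputs in this regime, which is exactly why a more refined notion of average-case hardness is needed.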
Thus Hirahara and Santhanam [23] chose to study a different notion of heuristics for MCSP, where algorithms must always give an answer in {Yes, No, I don’t know}, where the algorithm never gives an incorrect answer, and the algorithm is said to perform well “on average” if it only seldom answers “I don’t know”. They were able to show unconditionally that MCSP is hard on average in this sense for AC^0[p] for any prime p, and to show that certain well-studied hypotheses imply that MCSP is hard on average.
More recently, Santhanam [43] has formulated a conjecture (which would involve too big of a digression to describe more carefully here), which – if true – would imply that a version of MCSP is hard on average in this sense if and only if cryptographically-secure one-way functions exist. That is, Santhanam’s conjecture provides a framework for believing that one can base cryptography on the average-case complexity of MCSP.
But how does the average-case complexity of MCSP depend on its worst-case complexity? Hirahara [19] showed that GapMCSP has no solution in BPP if and only if a version of MCSP is hard on average. A related result stated in terms of MINKT appears in the same paper. These results attracted considerable attention, because prior work had indicated that such worst-case-to-average-case reductions would be impossible to prove using black-box techniques. Additional work has given further evidence that the techniques of [19] are inherently non-black-box [24].
Complexity Classes and Noncomputable Complexity Measures
The title of this section is the same as the title of Sect. 4 of the survey that I wrote three years ago [1]. In that section, I described the work that had been done, studying the classes of sets that are reducible to the (non-computable) set of Kolmogorov-random strings R_K, and to MKP (the problem of computing K(x)), including the reasons why it seemed reasonable to conjecture that BPP and NEXP could be characterized in terms of different types of reductions to the Kolmogorov-random strings.
I won’t repeat that discussion here, because both of those conjectures have been disproved (barring some extremely unlikely complexity class collapses). Taken together, the papers [21, 24], and [20] give a much better understanding of the classes of languages reducible to the Kolmogorov-random strings.
Previously, it was known that BPP ⊆ P_tt^{R_K} ⊆ PSPACE ⊆ P^{R_K}, and NEXP ⊆ NP^{R_K} ⊆ EXPSPACE. Hirahara [20] has now shown that NEXP ⊆ P^{R_K}.
This same paper also gives a surprising answer to Open Question 4.6 of [1], in showing that quasipolynomial-time nonadaptive reductions to R_K suffice to capture NP (and also some other classes in the polynomial hierarchy).
Magnification
Some of the most important and exciting developments relating to MCSP and related problems deal with the emerging study of “hardness magnification”. This is the phenomenon whereby seemingly very modest lower bounds can be “amplified” or “magnified” and thereby be shown to imply superpolynomial lower bounds. I was involved in some of the early work in this direction [9] (which did not involve MCSP), but much stronger work has subsequently appeared.
It is important to note, in this regard, that lower bounds have been proved for MCSP that essentially match the strongest lower bounds that we have for any problems in NP [16]. There is now a significant body of work, showing that slight improvements to those bounds, or other seemingly-attainable lower bounds for MCSP or MKtP or related problems, would yield dramatic complexity class separations [12–15, 34, 38, 39, 45].
This would be a good place to survey this work, except that an excellent survey already appears in [12]. Igor Carboni Oliveira has also written some notes entitled “Advances in Hardness Magnification” related to a talk he gave at the Simons Institute in December, 2019, available on his home page. These notes and [12] describe in detail the reasons that this approach seems to avoid the Natural Proofs barrier identified in the work of Razborov and Rudich [42]. But they also describe some potential obstacles that need to be overcome, before this approach can truly be used to separate complexity classes.
Contributor Information
Eric Allender, Email: allender@cs.rutgers.edu, http://www.cs.rutgers.edu/~allender.
References
- 1.Allender E. The complexity of complexity. In: Day A, Fellows M, Greenberg N, Khoussainov B, Melnikov A, Rosamond F, editors. Computability and Complexity; Cham: Springer; 2017. pp. 79–94. [Google Scholar]
- 2.Allender E, Buhrman H, Koucký M, van Melkebeek D, Ronneburger D. Power from random strings. SIAM J. Comput. 2006;35:1467–1493. doi: 10.1137/050628994. [DOI] [Google Scholar]
- 3.Allender E, Das B. Zero knowledge and circuit minimization. Inf. Comput. 2017;256:2–8. doi: 10.1016/j.ic.2017.04.004. [DOI] [Google Scholar]
- 4.Allender E, Grochow J, van Melkebeek D, Morgan A, Moore C. Minimum circuit size, graph isomorphism and related problems. SIAM J. Comput. 2018;47:1339–1372. doi: 10.1137/17M1157970. [DOI] [Google Scholar]
- 5.Allender E, Hellerstein L, McCabe P, Pitassi T, Saks ME. Minimizing disjunctive normal form formulas and AC^0 circuits given a truth table. SIAM J. Comput. 2008;38(1):63–84. doi: 10.1137/060664537. [DOI] [Google Scholar]
- 6.Allender E, Hirahara S. New insights on the (non)-hardness of circuit minimization and related problems. ACM Trans. Comput. Theory (ToCT) 2019;11(4):27:1–27:27. doi: 10.1145/3349616. [DOI] [Google Scholar]
- 7.Allender E, Holden D, Kabanets V. The minimum oracle circuit size problem. Comput. Complex. 2017;26(2):469–496. doi: 10.1007/s00037-016-0124-0. [DOI] [Google Scholar]
- 8.Allender E, Ilango R, Vafa N. The non-hardness of approximating circuit size. In: van Bevern R, Kucherov G, editors. Computer Science – Theory and Applications; Cham: Springer; 2019. pp. 13–24. [Google Scholar]
- 9.Allender E, Koucký M. Amplifying lower bounds by means of self-reducibility. J. ACM. 2010;57:14:1–14:36. doi: 10.1145/1706591.1706594. [DOI] [Google Scholar]
- 10.Allender E, Koucký M, Ronneburger D, Roy S. The pervasive reach of resource-bounded Kolmogorov complexity in computational complexity theory. J. Comput. Syst. Sci. 2010;77:14–40. doi: 10.1016/j.jcss.2010.06.004. [DOI] [Google Scholar]
- 11.Carmosino, M., Impagliazzo, R., Kabanets, V., Kolokolova, A.: Learning algorithms from natural proofs. In: 31st Conference on Computational Complexity, CCC. LIPIcs, vol. 50, pp. 10:1–10:24. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2016). 10.4230/LIPIcs.CCC.2016.10
- 12.Chen, L., Hirahara, S., Oliveira, I.C., Pich, J., Rajgopal, N., Santhanam, R.: Beyond natural proofs: hardness magnification and locality. In: 11th Innovations in Theoretical Computer Science Conference (ITCS). LIPIcs, vol. 151, pp. 70:1–70:48. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.70
- 13.Chen, L., Jin, C., Williams, R.: Hardness magnification for all sparse NP languages. In: Symposium on Foundations of Computer Science (FOCS), pp. 1240–1255 (2019)
- 14.Chen, L., Jin, C., Williams, R.: Sharp threshold results for computational complexity (2019). Manuscript
- 15.Chen, L., McKay, D.M., Murray, C.D., Williams, R.R.: Relations and equivalences between circuit lower bounds and Karp-Lipton theorems. In: 34th Computational Complexity Conference (CCC). LIPIcs, vol. 137, pp. 30:1–30:21. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.CCC.2019.30
- 16.Cheraghchi, M., Kabanets, V., Lu, Z., Myrisiotis, D.: Circuit lower bounds for MCSP from local pseudorandom generators. In: 46th International Colloquium on Automata, Languages, and Programming, (ICALP). LIPIcs, vol. 132, pp. 39:1–39:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.ICALP.2019.39
- 17.Downey R, Hirschfeldt D. Algorithmic Randomness and Complexity. New York: Springer; 2010. [Google Scholar]
- 18.Golovnev, A., Ilango, R., Impagliazzo, R., Kabanets, V., Kolokolova, A., Tal, A.: AC0[p] lower bounds against MCSP via the coin problem. In: 46th International Colloquium on Automata, Languages, and Programming (ICALP). LIPIcs, vol. 132, pp. 66:1–66:15. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.ICALP.2019.66
- 19.Hirahara, S.: Non-black-box worst-case to average-case reductions within NP. In: 59th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 247–258 (2018). 10.1109/FOCS.2018.00032
- 20.Hirahara, S.: Kolmogorov-randomness is harder than expected (2019). Manuscript
- 21.Hirahara, S.: Unexpected power of random strings. In: 11th Innovations in Theoretical Computer Science Conference (ITCS). LIPIcs, vol. 151, pp. 41:1–41:13. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.41
- 22.Hirahara, S., Oliveira, I.C., Santhanam, R.: NP-hardness of minimum circuit size problem for OR-AND-MOD circuits. In: 33rd Conference on Computational Complexity (CCC). LIPIcs, vol. 102, pp. 5:1–5:31. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2018). 10.4230/LIPIcs.CCC.2018.5
- 23.Hirahara, S., Santhanam, R.: On the average-case complexity of MCSP and its variants. In: 32nd Conference on Computational Complexity (CCC). LIPIcs, vol. 79, pp. 7:1–7:20. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). 10.4230/LIPIcs.CCC.2017.7
- 24.Hirahara, S., Watanabe, O.: On nonadaptive reductions to the set of random strings and its dense subsets. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 26, p. 25 (2019)
- 25.Hitchcock, J.M., Pavan, A.: On the NP-completeness of the minimum circuit size problem. In: Conference on Foundations of Software Technology and Theoretical Computer Science (FST&TCS). LIPIcs, vol. 45, pp. 236–245. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2015). 10.4230/LIPIcs.FSTTCS.2015.236
- 26.Ilango, R.: Personal communication (2019)
- 27.Ilango, R.: Approaching MCSP from above and below: hardness for a conditional variant and AC0[p]. In: 11th Innovations in Theoretical Computer Science Conference (ITCS). LIPIcs, vol. 151, pp. 34:1–34:26. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.34
- 28.Ilango, R., Loff, B., Oliveira, I.C.: NP-hardness of minimizing circuits and communication (2019). Manuscript
- 29.Impagliazzo, R., Kabanets, V., Volkovich, I.: The power of natural properties as oracles. In: 33rd Conference on Computational Complexity (CCC). LIPIcs, vol. 102, pp. 7:1–7:20. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2018). 10.4230/LIPIcs.CCC.2018.7
- 30.Kabanets, V., Cai, J.Y.: Circuit minimization problem. In: ACM Symposium on Theory of Computing (STOC), pp. 73–79 (2000). 10.1145/335305.335314
- 31.Ko K. On the notion of infinite pseudorandom sequences. Theor. Comput. Sci. 1986;48(3):9–33. doi: 10.1016/0304-3975(86)90081-2.
- 32.Levin LA. Randomness conservation inequalities; information and independence in mathematical theories. Inf. Control. 1984;61(1):15–37. doi: 10.1016/S0019-9958(84)80060-1.
- 33.Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and Its Applications. Texts in Computer Science, 4th edn. Springer (2019). 10.1007/978-3-030-11298-1
- 34.McKay, D.M., Murray, C.D., Williams, R.R.: Weak lower bounds on resource-bounded compression imply strong separations of complexity classes. In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pp. 1215–1225 (2019). 10.1145/3313276.3316396
- 35.Murray C, Williams R. On the (non) NP-hardness of computing circuit complexity. Theory Comput. 2017;13(4):1–22. doi: 10.4086/toc.2017.v013a004.
- 36.Oliveira, I., Santhanam, R.: Conspiracies between learning algorithms, circuit lower bounds and pseudorandomness. In: 32nd Conference on Computational Complexity (CCC). LIPIcs, vol. 79, pp. 18:1–18:49. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). 10.4230/LIPIcs.CCC.2017.18
- 37.Oliveira, I.C.: Randomness and intractability in Kolmogorov complexity. In: 46th International Colloquium on Automata, Languages, and Programming (ICALP). LIPIcs, vol. 132, pp. 32:1–32:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.ICALP.2019.32
- 38.Oliveira, I.C., Pich, J., Santhanam, R.: Hardness magnification near state-of-the-art lower bounds. In: 34th Computational Complexity Conference (CCC). LIPIcs, vol. 137, pp. 27:1–27:29. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2019). 10.4230/LIPIcs.CCC.2019.27
- 39.Oliveira, I.C., Santhanam, R.: Hardness magnification for natural problems. In: 59th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 65–76 (2018). 10.1109/FOCS.2018.00016
- 40.Pitt L, Valiant LG. Computational limitations on learning from examples. J. ACM. 1988;35(4):965–984. doi: 10.1145/48014.63140.
- 41.Pitt L, Warmuth MK. The minimum consistent DFA problem cannot be approximated within any polynomial. J. ACM. 1993;40(1):95–142. doi: 10.1145/138027.138042.
- 42.Razborov A, Rudich S. Natural proofs. J. Comput. Syst. Sci. 1997;55:24–35. doi: 10.1006/jcss.1997.1494.
- 43.Santhanam, R.: Pseudorandomness and the minimum circuit size problem. In: 11th Innovations in Theoretical Computer Science Conference (ITCS). LIPIcs, vol. 151, pp. 68:1–68:26. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2020). 10.4230/LIPIcs.ITCS.2020.68
- 44.Santhanam, R.: Why are proof complexity lower bounds hard? In: Symposium on Foundations of Computer Science (FOCS), pp. 1305–1324 (2019)
- 45.Tal, A.: The bipartite formula complexity of inner-product is quadratic. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 23, p. 181 (2016)
- 46.Vollmer H. Introduction to Circuit Complexity: A Uniform Approach. New York: Springer; 1999.