Abstract
Given a k-node pattern graph H and an n-node host graph G, the subgraph counting problem asks to compute the number of copies of H in G. In this work we address the following question: can we count the copies of H faster if G is sparse? We answer in the affirmative by introducing a novel tree-like decomposition for directed acyclic graphs, inspired by the classic tree decomposition for undirected graphs. This decomposition gives a dynamic program for counting the homomorphisms of H in G by exploiting the degeneracy of G, which allows us to beat the state-of-the-art subgraph counting algorithms when G is sparse enough. For example, we can count the induced copies of any k-node pattern H in time if G has bounded degeneracy, and in time if G has bounded average degree. These bounds are instantiations of a more general result, parameterized by the degeneracy of G and the structure of H, which generalizes classic bounds on counting cliques and complete bipartite graphs. We also give lower bounds based on the Exponential Time Hypothesis, showing that our results are actually a characterization of the complexity of subgraph counting in bounded-degeneracy graphs.
Keywords: Subgraph counting, Tree decomposition, Degeneracy, Sparsity
Introduction
We address the following fundamental subgraph counting problem:
Input: an n-node graph G (the host graph) and a k-node graph H (the pattern)
Output: the number of induced copies of H in G
If no further assumptions are made, the best possible algorithm for this problem is likely to have running time . Indeed, the naive brute-force algorithm has running time , and under the Exponential Time Hypothesis [23] any algorithm for counting k-cliques has running time [8, 9]. The best algorithm known, which was given over 30 years ago by Nešetřil and Poljak [29] and is based on fast matrix multiplication, is only slightly faster than . Ignoring factors,1 the algorithm runs in time where is the matrix multiplication exponent. Since [25], this gives a state-of-the-art running time of .
In this work, we aim at breaking through this “ barrier” by assuming that G is sparse, and in particular, that G has bounded degeneracy. This assumption is often made for real-world graphs like social networks, since it agrees well with their structural properties [17]. The family of bounded-degeneracy graphs is rich from a theoretical point of view, too: it includes many important classes such as Barabási-Albert preferential attachment graphs, graphs excluding a fixed minor, planar graphs, bounded-treewidth graphs, bounded-degree graphs, and bounded-genus graphs, see [20]. Unfortunately, even when G has bounded degeneracy, the state of the art remains the -time algorithm by Nešetřil and Poljak, unless one makes further assumptions. For example, one can count the copies of any given pattern H in time O(n), provided G is planar [15] or has bounded treewidth [27] or has bounded degree [30]; all conditions that are stricter than bounded degeneracy. Alternatively, if G has bounded degeneracy, O(n)-time algorithms exist when H is the clique [1, 10, 16], or when H is a complete bipartite graph, if we do not require the copies of H to be induced [14]. Unfortunately, it is not clear how to extend the techniques behind these results to all patterns H and all G with bounded degeneracy. Thus, to what extent a small degeneracy of G makes subgraph counting easier remains an open question.
In this work we introduce a novel tree-like graph decomposition, to be applied to the pattern graph H, designed to exploit the degeneracy of G when counting the homomorphisms from H to G. When G is sparse enough, this decomposition yields subgraph counting algorithms faster than the state of the art. For example, we show how to count the induced copies of any k-node pattern H in time when G has bounded degeneracy, and in time when G has bounded average degree. These results are instantiations of a more general result which says that H can be counted in time , where d is the degeneracy of G, and is a certain measure of “width” of H arising from our decomposition. Assuming the Exponential Time Hypothesis, we also show that operations are required in the worst case, even if G has degeneracy 2. This provides a novel characterization of the complexity of subgraph counting in bounded-degeneracy graphs.
Results
We divide our results into bounds (Sect. 1.1.1) and techniques (Sect. 1.1.2). We denote by d the degeneracy of G, and we denote by , , the number of, respectively, homomorphisms, occurrences, and induced occurrences of H in G. See Sect. 1.2 for further definitions and notation. We remark that, unless otherwise specified, our bounds hold for every H including disconnected ones.
Bounds
Our first results are two running time bounds parameterized by the sparsity of G.
Theorem 1
For any k-node pattern H one can compute and in time , and one can compute in time , where d is the degeneracy of G.
This bound reduces the exponent of n to , down from the state-of-the-art of the Nešetřil–Poljak bound. This implies that our polynomial dependence on n is better whenever , and in any case (that is, even if ) whenever . As a corollary of Theorem 1, since where r is the average degree of G, we obtain:
Theorem 2
For any k-node pattern H one can compute and in time , and one can compute in time , where r is the average degree of G.
This bound has a polynomial dependence on n better than Nešetřil-Poljak whenever , and in any case (that is, even if ) whenever . In particular, we have a -time algorithm when . These are the first improvements over the Nešetřil-Poljak algorithm for graphs with small degeneracy or small average degree.
As a second result, we give improved bounds for some classes of patterns. The first is the class of quasi-cliques, a typical target pattern for social networks [5–7, 31–33]. We prove:
Theorem 3
If H is the clique minus edges, then one can compute and in time , and in time .
This generalizes the classic bound for counting cliques by Chiba and Nishizeki [10], at the price of an extra factor . Next, we consider complete quasi-multipartite graphs:
Theorem 4
If H is a complete multipartite graph, then one can compute and in time . If H is a complete multipartite graph plus edges, then one can compute and in time .
This generalizes an existing bound for counting the non-induced copies of complete (maximal) bipartite graphs [14], again at the price of an extra factor .
Table 1 summarizes our upper bounds. We remark that our algorithms work for the colored versions of the problem (count only copies of H with prescribed vertex and/or edge colors) as well as the weighted versions of the problem (compute the total node or edge weight of copies of H in G). This can be obtained by a straightforward adaptation of our homomorphism counting algorithms.
Table 1.
Summary of upper bounds for the problem of counting the number of occurrences of H in G
| Pattern H | Time to compute | References |
|---|---|---|
| All (even disconnected) |  | [29] |
| All (even disconnected) |  | This work |
| All (even disconnected) |  | This work |
| Clique |  | [10] |
| Clique minus edges |  | This work |
| Complete bipartite (non-induced only) |  | [14] |
| Complete multipartite (non-induced only) |  | This work |
| Complete multipartite plus edges (non-induced only) |  | This work |
The graphs G and H have n and k vertices respectively, and d is the degeneracy of G. All except the last three bounds hold for counting the number of induced occurrences.
Techniques
The bounds of Sect. 1.1.1 are instantiations of a single, more general result. This result is based on a novel notion of width, the dag treewidth of H, which captures the relevant structure of H when counting its copies in a d-degenerate graph. In a simplified form, the bound is the following:
Theorem 5
For any k-node pattern H one can compute , , and in time .
Let us briefly explain this result. The heart of the problem is computing ; once we know how to do this, we can obtain and via inclusion-exclusion arguments at the price of an extra multiplicative factor f(k), like in [3, 11]. To compute , we give G an acyclic orientation with maximum outdegree d. Then, we take every possible acyclic orientation P of H, and compute where by we mean the number of homomorphisms from P to G that respect the orientations of the arcs. Note that the number of such homomorphisms can be even if G has bounded degeneracy (for example, if P is an independent set), so we cannot list them explicitly. At this point we introduce our technical tool, the dag tree decomposition of P. This is a tree T that captures the relevant reachability relations between the nodes of P. Given T, one can compute via dynamic programming in time , where is the width of T. The dynamic program computes by combining carefully the homomorphism counts of certain subgraphs of P. The dag-treewidth , which is the parameter appearing in the bound of Theorem 5, is the maximum width of the optimal dag tree decomposition of any acyclic orientation P of any graph obtainable by identifying nodes of and/or adding edges to H (this arises from the inclusion-exclusion arguments). With this, our technical machinery is complete. To obtain the bounds of the previous paragraph, we show how to compute efficiently dag tree decompositions of low width, and apply a more technical version of Theorem 5.
We conclude by complementing Theorem 5 with a lower bound based on the Exponential Time Hypothesis. This lower bound shows that in the worst case the dag-treewidth cannot be beaten, and therefore our decomposition captures, at least in part, the complexity of counting subgraphs in d-degenerate graphs.
Theorem 6
Under the Exponential Time Hypothesis [23], no algorithm can compute or in time for all H.
Preliminaries and notation
Both G and H are simple graphs, possibly disconnected. For any subset we denote by the subgraph of G induced by ; the same notation applies to any graph. A homomorphism from H to G is a map such that implies . We write to highlight the edges that preserves. When H and G are oriented, must preserve the direction of the arcs. If is injective then we have an injective homomorphism. We denote by and the number of homomorphisms and injective homomorphisms from H to G. To avoid confusion, we will use the symbol to denote maps that are not necessarily homomorphisms. The symbol denotes isomorphism. A copy of H in G is a subgraph such that . If moreover then F is an induced copy. We denote by and the number of copies and induced copies of H in G; we may omit G if clear from the context. When we give an acyclic orientation to the edges of H, we denote the resulting dag by P. All the notation described above applies to directed graphs in the natural way.
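As a concrete reference for these definitions, the following brute-force sketch computes the number of homomorphisms and of injective homomorphisms for very small graphs; the encoding of a graph as a node list plus a set of frozenset edges is only an illustrative convention.

```python
from itertools import product, permutations

def is_hom(phi, H_edges, G_edges):
    # phi maps every node of H to a node of G; every edge of H must land on an edge of G
    return all(frozenset((phi[u], phi[v])) in G_edges for (u, v) in H_edges)

def count_homs(H_nodes, H_edges, G_nodes, G_edges):
    # number of (not necessarily injective) homomorphisms from H to G
    return sum(is_hom(dict(zip(H_nodes, image)), H_edges, G_edges)
               for image in product(G_nodes, repeat=len(H_nodes)))

def count_injective_homs(H_nodes, H_edges, G_nodes, G_edges):
    # number of injective homomorphisms; dividing by the number of automorphisms of H
    # gives the number of copies of H in G
    return sum(is_hom(dict(zip(H_nodes, image)), H_edges, G_edges)
               for image in permutations(G_nodes, len(H_nodes)))
```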
The degeneracy of G is the smallest integer d such that there is an acyclic orientation of G with maximum outdegree bounded by d. Such an orientation can be found in time O(|E|) by repeatedly removing from G a minimum-degree node [27]. From now on we assume that G has this orientation. Equivalently, d is the smallest integer that bounds from above the minimum degree of every subgraph of G.
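As an illustration, here is a minimal sketch of the orientation procedure just described; it uses a naive minimum-degree scan for brevity, whereas the O(|E|) bound requires bucketed degree lists.

```python
def degeneracy_orientation(adj):
    """adj: {node: set of neighbors} of the undirected graph G.
    Returns an acyclic orientation as out-neighbor lists, together with the degeneracy d."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    out = {v: [] for v in adj}
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # a minimum-degree node (naive O(n) scan)
        d = max(d, len(adj[v]))
        for w in adj[v]:
            out[v].append(w)                          # orient every remaining edge away from v
            adj[w].discard(v)
        del adj[v]
    return out, d                                     # maximum out-degree equals the degeneracy
```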
We assume the following operations take constant time: accessing the i-th arc of any node , and checking if (u, v) is an arc of G for any pair (u, v). Our upper bounds still hold if checking an arc takes time , which can be achieved via binary search if we first sort the adjacency lists of G. The factor in our bounds appears since we assume logarithmic access time for our dictionaries, each of which holds entries. This factor can be removed by using dictionaries with worst-case O(1) access time (e.g., hash maps), at the price of obtaining probabilistic/amortized bounds rather than deterministic ones.
Finally, we recall the tree decomposition and treewidth of a graph. For any two nodes X, Y in a tree T, we denote by T(X, Y) the unique path between X and Y in T.
Definition 1
(see [13], Ch. 12.3) Given a graph , a tree decomposition of G is a tree such that each node is a subset , and that:2
for every edge there exists such that
, if then
The width of a tree decomposition T is . The treewidth of a graph G is the minimum of over all tree decompositions T of G.
Related work
As discussed above, the fastest algorithm known for computing is the one by Nešetřil and Poljak [29] that runs in time where is the matrix multiplication exponent. With the current bound , this running time is in . Unfortunately, the algorithm is based on fast matrix multiplication, which makes it oblivious to the sparsity of G.
Under certain assumptions on G, faster algorithms are known. If G has bounded maximum degree, , then we can compute in time for some via multivariate graph polynomials [30]. If G has treewidth , and we are given a tree decomposition of G of such width, then we can compute in time ; see Lemma 18.4 of [27]. When G is planar, we obtain an algorithm where f is exponential in k [15]. All these assumptions are stronger than bounded degeneracy, and the techniques cannot be extended easily. A more general class that captures all these cases is that of nowhere-dense graphs [28], for which there exist fixed-parameter-tractable subgraph counting algorithms [22]. Nowhere dense graphs however do not include all bounded degeneracy graphs or all graphs with bounded average degree.
Even assuming G has bounded degeneracy, algorithms faster than Nešetřil-Poljak are known only when H belongs to special classes. The earliest result of this kind is the classic algorithm by Chiba and Nishizeki [10] to list all k-cliques in time . Eppstein showed that one can list all maximal cliques in time [16] and all non-induced complete bipartite subgraphs in time [14]. These algorithms exploit the degeneracy ordering of G in a way similar to ours. In fact, our techniques can be seen as a generalization of [10] that takes into account the structure of H. We note that a fundamental limitation of [10, 14, 16] is that they list all the copies of H, which for a generic H might be even if G has bounded degeneracy (for example if H is the independent set). In contrast, we list the copies of certain subgraphs of H, and combine them to infer the number of copies of H. To be more precise, we list homomorphisms rather than copies, which is another difference we have with [10, 14, 16] and a point we have in common with previous work [11].
Regarding our “dag tree decomposition”, it is inspired by the standard notion of tree decomposition of a graph, and it yields a similar dynamic program. Yet, the similarity between the two decompositions is rather superficial; indeed, our dag-treewidth can be O(1) when the treewidth is , and vice versa. Our decomposition is unrelated to the several notions of tree decomposition for directed graphs already known [19]. Finally, our lower bounds are novel; no general lower bound in terms of d and of the structure of H was available before.
Manuscript organisation
In Sect. 2 we build the intuition with a gentle introduction to our approach. In Sect. 3 we give our dag tree decomposition and the dynamic program for counting homomorphisms. In Sect. 4 we show how to compute good dag tree decompositions. Finally, in Sect. 5 we prove the lower bounds.
Exploiting degeneracy orientations
We build the intuition behind our approach, starting from the classic algorithm for counting cliques by Chiba and Nishizeki [10]. The algorithm begins by orienting G acyclically so that , which takes time O(|E|). With G oriented acyclically, we take each in turn, enumerate every subset of out-neighbors of v, and check its edges. In this way we can explicitly find all k-cliques of G in time . Observe that the crucial fact here is that an acyclically oriented clique has exactly one source, that is, a node with no incoming arcs. We would like to extend this approach to an arbitrary pattern H. Since every copy of H in G appears with exactly one acyclic orientation, we take every possible acyclic orientation P of H, count the copies of P in G, and sum all the counts. Thus, the problem reduces to counting the copies of an arbitrary dag P in our acyclic orientation of G.
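The clique-counting scheme just described can be sketched as follows, assuming G is given through the out-neighbor lists of a degeneracy orientation (as in the previous sketch); every k-clique is found exactly once, at its unique source.

```python
from itertools import combinations

def count_k_cliques(out, k):
    """out: {node: list of out-neighbors} of an acyclic orientation with max out-degree d."""
    arcs = {(u, w) for u in out for w in out[u]}          # arc set, for constant-time-style checks
    def adjacent(u, w):
        return (u, w) in arcs or (w, u) in arcs
    count = 0
    for v in out:                                         # v plays the role of the unique source
        for S in combinations(out[v], k - 1):             # at most C(d, k-1) candidate sets per node
            if all(adjacent(u, w) for u, w in combinations(S, 2)):
                count += 1
    return count
```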
Let us start in the naive way. Suppose P has s sources. Fix a directed spanning forest F of P. This is a collection of s disjoint directed trees rooted at the sources of P (arcs pointing away from the roots). Clearly, each copy of P in G contains a copy of F. Hence, we can enumerate the copies of F in G, and for each one check if it is a copy of P. To this end, first we enumerate the possible s-tuples of V to which the sources of P can be mapped. For each such s-tuple, we enumerate the possible mappings of the remaining nodes of the forest. This can be done in time by a straightforward extension of the out-neighbor listing algorithm above. Finally, for each mapping we check if its nodes induce P in G, in time . The total running time is . Unfortunately, if P is an independent set then and the running time is , so we have made no progress over the naive algorithm.
At this point we introduce our first idea. For reference we use the toy pattern P in Fig. 1. Instead of enumerating the copies of P in G, we decompose P into two pieces, P(1) and P(3, 5). Here, P(1) denotes the subgraph of P reachable from 1 (that is, the transitive closure of 1 in P). The same holds for P(3) and P(5), and we let P(3, 5) = P(3) ∪ P(5). Now we count the copies of P(1), and then the copies of P(3, 5), hoping to combine the results in some way to obtain the count of P. To simplify the task, we focus on counting homomorphisms rather than copies (see below). Thus, we want to compute by combining and .
Fig. 1.

Toy example: an acyclic orientation P of , decomposed into two pieces
Now, clearly, knowing and is not sufficient to infer . Thus, we need to solve a slightly more complex problem. For every pair , let be the map given by and . We let be the number of homomorphisms of P in G whose restriction to is . By a counting argument one can immediately see that:
| 1 |
Thus, to compute we only need to compute for all possible . Now, define and with the same meaning as above. A crucial observation is that , the domain of , is precisely the set of nodes in . It is not difficult to see that this implies:
| 2 |
Thus, now our goal is to compute and for all . To this end, we list all with the technique above, and for each such we increment a counter associated to in a dictionary with default value 0. Thus, we obtain for all . Since P(1) has one source, we enumerate maps. If the dictionary takes time to access an entry, the total running time is . The same technique applied to P(3, 5) yields a running time of , since P(3, 5) has two sources. Finally, we apply Eq. 1 by running over all entries in the first dictionary and retrieving the corresponding value from the second dictionary. The total running time is , while enumerating the homomorphisms of P would have required time .
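The bookkeeping above can be summarized by the following sketch: list the homomorphisms of the two pieces separately, bucket each list by the image of the nodes the pieces share, and sum the products of matching buckets as in Eq. 1. The enumeration oracles, and the encoding of homomorphisms as dictionaries, are assumptions made only for illustration.

```python
from collections import Counter

def combine_two_pieces(nodes_P1, nodes_P2, homs_P1, homs_P2):
    """homs_P1 / homs_P2: iterables of homomorphisms, each a dict from piece nodes to host nodes."""
    shared = tuple(sorted(set(nodes_P1) & set(nodes_P2)))   # e.g. the nodes shared by P(1) and P(3, 5)
    c1, c2 = Counter(), Counter()
    for phi in homs_P1:
        c1[tuple(phi[x] for x in shared)] += 1              # bucket by the image of the shared nodes
    for phi in homs_P2:
        c2[tuple(phi[x] for x in shared)] += 1
    return sum(c1[key] * c2[key] for key in c1)             # sum of products over matching buckets
```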
Let us abstract the general approach from this toy example. We want to decompose P into a set of pieces with the following properties: (i) Each piece has a small number of sources , and (ii) we can obtain by combining the homomorphism counts of the . This is achieved by the dag tree decomposition, which we introduce in Sect. 3. Like the tree decomposition for undirected graphs, the dag tree decomposition leads to a dynamic program to compute .
DAG tree decompositions
Let be a directed acyclic graph. We denote by , or simply , the set of nodes of P having no incoming arc. These are the sources of P. We denote by the transitive closure of u in P, i.e. the set of nodes of P reachable from u, and we let be the corresponding subgraph of P. For a subset of sources we let and . Thus, P(B) is the subgraph of P induced by all nodes reachable from B. We call B a bag of sources. We can now formally introduce our decomposition.
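The reachability objects just introduced can be computed by a plain graph search; the following short sketch (with the dag P encoded by successor lists, an illustrative convention) is reused in later sketches.

```python
def sources(succ):
    """succ: {node: list of out-neighbors} of the dag P. Returns the nodes with no incoming arc."""
    has_incoming = {w for v in succ for w in succ[v]}
    return [v for v in succ if v not in has_incoming]

def reach(succ, start):
    """Node set of the piece P(B): all nodes reachable from the collection `start` of sources."""
    seen, stack = set(start), list(start)
    while stack:
        for w in succ[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen
```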
Definition 2
(Dag tree decomposition) Let be a dag. A dag tree decomposition (d.t.d.) of P is a (rooted) tree with the following properties:
each node is a bag of sources
for all , if then
One can see the similarity with the tree decomposition of an undirected graph (Definition 1). However, our dag tree decomposition differs crucially in two aspects. First, the bags are subsets of rather than subsets of . This is because the time needed to list the homomorphisms between and G is driven by . Second, the path-intersection property (3) concerns the pieces reachable from the bags rather than the bags themselves. The reason is that, to combine the counts of two pieces together, their intersection must form a separator in P (similarly to the tree decomposition of an undirected graph). The dag tree decomposition induces the following notions of width, used throughout the rest of the article.
Definition 3
The width of T is . The dag treewidth of P is the minimum of over all dag tree decompositions T of P.
Clearly for any k-node dag P. Figure 2 shows a pattern P together with a d.t.d. of width 1. We observe that has no obvious relation to the treewidth of H; see the discussion in Sect. 3.2.
Fig. 2.

Left: a dag P formed by five pieces. Right: a dag tree decomposition T for P. Since and the largest piece contains 4 nodes, we can compute in time
Counting homomorphisms via dag tree decompositions
For any let T(B) be the subtree of T rooted at B. We let be the down-closure of B in T, that is, the union of all bags in T(B). Consider , the subgraph of P induced by the nodes reachable from (note the difference with P(B), which contains only the nodes reachable from B). We compute in a bottom-up fashion over all B, starting with the leaves of T and moving towards the root. This is similar to the dynamic program given by the standard tree decomposition (see [18]).
As anticipated, we actually compute , the number of homomorphisms that extend a fixed mapping . We need the following concept:
Definition 4
Let be two subgraphs of P, and let and be two homomorphisms. We say and respect each other if for all .
Given some , we denote by the number of homomorphisms from to G that respect . We can now present our main algorithmic result.
Theorem 7
Let P be any k-node dag, and be a d.t.d. for P. Fix any as the root of T. There is a dynamic programming algorithm HomCount(P, T, B) that in time computes for all . This is also a bound on the time needed to compute .
The proof of Theorem 7 is given in the next subsection. Before continuing, let be an upper bound on the time needed to compute a d.t.d. of minimum width with at most bags for a pattern on k nodes. We can show that such a d.t.d. always exists:
Lemma 1
Any k-node dag P has a minimum-width d.t.d. on at most bags.
Proof
We show that, if a d.t.d. has two bags containing exactly the same sources, then one of the two bags can be removed. This implies that there exists a minimum-width d.t.d. where every bag contains a distinct source set, which therefore has at most bags. Suppose indeed T contains two bags X and formed by the same subset of sources. Let B be the neighbor of X on the unique path . Let be the tree obtained from T by replacing the edge with for every neighbor of X and then deleting X. Clearly , and properties (1) and (2) of Definition 2 are satisfied. Let us then check property (3). Consider a generic path and look at the corresponding path . If does not contain edges that we deleted, then . In this case the property holds for any bag in since it holds in T. Suppose instead contains edges that we deleted. Then contains the same bags of save that X is replaced by B. Thus we only need to check that . By property (3), . Moreover, since by construction , property (3) also gives . Thus . Therefore is a d.t.d. for P.
Then, as an immediate corollary of Theorem 7, we have:
Theorem 8
Let P be any k-node dag, and be a d.t.d. for P. We can compute in time .
Theorem 8 will be used in Sect. 3.2 to prove the bounds for our original problem of counting the copies of H via inclusion-exclusion arguments.
Proof of Theorem 7
The algorithm behind Theorem 7 is similar to the one for counting homomorphisms using a tree decomposition. To start, we prove that our dag tree decomposition enjoys a separator property similar to the one enjoyed by tree decompositions.
Lemma 2
Let T be a rooted d.t.d. and let be the children of B in T. Then for all :
for all
for any arc , if then
for any arc , if then
Proof
We prove (a). Suppose for some we have . So there exists some node such that , , and . By definition of , this implies and for some bags and . Observe however that . Thus, by point (3) of Definition 2, we have . This contradicts the third inclusion, .
Now we prove (b) and (c). For (b), since and , then too. For (c), suppose by contradiction . Therefore, either , or for some with . In both cases however we have : in the first case this holds since u is reachable from , and in the second case since and by point (3) of Definition 2. Thus in any case , which contradicts again .
Lemma 2 says that is a separator for the sub-patterns in P. This allows us to compute by combining .
Next, we show that each homomorphism of is the juxtaposition (definition below) of some homomorphism of B and some homomorphisms of , provided they respect . This establishes a bijection, implying that we can count the homomorphisms by multiplying the counts of the homomorphisms .
Definition 5
Let be any set of homomorphisms, where for all we have and respects for all . The juxtaposition of , denoted by , is the homomorphism such that whenever .
Note that the juxtaposition is always well-defined and unique, since the respect each other and the image of every u is determined by at least one among .
Lemma 3
Let T be a d.t.d. and let be the children of B in T. Fix any . Let , and for let . Then there is a bijection between and , and therefore:
| 3 |
Proof
First, we show there is an injection between and . Fix any , and consider the tuple where each is the restriction of to . Note that is unique, and that it respects since does. Thus . It follows that . Now we show there is an injection between and . Consider any tuple , and consider the juxtaposition . Then and respects . It follows that .
Last, we bound the cost of enumerating the homomorphisms of a piece of P.
Lemma 4
Given any , the set of homomorphisms has size and can be enumerated in time .
Proof
We prove the bound on the enumeration time; the proof also gives the bound on . Let where . Fix a spanning forest of P(B), where each is a directed tree rooted at (arcs pointing away from the root). Consider any , and let be its restriction to . Clearly, . Note that is a homomorphism of in G. Thus, to enumerate we can enumerate all possible tuples where is a homomorphism of in G for all i. Note that not all such tuples give a valid juxtaposition that is a homomorphism . However, we can check if in time by checking the arcs between the vertices of G in the image of .
Let then be the set of homomorphisms of in G. We show how to enumerate in time , and thus all tuples in time . Together with the check on the arcs, this gives a total running time of for enumerating , as desired. To enumerate , we take each and enumerate all such that . To this end note that, once we have fixed , for each arc we have at most d choices for . Thus we can enumerate all that map to v in time . The total time to enumerate is therefore , as claimed.
We can now describe our dynamic programming algorithm, HomCount, to compute . Given a d.t.d. T of P, the algorithm goes bottom-up from the leaves of T towards the root, combining the counts using Lemma 3. For readability, we write the algorithm in a recursive fashion; an illustrative sketch is given below, and Lemma 5 states its guarantee.
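The sketch below conveys only the structure of the recursion, under assumed helpers: T.children and down_piece_nodes for navigating the d.t.d., piece_nodes for the node set of P(B), and enum_homs for the enumeration of Lemma 4. The numbered lines cited in the proof of Lemma 5 refer to the pseudocode of HomCount itself, not to this sketch.

```python
from collections import defaultdict

def key_of(phi, X):
    """Encode the restriction of a map phi (a dict) to the node set X as a hashable key."""
    return tuple(sorted((x, phi[x]) for x in X))

def hom_count(T, B, enum_homs, piece_nodes, down_piece_nodes):
    """Return a dictionary mapping (an encoding of) each homomorphism phi of P(B) into G
    to the number of homomorphisms of the piece below B that respect phi."""
    buckets = []                                           # one (separator, dictionary) pair per child
    for Bi in T.children(B):
        Ci = hom_count(T, Bi, enum_homs, piece_nodes, down_piece_nodes)
        sep = piece_nodes(B) & down_piece_nodes(Bi)        # the separator of Lemma 2
        Di = defaultdict(int)
        for child_key, c in Ci.items():
            Di[key_of(dict(child_key), sep)] += c          # aggregate child counts by separator image
        buckets.append((sep, Di))
    counts = {}
    for phi in enum_homs(B):                               # enumerate the homomorphisms of P(B) (Lemma 4)
        total = 1                                          # if B is a leaf there are no children: count is 1
        for sep, Di in buckets:
            total *= Di[key_of(phi, sep)]                  # combine the children as in Lemma 3
        counts[key_of(phi, phi.keys())] = total
    return counts                                          # at the root, summing all values gives hom(P)
```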
Lemma 5
Let P be any dag, any d.t.d. for P, and B any element of . HomCount(P, T, B) in time returns a dictionary that for all satisfies .
Proof
We first prove the correctness, by induction on the nodes of T. The base case is when B is a leaf of T. In this case , and the algorithm sets for each . Therefore as desired. The inductive case is when B is an internal node of T. As inductive hypothesis we assume that, for every child of B, the dictionary computed at line 8 satisfies for every . Let be the set of homomorphisms from to G, and let be the subset of elements of that respect . Thus, the inductive hypothesis says that for every .
Now consider the loop at lines 10–12. We claim that, after that loop, we have:
| 4 |
The first equality holds since the loop adds to if and only if the restriction of to is , that is, if and only if respects . The second equality holds since the keys of are a subset of and if is not in . The third equality holds by the inductive hypothesis above. The fourth equality holds since the sets form a partition of .
Finally, consider the loop at lines 13–15. We claim that line 15 sets:
| 5 |
The first equality holds by coupling line 15 and Eq. (4). The second equality holds since any element of respects if and only if it respects its restriction to , thus . The last equality holds by definition of and by Lemma 3. This proves the correctness.
Let us turn to the running time. For each dictionary C we let |C| be the number of distinct keys in C. Recall that reading or writing an entry in our dictionary takes time . We split the running time as follows:
-
(i)
the cost of the base case (lines 2–4). Since the loop has iterations, and each one costs , this cost is in .
-
(ii)
the cost of the iteration at lines 7–12, performed by the parent of B in T, where the considered child is B, excluding obviously the cost of the recursive call at line 8. Similarly to (i), this cost is bounded by .
-
(iii)
the same as (ii), but for the loop at lines 13–15. This cost is in where is the parent of B in T.
We charge B with every cost among (i), (ii), (iii) that applies (this depends on whether B is the root, a leaf, or an internal node of T). It is easy to check that the sum of all charged costs accounts for the total running time of HomCount(P, T, B) when B is the root of T. Now, by Lemma 4, for every we have , which is in by definition of . Thus each one of the three costs above is in . Summing over yields the claimed bound.
Inclusion–exclusion arguments and the dag-treewidth of undirected graphs
We turn to computing , and . We do so via standard inclusion-exclusion arguments, using our algorithm for computing as a primitive. To this end we shall define appropriate notions of width for undirected pattern graphs. Let be the set of all dags P that can be obtained by orienting H acyclically. Let be the set of all equivalence relations on (that is, all the partitions of ), and for let be the pattern obtained from H by identifying equivalent nodes according to and removing loops and multiple edges. Let D(H) be the set of all supergraphs of H on the node set , including H.
Definition 6
The dag treewidth of H is , where:
| 6 |
| 7 |
| 8 |
Note that is unrelated to the treewidth . For example, when H is a clique we have and ; when H is the independent set we have and , see Lemma 14; and when H is an expander we have , see again Lemma 14. In fact, is within constant factors of the independence number of H (see Sect. 4.4), and thus decreases as H becomes denser. This happens because adding arcs increases the number of nodes reachable from the sources of , so we may need fewer sources to reach a given piece of P. When H is a clique, P is reachable from just one source and thus .
Clearly, . The intuition behind is that, in G, each homomorphism of H corresponds to a homomorphism of some acyclic orientation P of H. Thus to compute we sum over all orientations P of H, and the running time is dominated by the P with the largest dag treewidth. The intuition behind is similar but now we look at computing . Since homomorphisms can map different nodes of H to the same node of G, to recover we must combine for all possible through inclusion-exclusion arguments. The intuition behind is that to compute we must remove from the counts of for certain supergraphs of H. Indeed, the three measures yield:
Theorem 9
Consider any k-node pattern graph , and let be an upper bound on the time needed to compute a d.t.d. of minimum width on bags for any k-node dag. Then one can compute:
in time ,
in time ,
in time .
The claim still holds if we replace with upper bounds, and with the time needed to compute a d.t.d. on bags that satisfies those upper bounds.
Proof
We prove the three bounds in three separate steps. The last claim follows straightforwardly.
From dags to undirected patterns. Let H be any undirected pattern. First, note that:
| 9 |
Let indeed be the set of homomorphisms from H to G. Similarly, for any define (note that must preserve the direction of the arcs). Then, there is a bijection between and . Consider indeed any . Let be the orientation of H that assigns to the orientation of in G, and let . Then is a homomorphism of P in G. On the other hand consider any homomorphism for some acyclic orientation P of H. By ignoring the orientation of the edges, , too.
Thus, to compute we compute for all and apply Eq. 9. Clearly, enumerating takes time . For each P, by Lemma 1 in time we compute a d.t.d. of width such that . Then, by Lemma 5 we compute in time . Thus, we can compute every in time . Multiplying by gives the first bound of the theorem.
From homomorphisms to non-induced copies. Recall that is the graph obtained from H by identifying the nodes in the same equivalence class and removing loops and multiple edges, where is an equivalence relation (or partition) over . Then, by Equation 15 of [3]:
| 10 |
where , where A runs over the equivalence classes (the sets) in . Thus, to compute , we enumerate all , compute , and apply Eq. 10. It is known that (see e.g. [2]), and clearly for each we can compute and in . Thus, the first bound of the theorem holds for computing too. Finally, we compute , where is the number of automorphisms of H, which can be computed in time [26]. This proves the second bound of the theorem.
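Under standard assumptions, this step can be sketched as follows: the coefficient below is the Möbius function of the partition lattice, quotients that would create a loop contribute zero, hom_oracle is an assumed routine returning hom(·, G) (for instance via Theorem 8), and dividing the result by the number of automorphisms of H yields the number of copies.

```python
from math import factorial

def partitions(items):
    """Enumerate all set partitions of `items` as lists of lists."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def injective_homs_from_hom_counts(nodes, edges, hom_oracle):
    """Number of injective homomorphisms of H = (nodes, edges) into G,
    by inclusion-exclusion over the quotients of H."""
    total = 0
    for part in partitions(list(nodes)):
        rep = {v: min(cls) for cls in part for v in cls}       # representative of each class
        if any(rep[u] == rep[v] for u, v in edges):
            continue                                           # identifying adjacent nodes creates a loop
        coeff = 1
        for cls in part:
            coeff *= (-1) ** (len(cls) - 1) * factorial(len(cls) - 1)
        qnodes = sorted(set(rep.values()))
        qedges = {frozenset((rep[u], rep[v])) for u, v in edges}
        total += coeff * hom_oracle(qnodes, qedges)            # hom count of the quotient pattern
    return total                                               # divide by aut(H) to obtain the copies of H
```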
From non-induced to induced. Finally, let D(H) be the set of all supergraphs of H on the same node set. Then from Equation 14 of [3]:
| 11 |
To compute , we take every , compute , and apply Eq. 11. Since , the third bound of the theorem follows.
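This last step can be sketched as follows, phrased in terms of injective homomorphisms to sidestep the automorphism bookkeeping: an injective homomorphism of H is an induced embedding exactly when no non-edge of H lands on an edge of G, which gives an alternating sum over the supergraphs of H on the same node set. Here inj_oracle is an assumed routine returning the number of injective homomorphisms of a pattern into G (for instance obtained as in the previous sketch).

```python
from itertools import combinations

def induced_embeddings(nodes, edges, inj_oracle):
    """Number of induced injective embeddings of H into G; edges is a set of frozenset pairs."""
    non_edges = [frozenset(p) for p in combinations(sorted(nodes), 2)
                 if frozenset(p) not in edges]
    total = 0
    for r in range(len(non_edges) + 1):
        for extra in combinations(non_edges, r):           # H plus extra edges: a supergraph of H
            total += (-1) ** r * inj_oracle(nodes, set(edges) | set(extra))
    return total                                            # divide by aut(H) to obtain ind(H, G)
```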
The algorithmic part of our work is complete. We shall now focus on computing good dag tree decompositions, so as to instantiate Theorem 9 and obtain the upper bounds of Sect. 1.1.
Computing good dag tree decompositions
In this section we show how to compute dag tree decompositions of low width. First, we show that for every k-node dag P we can compute in time a dag tree decomposition T that satisfies . This result requires a nontrivial proof. As a corollary, we prove Theorems 1 and 2. Second, we give improved bounds for cliques minus edges; as a corollary, we prove Theorem 3. Third, we give improved bounds for complete multipartite graphs plus edges; as a corollary, we prove Theorem 4. Finally, we show that where is the independence number of H, which is of independent interest. This implies that the trivial decomposition on one bag has width that is asymptotically optimal, since in any orientation of H, the set of sources is an independent set.
To proceed, we need some additional notation. For a dag P, we say is a joint if it is reachable from at least two sources, i.e., if for some with . Let be the set of joints of P. We write for the set of joints reachable from u, and for any we let . Similarly, we denote by the sources from which y is reachable, and we let .
A bound for all patterns
This subsection is devoted to prove:
Theorem 10
For any dag , in time we can compute a dag tree decomposition with and , where and .
By combining Theorem 10 with Definition 6 and Theorem 9, we obtain as a corollary Theorem 1. The proof of Theorem 10 is divided into four steps, as follows. First (step 1), we remove the “easy” pieces of P; this can break P into several connected components, and we show that their d.t.d.’s can be composed into a d.t.d. for P. Next, we show that if the i-th component has nodes, then it admits a d.t.d. of width . This requires “peeling” the component to remove its tree-like parts (step 2) and decomposing the remainder using a reduction to standard tree decompositions (step 3). Finally, we wrap up our results and conclude the proof (step 4).
Throughout the proof, the relevant structure of P is encoded by a graph that we call the skeleton of P, defined as follows.
Definition 7
The skeleton of a dag is the bipartite dag where and , and if and only if .
Figure 3 gives an example. Note that does not contain nodes that are neither sources nor joints, as they are irrelevant to the d.t.d. Note also that computing takes time . A small sketch of this construction is given after Fig. 3.
Fig. 3.

Left: a dag P. Right: its skeleton (sources above, joints below)
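A small sketch of the skeleton construction, reusing the reachability helpers sketched in Sect. 3 and assuming that the skeleton contains an arc from a source u to a joint y exactly when y is reachable from u.

```python
def skeleton(succ):
    """succ: {node: list of out-neighbors} of the dag P.
    Returns the sources, the joints, and the arcs of the bipartite skeleton."""
    srcs = sources(succ)                                    # see the helpers in Sect. 3
    reach_from = {u: reach(succ, [u]) for u in srcs}
    joints = {y for y in succ
              if sum(1 for u in srcs if y in reach_from[u]) >= 2}
    arcs = {(u, y) for u in srcs for y in joints if y in reach_from[u]}
    return srcs, joints, arcs
```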
Let us now delve into the proof. For any node x, we denote by the current degree of x in the skeleton.
Step 1: Greedy bag construction. Set and let . Set and proceed iteratively as follows, recalling that . For any source let be the transitive closure of u in . If there exists a source such that , then let , and let be obtained from by removing from . Repeat the procedure until for all u. Suppose the procedure stops at , producing the subset and the residual skeleton .
Lemma 6
, where and .
Proof
For the first term of the , note that at each step we remove at least 4 nodes from and add one node to . Hence , which implies . For the second term of the , we show that the set of nodes removed at step j identifies at least 4 unique arcs of P. To this end, consider the sub-pattern containing all nodes not reachable from . Note that is the skeleton of . Indeed, if then v is not reachable from any source in , since otherwise v would have been removed before step j. Thus, at step j we are removing at least 3 joint nodes of . Therefore, contains at least three arcs pointing to its joints. In addition, by definition, the joints of must be reachable from some node in . Thus there is an arc from to a node of , and this node is therefore a joint itself. Thus, contains at least 4 arcs pointing to the joints of . Since the joints of are then removed from , these arcs are counted only once. Hence , and .
Now, if , then is a d.t.d. of P whose width is . By Lemma 6, , so Theorem 10 is proven.
If instead , then has nonempty connected components. For each let be the i-th component of . Let , and let . Then, is the skeleton of ; this follows from the same argument used in the proof of Lemma 6. We shall now see that we can obtain a d.t.d. for P by arranging the d.t.d.’s of the into a tree, and adding to all bags.
Lemma 7
For each let be a d.t.d. of . Consider the tree T obtained as follows. The root of T is the bag , and the subtrees below are , where each bag has been replaced by . Then is a d.t.d. of P with and .
Proof
The claims on and are straightforward. Let us check that T is a d.t.d. of P, via Definition 2. Property (1) is immediate. For property (2), note that because is by hypothesis a d.t.d. of . Thus . We turn to property (3). Choose any two bags and of T, where and for some , and any bag . Suppose first ; thus by construction . Since is a d.t.d., then , and in T this implies . Suppose instead . Thus and this means that . But and , thus . It follows that for every bag of T we have .
Step 2: Peeling . We now remove the tree-like parts of . These include, for instance, sources that have only one reachable joint. For each such source, we create a dedicated bag which becomes the child of another bag that reaches the same joint. This removes a source without increasing the width of the decomposition.
The construction is recursive. Let and . Set . For any node , we denote by its degree in . We will show that the tree returned by our recursive construction is a d.t.d. for .
The base case is . In this case we set . Clearly, is a d.t.d. for of width 1 and we are done. Suppose instead . Recall that for all . Consider the first one of these three cases that applies (if none of them does, then we stop):
. Then choose any such u, and we choose any with .
for some with . Then, choose any such .
. Then choose any such v, let u be the unique source such that , and let be any source with .
Then, we define recursively as follows. Let , and let be the skeleton graph obtained from by removing u and (for the third case) the node that is reachable only from u. We invoke the procedure recursively on . Suppose the recursive procedure returns a d.t.d. of . Then, must contain a bag such that . Create the bag , set it as a child of in , and let the resulting tree be . Let us check that is a d.t.d. for . Properties (1) and (2) of Definition 2 are obviously satisfied. For property (3), since is a leaf, we only need to check that for all and all . To this end note that, by the choice of u and , for any we have . We repeat the entire procedure until we reach the base case, or until and none of the three cases above holds, in which case we move to the next phase.
Before continuing, we make sure that the procedure above is well defined; we must guarantee that, in each of the three cases, the node exists. One can see that exists whenever (which is true by hypothesis) and is connected. To see that is connected, note that if this was not the case then in some step we removed a source u with such that no other source has . However, this cannot happen by construction of the procedure.
Step 3: Decomposing the core. Suppose the peeling phase stops at . Let and . We say is the core of ; this is the part that determines the dag treewidth. Now, since violates all three conditions of the peeling step, we have for every source u and for every joint v. Thus can be encoded as a simple graph. Formally, let where and , where for each . To ease the discussion, for the edges we use u and interchangeably. Figure 4 gives an example. Note that is the skeleton of , since are the sources of and the degree bound above implies that each is reachable from at least two sources of . In what follows we let .
Fig. 4.

Above: example of a skeleton component . Below: the core obtained from after peeling (left), and its encoding as (right)
We use to compute a good d.t.d. of via tree decompositions. First, we show that for our purposes it is sufficient to find a d.t.d. of width at most .
Lemma 8
If then .
Proof
First, suppose that . Since all nodes of have degree at least 2, then contains a 4-cycle, and thus an edge cover of size 2. We then build by setting as root, and for every as child of . This is clearly a d.t.d. for of width , and thus .
Suppose instead that . Note that by construction of . One can check that these conditions imply , which in turn gives:
| 12 |
concluding the proof.
Therefore, we compute a d.t.d. of width at most . We do this in two steps.
Lemma 9
In time one can compute a tree decomposition for with treewidth at most and bags.
Proof
By Theorem 2 of [24], the treewidth of a graph is at most . By Theorems 5.23-5.24 of [18], we can compute a minimum-width tree decomposition of an n-node graph in time . By Lemma 5.16 of [18], in time O(n) we can transform such a decomposition into one that contains at most 4n bags, leaving its width unchanged. Therefore, in time we can build a tree decomposition D for with bags that satisfies .
Lemma 10
Let be a tree decomposition of . In time we can build a d.t.d. for such that and .
Proof
To simplify the notation let us write in place of . We first show how to build the tree T. The tedious part is proving it is a valid d.t.d. for .
The intuition is that D covers the edges of , which correspond to the sources of . This gives a way to “convert” the bags of D into bags for . For every choose an arbitrary incident edge . Replace each bag by , and for every , choose a bag , and set the bag as child of B(Y). Let T be the resulting tree. To see that the construction is well-defined, note that, by point (2) of Definition 1, for any there exists some such that . Therefore assigning as child of some B(Y) with is licit. Now, follows immediately by the facts that for all and that for each of the bags above, and by Definitions 1 and 2 . The bound holds since T contains a bag for each bag of D, plus at most one bag for each node in , and .
Let us then check that T is a d.t.d. for via Definition 2. Clearly, T is a tree and satisfies property (1). For property (2), let . Observe that by construction . The right-hand expression is .
It remains to check property (3). First, if we have set as child of then by construction . Thus we can ignore any such and focus on the remaining bags of T, proving that every such that satisfy . Let the three bags of D from which the construction produced respectively . Observe that implies . Now suppose that, by contradiction, there exists such that . Note that, by construction, we must have put some with in and some with in , for some . Moreover, and , else we could not have and . Finally, bear in mind that and , otherwise , contradicting the hypothesis. Now we consider three cases. We use repeatedly properties (2) and (3) of Definition 1.
Case 1 . Then , a contradiction.
Case 2 and . Then and with is the edge chosen to cover , else we would not put . Moreover there must be such that . For the sake of the proof root D at Y, so and are in distinct subtrees. If and are in the same subtree then , but and thus , a contradiction. Otherwise , and since then and then , a contradiction.
Case 3 and . Then , , and are the sources chosen to cover respectively . Moreover there must be such that and . Root again D at Y. If then since it holds , a contradiction. Otherwise are in the same subtree of D. If the subtree is the same as , then , but and thus and thus , a contradiction. Otherwise we have ; but , thus and , again a contradiction.
Lemma 11
In time we can compute a d.t.d. for such that and .
With Lemma 11, we are almost done. It remains to wrap all our bounds together.
Step 4: Assembling the tree. Recall the sub-patterns obtained after the greedy bag construction (step 1). Let be the d.t.d. for as returned by the recursive peeling and the core decomposition. Since the peeling phase only adds bags of size 1, then . Therefore, by Lemma 9, . Moreover, since each bag added in the peeling phase corresponds to a unique source, then .
Let now be the d.t.d. for P obtained by assembling the trees as in Lemma 7. By Lemma 7 itself, , thus:
| 13 |
Now, by Lemma 6 we know that has at least nodes and arcs. Similarly, since each has at least nodes and arcs, then has at least nodes and arcs. Then and , so . Moreover, . Finally, by Lemma 9 the time to build is , since the peeling phase clearly takes time . The total time to build T is therefore . This concludes the proof of Theorem 10.
Bounds for quasi-cliques (Theorem 3)
Lemma 12
If a k-node dag P has edges, then in time one can compute a d.t.d. T for P on two bags such that .
Proof
The source set of P is an independent set. Hence , and . Consider any tree T on two bags such that , , and . It is immediate to check that T satisfies the claim.
By coupling Lemma 12 and Theorem 9, for computing and we obtain a running time bound of . For , we refine the bound of Theorem 9 by observing that . This yields a running time bound of . This concludes the proof of Theorem 3.
Bounds for quasi-multipartite graphs (Theorem 4)
Lemma 13
If H is a complete multipartite graph, then . If H is a complete multipartite graph plus edges, then . In either case, for any , for any acyclic orientation P of we can compute in time a d.t.d. of P on O(k) bags whose width satisfies the bounds above.
Proof
First, suppose is complete multipartite, so where each is a maximal independent set in H. In any acyclic orientation P of H, the source set S satisfies for some . Moreover, for any . A d.t.d. T for P of width is the tree on |S| bags with one source per bag, which can be computed in time .
Suppose now we add arcs to P, with any orientation; this means H is a complete multipartite graph plus edges. Again we have , but now for some we might have , so the d.t.d. above might not be valid anymore. Let and consider any d.t.d. T for . We argue that T is a valid d.t.d. for P as well. First, the source set of is the same as that of P (that is, S). Thus, since T satisfies properties (1) and (2) of Definition 2 for , then it does so for P, too. For property (3), note that every node is reachable from every . Thus, all bags B of T satisfy . As a consequence, for any three bags , if then . Thus T satisfies property (3) and is a d.t.d. for P. Therefore, any d.t.d. for is a d.t.d. for P. Now, since has at most edges, by Theorem 10 in time we can compute a d.t.d. for it of width at most on O(k) bags.
Consider now any . For any , we denote by the node of corresponding to v, and for any we let be the set of nodes of H identified in x. Let P be any acyclic orientation of . Since the sources S of P form an independent set, then for some j. Moreover, for any node x of P, if for some with then x is reachable from every node in S. Therefore, if we let and , the arguments above apply and we obtain the same bound.
By coupling Lemma 13 and Theorem 9, when H is complete multipartite, for computing and we obtain a time bound of . Similarly, when H is complete multipartite plus edges, we obtain a time bound of . This proves Theorem 4.
Independence number and dag treewidth
Recall that is the independence number of H. We show:
Lemma 14
Any k-node graph H satisfies .
Proof
For the upper bound, note that for any and any . Moreover, in any acyclic orientation P of the sources form an independent set. Thus . The bound follows by Definition 6.
For the lower bound, we exhibit a pattern obtained by adding edges to H such that for all its acyclic orientations P. Let be an independent set of H with and . We add edges to I, so as to obtain the 1-subdivision of an expander. Partition I into where and . Consider a 3-regular expander of linear treewidth . It is well known that such expanders exist (see e.g. Proposition 1 and Theorem 5 of [21]). Note that . For each edge , we choose a distinct node in , denoted by , and we add to H the edges and . Let be the resulting pattern. Observe that is the 1-subdivision of , and that since .
Let now be any acyclic orientation of where where are the sources of P. Such an orientation exists since is an independent set in . Let T be any d.t.d. of P. We show that , which implies and therefore the thesis . To this end, consider the tree D obtained from T by replacing each bag of sources B with the bag of nodes . We claim that D is a tree decomposition of of width at most . Let us start by checking the properties of Definition 1.
Property (1) By point (2) of Definition 2, the d.t.d. T satisfies . Therefore, by construction of D, we have .
Property (2) Let be any edge of where we recall that . By construction of , there exists such that in P. Since T is a d.t.d., it satisfies point (2) of Definition 2, hence for some . By construction of D this implies there is some bag such that .
Property (3) Fix any three bags such that . By construction, for some such that . Consider any ; we need to show that . By construction of D, we have . Thus, there exist and such that and . However, since , point (3) of Definition 1 implies . Therefore, as well. Moreover , and thus . But , so .
Hence, D is a tree decomposition of . Finally, note that any bag by construction satisfies since any source has at most 2 arcs towards . Then by Definition 1 and Definition 3 we have , that is, , as claimed.
Lower bounds
We prove Theorem 6, in a more technical form. Note that, since by Lemma 14, the bound still holds if one replaces by . The proof uses the following result:
Theorem 11
([12], Theorem I.2) The following problems are -hard and, assuming ETH, cannot be solved in time for any computable function f: counting (directed) paths or cycles of length k, and counting edge-colorful or uncolored k-matchings in bipartite graphs.
Let us now state the lower bound.
Theorem 12
Choose any function such that for all . There exists an infinite family of patterns such that (1) for all we have , and (2) if there exists an algorithm that for all computes or in time , where d is the degeneracy of G, then ETH fails.
Proof
We reduce counting cycles in an arbitrary graph to counting a gadget pattern on k nodes and dag treewidth O(a(k)) in a d-degenerate graph.
First, fix a function such that . Now consider a simple cycle on nodes and any arbitrarily large . Our gadget pattern on k nodes is the following. For each edge e = (u, v) of the cycle, create a clique on nodes; delete e and connect both u and v to every node of . The resulting pattern H has nodes. Let us prove that ; since , this implies as desired. Consider again the generic edge e = (u, v). In any acyclic orientation P of H, the set induces a clique, and thus can contain at most one source. Applying the argument to all e shows that , hence . This also holds if we add edges and/or identify nodes of P, hence .
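A sketch of the edge-replacement gadget just described, with the clique size left as a free parameter q and with an illustrative naming scheme for the fresh nodes.

```python
from itertools import combinations

def edge_replacement(edges, q):
    """Replace every edge {u, v} of the input graph by a fresh q-clique fully joined
    to both u and v, deleting the edge {u, v} itself. Returns the new edge set."""
    new_edges, counter = set(), 0
    for u, v in edges:
        clique = [("c", counter, i) for i in range(q)]                  # q fresh nodes for this edge
        counter += 1
        new_edges |= {frozenset(p) for p in combinations(clique, 2)}    # the clique itself
        new_edges |= {frozenset((u, w)) for w in clique}                # join u to the clique
        new_edges |= {frozenset((v, w)) for w in clique}                # join v to the clique
    return new_edges                                                    # the edge {u, v} is intentionally dropped
```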
Now consider a simple graph on nodes and edges. We replace each edge of as described above, which takes time . The resulting graph G has nodes and degeneracy d. Every -cycle of is uniquely associated with a copy of H in G (note that every non-induced copy of H in G is induced too). Suppose we have an algorithm that computes or in time . Since , , , and , for some functions , then the running time is . Invoking Theorem 11 concludes the proof.
Conclusions
We have shown how, by introducing a novel tree-like decomposition for directed acyclic graphs, one can improve on the decades-old state-of-the-art subgraph counting algorithms when the host graph is sufficiently sparse. This decomposition seems to capture the structure of the problem, and may therefore be of independent interest.
We leave open the (important) problem of finding a tight characterization of the class of patterns that are fixed-parameter tractable with respect to homomorphism, subgraph, and induced subgraph counting, when the complexity is parameterized by the degeneracy of the host graph. Our results represent one step, outlining properties (the boundedness of our notions of width) that are sufficient, but maybe not necessary. Indeed, our lower bounds only give evidence that such properties are necessary for counting induced or non-induced occurrences, and only for some classes of patterns. Finding a complete characterization, and thereby establishing a dichotomy over pattern classes for all three problems, would be an exciting development.
Acknowledgements
Most of this work was done while the author was affiliated with the Department of Computer Science of the Sapienza University of Rome. The author was supported in part by: BICI, the Bertinoro International Center for Informatics; a Google Focused Award “Algorithms and Learning for AI” (ALL4AI); the ERC Starting Grant DMAP 680153; the “Dipartimenti di Eccellenza 2018–2022” Grant awarded to the Department of Computer Science of the Sapienza University of Rome. The author thanks the anonymous reviewers for their valuable feedback.
Funding
Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement.
Footnotes
In this paper we suppress factors by default; if needed, we make them explicit in order to emphasize that the dependence on k is polynomial rather than exponential.
Formally, we should define a tree together with a mapping between its nodes and the subsets of V. However, the definition adopted here is sufficient for our purposes and lightens the notation.
A preliminary version of this article, [4], appeared in the proceedings of the 14th International Symposium on Parameterized and Exact Computation (IPEC 2019).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Alon N, Dao P, Hajirasouliha I, Hormozdiari F, Sahinalp SC. Biomolecular network motif counting and discovery by color coding. Bioinformatics. 2008;24(13):i241–249. doi: 10.1093/bioinformatics/btn163.
- 2. Berend D, Tassa T. Improved bounds on Bell numbers and on moments of sums of random variables. Probab. Math. Stat. 2010;30(2):185–205.
- 3. Borgs, C., Chayes, J., Lovász, L., Sós, V.T., Vesztergombi, K.: Counting Graph Homomorphisms, pp. 315–371 (2006)
- 4. Bressan, M.: Faster subgraph counting in sparse graphs. In: Proceedings of IPEC, vol. 148, pp. 6:1–6:15 (2019)
- 5. Bressan, M., Chierichetti, F., Kumar, R., Leucci, S., Panconesi, A.: Counting graphlets: space vs. time. In: Proceedings of ACM WSDM, pp. 557–566 (2017)
- 6. Bressan M, Chierichetti F, Kumar R, Leucci S, Panconesi A. Motif counting beyond five nodes. ACM Trans. Knowl. Discov. Data. 2018;20(2):1–25. doi: 10.1145/3186586.
- 7. Bressan M, Leucci S, Panconesi A. Motivo: fast motif counting via succinct color coding and adaptive sampling. Proc. VLDB Endow. 2019;12(11):1651–1663. doi: 10.14778/3342263.3342640.
- 8. Chen J, Chor B, Fellows M, Huang X, Juedes D, Kanj IA, Xia G. Tight lower bounds for certain parameterized NP-hard problems. Inf. Comput. 2005;201(2):216–231. doi: 10.1016/j.ic.2005.05.001.
- 9. Chen J, Huang X, Kanj IA, Xia G. Strong computational lower bounds via parameterized complexity. J. Comput. Syst. Sci. 2006;72(8):1346–1367. doi: 10.1016/j.jcss.2006.04.007.
- 10. Chiba N, Nishizeki T. Arboricity and subgraph listing algorithms. SIAM J. Comput. 1985;14(1):210–223. doi: 10.1137/0214017.
- 11. Curticapean, R., Dell, H., Marx, D.: Homomorphisms are a good basis for counting small subgraphs. In: Proceedings of ACM STOC, pp. 210–223 (2017)
- 12. Curticapean, R., Marx, D.: Complexity of counting subgraphs: only the boundedness of the vertex-cover number counts. In: Proceedings of IEEE FOCS, pp. 130–139 (2014)
- 13. Diestel R. Graph Theory. 5th edn. Berlin: Springer; 2017.
- 14. Eppstein D. Arboricity and bipartite subgraph listing algorithms. Inf. Process. Lett. 1994;51(4):207–211. doi: 10.1016/0020-0190(94)90121-X.
- 15. Eppstein D. Subgraph isomorphism in planar graphs and related problems. J. Graph Algorithms Appl. 1999;3(3):1–27. doi: 10.7155/jgaa.00014.
- 16. Eppstein, D., Löffler, M., Strash, D.: Listing all maximal cliques in sparse graphs in near-optimal time. In: Algorithms and Computation, pp. 403–414. Springer, Berlin (2010)
- 17. Eppstein, D., Strash, D.: Listing all maximal cliques in large sparse real-world graphs. In: Experimental Algorithms, pp. 364–375. Springer, Berlin (2011)
- 18. Fomin, F.V., Kratsch, D.: Exact Exponential Algorithms, 1st edn. Springer, Berlin (2010)
- 19. Ganian R, Hliněný P, Kneis J, Meister D, Obdržálek J, Rossmanith P, Sikdar S. Are there any good digraph width measures? J. Comb. Theory Ser. B. 2016;116:250–286. doi: 10.1016/j.jctb.2015.09.001.
- 20. Grohe M, Kreutzer S, Siebertz S. Characterisations of nowhere dense graphs (invited talk). Proc. FSTTCS. 2013;24:21–40.
- 21. Grohe M, Marx D. On tree width, bramble size, and expansion. J. Comb. Theory Ser. B. 2009;99(1):218–228. doi: 10.1016/j.jctb.2008.06.004.
- 22. Grohe, M., Schweikardt, N.: First-order query evaluation with cardinality conditions. In: Proceedings of ACM SIGMOD, pp. 253–266 (2018)
- 23. Impagliazzo, R., Paturi, R., Zane, F.: Which problems have strongly exponential complexity? In: Proceedings of IEEE FOCS, pp. 653–662 (1998)
- 24. Kneis, J., Mölle, D., Richter, S., Rossmanith, P.: Algorithms based on the treewidth of sparse graphs. In: Graph-Theoretic Concepts in Computer Science, pp. 385–396. Springer, Berlin (2005)
- 25. Le Gall, F.: Powers of tensors and fast matrix multiplication. In: Proceedings of ISSAC, pp. 296–303 (2014)
- 26. Mathon R. A note on the graph isomorphism counting problem. Inf. Process. Lett. 1979;8(3):131–136. doi: 10.1016/0020-0190(79)90004-8.
- 27. Nešetřil, J., de Mendez, P.O.: Sparsity: graphs, structures, and algorithms. In: Algorithms and Combinatorics. Springer, Berlin (2012)
- 28. Nešetřil J, de Mendez PO. On nowhere dense graphs. Eur. J. Combin. 2011;32(4):600–617. doi: 10.1016/j.ejc.2011.01.006.
- 29. Nešetřil J, Poljak S. On the complexity of the subgraph problem. Comment. Math. Univ. Carol. 1985;26(2):415–419.
- 30. Patel V, Regts G. Computing the number of induced copies of a fixed graph in a bounded degree graph. Algorithmica. 2018;81(5):1844–1858. doi: 10.1007/s00453-018-0511-9.
- 31. Sariyüce, A.E., Pinar, A.: Peeling bipartite networks for dense subgraph discovery. In: Proceedings of ACM WSDM, pp. 504–512 (2018)
- 32. Sariyüce AE, Seshadhri C, Pinar A, Çatalyürek ÜV. Nucleus decompositions for identifying hierarchy of dense subgraphs. ACM Trans. Web. 2017;11(3):16:1–16:27. doi: 10.1145/3057742.
- 33. Tsourakakis, C.E., Pachocki, J., Mitzenmacher, M.: Scalable motif-aware graph clustering. In: Proceedings of WWW, pp. 1451–1460 (2017)

