. 2022 May 7;84(8):2309–2334. doi: 10.1007/s00453-022-00970-8

On the Fine-grained Parameterized Complexity of Partial Scheduling to Minimize the Makespan

Jesper Nederlof 1, Céline M F Swennenhuis 2,
PMCID: PMC9304082  PMID: 35880200

Abstract

We study a natural variant of scheduling that we call partial scheduling: in this variant an instance of a scheduling problem is given along with an integer k, and one seeks an optimal schedule in which not all, but only k, of the jobs have to be processed. Specifically, we aim to determine the fine-grained parameterized complexity of partial scheduling problems parameterized by k for all variants of scheduling problems that minimize the makespan and involve unit/arbitrary processing times, identical/unrelated parallel machines, release/due dates, and precedence constraints. That is, we investigate whether algorithms with runtimes of the type f(k)·n^O(1) or n^O(f(k)) exist for a function f that is as small as possible. Our contribution is two-fold: First, we categorize each variant to be either in P, NP-complete and fixed-parameter tractable by k, or W[1]-hard parameterized by k. Second, for many interesting cases we further investigate the runtime on a finer scale and obtain runtimes that are (almost) optimal assuming the Exponential Time Hypothesis. As one of our main technical contributions, we give an O(8^k·k(|V|+|E|)) time algorithm to solve instances of partial scheduling problems minimizing the makespan with unit-length jobs, precedence constraints and release dates, where G = (V, E) is the graph of precedence constraints.

Keywords: Fixed-parameter tractability, Scheduling, Precedence constraints

Introduction

Scheduling is one of the most central application domains of combinatorial optimization. In the last decades, huge combined effort of many researchers led to major progress on understanding the worst-case computational complexity of almost all natural variants of scheduling: By now, for most of these variants it is known whether they are NP-complete or not. Scheduling problems provide the context of some of the most classic approximation algorithms. For example, in the standard textbook by Shmoys and Williamson on approximation algorithms [29] a wide variety of techniques are illustrated by applications to scheduling problems. See also the standard textbook on scheduling by Pinedo [24] for more background.

Instead of studying approximation algorithms, another natural way to deal with NP-completeness is Parameterized Complexity (PC).

While the application of general PC theory to the area of scheduling has still received considerably less attention than the approximation point of view, recently its study has seen explosive growth, as witnessed by a plethora of publications (e.g. [2, 13, 18, 22, 27, 28]). Additionally, many recent results and open problems can be found in a survey by Mnich and van Bevern [21], and even an entire workshop on the subject was recently held [20].

In this paper we advance this vibrant research direction with a complete mapping of how several standard scheduling parameters influence the parameterized complexity of minimizing the makespan in a natural variant of scheduling problems that we call partial scheduling. Next to studying the classical question of whether parameterized problems are in P, in FPT parameterized by k, or W[1]-hard parameterized by k, we also follow the well-established modern perspective of ‘fine-grained’ PC and aim at runtimes of the type f(k)·n^O(1) or n^f(k) for the smallest function f of the parameter k.

Partial Scheduling. In many scheduling problems arising in practice, the set of jobs to be scheduled is not predetermined. We refer to this as partial scheduling. Partial scheduling is well-motivated from practice, as it arises naturally for example in the following scenarios:

  1. Due to uncertainties, a close-horizon approach may be employed and only a few jobs out of a big set of jobs will be scheduled in a short but fixed time window,

  2. In freelance markets typically a large database of jobs is available and a freelancer is interested in selecting only a few of the jobs to work on,

  3. The selection of the jobs to process may resemble other choices the scheduler should make, such as to outsource non-processed jobs to various external parties.

Partial scheduling has been previously studied in the equivalent forms of maximum throughput scheduling [25] (motivated by the first example setting above), job rejection [26], scheduling with outliers [12], job selection [8, 16, 30] and its special case interval selection [5].

In this paper, we conduct a rigorous study of the parameterized complexity of partial scheduling, parameterized by the number of jobs to be scheduled. We denote this number by k. While several isolated results concerning the parameterized complexity of partial scheduling do exist, this parameterization has (somewhat surprisingly) not been rigorously studied yet. We address this and study the parameterized complexity of the (arguably) most natural variants of the problem. We fix as objective to minimize the makespan while scheduling at least k jobs, for a given integer k, and study all variants with the following characteristics:

  • 1 machine, identical parallel machines or unrelated parallel machines,

  • release/due dates, unit/arbitrary processing times, and precedence constraints.

Note that a priori this amounts to 3×2×2×2×2=48 variants.

Our Results

We give a classification of the parameterized complexity of these 48 variants. Additionally, for each variant that is not in P, we give algorithms solving them and lower bounds under ETH. To easily refer to a variant of the scheduling problem, we use the standard three-field notation by Graham et al. [11]. See Sect. 2 for an explanation of this notation. To accommodate our study of partial scheduling, we extend the α|β|γ notation as follows:

Definition 1

We let k-sched in the γ-field indicate that we only schedule k out of n jobs.

We study the fine-grained parameterized complexity of all problems α|β|γ, where α ∈ {1, P, R}, the options for β are all combinations of rj, dj, pj=1 and prec, and γ is fixed to k-sched, Cmax. Our results are explicitly enumerated in Table 1.

Table 1.

The fine-grained parameterized complexity of partial scheduling, where γ denotes k-sched, Cmax and S.I. abbreviates Subgraph Isomorphism (Color table online)

[Table 1 appears as an image in the original publication.]

Since pj=1 implies that the machines are identical, the mentioned number of 48 combinations reduces to 40 different scheduling problems. The O(·) notation in the table omits factors polynomial in the input size. The highlighted table entries are new results from this paper

The rows of Table 1 are lexicographically sorted on (i) precedence relations/no precedence relations, (ii) a single machine, identical machines or unrelated machines, and (iii) release dates and/or deadlines. Because their presence has a major influence on the character of the problem, we stress the distinction between variants with and without precedence constraints. On a high abstraction level, our contribution is two-fold:

  1. We present a classification of the complexity of all aforementioned variants of partial scheduling with the objective of minimizing the makespan. Specifically, we classify all variants to be either solvable in polynomial time, to be fixed-parameter tractable in k and NP-hard, or to be W[1]-hard.

  2. For most of the studied variants we present both an algorithm and a lower bound that shows that our algorithm cannot be significantly improved unless the Exponential Time Hypothesis (ETH) fails.

Thus, while we completely answer a classical type of question in the field of Parameterized Complexity, we pursue in our second contribution a more modern and fine-grained understanding of the best possible runtime with respect to the parameter k. For several of the studied variants, the lower bounds and algorithms listed in Table 1 follow relatively quickly. However, for many other cases we need substantial new insights to obtain (almost) matching upper and lower bounds on the runtime of the algorithms solving them. We have grouped the rows into result types [A]–[G] depending on our methods for determining their complexity.

Our New Methods

We now describe some of our most significant technical contributions for obtaining the various types of results (listed as [A]–[G] in Table 1). Note that we skip some less interesting cases in this introduction; for a complete argumentation of all results from Table 1 we refer to Sect. 6. The main building blocks and logical implications used to obtain the results of Table 1 are depicted in Fig. 1. We now discuss these building blocks of Fig. 1 in detail.

Fig. 1.


An illustration of the various result types as indicated in Table 1. Arrows indicate how a problem is generalized by another problem

Precedence Constraints

Our main technical contribution concerns result type [C]. The simplest of the two cases, P|prec,pj=1|k-sched,Cmax, cannot be solved in 2^{o(√k·log k)} time assuming the Exponential Time Hypothesis, and not in 2^{o(k)} time unless sub-exponential time algorithms for the Biclique problem exist, due to reductions by Jansen et al. [14]. Our contribution lies in the following theorem, which gives an upper bound for the more general of the two problems that matches the latter lower bound:

Theorem 1

P|rj,prec,pj=1|k-sched,Cmax can be solved in O(8^k·k(|V|+|E|)) time, where G = (V, E) is the precedence graph given as input.

Theorem 1 will be proved in Sect. 3. The first idea behind the proof is a natural dynamic programming algorithm indexed by antichains of the partial order naturally associated with the precedence constraints. However, evaluating this dynamic program naïvely would lead to an n^{O(k)} time algorithm, where n is the number of jobs.

Our key idea is to only compute a subset of the table entries of this dynamic programming algorithm, guided by a new parameter of an antichain called the depth. Intuitively, the depth of an antichain A indicates the number of jobs that can be scheduled after A in a feasible schedule without violating the precedence constraints.

We prove Theorem 1 by showing we may restrict attention in the dynamic programming algorithm to antichains of depth at most k, and by bounding the number of antichains of depth at most k indirectly by bounding the number of maximal antichains of depth at most k. We believe this methodology should have more applications for scheduling problems with precedence constraints.

Surprisingly, the positive result of Theorem 1 is in stark contrast with the seemingly symmetric case where only deadlines are present: Our next result, indicated as [B] in Fig. 1 shows it is much harder:

Theorem 2

The problem P|dj,prec,pj=1|k-sched,Cmax is W[1]-hard, and it cannot be solved in n^{o(k/log k)} time assuming the ETH.

Theorem 2 is a consequence of a reduction outlined in Sect. 4. Note that the W[1]-hardness follows from a natural reduction from the k-Clique problem (presented originally by Fellows and McCartin [9]), but this reduction increases the parameter k to Ω(k^2) and would therefore only exclude n^{o(√k)} time algorithms assuming the ETH. To obtain the tighter bound of Theorem 2, we instead provide a non-trivial reduction from the 3-Coloring problem based on a new selection gadget.

For result type [D], we give a lower bound by a (relatively simple) reduction from Partitioned Subgraph Isomorphism in Theorem 6 and Corollary 4. Since it is conjectured that Partitioned Subgraph Isomorphism cannot be solved in n^{o(k)} time assuming the ETH, our reduction is a strong indication that the simple n^{O(k)} time algorithm (see Sect. 6) cannot be improved significantly in this case.

No Precedence Constraints

The second half of our classification concerns scheduling problems without precedence constraints, and is easier to obtain than the first half. Results [E] and [F] are consequences of a greedy algorithm and of Moore’s algorithm [23], which solves the problem 1||∑Uj in O(n log n) time. Notice that the latter also solves the problem 1|rj|k-sched,Cmax, by reversing the schedule and viewing the release dates as deadlines. For result type [G] we show that a standard technique in parameterized complexity, the color coding method, can be used to get a 2^{O(k)} time algorithm for the most general problem of the class, namely R|rj,dj|k-sched,Cmax. All lower bounds on the runtime of algorithms for problems of type [G] are by a reduction from Subset Sum, but for 1|rj,dj|k-sched,Cmax this reduction is slightly different.
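As an aside, Moore’s algorithm is simple enough to sketch. The following illustrative Python code (the function name and input format are our assumptions, not from the paper) implements the classic Moore–Hodgson greedy with a max-heap, which yields the O(n log n) bound:

```python
import heapq

def moore_hodgson(jobs):
    """Maximize the number of on-time jobs on a single machine.

    jobs: list of (processing_time, due_date) pairs.
    Returns the maximum number of jobs that can finish by their due
    dates, i.e. n minus the minimum number of late jobs (sum of U_j).
    """
    # Consider jobs in earliest-due-date (EDD) order.
    jobs = sorted(jobs, key=lambda job: job[1])
    kept = []        # max-heap (stored negated) of kept processing times
    total_time = 0   # completion time of the currently kept jobs
    for p, d in jobs:
        heapq.heappush(kept, -p)
        total_time += p
        if total_time > d:
            # The current job would be late: discard the longest kept
            # job, which frees up the largest amount of time.
            total_time += heapq.heappop(kept)  # adds a negative value
    return len(kept)

# Two of these three jobs can be completed on time.
print(moore_hodgson([(2, 2), (2, 3), (1, 4)]))  # prints 2
```

By the reversal argument mentioned above, running this on the mirrored instance (release dates read as due dates) also answers 1|rj|k-sched,Cmax.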

Related Work

The interest in parameterized complexity of scheduling problems recently witnessed an explosive growth, resulting in e.g. a workshop [20] and a survey by Mnich and van Bevern [21] with a wide variety of open problems.

The parameterized complexity of partial scheduling parameterized by the number of processed jobs, or equivalently the number of jobs ‘on time’, was studied before: Fellows and McCartin [9] studied a problem called k-Tasks On Time that is equivalent to 1|dj,prec,pj=1|k-sched,Cmax and showed that it is W[1]-hard parameterized by k, and in FPT parameterized by k combined with the width of the partially ordered set induced by the precedence constraints. Van Bevern et al. [27] showed that the Job Interval Selection problem, where each job is given a set of possible intervals to be processed in, is in FPT parameterized by k. Bessy et al. [2] consider partial scheduling with a restriction on the jobs called ‘Coupled-Task’, and also remarked that the current parameterization is relatively understudied.

Another related parameter is the number of jobs that are not scheduled, which has also been studied in several previous works [4, 9, 22]. For example, Mnich and Wiese [22] studied the parameterized complexity of scheduling problems with respect to the number of rejected jobs, in combination with other variables, as parameter. If n denotes the number of given jobs, this parameter equals n − k. The two parameters are somewhat incomparable in terms of applications: In some settings only a few jobs out of many alternatives need to be scheduled, but in other settings rejecting a job is very costly and thus will happen rarely. However, a strong advantage of using k as the parameter lies in computational complexity: If the version of the problem with all jobs mandatory is NP-complete, then the problem is trivially NP-complete for n − k = 0, but it may still be in FPT parameterized by k.

Organization of this Paper

This paper is organized as follows: We start with some preliminaries in Sect. 2. In Sect. 3 we present the proof of Theorem 1, and in Sect. 4 we describe the reductions for result types [B] and [D]. In Sect. 5 we give the algorithm for result type [G] and in Sect. 6 we motivate all cases from Table 1. Finally, in Sect. 7 we present a conclusion.

Preliminaries

The Three-Field Notation by Graham et al.

Throughout this paper we denote scheduling problems using the three-field notation by Graham et al. [11]. Problems are classified by three fields α|β|γ. The α field describes the machine environment. We use α ∈ {1, P, R}, indicating whether there is one machine (1), or there are identical (P) or unrelated (R) parallel machines available. Here identical refers to the fact that every job takes a fixed amount of time to process, independent of the machine, and unrelated means that a job may take a different amount of time to process on each machine. The β field describes the job characteristics, which in this paper can be a combination of the following values: prec (precedence constraints), rj (release dates), dj (deadlines) and pj=1 (all processing times are 1). We assume without loss of generality that all release dates and deadlines are integers.

The γ field concerns the optimization criteria. A given schedule determines Cj, the completion time of job j, and Uj, the unit penalty, which is 1 if Cj > dj and 0 if Cj ≤ dj. In this paper we use the following optimization criteria:

  • Cmax: minimize the makespan (i.e. the maximum completion time Cj of any job),

  • ∑Uj: minimize the number of jobs that finish after their deadline,

  • k-sched: maximize the number of processed jobs; in particular, process at least k jobs.

A schedule is said to be feasible if no constraints (deadlines, release dates, precedence constraints) are violated.

Notation for Posets

Any precedence graph G is a directed acyclic graph and therefore induces a partial order ⪯ on V(G): if there is a path from x to y, we let x ⪯ y. An antichain is a set A ⊆ V(G) of mutually incomparable elements. We say that A is maximal if there is no antichain A′ with A ⊊ A′, where ‘⊊’ denotes strict inclusion. The set of predecessors of A is pred(A) = {x ∈ V(G) : ∃a ∈ A with x ⪯ a}, and the set of comparables of A is comp(A) = {x ∈ V(G) : ∃a ∈ A with x ⪯ a or a ⪯ x}. Note that comp(A) = V(G) if and only if A is maximal.

An element x ∈ V(G) is a minimal element if x ⪯ y for all y ∈ comp({x}). An element x ∈ V(G) is a maximal element if y ⪯ x for all y ∈ comp({x}). Furthermore, min(G) = {x : x is a minimal element in G} and max(G) = {x : x is a maximal element in G}.

Notice that max(G) is exactly the antichain A such that pred(A) = V(G). We denote the subgraph of G induced by S by G[S]. We may assume that rj < rj′ whenever j ≺ j′, since job j′ will be processed later than rj in any schedule. To handle release dates we use the following:

Definition 2

Let G be a precedence graph. Then Gt is the precedence graph restricted to all jobs that can be scheduled on or before time t, i.e. all jobs with release date at most t.

We assume G = Gt for t = Cmax, since all jobs with release date greater than Cmax can be ignored.
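To make this notation concrete, consider the following small sketch (the successor-dictionary graph representation and the function names are our assumptions, not the paper’s) that computes pred(A) and comp(A) of an antichain A in a DAG:

```python
def predecessors(graph, antichain):
    """pred(A): all x with a path from x to some a in A, including A itself.

    graph: dict mapping each vertex to a list of its direct successors
    (an edge u -> v means u must be processed before v).
    """
    # Build reversed adjacency lists to walk from A towards its ancestors.
    reverse = {v: [] for v in graph}
    for u, succs in graph.items():
        for v in succs:
            reverse[v].append(u)
    seen, stack = set(antichain), list(antichain)
    while stack:
        for u in reverse[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def comparables(graph, antichain):
    """comp(A): all vertices comparable to some element of A."""
    result = set(predecessors(graph, antichain))  # ancestors of A ...
    stack = list(antichain)                       # ... plus descendants
    while stack:
        for v in graph[stack.pop()]:
            if v not in result:
                result.add(v)
                stack.append(v)
    return result
```

For instance, on the DAG with edges 1→3, 2→3, 3→4 and an isolated vertex 5, we get pred({3}) = {1, 2, 3} and comp({3}) = {1, 2, 3, 4}; since vertex 5 is missing from comp({3}), the antichain {3} is not maximal.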

Parameterized Complexity

We say a problem is Fixed-Parameter Tractable (and in the complexity class FPT) parameterized by parameter k if there exists an algorithm with runtime O(f(k)·n^c), where n denotes the size of the instance, f is a computable function and c some constant. There also exist problems for which inclusion in FPT for some parameter is unlikely, such as k-Clique. This is because k-Clique is complete for the complexity class W[1] and it is conjectured that FPT ≠ W[1]. One could view FPT as the parameterized version of P and W[1] as the parameterized version of NP. To prove a problem P to be W[1]-hard, one can use a parameterized reduction from another problem P′ that is W[1]-hard, where the reduction is a polynomial-time reduction with the following two additional restrictions: (1) the parameter k of P is bounded by g(k′) for some computable function g, where k′ is the parameter of P′, and (2) the runtime of the reduction is bounded by f(k′)·n^c for some computable function f, where n is the size of the instance of P′ and c a constant.

We exclude fixed-parameter tractable algorithms for problems that are W[1]-hard. To exclude runtimes in a more fine-grained manner, we use the Exponential Time Hypothesis (ETH). Roughly speaking, the ETH conjectures that no 2^{o(n)} time algorithm for 3-SAT exists, where n is the number of variables of the instance. As a consequence we can, for example, exclude algorithms with runtime 2^{o(n)} for Subset Sum, where n is the number of input integers, and algorithms with runtime n^{o(k)} for k-Clique, where n is the number of vertices of the input graph and k the size of the clique that we are after. The function g(k) bounding the parameter in a parameterized reduction plays an important role in these types of proofs: for example, a reduction from k-Clique with parameter blow-up g(k) yields a lower bound under the ETH of n^{o(g^{-1}(k))}.

Result Type C: Precedence Constraints, Release Dates and Unit Processing Times

In this section we provide a fast algorithm for partial scheduling with release dates and unit processing times, parameterized by the number k of scheduled jobs (Theorem 1). There exists a simple, but slow, algorithm with runtime O(2^{k^2}) that already proves that this problem is in FPT parameterized by k: This algorithm branches k times on the jobs that can be processed next. If more than k jobs are available at a step, then processing these jobs greedily is optimal. Otherwise, we can recursively try to schedule all non-empty subsets of available jobs next, and an O(2^{k^2}) time algorithm is obtained via a standard (bounded search tree) analysis. To improve on this algorithm, we present a dynamic programming algorithm based on table entries indexed by antichains in the precedence graph G describing the precedence relations. Such an antichain describes the maximal jobs already scheduled in a partial schedule. Our key idea is that, to find an optimal solution, it is sufficient to restrict our attention to a subset of all antichains. This subset will be defined in terms of the depth of an antichain. With this algorithm we improve the runtime to O(8^k·k(|V|+|E|)).

By binary search, we can restrict attention to a variant of the problem that asks whether there is a feasible schedule with makespan at most Cmax, for a fixed universal deadline Cmax.

The Algorithm

We start by introducing our dynamic programming algorithm for P|rj,prec,pj=1|k-sched,Cmax. Let m be the number of machines available. We begin by defining the table entries. For a given antichain A ⊆ V(G) and integer t we define

S(A, t) = 1 if there exists a feasible schedule of makespan at most t that processes exactly the jobs in pred(A), and S(A, t) = 0 otherwise.

Computing the value of S(A, t) can be done by trying all combinations of scheduling at most m jobs of A at time t and then checking whether all remaining jobs of pred(A) can be scheduled with makespan t − 1. To do so, we also verify that all the jobs in A actually have a release date at or before t. Formally, we have the following recurrence for S(A, t):

Lemma 1

S(A, t) = [A ⊆ V(Gt)] ∧ ⋁_{X ⊆ A, |X| ≤ m} S(max(pred(A) \ X), t − 1).

Proof

If A ⊈ V(Gt), then there is a job j ∈ A with rj > t, and thus S(A, t) = 0.

For any X ⊆ A, the set X consists of maximal elements with respect to G[pred(A)] and of pairwise incomparable jobs, since A is an antichain. So we can schedule all jobs from X at time t without violating any precedence constraints. Define A′ = max(pred(A) \ X) as the unique antichain with pred(A) \ X = pred(A′). If S(A′, t − 1) = 1 and |X| ≤ m, we can extend a schedule witnessing S(A′, t − 1) = 1 by scheduling all of X at time t. In this way we get a feasible schedule processing all jobs of pred(A) on or before time t. So if we find such an X with |X| ≤ m and S(A′, t − 1) = 1, we must have S(A, t) = 1.

For the other direction, if S(A′, t − 1) = 0 for all X ⊆ A with |X| ≤ m, then no matter which set X ⊆ A we try to schedule at time t, the remaining jobs cannot be scheduled with makespan t − 1. Note that only jobs from A can be scheduled at time t, since those are the maximal jobs of pred(A). Hence, there is no feasible schedule and S(A, t) = 0.

The above recurrence cannot be evaluated directly, since the number of different antichains of a graph can be large: there can be roughly as many as n^k different antichains A with |pred(A)| ≤ k, for example in the extreme case of an independent set. Even when we restrict our precedence graph to have out-degree at most k, there can be k^k different antichains, for example in k-ary trees. To circumvent this issue, we restrict our dynamic programming algorithm to a specific subset of antichains, selected via the following new notion of the depth of an antichain.

Definition 3

Let A be an antichain. Define the depth (with respect to t) of A as

dt(A) = |pred(A)| + |min(Gt − comp(A))|.

We also denote d(A) = dt(A) for t = Cmax.

The intuition behind this definition is that it quantifies the number of jobs that can be scheduled before (and including) A without violating precedence constraints. See Fig. 2 for an example of an antichain and its depth. We restrict the dynamic programming algorithm to only compute S(A, t) for antichains A satisfying dt(A) ≤ k. This ensures that we do not go ‘too deep’ into the precedence graph, which would come at the cost of a slow runtime.

Fig. 2.


Example of an antichain and its depth in a perfect 3-ary tree. We see that |pred(A)| = 2, but d(A) = 4. If k = 2, the dynamic programming algorithm will not compute S(A, t) since d(A) > k. The only antichains with depth at most 2 are the empty set and the singleton {r} containing the root node; indeed, d(∅) = d({r}) = 1. Note that for instances with k = 2 a feasible schedule may still exist. If so, we will find that R({r}, 1) = 1, where R is defined later. In this way, we can still recover the antichain A as a solution
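To make Definition 3 concrete, the following sketch (the graph representation and helper names are our own) computes the depth of an antichain. On a tree consisting of a root r with three children — a simplification of the situation in Fig. 2 — it reproduces analogous values: d({a}) = 4 for a child a, and d(∅) = d({r}) = 1.

```python
def depth(graph, antichain):
    """d(A) = |pred(A)| + |min(G - comp(A))| for a DAG given as a
    successor-list dict (illustrative representation, not the paper's)."""
    reverse = {v: set() for v in graph}
    for u, succs in graph.items():
        for v in succs:
            reverse[v].add(u)

    def closure(start, adjacency):    # vertices reachable from `start`
        seen, stack = set(start), list(start)
        while stack:
            for y in adjacency[stack.pop()]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    pred = closure(antichain, reverse)        # ancestors of A, incl. A
    comp = pred | closure(antichain, graph)   # ... plus descendants of A
    rest = set(graph) - comp                  # vertex set of G - comp(A)
    # Minimal elements of G - comp(A): no direct predecessor inside `rest`
    # (an indirect predecessor in `rest` would imply a direct one there).
    minimal = {v for v in rest if not (reverse[v] & rest)}
    return len(pred) + len(minimal)
```

This also illustrates why the depth can exceed |pred(A)|: the minimal elements of G − comp(A) could be scheduled alongside pred(A) without violating any precedence constraint.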

Because of this restriction on the depth, it can happen that we check no antichain with k or more predecessors, while corresponding feasible schedules do exist. That is, for some antichain A with dt(A) > k there may be a feasible schedule for all k jobs in pred(A) before time Cmax, but the value S(A, Cmax) will not be computed. To make sure we still find an optimal schedule, we also compute the following quantity R(A, t) for all t ≤ Cmax and antichains A with dt(A) ≤ k:

R(A, t) = 1 if there exists a feasible schedule with makespan at most Cmax that processes pred(A) on or before t and processes only jobs from min(G − pred(A)) after t, with a total of k jobs processed; R(A, t) = 0 otherwise.

By definition of R(A, t), if R(A, t) = 1 for any A and t ≤ Cmax, then there is a feasible schedule that processes k jobs on time. We show that there is an algorithm, namely fill(A, t), that quickly computes R(A, t). The algorithm fill(A, t) does the following: first it checks whether S(A, t) = 1 and, if so, greedily schedules jobs from min(G − pred(A)) after t in order of smallest release date. If k − |pred(A)| jobs can be scheduled by time Cmax, it returns ‘true’ (R(A, t) = 1). Otherwise, it returns ‘false’ (R(A, t) = 0).

Lemma 2

There is an O(|V|k + |E|) time algorithm that, given an antichain A, an integer t, and the value S(A, t), computes R(A, t).

Proof

We show that fill(A, t), defined above, fulfills all requirements. First we prove that if fill(A, t) returns ‘true’, then R(A, t) = 1. Since S(A, t) = 1, all jobs from pred(A) can be finished by time t. Take such a feasible schedule and process k − |pred(A)| jobs from min(G − pred(A)) between t + 1 and Cmax; this is possible because the greedy procedure succeeded. All predecessors of jobs in min(G − pred(A)) are in pred(A) and therefore processed before t. Hence, no precedence constraints are violated and we find a feasible schedule with the required properties, i.e. R(A, t) = 1.

For the other direction, assume that R(A, t) = 1, i.e. there is a feasible schedule σ in which exactly the jobs from pred(A) are processed on or before t and only jobs from min(G − pred(A)) are processed after t. Thus S(A, t) = 1. Define M as the set of jobs processed after t in σ. If M equals the set of k − |pred(A)| jobs of min(G − pred(A)) with the smallest release dates, we can also process the jobs of M in order of increasing release dates, so fill(A, t) returns ‘true’ since M has size at least k − |pred(A)|. However, if M is not that set, we can replace a job whose release date is not among the k − |pred(A)| smallest by one whose release date is and that is not yet in M. This new set can still be processed between t + 1 and Cmax, because smaller release dates impose weaker constraints. We keep replacing until M is exactly the set of jobs with the smallest release dates, which is therefore schedulable between t + 1 and Cmax. Hence, fill(A, t) returns ‘true’.

Computing the set min(G − pred(A)) can be done in O(|V| + |E|) time. Sorting the jobs by release date can be done in O(|V|k) time, as there are at most k different release dates. Finally, greedily scheduling the jobs while checking feasibility can be done in O(|V|) time. Hence this algorithm runs in time O(|V|k + |E|).
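The greedy feasibility test at the heart of fill(A, t) can be sketched as follows. In this illustrative code (our simplification: we pass the release dates of the jobs in min(G − pred(A)) directly, rather than recomputing them from the graph), jobs have unit length and at most m of them run in each time slot:

```python
def fill_check(releases, num_needed, t, c_max, m):
    """Can `num_needed` unit-length candidate jobs (those of
    min(G - pred(A))) be processed in the slots t+1, ..., c_max on m
    machines, respecting their release dates?

    `releases` holds the candidates' release dates; this signature is
    our simplification of the paper's fill(A, t).
    """
    if num_needed == 0:
        return True
    # Exchange argument of Lemma 2: smaller release dates impose weaker
    # constraints, so keeping the earliest-released candidates is optimal.
    releases = sorted(releases)[:num_needed]
    if len(releases) < num_needed:
        return False
    done = 0  # number of jobs scheduled so far (a prefix of `releases`)
    for slot in range(t + 1, c_max + 1):
        ran = 0
        # At time `slot`, run up to m jobs whose release date has passed.
        while done < len(releases) and releases[done] <= slot and ran < m:
            done += 1
            ran += 1
        if done == len(releases):
            return True
    return False
```

For example, three jobs released at time 0 fit into two slots on two machines, but not into a single slot.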

Combining all steps gives the algorithm described in Algorithm 1 (shown as an image in the original publication). It remains to bound its runtime and argue its correctness.
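For intuition, the recurrence of Lemma 1 can be evaluated top-down with memoization. The sketch below (graph representation and names are ours; job labels are assumed sortable) deliberately omits the depth-at-most-k restriction that Algorithm 1 adds, and therefore runs in n^{O(k)} rather than O(8^k·k(|V|+|E|)):

```python
from functools import lru_cache
from itertools import combinations

def make_S(graph, release, m):
    """Memoized top-down evaluation of the recurrence of Lemma 1.

    graph: successor-dict DAG; release: release date per job; m: number
    of machines. This is the naive baseline: it memoizes over *all*
    reachable antichains instead of only those of depth at most k,
    which is exactly the saving made by Algorithm 1.
    """
    reverse = {v: set() for v in graph}
    for u, succs in graph.items():
        for v in succs:
            reverse[v].add(u)

    def pred(antichain):          # pred(A): ancestors of A, including A
        seen, stack = set(antichain), list(antichain)
        while stack:
            for u in reverse[stack.pop()]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        return seen

    @lru_cache(maxsize=None)
    def S(A, t):
        """True iff pred(A) can be processed with makespan at most t."""
        if not A:
            return True           # nothing left to schedule
        if t <= 0 or any(release[j] > t for j in A):
            return False          # A is not contained in V(G_t)
        P = pred(A)
        for size in range(1, min(m, len(A)) + 1):
            for X in combinations(sorted(A), size):
                rest = P - set(X)
                # A' = max(pred(A) \ X): elements with no successor in rest
                A2 = frozenset(v for v in rest
                               if not (set(graph[v]) & rest))
                if S(A2, t - 1):
                    return True
        return False

    return S
```

For a chain a → b → c on one machine with all release dates 0, the returned function satisfies S({c}, 3) = 1 and S({c}, 2) = 0, matching the fact that three unit jobs need three time slots.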

Runtime

To analyze the runtime of the dynamic programming algorithm, we need to bound the number of checked antichains. Recall that we only check antichains A with dt(A) ≤ k for each time t ≤ Cmax. We first analyze the number of antichains A with d(A) ≤ k in any graph and use this to upper bound the number of antichains checked at time t.

To analyze the number of antichains A with d(A) ≤ k, we give an upper bound on this number via an upper bound on the number of maximal antichains. Recall from the notation for posets that for a maximal antichain A we have comp(A) = V(G), and therefore d(A) = |pred(A)|. The following lemma connects the number of antichains and maximal antichains of bounded depth:

Lemma 3

For any antichain A, there exists a maximal antichain Amax such that A ⊆ Amax and d(A) = d(Amax).

Proof

Let Amax = A ∪ min(G − comp(A)). By definition, all elements in min(G − comp(A)) are incomparable to each other and incomparable to any element of A. Hence Amax is an antichain. Since comp(Amax) = V(G), Amax is a maximal antichain. Moreover,

d(A) = |pred(A)| + |min(G − comp(A))| = |pred(Amax)| = d(Amax),

since the elements in min(G − comp(A)) are minimal elements of G − comp(A) and all their predecessors other than themselves lie in pred(A).

For any (maximal) antichain A with d(A) ≤ k, we derive that |A| ≤ k, and so each maximal antichain of depth at most k has at most 2^k subsets. By Lemma 3, each antichain is a subset of a maximal antichain with the same depth.

Corollary 1

|{A : A antichain, d(A) ≤ k}| ≤ 2^k · |{A : A maximal antichain, d(A) ≤ k}|.

This corollary allows us to restrict attention to only upper bounding the number of maximal antichains of bounded depth.

Lemma 4

There are at most 2^k maximal antichains A with d(A) ≤ k in any precedence graph G = (V, E), and they can be enumerated in O(2^k·k(|V| + |E|)) time.

Proof

Let Ak(G) be the set of maximal antichains in G with depth at most k. We prove that |Ak(G)| ≤ 2^k for any graph G by induction on k. Clearly, |A0(G)| ≤ 1 for any graph G, since the only antichain with d(A) ≤ 0 is A = ∅, which occurs when G is the empty graph.

Let k > 0 and assume |Aj(G)| ≤ 2^j for all j < k and all graphs G. Given a precedence graph G with minimal elements s1, …, sℓ, we partition Ak(G) into ℓ + 1 different sets B1, B2, …, Bℓ+1. For i = 1, …, ℓ, the set Bi is defined as the set of maximal antichains A of depth at most k in which {si′ : i′ < i} ⊆ A but si ∉ A (with no restrictions on the elements in {si′ : i′ > i}). If si ∉ A, then si ∈ pred(A) since A is maximal, so any such maximal antichain contains a successor of si. If we define Sj as the set of all successors of sj (including sj itself), we see that Bi corresponds to Ak−i(G − ⋃_{j<i} Sj − {si}). Indeed, if A ∈ Bi, then {si′ : i′ < i} ⊆ A. Hence we can remove these elements and their successors from the graph, as they are comparable to any such antichain. Moreover, we can also remove si (but not its successors) from the graph, since it is in pred(A). Thus Bi corresponds exactly to the set of maximal antichains whose depth is i less in the remaining graph. The set Bℓ+1 is defined as the set of all maximal antichains of depth at most k not in any Bi, i.e. those with {s1, …, sℓ} ⊆ A. Note that Bℓ+1 = {{s1, …, sℓ}}. We get the following recurrence relation:

|Ak(G)| = ∑_{i=1}^{ℓ} |Ak−i(G − ⋃_{j<i} Sj − {si})| + 1,  (1)

since |Bℓ+1| = 1. Notice that we may assume that ℓ ≤ k, because otherwise every maximal antichain has depth greater than k. Using the induction hypothesis that |Aj(G)| ≤ 2^j for all j < k and all graphs G, we see by (1) that:

|Ak(G)| = ∑_{i=1}^{ℓ} |Ak−i(G − ⋃_{j<i} Sj − {si})| + 1 ≤ ∑_{i=1}^{k} 2^{k−i} + 1 = 2^k.

The lemma follows since the above procedure can easily be turned into a recursive algorithm that enumerates the antichains, and by using a breadth-first search we can compute G − ⋃_{j<i} Sj − {si} in O(|V| + |E|) time. Thus, each recursion step takes O(k(|V| + |E|)) time.
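The recursion in this proof translates into an enumeration procedure along the following lines. This is an illustrative sketch under our successor-dictionary representation; the final maximality filter is a defensive addition of ours, since boundary cases of the recursion (e.g. a minimal element without successors) can produce candidate sets that are not maximal:

```python
def maximal_antichains(graph, k):
    """Enumerate all maximal antichains of depth at most k in a DAG,
    following the recursion from the proof of Lemma 4.

    graph: dict mapping each vertex to a list of its direct successors.
    """
    if not graph:
        return [frozenset()] if k >= 0 else []
    reverse = {v: set() for v in graph}
    for u, succs in graph.items():
        for v in succs:
            reverse[v].add(u)

    def closure(start, adj):       # all vertices reachable from `start`
        seen, stack = set(start), list(start)
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    def is_maximal(antichain):     # comp(A) = V(G)?
        comp = closure(antichain, graph) | closure(antichain, reverse)
        return comp == set(graph)

    minimals = sorted(v for v in graph if not reverse[v])  # s_1, ..., s_l
    candidates, removed = [], set()
    for i, s in enumerate(minimals, start=1):
        if i > k:
            break                  # depth budget spent: B_i is empty
        gone = removed | {s}       # drop S_1, ..., S_{i-1} and s_i itself
        sub = {v: [w for w in graph[v] if w not in gone]
               for v in graph if v not in gone}
        for A in maximal_antichains(sub, k - i):           # the set B_i
            candidates.append(A | set(minimals[:i - 1]))
        removed |= closure({s}, graph)                     # S_i incl. s_i
    if len(minimals) <= k:
        candidates.append(set(minimals))                   # B_{l+1}
    # Defensive filter: boundary cases above can yield non-maximal sets.
    return [frozenset(A) for A in candidates if is_maximal(A)]
```

On a star with root r and children a, b, c, a depth budget of 1 uncovers only {r}, while a budget of 4 also uncovers {a, b, c}, which has depth 4.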

Returning to (non-maximal) antichains, we see that we can enumerate all maximal antichains of depth at most k with Lemma 4 and by Corollary 1 we can find all antichains of depth at most k by taking all subsets of the found maximal antichains.

Corollary 2

There are at most 4^k antichains A with d(A) ≤ k in any precedence graph G = (V, E), and they can be enumerated within O(4^k(|V| + |E|)) time.

Notice that this runtime is indeed correct, as it dominates both the time needed for the construction of the set Ak(G) and the time needed for taking all subsets of the elements of Ak(G) (which is 2^k·|Ak(G)|).

We now bound the number of antichains A in Gt with dt(A) ≤ k. Take Gt to be the graph in Corollary 2 and notice that dt(A) = d(A) for any antichain A in Gt. By Corollary 2 we obtain Lemma 5.

Lemma 5

For any t, there are at most 4^k antichains A with d_t(A) ≤ k in any precedence graph G = (V,E), and they can be enumerated within O(4^k(|V|+|E|)) time.

To compute each S(A,t), we consider at most 2^k different sets X. Computing the antichain A′ such that A′ = max(pred(A)\X) takes O(|V|+|E|) time. After this computation, R(A,t) is directly computed in O(|V|k+|E|) time. For each time t ∈ {1, …, C_max}, there are at most 4^k different antichains A for which we compute S(A,t) and R(A,t). Since C_max ≤ k, we therefore have a total runtime of O(4^k · k · (2^k(|V|+|E|) + |V|k + |E|)). Hence, Algorithm 1 runs in time O(8^k k(|V|+|E|)).

Correctness of Algorithm

To show that the algorithm described in Algorithm 1 indeed returns the correct answer, the following lemma is clearly sufficient:

Lemma 6

A feasible schedule for k jobs with makespan at most C_max exists if and only if R(A,t) = 1 for some t ≤ C_max and antichain A with d_t(A) ≤ k.

Before we are able to prove Lemma 6, we need one more definition.

Definition 4

Let σ be a feasible schedule. Then A(σ) is the antichain such that pred(A(σ)) is exactly the set of jobs that was scheduled in σ.

Equivalently, if X is the set of jobs processed by σ, then A(σ)=max(G[X]).

Proof (Lemma 6)

Clearly, if R(A,t) = 1 for some t ≤ C_max and antichain A with d_t(A) ≤ k, we have a feasible schedule with k jobs by the definition of R(A,t). Hence, it remains to prove that if a feasible schedule for k jobs exists, then R(B,t) = 1 for some t ≤ C_max and antichain B with d_t(B) ≤ k. Let

Σ = {σ : σ is a feasible schedule that processes k jobs and has a makespan of at most C_max},

so Σ is the set of all possible solutions. Define

σ* = argmin_{σ ∈ Σ} d(A(σ)),

i.e. σ* is a schedule for which A(σ*) has minimal depth (with respect to C_max). We now define t and B such that R(B,t) = 1.

  • Let t = max{t′ : a job not in max(G[pred(A(σ*))]) was scheduled at time t′}, so from time t+1 onwards, only maximal jobs (with respect to G[pred(A(σ*))]) are scheduled.

  • Let M = {x : job x was scheduled at time t+1 or later in σ*}.

  • Let B = max(pred(A(σ*)) \ M), so pred(B) is exactly the set of jobs scheduled at or before time t in σ*.

See Fig. 3a for an illustration of these concepts. There are two cases to distinguish:

Fig. 3

Visualization of the definitions of M and B and the schedule σ* in the proof of Lemma 6 is shown in (a). (b) Depicts the schedule σ′ as chosen in the subcase d_t(B) > k. The grey boxes indicate which jobs are processed in the schedules. We will prove that |D(A(σ′))| < |D(A(σ*))|

Case d_t(B) ≤ k. In this case we prove that R(B,t) = 1. The feasible schedule we are looking for in the definition of R(B,t) is exactly σ*. Indeed, all jobs from pred(B) are finished at time t. Furthermore, all jobs in M are maximal, so all their predecessors are in pred(B). Hence, M ⊆ min(G − pred(B)). So, by definition R(B,t) = 1.

Case d_t(B) > k. In this case we prove that there is a schedule σ′ such that d(A(σ′)) < d(A(σ*)), i.e. we find a contradiction to the fact that d(A(σ*)) was minimal. This σ′ can be found as follows: take the schedule σ* only up until time t. Let C be a subset of min(G_t − comp(B)) such that |C| = k − |pred(B)|. Such a C exists since d_t(B) > k. Process the jobs of C after time t in σ′. These can all be processed without violating precedence constraints or release dates, since their predecessors have already been scheduled and C ⊆ G_t. So we find a feasible schedule σ′ that processes k jobs. The choice of σ′ is depicted in Fig. 3. Note that C ⊆ min(G_t − comp(B)) ⊆ min(G − comp(B)) and not all jobs of min(G − comp(B)) are necessarily processed in σ′.

It remains to prove that d(A(σ′)) < d(A(σ*)). Define D(A) = pred(A) ∪ min(G − comp(A)) for any antichain A. So D(A) is the set of jobs that contribute to d(A), and hence |D(A)| = d(A). We will prove that D(B) = D(A(σ′)) ⊊ D(A(σ*)). This will be done in two steps: first we show that

D(B) = D(A(σ′)) ⊆ D(A(σ*)).

In the last step we prove D(B) ≠ D(A(σ*)), which gives us d(A(σ′)) < d(A(σ*)).

Notice that C ⊆ D(B) since C ⊆ min(G − comp(B)), hence D(B) = D(B ∪ C). Since A(σ′) = B ∪ C it follows that D(A(σ′)) = D(B). Next we prove that D(B) ⊆ D(A(σ*)). Clearly, if x ∈ pred(B) then x ∈ pred(A(σ*)). It remains to show that x ∈ min(G − comp(B)) implies that x ∈ D(A(σ*)). If x ∈ min(G − comp(B)), then either x ∈ M or x ∉ M. If x ∈ M, then x ∈ A(σ*), so x ∈ pred(A(σ*)). If x ∉ M, then x ∉ comp(B ∪ M), since x is a minimal element of G − comp(B). Since A(σ*) ⊆ B ∪ M, and thus comp(A(σ*)) ⊆ comp(B ∪ M), we observe that x ∈ min(G − comp(A(σ*))). We then conclude that D(B) ⊆ D(A(σ*)).

We are left to show that D(B) ≠ D(A(σ*)). Remember that t was chosen such that some job processed at time t is not in max(G[pred(A(σ*))]). In other words, there is a job x ∉ B processed at time t in σ* with a y ∈ M such that x ≺ y. Note that y ∉ D(B): since y ∈ M, y is not in pred(B), and y is clearly comparable to x. However, y ∈ D(A(σ*)), so we find that d(A(σ′)) = d(B) < d(A(σ*)). Hence, we have found a schedule with smaller d(A(·)), which is a contradiction.

Result Types B and D: One Machine and Precedence Constraints

In this section we show that Algorithm 1 cannot be generalized even slightly: if we allow job-dependent deadlines or non-unit processing times, the problem becomes W[1]-hard parameterized by k and cannot be solved in n^{o(k/log k)} time unless the ETH fails. In the following reductions we reduce to the variant of the scheduling problem that asks whether there is a feasible schedule with makespan at most C_max, where C_max is given as input. If such a schedule exists for a given instance, we call the instance a yes instance, and a no instance otherwise. We may restrict ourselves to this variant because of binary search.

Job-Dependent Deadlines

The fact that combining precedence constraints with job-dependent deadlines makes the problem W[1]-hard is a direct consequence of the fact that 1|prec,p_j=1|∑_j U_j is W[1]-hard parameterized by k = n − ∑_j U_j, where n is the number of jobs [9]. It is important to notice that the notation of these problems implies that each job can have its own deadline. Hence, we conclude that 1|d_j,prec,p_j=1|k-sched,C_max is W[1]-hard parameterized by k. The underlying reduction is from k-Clique and yields a quadratic blow-up of the parameter, giving a lower bound of n^{Ω(√k)} for the problem. Based on the Exponential Time Hypothesis, we now sharpen this lower bound with a reduction from 3-Coloring:

Theorem 3

1|d_j,prec,p_j=1|k-sched,C_max is W[1]-hard parameterized by k. Furthermore, assuming the ETH, there is no algorithm solving 1|d_j,prec,p_j=1|k-sched,C_max in 2^{o(n)} time, where n is the number of jobs.

Proof

The proof is a reduction from 3-Coloring, for which no 2^{o(|V|+|E|)} time algorithm exists under the Exponential Time Hypothesis [7, pages 471–473]. Let the graph G = (V,E) be the instance of 3-Coloring with |V| = n and |E| = m. We label the vertices v_1, …, v_n and the edges e_1, …, e_m. We then create the following instance of 1|d_j,prec,p_j=1|k-sched,C_max.

  • For each vertex v_i ∈ V, create 6 jobs:
    • v_i^1, v_i^2 and v_i^3 with deadline d_{v_i} = i,
    • w_i^1, w_i^2 and w_i^3 with deadline d_{w_i} = 2n + 2m + 1 − i,
    and add precedence constraints v_i^1 → w_i^1, v_i^2 → w_i^2 and v_i^3 → w_i^3. These jobs represent which color each vertex gets (for instance, if v_i^1 and w_i^1 are processed, vertex v_i gets color 1).
  • For each edge e_j ∈ E, create 12 jobs:
    • e_j^{12}, e_j^{13}, e_j^{21}, e_j^{23}, e_j^{31} and e_j^{32} with deadline d_{e_j} = n + j,
    • f_j^{12}, f_j^{13}, f_j^{21}, f_j^{23}, f_j^{31} and f_j^{32} with deadline d_{f_j} = n + 2m + 1 − j,
    and add precedence constraints e_j^{ab} → f_j^{ab} for a,b ∈ {1,2,3} with a ≠ b. These jobs represent what the colors of the endpoints of an edge will be: if the jobs e_j^{ab} and f_j^{ab} are processed for e_j = {u,v}, then vertex u has color a and vertex v has color b. Since the endpoints should have different colors, the jobs e_j^{aa} and f_j^{aa} do not exist.
  • For each e_j^{ab} with e_j = {v_i, v_{i′}} and a,b ∈ {1,2,3} with a ≠ b, add the precedence constraints v_i^a → e_j^{ab} and v_{i′}^b → e_j^{ab}.

  • Set C_max = k = 2n + 2m.

We now prove that the created instance is a yes instance if and only if the original 3-Coloring instance is a yes instance. Assume that there is a 3-coloring of the graph G = (V,E). Then there is also a feasible schedule: for each vertex v_i with color a, process the jobs v_i^a and w_i^a at their respective deadlines. For each edge e_j = {u,v} with u colored a and v colored b, process the jobs e_j^{ab} and f_j^{ab} exactly at their respective deadlines. Notice that because this is a 3-coloring, each edge has endpoints of different colors, so these jobs exist. Also note that no two jobs are processed at the same time, and exactly 2n+2m jobs are processed by time 2n+2m. Furthermore, no precedence constraints are violated.

For the other direction, assume that we have a feasible schedule in our created instance of 1|d_j,prec,p_j=1|k-sched,C_max. Let V_i = {v_i^1, v_i^2, v_i^3} and W_i = {w_i^1, w_i^2, w_i^3} for all i = 1, …, n, and let E_j = {e_j^{12}, e_j^{13}, e_j^{21}, e_j^{23}, e_j^{31}, e_j^{32}} and F_j = {f_j^{12}, f_j^{13}, f_j^{21}, f_j^{23}, f_j^{31}, f_j^{32}} for all j = 1, …, m. We show by induction on i that from each of the sets V_i, W_i, E_j and F_j, exactly one job is scheduled, at its deadline.

Since we have a feasible schedule, at time 2n+2m one of the jobs of W_1 must be scheduled, since these are the only jobs with a deadline greater than 2n+2m−1. If w_1^a is scheduled at time 2n+2m, then the job v_1^a must be processed at time 1, because of the precedence constraint and since its deadline is 1. No other jobs from V_1 and W_1 can be processed, due to their deadlines and precedence constraints.

Now assume that the sets V_1, …, V_{i−1}, W_1, …, W_{i−1} each have exactly one job scheduled at its respective deadline, and that no more of their jobs can be processed. Since we have a feasible schedule, some job must be scheduled at time 2n+2m−(i−1). Since no more jobs from W_1, …, W_{i−1} can be scheduled, the only possible jobs are from W_i, as they are the only other jobs with a deadline greater than 2n+2m−i. If w_i^a is scheduled at time 2n+2m−(i−1), then the job v_i^a must be processed at time i, because of the precedence constraint, its deadline of i, and because at times 1, …, i−1 other jobs had to be processed. Also, no other job from V_i can be processed in the schedule, since they all have deadline i. As a consequence, no other jobs from W_i can be processed either, as they are bound by precedence constraints to jobs in V_i. So the statement holds for all sets V_i and W_i. In exactly the same way, one concludes the same about all sets E_j and F_j.

Because of this, we see that each vertex and each edge has received a color from the schedule. These colors must form a 3-coloring, because a job from E_j could only be processed if the two endpoints got two different colors. Hence, the 3-Coloring instance is a yes instance.

As k = 2n+2m, we therefore conclude there is no 2^{o(n)} time algorithm under the ETH.

Note that this bound significantly improves the old lower bound of n^{Ω(√k)} implied by the reduction from k-Clique. Since k ≤ n and n/log n is an increasing function, Theorem 3 implies that

Corollary 3

Assuming the ETH, there is no algorithm solving 1|d_j,prec,p_j=1|k-sched,C_max in n^{o(k/log k)} time, where n is the number of jobs.

Non-unit Processing Times

We show that non-unit processing times combined with precedence constraints make the problem W[1]-hard, even on one machine. The proof of Theorem 4 heavily builds on the reduction from k-Clique to k-Tasks On Time by Fellows and McCartin [9].

Theorem 4

1|prec|k-sched,C_max is W[1]-hard parameterized by k, even when p_j ∈ {1,2} for all jobs j.

Proof

The proof is a reduction from k-Clique. We start with G = (V,E), an instance of k-Clique. For each vertex v ∈ V, create a job J_v with processing time p(J_v) = 2. For each edge e ∈ E, create a job J_e with processing time p(J_e) = 1. Now for each edge e = {u,v}, add the two precedence constraints J_u → J_e and J_v → J_e, so before a job associated with an edge can be processed, both jobs associated with the endpoints of that edge need to be finished. Now let k′ = k + ½k(k−1) and C_max = 2k + ½k(k−1). We will now prove that the instance of 1|prec|k′-sched,C_max is a yes instance if and only if the k-Clique instance is a yes instance.

Assume that the k-Clique instance is a yes instance. Then first process the k jobs associated with the vertices of the k-clique, and next process the ½k(k−1) jobs associated with the edges of the k-clique. In total, k + ½k(k−1) = k′ jobs are processed, with a makespan of 2k + ½k(k−1). Hence, the instance of 1|prec|k′-sched,C_max is a yes instance.

For the other direction, assume the instance of 1|prec|k′-sched,C_max to be a yes instance, so there exists a feasible schedule. In any feasible schedule, if ℓ jobs associated with vertices are scheduled, then at most ½ℓ(ℓ−1) jobs associated with edges can be processed, because of the precedence constraints. Moreover, since k′ = k + ½k(k−1) jobs are completed before C_max = 2k + ½k(k−1) and vertex jobs have processing time 2, at most k jobs associated with vertices can be processed. Hence, exactly k vertex jobs and ½k(k−1) edge jobs are processed. So there are k vertices connected by ½k(k−1) edges, which is a k-clique.
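The construction is mechanical; a small Python sketch (job names and the function name are our own) that emits the Theorem 4 instance from a k-Clique instance:

```python
def clique_to_partial_scheduling(vertices, edges, k):
    """Build the 1|prec|k'-sched,Cmax instance of Theorem 4 (sketch).

    edges is a list of pairs (u, w); jobs are named ("v", x) and ("e", e)."""
    proc = {("v", v): 2 for v in vertices}        # vertex jobs: processing time 2
    proc.update({("e", e): 1 for e in edges})     # edge jobs: processing time 1
    prec = []
    for (u, w) in edges:                          # J_u -> J_e and J_w -> J_e
        prec.append((("v", u), ("e", (u, w))))
        prec.append((("v", w), ("e", (u, w))))
    k_prime = k + k * (k - 1) // 2                # number of jobs to schedule
    cmax = 2 * k + k * (k - 1) // 2               # makespan budget
    return proc, prec, k_prime, cmax
```

For the triangle K_3 and k = 3, this yields 6 jobs, k′ = 6 and C_max = 9, matching the counts in the proof.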

The proofs of Theorem 6 and Corollary 4 are reductions from Partitioned Subgraph Isomorphism. Let P = (V′,E′) be a 'pattern' graph, G = (V,E) a 'target' graph, and χ : V → V′ a 'coloring' of the vertices of G with elements from P. A χ-colorful P-subgraph of G is a mapping φ : V′ → V such that (1) for each {u,v} ∈ E′ it holds that {φ(u),φ(v)} ∈ E, and (2) for each u ∈ V′ it holds that χ(φ(u)) = u. If χ and G are clear from the context, they may be omitted in this definition.

Definition 5

(Partitioned Subgraph Isomorphism) Given graphs G = (V,E) and P = (V′,E′) and a coloring χ : V → V′, determine whether there is a χ-colorful P-subgraph of G.

Theorem 5

(Marx [19]) Partitioned Subgraph Isomorphism cannot be solved in n^{o(|E′|/log|E′|)} time assuming the Exponential Time Hypothesis (ETH), where n is the size of the input.

We will now reduce Partitioned Subgraph Isomorphism to 1|prec,rj|k-sched,Cmax.

Theorem 6

1|prec,r_j|k-sched,C_max cannot be solved in n^{o(k/log k)} time assuming the Exponential Time Hypothesis (ETH).

Proof

Let G = (V,E), P = (V′,E′) and χ : V → V′ be given. We will write V′ = {1, …, s}. Define for i = 0, …, s the following important time stamps:

t_i := ∑_{j=1}^{i} 3^{s+1−j}.

Construct the following jobs for the instance of the 1|prec,rj|k-sched,Cmax problem:

  • For i = 1, …, s:
    • For each vertex v ∈ V such that χ(v) = i, create a job J_v with processing time p(J_v) = 3^{s+1−i} and release date t_{i−1}.
  • For each {v,w} ∈ E such that {χ(v),χ(w)} ∈ E′, create a job J_{{v,w}} with p(J_{{v,w}}) = 1 and release date t_s. Add precedence constraints J_v → J_{{v,w}} and J_w → J_{{v,w}}.

Then ask whether there exists a solution to the scheduling problem for k = s + |E′| with makespan C_max = t_s + |E′|.
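For concreteness, the construction can be sketched in Python (the representation and names are ours; `Epat` holds the pattern edges as frozensets, and `chi` maps each target vertex to its pattern vertex in 1..s):

```python
def psi_to_scheduling(s, V, E, Epat, chi):
    """Build the 1|prec,rj|k-sched,Cmax instance of Theorem 6 (sketch)."""
    t = [0] * (s + 1)
    for i in range(1, s + 1):                    # t_i = sum_{j<=i} 3^{s+1-j}
        t[i] = t[i - 1] + 3 ** (s + 1 - i)
    jobs, prec = {}, []                          # job -> (processing time, release date)
    for v in V:                                  # vertex jobs, one group per color
        i = chi[v]
        jobs[("J", v)] = (3 ** (s + 1 - i), t[i - 1])
    for (v, w) in E:
        if frozenset({chi[v], chi[w]}) in Epat:  # only color-compatible edges
            jobs[("J", v, w)] = (1, t[s])
            prec += [(("J", v), ("J", v, w)), (("J", w), ("J", v, w))]
    return jobs, prec, s + len(Epat), t[s] + len(Epat)
```

For a single pattern edge {1,2} and one compatible target edge, the vertex jobs get lengths 9 and 3 with release dates 0 and 9, and the edge job is a unit job released at t_s = 12.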

Let the Partitioned Subgraph Isomorphism instance be a yes instance and let φ : V′ → V be a colorful P-subgraph. We claim the following schedule is feasible:

  • For i = 1, …, s:
    • Process J_{φ(i)} at its release date t_{i−1}.
  • Process for each {i,i′} ∈ E′ the job J_{{φ(i),φ(i′)}} somewhere in the interval [t_s, t_s + |E′|].

Notice that all jobs are indeed processed after their release dates and that in total k = s + |E′| jobs are processed before C_max = t_s + |E′|. Furthermore, all precedence constraints are respected, as any edge job is processed after both of its predecessors. Also, the edge jobs J_{{φ(i),φ(i′)}} must exist, as φ is a properly colored P-subgraph. Therefore, we can conclude that this schedule is indeed feasible.

For the other direction, assume that there is a solution to the created instance of 1|prec,r_j|k-sched,C_max. Define J_i = {J_v : χ(v) = i}. We first prove that at most one job from each set J_i can be processed before t_s. Any job in J_i has release date t_{i−1} = ∑_{j=1}^{i−1} 3^{s+1−j}. Therefore, there is only t_s − t_{i−1} = ∑_{j=i}^{s} 3^{s+1−j} time left to process jobs from J_i before time t_s. However, the processing time of any job in J_i is 3^{s+1−i}, and since 2 · 3^{s+1−i} > ∑_{j=i}^{s} 3^{s+1−j}, at most one job from J_i can be processed before t_s. Since all jobs not in some J_i have their release date at t_s, at most s jobs are processed before time t_s. Thus, at time t_s, there are |E′| time units left to process |E′| jobs, because of the choice of k and the makespan. Hence, the only way to get a feasible schedule is to process exactly one job from each set J_i at its respective release date and to process exactly |E′| edge jobs after t_s.
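The geometric-sum bound invoked here can be checked in one line (in LaTeX notation):

```latex
\sum_{j=i}^{s} 3^{\,s+1-j} \;=\; \sum_{m=1}^{s+1-i} 3^{m}
\;=\; \frac{3^{\,s+2-i}-3}{2}
\;<\; \tfrac{3}{2}\cdot 3^{\,s+1-i}
\;<\; 2\cdot 3^{\,s+1-i}.
```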

Let v_i be the vertex such that J_{v_i} is the job of color i processed in the feasible schedule. We will show that φ : V′ → V, defined by φ(i) = v_i, is a properly colored P-subgraph of G. Hence, we are left to prove that for each {i,i′} ∈ E′ the edge {φ(i),φ(i′)} is in E, i.e. that for each {i,i′} ∈ E′ the job J_{{φ(i),φ(i′)}} was processed. Because only the vertex jobs J_{φ(1)}, J_{φ(2)}, …, J_{φ(s)} were processed, the precedence constraints only allow edge jobs of the form J_{{φ(i),φ(i′)}} to be processed. We created an edge job J_{{v,w}} if and only if {v,w} ∈ E and {χ(v),χ(w)} ∈ E′; hence the |E′| processed edge jobs have to be exactly the jobs J_{{φ(i),φ(i′)}} for {i,i′} ∈ E′. Therefore, we proved indeed that φ is a colorful P-subgraph of G.

Notice that k = s + |E′| ≤ 3|E′|, as we may assume that the number of vertices in P is at most 2|E′|. The given bound follows.

Corollary 4

P2|prec|k-sched,C_max cannot be solved in n^{o(k/log k)} time assuming the Exponential Time Hypothesis (ETH).

Proof

We can use the same reduction from Partitioned Subgraph Isomorphism as in the proof of Theorem 6, except for the release dates, as they are not allowed in this type of scheduling problem. To simulate the release dates, we use the second machine as a release date machine, meaning that we create a job for each upcoming release date and require these new jobs to be processed. More formally: for i = 1, …, s, create a job J_{r_i} with processing time 3^{s+1−i}, and add precedence constraints J_{r_i} → J for every job J that had release date t_i in the original reduction, as well as J_{r_i} → J_{r_{i+1}}. Furthermore, we add |E′| filler jobs with processing time 1, each preceded by J_{r_s}. We then ask whether there exists a feasible schedule with k = 2s + 2|E′| and makespan t_s + |E′|. All newly added jobs are required in any feasible schedule, and therefore all other arguments from the previous reduction still hold. Finally, note that k is again linear in |E′|.

Result Type G: k-scheduling without Precedence Constraints

The problem P||k-sched,C_max cannot be solved in 2^{o(k)} time assuming the ETH, since there is a reduction from Subset Sum, for which 2^{o(n)} time algorithms were excluded by Jansen et al. [15].

We show that the problem is fixed-parameter tractable in k, with a matching runtime, even in the case of unrelated machines, release dates and deadlines, denoted by R|r_j,d_j|k-sched,C_max.

Theorem 7

R|r_j,d_j|k-sched,C_max is fixed-parameter tractable in k and can be solved in O((2e)^k k^{O(log k)}) time.

Proof

We give an algorithm that solves any instance of R|r_j,d_j|k-sched,C_max within O((2e)^k k^{O(log k)}) time. The algorithm is a randomized algorithm that uses the color coding method; it can be derandomized as described by Alon et al. [1]. The algorithm first (randomly) picks a coloring c : {1, …, n} → {1, …, k}, so each job is given one of the k available colors. We then compute whether there is a feasible colorful schedule, i.e. a feasible schedule that processes exactly one job of each color. If such a colorful schedule exists, then it is possible to schedule at least k jobs before C_max.

Given a coloring c, we compute whether there exists a colorful schedule in the following way. Define for 1 ≤ i ≤ m and X ⊆ {1, …, k}:

B_i(X) = the minimum makespan over all schedules on machine i that process |X| jobs, one of each color in X.

Clearly B_i(∅) = 0, and all values B_i(X) can be computed in O(2^k n) time using the following:

Lemma 7

Let min ∅ = ∞. Then

B_i(X) = min_{ℓ ∈ X} min { C_j : c(j) = ℓ and C_j ≤ d_j }, where C_j = max{r_j, B_i(X \ {ℓ})} + p_{ij}.

Proof

In a schedule on one machine processing |X| jobs using all colors of X, some job is scheduled last, and this job defines the makespan. So for every possible job j we compute what the minimal completion time would be if j were scheduled at the end of the schedule: j cannot start before its release date, nor before all other colors have been scheduled, and it must finish by its deadline.

Next, define for 1 ≤ i ≤ m and X ⊆ [k] the value A_i(X) to be 1 if B_i(X) ≤ C_max, and 0 otherwise. So A_i(X) = 1 if and only if |X| jobs, one of each color in X, can be scheduled on machine i before C_max. A colorful feasible schedule exists if and only if there is a partition X_1, …, X_m of {1, …, k} such that ∏_{i=1}^{m} A_i(X_i) = 1. The subset convolution of two functions is defined as (A_i ∗ A_{i′})(X) = ∑_{Y ⊆ X} A_i(Y) · A_{i′}(X \ Y). Then there is a partition X_1, …, X_m of {1, …, k} with ∏_{i=1}^{m} A_i(X_i) = 1 if and only if (A_1 ∗ ⋯ ∗ A_m)({1, …, k}) > 0. The value (A_1 ∗ ⋯ ∗ A_m)({1, …, k}) can be computed in 2^k k^{O(1)} time using fast subset convolution [3].
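A compact Python sketch of one trial (our own code; jobs are given as (r_j, d_j, [p_{1j}, …, p_{mj}]) triples) computes the tables B_i via the recurrence of Lemma 7 and then, instead of ranked fast subset convolution, checks (A_1 ∗ ⋯ ∗ A_m)({1, …, k}) > 0 by inclusion–exclusion over zeta transforms. This substitution is valid here because each family A_i is closed under taking subsets, and it also runs in 2^k · poly time:

```python
def colorful_trial(jobs, m, k, cmax, color):
    """One trial of the color-coding algorithm (sketch): is there a feasible
    schedule processing exactly one job of each of the k colors?"""
    INF = float("inf")
    zetas = []
    for i in range(m):
        # B[X]: minimal makespan on machine i for one job of each color in X
        B = [INF] * (1 << k)
        B[0] = 0
        for X in range(1, 1 << k):
            best = INF
            for j, (r, d, p) in enumerate(jobs):
                c = color[j]
                if X >> c & 1:
                    t = max(r, B[X & ~(1 << c)]) + p[i]   # job j scheduled last
                    if t <= d:
                        best = min(best, t)
            B[X] = best
        zA = [1 if B[X] <= cmax else 0 for X in range(1 << k)]   # A_i(X)
        for b in range(k):            # zeta transform: zA[X] = sum_{Y subset X} A_i(Y)
            for X in range(1 << k):
                if X >> b & 1:
                    zA[X] += zA[X ^ (1 << b)]
        zetas.append(zA)
    covers = 0                        # inclusion-exclusion counts covering tuples
    for X in range(1 << k):
        prod = 1
        for zA in zetas:
            prod *= zA[X]
        sign = -1 if (k - bin(X).count("1")) % 2 else 1
        covers += sign * prod
    return covers > 0
```

A full run would repeat this trial with fresh random colorings (about e^k of them), or derandomize via a family of perfect hash functions [1].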

An overview of the randomized algorithm is given in Algorithm 2. If the k jobs that are processed in an optimal solution all receive different colors, the algorithm outputs true. By a standard analysis, k fixed jobs are all assigned different colors with probability at least e^{−k}, and thus e^k independent trials suffice to bring the error probability of the algorithm down to at most 1/2.

By using the standard methods by Alon et al. [1], Algorithm 2 can be derandomized.

Argumentation of the Results in Table 1

For completeness and the reader's convenience, we explain in this section, for each row of Table 1, how the upper and lower bounds are obtained.

First notice that the most general variant R|r_j,d_j,prec|k-sched,C_max can be solved in n^{O(k)} time as follows: guess for each machine the set of jobs that are scheduled on it, together with their order in an optimal solution, to get sequences σ_1, …, σ_m of joint length k. For each such (σ_1, …, σ_m), run the following simple greedy algorithm to determine the minimum makespan achieved by a feasible schedule that processes, on each machine i, the jobs in the order given by σ_i: iterate over the positions t = 1, …, k and schedule the job σ_i(t) on machine i as early as possible without violating release dates, deadlines or precedence constraints (if this is not possible, return NO). Since each optimal schedule can be assumed to be normalized, in the sense that no single job can be executed earlier, it is easy to see that this algorithm returns an optimal schedule for some choice of σ_1, …, σ_m. Since there are only n^{O(k)} different sequences σ_1, …, σ_m of combined length k, the runtime follows.

Cases 1–2

The polynomial time algorithm behind result [A] is a straightforward greedy algorithm: for 1|r_j,prec,p_j=1|k-sched,C_max, build the schedule from beginning to end, and schedule an arbitrary available job whenever one exists; otherwise wait until one becomes available.
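The greedy rule can be sketched as follows (a hypothetical implementation; names are ours). A job becomes available once it is released and all its predecessors are completed; the heap makes one particular arbitrary choice among available jobs:

```python
import heapq

def greedy_unit_makespan(n, release, prec, k):
    """Greedy for 1|rj,prec,pj=1|k-sched,Cmax (sketch): returns the completion
    time after scheduling k jobs, always taking an available job if one exists."""
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for a, b in prec:                  # constraint a -> b
        succ[a].append(b)
        indeg[b] += 1
    avail = [(release[j], j) for j in range(n) if indeg[j] == 0]
    heapq.heapify(avail)               # (earliest possible start, job)
    t = 0
    for _ in range(k):                 # assumes the instance admits k jobs
        r, j = heapq.heappop(avail)
        t = max(t, r) + 1              # unit job occupies one time slot
        for b in succ[j]:
            indeg[b] -= 1
            if indeg[b] == 0:          # all predecessors done by time t
                heapq.heappush(avail, (max(release[b], t), b))
    return t
```

For example, with three jobs, precedence 0 → 1 and releases (0, 0, 5), scheduling k = 2 jobs finishes at time 2.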

Cases 3–4, 7–8

The given lower bound is by Corollary 3.

Cases 5–6

The upper bound is by the algorithm of Theorem 1. The lower bound is due to a reduction by Jansen et al. [14]. In particular, if no subexponential time algorithm for the Biclique problem exists, then there is no n^{o(k)} time algorithm for these problems.

Case 9

The lower bound is by Theorem 4, which is a reduction from k-Clique that heavily builds on the reduction from k-Clique to k-Tasks On Time by Fellows and McCartin [9]. This reduction increases the parameter k to Ω(k²), hence the lower bound of n^{o(√k)}.

Cases 10–20

The given lower bound is by Theorem 6, which is a reduction from Partitioned Subgraph Isomorphism. It is conjectured that no algorithm solves Partitioned Subgraph Isomorphism in n^{o(k)} time assuming the ETH, which would imply that the n^{O(k)} time algorithm for these problems cannot be improved significantly.

Cases 21–28

Result [E] is established by a simple greedy algorithm that always schedules an available job with the earliest deadline.

Cases 29–31

Result [F] is a consequence of Moore's algorithm [23], which solves the problem 1||∑_j U_j in O(n log n) time. The algorithm creates a sequence j_1, …, j_n of all jobs in earliest due date order. It then repeats the following steps: it tries to process the sequence (in the given order) on one machine; if some job j_i is the first job in the sequence that is late, a job from j_1, …, j_i with maximal processing time is removed from the sequence. Once all remaining jobs are on time, it returns the sequence, followed by the jobs that were removed from it. Notice that this also solves the problem 1|r_j|k-sched,C_max, by reversing the schedule and viewing the release dates as deadlines.
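Moore's algorithm admits a short implementation (a sketch with our own naming, using a max-heap over the processing times of the jobs kept so far):

```python
import heapq

def moore_hodgson(jobs):
    """Moore's algorithm for 1||sum U_j (sketch): jobs is a list of
    (processing_time, due_date) pairs; returns the max number of on-time jobs."""
    kept, t = [], 0                    # max-heap (negated) of kept processing times
    for p, d in sorted(jobs, key=lambda job: job[1]):   # earliest due date first
        heapq.heappush(kept, -p)
        t += p
        if t > d:                      # current job would be late: drop the longest
            t += heapq.heappop(kept)   # pop returns -max(p), so t decreases
    return len(kept)
```

For instance, for jobs (p, d) = (2, 2), (3, 4), (2, 4), the job of length 3 is dropped and two jobs finish on time.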

Case 32

The lower bound for this problem is a direct consequence of the linear reduction from Knapsack to 1|r_j|∑_j U_j by Lenstra et al. [17]. Jansen et al. [15] showed that Subset Sum (and thus also Knapsack) cannot be solved in 2^{o(n)} time assuming the ETH.

Cases 33–40

P2||C_max is equivalent to Subset Sum and can therefore not be solved in 2^{o(n)} time assuming the ETH, as shown by Jansen et al. [15]. Therefore, its generalizations, in particular those mentioned in cases 33–40, have the same runtime lower bound assuming the ETH. The upper bound is by the algorithm of Theorem 7.

Concluding Remarks

We classify all studied variants of partial scheduling parameterized by the number k of jobs to be scheduled to be either in P, NP-complete and fixed-parameter tractable by k, or W[1]-hard parameterized by k. Our main technical contribution is an O(8^k k(|V|+|E|)) time algorithm for P|r_j,prec,p_j=1|k-sched,C_max.

In a fine-grained sense, the cases we left open are cases 3–20 from Table 1. We believe that the algorithms in rows 5–6 and 10–20 are in fact optimal: an n^{o(k)} time algorithm for any case of result type [C] or [D] would imply either a 2^{o(n)} time algorithm for Biclique or an n^{o(k)} time algorithm for Partitioned Subgraph Isomorphism, both of which would be surprising. It would be interesting to see whether a 'subexponential' time algorithm exists for any of the remaining cases with precedence constraints and unit processing times.

A related case is P3|prec,p_j=1|C_max (where P3 denotes three machines). It is a famously hard open question (see e.g. [10]) whether this problem can be solved in polynomial time, but perhaps it is doable to solve it in subexponential time, e.g. in 2^{o(n)} time.

Acknowledgements

The authors would like to thank the anonymous referees for useful comments.

Footnotes

1

We compare the previous works and other relevant parameterizations at the end of this section.

2

A precedence constraint a → b enforces that job a needs to be finished before job b can start.

3

We assume basic arithmetic operations with the release dates take constant time.

4

A similar dynamic programming approach was also present in, for example, [6].

5

Our results [C] and [D] build on and improve this result.

6

The reverse direction is more difficult and postponed to Lemma 6.

J. Nederlof was supported by ERC project No. 617951, ERC project No. 853234, and NWO project No. 024.002.003. C. M. F. Swennenhuis was supported by NWO project No. 613.009.031b and ERC project No. 617951.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Jesper Nederlof, Email: j.nederlof@uu.nl.

Céline M. F. Swennenhuis, Email: c.m.f.swennenhuis@tue.nl

References

  • 1. Alon N, Yuster R, Zwick U. Color-coding. J. ACM. 1995;42(4):844–856. doi: 10.1145/210332.210337.
  • 2. Bessy S, Giroudeau R. Parameterized complexity of a coupled-task scheduling problem. J. Sched. 2019;22(3):305–313. doi: 10.1007/s10951-018-0581-1.
  • 3. Björklund, A., Husfeldt, T., Kaski, P., Koivisto, M.: Fourier meets Möbius: fast subset convolution. In: Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, pp. 67–74. ACM (2007)
  • 4. Bodlaender HL, Fellows MR. W[2]-hardness of precedence constrained k-processor scheduling. Oper. Res. Lett. 1995;18(2):93–97. doi: 10.1016/0167-6377(95)00031-9.
  • 5. Chuzhoy J, Ostrovsky R, Rabani Y. Approximation algorithms for the job interval selection problem and related scheduling problems. Math. Oper. Res. 2006;31(4):730–738. doi: 10.1287/moor.1060.0218.
  • 6. Cygan M, Pilipczuk M, Pilipczuk M, Wojtaszczyk JO. Scheduling partially ordered jobs faster than 2^n. Algorithmica. 2014;68(3):692–714. doi: 10.1007/s00453-012-9694-7.
  • 7. Cygan M, Fomin FV, Kowalik Ł, Lokshtanov D, Marx D, Pilipczuk M, Pilipczuk M, Saurabh S. Parameterized Algorithms. Berlin: Springer; 2015.
  • 8. Eun J, Sung CS, Kim ES. Maximizing total job value on a single machine with job selection. J. Oper. Res. Soc. 2017;68(9):998–1005. doi: 10.1057/s41274-017-0238-z.
  • 9. Fellows MR, McCartin C. On the parametric complexity of schedules to minimize tardy tasks. Theoret. Comput. Sci. 2003;298(2):317–324. doi: 10.1016/S0304-3975(02)00811-3.
  • 10. Garey MR, Johnson DS. Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman; 1979.
  • 11. Graham RL, Lawler EL, Lenstra JK, Rinnooy Kan AHG. Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discret. Math. 1979;5(2):287–326. doi: 10.1016/S0167-5060(08)70356-X.
  • 12. Gupta, A., Krishnaswamy, R., Kumar, A., Segev, D.: Scheduling with outliers. In: Dinur, I., Jansen, K., Naor, J., Rolim, J.D.P. (eds.) Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX 2009 and RANDOM 2009, Berkeley, CA, USA, August 21–23, 2009. Proceedings, Lecture Notes in Computer Science, vol. 5687, pp. 149–162. Springer, Berlin (2009). doi: 10.1007/978-3-642-03685-9_12
  • 13. Hermelin, D., Mnich, M., Omlor, S.: Single machine batch scheduling to minimize the weighted number of tardy jobs. CoRR (2019)
  • 14. Jansen, K., Land, F., Kaluza, M.: Precedence scheduling with unit execution time is equivalent to parametrized biclique. In: International Conference on Current Trends in Theory and Practice of Informatics, pp. 329–343. Springer, Berlin (2016)
  • 15. Jansen K, Land F, Land K. Bounding the running time of algorithms for scheduling and packing problems. SIAM J. Discret. Math. 2016;30(1):343–366. doi: 10.1137/140952636.
  • 16. Koulamas C, Panwalkar SS. A note on combined job selection and sequencing problems. Nav. Res. Logist. 2013;60(6):449–453. doi: 10.1002/nav.21543.
  • 17. Lenstra JK, Rinnooy Kan AHG, Brucker P. Complexity of machine scheduling problems. Ann. Discret. Math. 1977;1:343–362. doi: 10.1016/S0167-5060(08)70743-X.
  • 18. Lenté, C., Liedloff, M., Soukhal, A., T'Kindt, V.: Exponential algorithms for scheduling problems. Tech. rep. (2014). https://hal.archives-ouvertes.fr/hal-00944382
  • 19. Marx D. Can you beat treewidth? Theory Comput. 2010;6:85–112. doi: 10.4086/toc.2010.v006a005.
  • 20. Megow, N., Mnich, M., Woeginger, G.: Lorentz Workshop 'Scheduling Meets Fixed-Parameter Tractability' (2019)
  • 21. Mnich, M., van Bevern, R.: Parameterized complexity of machine scheduling: 15 open problems. Comput. Oper. Res. (2018)
  • 22. Mnich M, Wiese A. Scheduling and fixed-parameter tractability. Math. Program. 2015;154(1–2):533–562. doi: 10.1007/s10107-014-0830-9.
  • 23. Moore JM. An n job, one machine sequencing algorithm for minimizing the number of late jobs. Manag. Sci. 1968;15(1):102–109. doi: 10.1287/mnsc.15.1.102.
  • 24. Pinedo, M.L.: Scheduling: Theory, Algorithms, and Systems, 3rd edn. Springer (2008)
  • 25. Sgall, J.: Open problems in throughput scheduling. In: Epstein, L., Ferragina, P. (eds.) Algorithms—ESA 2012, Ljubljana, Slovenia, September 10–12, 2012. Proceedings, Lecture Notes in Computer Science, vol. 7501, pp. 2–11. Springer, Berlin (2012). doi: 10.1007/978-3-642-33090-2_2
  • 26. Shabtay D, Gaspar N, Kaspi M. A survey on offline scheduling with rejection. J. Sched. 2013;16(1):3–28. doi: 10.1007/s10951-012-0303-z.
  • 27. van Bevern R, Mnich M, Niedermeier R, Weller M. Interval scheduling and colorful independent sets. J. Sched. 2015;18(5):449–469. doi: 10.1007/s10951-014-0398-5.
  • 28. van Bevern, R., Bredereck, R., Bulteau, L., Komusiewicz, C., Talmon, N., Woeginger, G.J.: Precedence-constrained scheduling problems parameterized by partial order width. In: Discrete Optimization and Operations Research, DOOR 2016, Vladivostok, Russia, September 19–23, 2016. Proceedings, pp. 105–120 (2016)
  • 29. Williamson DP, Shmoys DB. The Design of Approximation Algorithms. Cambridge: Cambridge University Press; 2011.
  • 30. Yang B, Geunes J. A single resource scheduling problem with job-selection flexibility, tardiness costs and controllable processing times. Comput. Ind. Eng. 2007;53(3):420–432. doi: 10.1016/j.cie.2007.02.005.

Articles from Algorithmica are provided here courtesy of Springer
