2018 Jul 30; 85(2):391–415. doi: 10.1007/s10670-018-0032-6

Classical Harmony and Separability

Julien Murzi

Abstract

According to logical inferentialists, the meanings of logical expressions are fully determined by the rules for their correct use. Two key proof-theoretic requirements on admissible logical rules, harmony and separability, directly stem from this thesis—requirements, however, that standard single-conclusion and assertion-based formalizations of classical logic provably fail to satisfy (Dummett in The logical basis of metaphysics, Harvard University Press, Harvard, MA, 1991; Prawitz in Theoria, 43:1–40, 1977; Tennant in The taming of the true, Oxford University Press, Oxford, 1997; Humberstone and Makinson in Mind 120(480):1035–1051, 2011). On the plausible assumption that our logical practice is both single-conclusion and assertion-based, it seemingly follows that classical logic, unlike intuitionistic logic, can’t be accounted for in inferentialist terms. In this paper, I challenge orthodoxy and introduce an assertion-based and single-conclusion formalization of classical propositional logic that is both harmonious and separable. In the framework I propose, classicality emerges as a structural feature of the logic.


According to logical inferentialists, the meanings of logical expressions are fully determined by the rules for their correct use. Two key proof-theoretic requirements on admissible logical rules, harmony and separability, directly stem from this thesis—requirements, however, that standard single-conclusion and assertion-based formalizations of classical logic provably fail to satisfy (Dummett 1991; Prawitz 1977; Tennant 1997; Humberstone and Makinson 2011). On the plausible assumption that our logical practice is both single-conclusion and assertion-based, it seemingly follows that classical logic, unlike intuitionistic logic, can’t be accounted for in inferentialist terms. In this paper, I challenge orthodoxy and introduce an assertion-based and single-conclusion formalization of classical propositional logic which is both harmonious and separable. In the framework I propose, classicality emerges as a structural feature of the logic.

Section 1 provides some background. Section 2 introduces the inferentialist argument against classical logic. Sections 3–6 present a novel axiomatisation of classical logic and prove that it is both harmonious and separable.1 Section 7 responds to some possible objections. Section 8 concludes.

Harmony and Separability

Logical inferentialists typically contend that some basic inference rules are, in Michael Dummett’s terminology, self-justifying, in that they fully determine the meanings of the expressions they either eliminate or introduce. As Dummett puts it:

we are entitled simply to stipulate that [self-justifying laws] shall be regarded as holding, because by so doing we fix, wholly or partly, the meanings of the logical constants that they govern. (Dummett 1991, p. 246)

On their most common interpretation, introduction rules in a natural deduction system (henceforth, I-rules) state the sufficient, and perhaps necessary, conditions for introducing dominant operators in conclusions (in inferentialist parlance, the canonical grounds for introducing such conclusions); elimination rules (henceforth, E-rules) tell us what can be legitimately deduced from sentences containing dominant occurrences of logical operators. Logical inferentialism, then, becomes the claim that the meanings of logical expressions are fully determined by their I- and E-rules.2

As is well known, not any pair of I- and E-rules can determine the meaning of a logical expression, if ill-behaved connectives such as Prior’s tonk are to be ruled out (see Prior 1960). Tonk’s I-rule licenses the inference from A to A-tonk-B, and its E-rule licenses the inference from A-tonk-B to B. Given these rules, if the consequence relation is transitive and at least one theorem can be proved, then any sentence can be proved. The inventor of natural deduction, Gerhard Gentzen, first sketched a solution to the problem. In a famous passage, Gentzen writes:

To every logical symbol &, ∨, ∀, ∃, ⊃, ¬, belongs precisely one inference figure which ‘introduces’ the symbol—as the terminal symbol of a formula—and which ‘eliminates’ it. The fact that the inference figures &-E and ∨-I each have two forms constitutes a trivial, purely external deviation and is of no interest. The introductions represent, as it were, the ‘definitions’ of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions. This fact may be expressed as follows: in eliminating a symbol, we may use the formula with whose terminal symbol we are dealing only ‘in the sense afforded it by the introduction of that symbol’. (Gentzen 1969, p. 80)

Gentzen argues that the I-rules of his newly invented calculus of natural deduction ‘fix’, or ‘define’, the meanings of the expressions they introduce. He also observes that, on this assumption, E-rules cannot be chosen randomly. They must be justified by the corresponding I-rules: they are, in some sense, their ‘consequences’. This key thought expresses in nuce the idea that I- and E-rules must be, in Dummett’s phrase, in harmony with each other. Conversely, if it is thought that E-rules are meaning-constitutive, I-rules cannot be chosen arbitrarily either (see e.g. Dummett 1991, p. 215).

This intuitive idea can be spelled out in a number of ways. Dummett (1991, p. 250) and Prawitz (1974, p. 76) define harmony as the possibility of eliminating maximum formulae, that is, formulae that occur both as the conclusion of an I-rule and as the major premise of the corresponding E-rule (see also Prawitz 1965, p. 34).3 The following reduction procedure, for instance, shows that any proof of B obtained by introducing a complex formula and then immediately eliminating it again can be converted into a proof, from the same or fewer assumptions, that avoids the unnecessary detour through the introduction and elimination of that formula.

Example 1

(Reduction) [proof figure not reproduced in this version]

where r reads ‘reduces to’.

Dummett (1991, p. 250) calls the availability of such procedures intrinsic harmony. He correctly points out, though, that intrinsic harmony only prevents E-rules from being stronger than the corresponding introductions, as in the case of Prior’s tonk. It does not rule out the possibility that they be, so to speak, too weak (see Dummett 1991, p. 287).4 A way to ensure that E-rules be strong enough is to require that they allow one to reintroduce complex sentences, as shown by the following expansion:

Example 2

(Expansion) [proof figure not reproduced in this version], where e reads ‘expands to’.

This shows that any derivation Π of a complex formula can be expanded into a longer derivation which makes full use of both the corresponding I- and E-rules. The availability of an expansion procedure for a pair of I- and E-rules is sometimes referred to as local completeness. Accordingly, a pair of I- and E-rules for a constant $ can be taken to be harmonious tout court (or, in Dummett’s terminology, ‘stable’) if and only if there exist both reduction and expansion procedures for $-I and $-E. Alternative conceptions of harmony are developed in e.g. Read (2000) and Tennant (1997, 2008).5
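To make the two requirements concrete, here is the familiar reduction–expansion pair for conjunction, written in LaTeX (amsmath/amssymb) notation; the omitted figures above may well use a different connective, and the derivation labels Π, Π1, Π2 and the layout are mine:

\[
\frac{\dfrac{\begin{array}{cc}\Pi_{1} & \Pi_{2}\\ A & B\end{array}}{A \land B}\,\land\text{-I}}{B}\,\land\text{-E}
\quad\leadsto\quad
\begin{array}{c}\Pi_{2}\\ B\end{array}
\qquad\qquad
\begin{array}{c}\Pi\\ A \land B\end{array}
\quad\leadsto\quad
\frac{\dfrac{\begin{array}{c}\Pi\\ A \land B\end{array}}{A}\,\land\text{-E}\qquad \dfrac{\begin{array}{c}\Pi\\ A \land B\end{array}}{B}\,\land\text{-E}}{A \land B}\,\land\text{-I}
\]

The left-hand conversion removes a maximum formula (intrinsic harmony); the right-hand one shows that A ∧ B can always be re-obtained from what its E-rules deliver (local completeness).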

But why should logical expressions be governed by harmonious rules? One motivating thought behind the requirement of harmony is that logic is innocent: it shouldn’t allow one to prove atomic sentences that we couldn’t otherwise prove (Steinberger 2009a). Yet another motivating thought has it that I-rules determine, in principle, necessary and sufficient conditions for introducing dominant occurrences of logical operators. For this reason, the thought goes, E-rules should ‘give us back’ the grounds specified by the corresponding I-rules, on the assumption that such grounds are in principle necessary (see e.g. Moriconi and Tesconi 2008, p. 105 and ff). This is in effect what Dummett calls the Fundamental Assumption, that ‘[i]f a statement whose principal operator is one of the logical constants in question can be established at all, it can be established by an argument ending with one of the stipulated I-rules’ (Dummett 1991, p. 251). The Assumption lies at the heart of the proof-theoretic accounts of validity (Prawitz 1985; Dummett 1991). As Prawitz puts it,

it is the whole [inferentialist] project that is in danger when the fundamental assumption cannot be upheld. (Prawitz 2006, p. 523)

If harmony is a necessary condition for logicality, Prior’s challenge is easily met: the tonk rules are spectacularly disharmonious, and hence cannot define a logical connective.6 But the tonk rules are also non-conservative: they allow one to prove sentences in the tonk-free language that were not previously provable in the absence of the rule for tonk (indeed they allow one to prove any such sentence). And indeed, the first response to Prior’s tonk, published by Nuel Belnap in 1962, was precisely that admissible rules should yield conservative extensions of the base systems to which they may be added.7
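For reference, here is a sketch, in LaTeX notation, of Prior’s rules for tonk and of the collapse they induce; the formulation is the standard one from the literature rather than a reproduction of any figure in the original:

\[
\frac{A}{A \ \mathrm{tonk}\ B}\,\mathrm{tonk}\text{-I}
\qquad
\frac{A \ \mathrm{tonk}\ B}{B}\,\mathrm{tonk}\text{-E}
\qquad\qquad
\frac{\dfrac{A}{A \ \mathrm{tonk}\ B}\,\mathrm{tonk}\text{-I}}{B}\,\mathrm{tonk}\text{-E}
\]

Given any previously provable A, transitivity lets the two steps on the right be chained to yield an arbitrary B—precisely the non-conservativeness Belnap objected to.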

The demand for conservativeness is equivalent to the requirement that an admissible logical system be separable, i.e. such that every provable sentence or rule in the system has a proof that only involves either structural rules or rules for the logical operators that figure in that sentence or rule. This requirement is sometimes motivated by the further inferentialist thesis that to understand a linguistic expression is to know its role in inference (Boghossian 2003), i.e. to be able in principle to derive all correct uses of any logical expression one understands. Given separability, the totality of uses of $ (i.e. the derivations of rules and theorems involving sentences with $ as their main logical operator) is derivable from the basic rules for $, and, given the inferentialist account of understanding, one’s grasp of $’s rules is thereby sufficient for knowing $’s meaning.

Logical inferentialists typically assume an atomistic conception of our understanding of logical expressions. That is, they assume that in principle a speaker could understand one logical operator—say, ∧—without understanding ∨, without understanding ¬, and so forth. Thus, Kent Bendall writes that ‘the order in which [...] logical rules are introduced should not matter’ (Bendall 1978, p. 255), since ‘it should not matter in what order one learns [...] the logical operators’ (Tennant 1997, p. 315). In a similar spirit, Dummett claims that in order to understand a sentence formed by means of one connective, one need not understand sentences formed by means of the others (Dummett 1991, p. 223). If to understand a logical expression is to know its role in inference, and if the understanding of logical expressions is atomistic, then it is natural to assume that basic logical rules should be, in Dummett’s terminology, pure, i.e. such that exactly one logical operator figures in them.8

Let orthodox inferentialism be the view that the I- and E-rules of logical expressions must be harmonious and pure, and that any adequate axiomatisation of logic ought to be separable. The view can be traced back to Gentzen and has more recently been defended by Tennant in a number of writings (see e.g. Tennant 1997). Inferentialists such as Dummett and Prawitz relax the requirement of purity, and only require that basic logical rules be harmonious and that admissible axiomatisations of logic be separable. As Dummett puts it:

An impure $-introduction rule will make the understanding of $ depend on the prior understanding of the other logical constants figuring in the rule. Certainly we do not want such a relation of dependence to be cyclic; but there would be nothing in principle objectionable if we could so order the logical constants that the understanding of each depended only on the understanding of those preceding it in the ordering. (Dummett 1991, p. 257)

However, even relaxing the purity requirement in the way Dummett suggests, it is well known that harmony and separability alone are already incompatible with standard axiomatisations of classical logic.

The Inferentialist Argument Against Classical Logic

Proof-theoretic constraints such as harmony and separability rule out Prior’s tonk. But, it may be argued, they rule out much more. For while the rules of intuitionistic logic are harmonious, standard formalizations of classical logic typically aren’t.9 For instance, the classical rule of double negation elimination, which allows one to infer A from ¬¬A, is not in harmony with the standard rule of negation introduction, which allows one to infer ¬A, discharging A, from a derivation of ⊥ from A. The harmonious rule of negation elimination is the intuitionistic rule which allows one to infer ⊥ from ¬A and A. Negation elimination, unlike its classical counterpart, allows one to infer from ¬A precisely what was required to assert ¬A: a derivation of ⊥ from A. It is easy to show that the rule is harmonious in the sense of satisfying both intrinsic harmony and local completeness.

Example 3

(Intuitionistic negation) [reduction and expansion figures not reproduced in this version]
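The omitted figure presumably displays the standard reduction and expansion for the intuitionistic negation rules; a sketch, in LaTeX notation and with my own derivation labels and discharge indices, runs as follows:

\[
\frac{\dfrac{\begin{array}{c}[A]^{1}\\ \Pi_{1}\\ \bot\end{array}}{\neg A}\,\neg\text{-I},1\qquad \begin{array}{c}\Pi_{2}\\ A\end{array}}{\bot}\,\neg\text{-E}
\quad\leadsto\quad
\begin{array}{c}\Pi_{2}\\ A\\ \Pi_{1}\\ \bot\end{array}
\qquad\qquad
\begin{array}{c}\Pi\\ \neg A\end{array}
\quad\leadsto\quad
\frac{\dfrac{\begin{array}{c}\Pi\\ \neg A\end{array}\qquad [A]^{1}}{\bot}\,\neg\text{-E}}{\neg A}\,\neg\text{-I},1
\]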

By contrast, the classical rule of double negation elimination is left, so to speak, in the cold. The same goes for any other distinctively classical rule, such as classical reductio, which allows one to infer A, discharging ¬A, from a derivation of ⊥ from ¬A, or the Law of Excluded Middle, A∨¬A. Classical negation thus appears not to be harmonious.

It might be thought that the problem can be solved by simply supplying an extra set of harmonious I- and E-rules for one of the classical connectives, such as e.g. negation. In this spirit, Weir (1986) proposes alternative introduction and elimination rules for disjunction [the rules are displayed in figures not reproduced in this version]. The rules are pairwise harmonious, but they do not collectively satisfy intrinsic harmony, as a derivation of Weir’s shows (see Weir 1986, pp. 476–478): here there is no way one can in general derive B from a derivation of A from ¬B without appealing to Weir’s rules for disjunction.

Weir’s rules allow one to prove A∨¬A by means of an argument ending with just one application of disjunction introduction (Weir 1986, p. 469), and the rule of double negation elimination is also derivable [the derivations are not reproduced in this version]. However, it is easy to see that the idea of defining a single logical operator by means of multiple sets of harmonious introduction and elimination rules doesn’t work.10 For consider a pair of seemingly innocuous rule sets which, if taken to define a single connective, validate Prior’s rules for tonk (see the sketch after this paragraph). In effect, Weir’s rules could be regarded as defining two harmless, and indeed harmonious, connectives—one governed by ∨-IW1 and ∨-EW1, the other by ∨-IW2 and ∨-EW2—but neither of the two is equivalent to classical disjunction. In Sect. 3, I introduce genuinely harmonious classical rules for ∨.
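One pair of rule sets that would make the point—which may or may not be the pair in the omitted figure—is the following, in LaTeX notation, where • is a hypothetical connective of my own choosing:

\[
\frac{A}{A \bullet B}\,\bullet\text{-I}_{1}
\quad
\frac{A \bullet B}{A}\,\bullet\text{-E}_{1}
\qquad\qquad
\frac{B}{A \bullet B}\,\bullet\text{-I}_{2}
\quad
\frac{A \bullet B}{B}\,\bullet\text{-E}_{2}
\qquad\qquad
\frac{\dfrac{A}{A \bullet B}\,\bullet\text{-I}_{1}}{B}\,\bullet\text{-E}_{2}
\]

Each pair is harmonious on its own (the first makes A • B interderivable with A, the second with B), but if both pairs are taken to govern a single connective, the rightmost derivation reproduces Prior’s tonk.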

Similarly, standard axiomatisations of classical logic are not separable. For instance, some uses of →, such as Peirce’s Law, ((A→B)→A)→A, are only derivable by means of rules for both → and ¬. Intuitionists such as Dummett, Prawitz and Tennant have taken the lack of harmony and separability of standard axiomatisations of classical logic to show that classical rules such as double negation elimination are not logical (or that they are in some other sense defective), and that the logical rules we should adopt are those of intuitionistic logic, i.e. classical logic without the Law of Excluded Middle, double negation elimination and other equivalent rules [or perhaps of a weaker logic still (Tennant 1987, 1997)].11

However, while it is true that standard axiomatisations of classical logic are not harmonious, a number of non-standard axiomatisations of classical logic are both harmonious and separable. In particular, classical logic can be shown to be as proof-theoretically respectable as intuitionistic logic provided rules are given both for asserting and for denying complex statements (Rumfitt 2000; Incurvati and Smith 2010), where denial is taken to be a primitive speech act distinct from the assertion of a negated sentence (Parsons 1984; Smiley 1996). The resulting axiomatisation of classical logic is compatible with the orthodox inferentialist’s strictures (Rumfitt 2000). In particular, the rules for classical negation are as harmonious as the intuitionistic ones: they allow one to deny ¬A given the assertion of A and vice versa, and to deny A given the assertion of ¬A and vice versa. Alternatively, harmonious, pure, and separable axiomatisations of classical logic can be given once multiple conclusions are allowed (Read 2000; Cook 2005), either in a natural deduction or in a sequent-calculus setting.12

Inferentialists typically dismiss both of these moves. For one thing, it is unclear whether denial really is on a par with assertion. On the face of it, our linguistic practice appears to be assertion-based, as opposed to assertion-and-denial-based. For another, while it is possible to make sense of multiple-conclusion calculi, it would also seem that our inferential practice features arguments for at most one conclusion (Rumfitt 2008; Steinberger 2011c). As Ian Rumfitt puts it:

The rarity, to the point of extinction, of naturally occurring multiple-conclusion arguments has always been the reason why mainstream logicians have dismissed multiple-conclusion logic as little more than a curiosity. (Rumfitt 2008, p. 79)

While by no means decisive, these simple considerations make it worthwhile to ask whether an axiomatisation of classical logic that is both assertion-based and single-conclusion can be made consistent with the requirements of harmony, purity, and separability. The next four sections argue that it can, provided absurdity is interpreted as a punctuation sign and we allow for higher-level rules. New rules for disjunction will further make the axiomatisation to be presented in Sect. 6 compatible with Dummett’s Fundamental Assumption. I consider classical disjunction first (Sect. 3), before turning to absurdity (Sect. 4) and higher-level rules (Sect. 5).

Classical Disjunction

From a classical inferentialist perspective, the standard rules for disjunction can be seen as unsatisfactory for at least two reasons.

To begin with, if the logic is classical, the standard introduction rules for ∨ are guaranteed not to respect Dummett’s Fundamental Assumption that, if one can introduce a complex statement, one could in principle introduce it by means of an argument ending with an application of one of the introduction rules for its main logical operator. The classical Law of Excluded Middle is a case in point: since in the present state of information it is not the case that, for every statement A, we can assert either A or its negation, we cannot introduce A∨¬A by means of an argument ending with an application of disjunction introduction, as the Fundamental Assumption requires.

Second, and relatedly, one often hears that the standard introduction rules for disjunction do not actually represent the way disjunctions are asserted in everyday practice, and that the meaning of ‘or’ in ordinary language is radically different from its meaning in logic. The complaint seems reasonable enough: we typically assert AB on the grounds that A and B cannot both be false—not because we already know that one of the disjuncts is true. As Scott Soames puts it:

nearly always when we assert the disjunction of A and B in ordinary language, we do so not because we already know that A is true, or because we already know that B is true. Rather, we assert the disjunction because we have some reason for thinking that it is highly unlikely, perhaps even impossible, that both A and B will fail to be true. (Soames 2003, p. 207)

This suggests the following new rules for disjunction:13 ∨-I allows one to infer A∨B, discharging ¬A and ¬B, from a derivation of ⊥ from ¬A and ¬B; ∨-E allows one to infer ⊥ from A∨B together with ¬A and ¬B (see the sketch after this paragraph). Here the discharge of ¬A and ¬B might be vacuous, i.e. one does not need actually to use, and discharge, both of ¬A and ¬B in order to infer A∨B by one step of ∨-I. Thus, for instance, an inference of A∨B from a derivation of ⊥ from ¬A alone counts as a legitimate application of ∨-I. This in turn highlights ∨-I’s classicality: what in textbook natural deduction systems would be an application of classical reductio (CR) immediately followed by one step of the standard rule of ∨-I is here turned into a single primitive step.14
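In LaTeX notation, the rules just described—together with the vacuous-discharge instance mentioned above—can be displayed as follows (the discharge indices are mine):

\[
\lor\text{-I}:\ \frac{\begin{array}{c}[\neg A]^{i}\quad [\neg B]^{i}\\ \vdots\\ \bot\end{array}}{A \lor B}\,i
\qquad
\lor\text{-E}:\ \frac{A \lor B\qquad \neg A\qquad \neg B}{\bot}
\qquad
\frac{\begin{array}{c}[\neg A]^{i}\\ \vdots\\ \bot\end{array}}{A \lor B}\,\lor\text{-I},i
\]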

The above rules are obviously harmonious: the elimination rule allows one to infer precisely what was required to introduce A∨B in the first place, viz. a derivation of ⊥ from ¬A and ¬B. More precisely, the reduction step is as follows (where, since ∨-I can discharge assumptions vacuously, only one of D2 and D3 might be present):

Definition 4

(∨-reduction) [figure not reproduced in this version]

And here is the corresponding expansion step:

Definition 5

(∨-expansion) [figure not reproduced in this version]
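A sketch of the reduction and expansion of Definitions 4 and 5, in LaTeX notation (the labels D1, D2, D3 follow the text; the discharge indices are mine):

\[
\frac{\dfrac{\begin{array}{c}[\neg A]^{1}\quad [\neg B]^{1}\\ D_{1}\\ \bot\end{array}}{A \lor B}\,\lor\text{-I},1
\qquad \begin{array}{c}D_{2}\\ \neg A\end{array}
\qquad \begin{array}{c}D_{3}\\ \neg B\end{array}}
{\bot}\,\lor\text{-E}
\quad\leadsto\quad
\begin{array}{c}D_{2}\qquad D_{3}\\ \neg A\qquad \neg B\\ D_{1}\\ \bot\end{array}
\]

\[
\begin{array}{c}D\\ A \lor B\end{array}
\quad\leadsto\quad
\frac{\dfrac{\begin{array}{c}D\\ A \lor B\end{array}\qquad [\neg A]^{1}\qquad [\neg B]^{1}}{\bot}\,\lor\text{-E}}{A \lor B}\,\lor\text{-I},1
\]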

With these rules in place, the Law of Excluded Middle is provable on no assumptions via an argument ending with an application of ∨-I, as required by the Fundamental Assumption; one only needs to assume, and then discharge, ¬A and ¬¬A (see the sketch after this paragraph). Moreover, the standard rules for disjunction and the new ones are interderivable given classical reductio or some equivalent rule such as double negation elimination: the standard two-part rule ∨-I can be derived using the new rule ∨-I; the standard rule ∨-E can be derived using classical reductio and the new rule ∨-E; conversely, the new rule ∨-I can be derived using CR from the standard two-part rule ∨-I, and the new rule ∨-E is derivable from the standard rule ∨-E [the derivations are not reproduced in this version].
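By way of illustration, here are sketches, in LaTeX notation, of the proof of the Law of Excluded Middle and of one half of the derivation of the standard two-part ∨-I from the new ∨-I (discharge indices are mine; the omitted figures may differ in layout):

\[
\frac{\dfrac{[\neg\neg A]^{1}\qquad [\neg A]^{1}}{\bot}\,\neg\text{-E}}{A \lor \neg A}\,\lor\text{-I},1
\qquad\qquad
\frac{\dfrac{A\qquad [\neg A]^{1}}{\bot}\,\neg\text{-E}}{A \lor B}\,\lor\text{-I},1
\]

In the left derivation the assumptions discharged by ∨-I are ¬A and ¬(¬A); in the right one ¬B is discharged vacuously, so no use of classical reductio is needed at this stage.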

Classical though they may be, ∨-I and ∨-E do not suffice to yield a proof-theoretically acceptable axiomatisation of classical logic. For one thing, although they allow one to derive the Law of Excluded Middle, they do not yield either double negation elimination or classical reductio. And, absent double negation elimination (or some equivalent rule, such as classical reductio), they do not even yield the standard rule of disjunction elimination. For another, the revised rules are impure, since more than one logical operator figures in their schematic form. They are therefore unacceptable by orthodox inferentialist standards.

Both problems can be solved, provided that classical logicians interpret absurdity as a logical punctuation sign and are willing to allow for higher-level rules in their formalisation of logic. The next two sections introduce these two ingredients in turn.

Absurdity as a Punctuation Sign

It is notoriously difficult to offer an adequate inferentialist account of absurdity. Dag Prawitz suggests that ⊥ be defined by the empty I-rule. That is, in his view, there is no canonical way of introducing ⊥. He writes:

the introduction rule for ⊥ is empty, i.e. it is the rule that says that there is no introduction whose conclusion is ⊥. (Prawitz 2005, p. 685)

In Prawitz’s view, the rule can be shown to be in harmony with ex falso quodlibet,15 the rule that allows one to infer any sentence from ⊥. On the other hand, Dummett has claimed that ⊥ should rather be defined by an infinitary rule of ⊥-introduction whose premises are all the atoms P1, P2, … of the language, which Dummett takes to be jointly inconsistent (see Dummett 1991, pp. 295–296). The idea is to specify canonical grounds for ⊥ that can never obtain: no rich enough language will allow for the possibility in which all atoms, including basic contraries such as ‘This table is all red’ and ‘This table is all white’, can be proved—or so the thought goes. The rule is evidently harmonious with EFQ: one can derive from an assertion of ⊥ precisely what was required for asserting ⊥ in the first place.

Both Prawitz’s and Dummett’s accounts are problematic, however. Dummett’s rule is non-recursive and makes the meaning of ⊥ dependent on the expressiveness of one’s language. After all, it may be argued that atoms need not be in general incompatible. As for Prawitz’s account of ⊥, the very thought that ⊥ has content makes the meaning of negation dependent on the meaning of absurdity, and hence violates the orthodox inferentialist’s demand for purity.

An alternative, and more promising, proposal views ⊥ as a logical punctuation sign (Tennant 1999; Rumfitt 2000). Thus, Tennant writes that

an occurrence of ‘⊥’ is appropriate only within a proof [...] as a kind of structural punctuation mark. It tells us where a story being spun out gets tied up in a particular kind of knot—the knot of a patent absurdity, or self contradiction. (Tennant 1999, p. 204)

Similarly, Rumfitt suggests that ⊥ ‘marks the point where the supposition [...] has been shown to lead to a logical dead end, and is thus discharged, prior to an assertion of its negation’ (Rumfitt 2000, pp. 793–794). On such a view, EFQ becomes a structural rule, i.e. a form of weakening on the right (Steinberger 2009a, 2011b).

Formally, to treat ⊥ as a logical punctuation sign is to switch from a set-formula framework (SET-FMLA), i.e. a framework in which the premises of an argument form a set and its conclusion is always a single formula, to a set-formula-or-empty-set framework (SET-FMLA₀), i.e. a framework in which the premises of an argument form a set and its conclusion is either a single formula or the empty set. Clearly, both options are compatible with the orthodox inferentialist’s rejection of multiple conclusions.16 In the remainder of this paper, I will treat ⊥ as a logical punctuation sign.17

Higher-Level Rules

Now to higher-level rules. Natural deduction systems involve rules, such as arrow introduction, which allow one to discharge assumptions. But what exactly is an assumption? Schroeder-Heister (1984) suggests that to assume some formulae β1, …, βn is technically just to treat these formulae as temporary axioms:

Assumptions in sentential calculi technically work like additional axioms. A formula α is derivable from formulas β1, …, βn in a calculus C if α is derivable in the calculus C′ resulting from C by adding β1, …, βn as axioms. But whereas “genuine” axioms belong to the chosen framework and are usually assumed to be valid in some sense, assumptions bear an ad hoc character: they are considered only within the context of certain derivations. (Schroeder-Heister 1984, p. 1284)

But if assumptions just are ad hoc axioms, one should also be free to use ad hoc rules in the context of a derivation. Thus Schroeder-Heister again:

Instead of considering only ad hoc axioms (i.e. assumption formulas) we can also regard ad hoc inference rules, that is, inference rules [...] used as assumptions. Assumption rules technically work like additional basic rules: α is derivable from assumption formulas β1, …, βn and assumption rules ρ1, …, ρm in C if α is derivable in C′, where C′ results from C by adding β1, …, βn as axioms and ρ1, …, ρm as basic inference rules. (Schroeder-Heister 1984, p. 1285)

Armed with Tennant’s account of absurdity as a logical punctuation sign and with Schroeder-Heister’s higher-level rules, let us now turn to classical logic.

On the foregoing assumptions, modus ponens can be formulated as a higher-level rule: given A→B and a derivation of C from the assumed rule A / B, one may infer C, discharging that rule. The standard rule of arrow elimination is obtained by setting C equal to B (then, given a derivation of A, one may conclude B from A→B and A). Similarly, classical reductio can be rewritten as a structural rule, CRhl: if one can derive a contradiction from the assumption that A itself leads to a contradiction—that is, if one can derive ⊥ from the assumed rule A / ⊥—one can discharge that assumption and infer A. The rule is structural since no logical operator figures in it: recall, following Tennant, we are interpreting ⊥ as shorthand for the empty set, rather than as a propositional constant.18 Finally, our proposed impure rules for disjunction can now be presented as pure harmonious rules. The I-rule, ∨-Ip, states that, if one can derive absurdity from the rules A / ⊥ and B / ⊥, one may discharge the rules and infer A∨B. Conversely, the corresponding E-rule, ∨-Ep, states that, given a proof of A∨B, one may infer ⊥ from the rules A / ⊥ and B / ⊥ (see the sketch after this paragraph). It is easy to show that this pair of I- and E-rules is just as harmonious as its impure counterpart {∨-I, ∨-E}.
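A sketch, in LaTeX notation, of how the higher-level rules just described might be displayed (the exact layout of the omitted figures may differ; the label MPhl for higher-level modus ponens and the discharge indices are mine):

\[
\mathrm{MP}_{hl}:\ \frac{A \to B\qquad \begin{array}{c}[A\,/\,B]^{i}\\ \vdots\\ C\end{array}}{C}\,i
\qquad
\mathrm{CR}_{hl}:\ \frac{\begin{array}{c}[A\,/\,\bot]^{i}\\ \vdots\\ \bot\end{array}}{A}\,i
\]

\[
\lor\text{-I}_{p}:\ \frac{\begin{array}{c}[A\,/\,\bot]^{i}\quad [B\,/\,\bot]^{i}\\ \vdots\\ \bot\end{array}}{A \lor B}\,i
\qquad
\lor\text{-E}_{p}:\ \frac{A \lor B\qquad A\,/\,\bot\qquad B\,/\,\bot}{\bot}
\]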

The new rules ∨-Ip and ∨-Ep, CRhl, and the standard I- and E-rules for conjunction, implication, and negation together afford a harmonious and pure axiomatisation of classical propositional logic (henceforth, CPL), in which each of the connectives is treated as a primitive.19 Call this formalization Ncp.

In keeping with Schroeder-Heister’s original treatment of higher-level rules, Ncp only allows for the assumption of rules. However, once rules can be assumed, it is difficult to see why rules couldn’t also figure as conclusions. Consider the structural rule  / -I, which allows one to derive the rule A / B from a derivation of B from A, discharging A (depending on graphic convenience, the rule A / B is sometimes displayed vertically). The parentheses around the derived rule ensure unique readability: they indicate that the object (A / B), as opposed to simply A, follows from a derivation of B from A.20 The rule is naturally paired with a second, also purely structural, rule,  / -E, which says that, given the rule A / B, B can be derived given A (both rules are sketched after this paragraph).
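In LaTeX notation, the two structural rules just described can be displayed roughly as follows (the discharge index is mine):

\[
/\text{-I}:\ \frac{\begin{array}{c}[A]^{i}\\ \vdots\\ B\end{array}}{(A\,/\,B)}\,i
\qquad
/\text{-E}:\ \frac{(A\,/\,B)\qquad A}{B}
\]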

The introduction and immediate elimination of  /  gives rise to what we may call a maximum rule, i.e. a rule occurrence that is both the consequence of an application of /-I and the major premise of an application of /-E. Unsurprisingly, maximum rules can be ‘levelled’, as shown by the following reduction:

Definition 6

( / -reduction) [figure not reproduced in this version]

The definition of intrinsic harmony given in Sect. 1 can be generalised accordingly, as the possibility of eliminating maximum formulae and rules.

Although they bear a close resemblance to -I and -E, the structural rules  / -I and  / -E should be sharply distinguished from the operational rules -I and -E: while -I and -E allow one to respectively introduce and eliminate an operator,  / -I and  / -E allow one to respectively introduce and eliminate a rule.

It might be insisted that  / -I and  / -E are just →-I and →-E in disguise. However, the objection would miss the point: from the fact that A / B could be interpreted as A→B, it doesn’t follow that it is to be so interpreted. An analogy helps to illustrate the point. Consider a bilateralist setting, where + and − are force signs, +A and -A are to be respectively read as ‘A? Yes’ and ‘A? No’, the assumption of +A is to be interpreted as ‘A? Suppose yes’, and ⊥ is interpreted as the empty set. Now consider the bilateralist form of indirect proof, RED, which allows one to infer -A, discharging +A, from a derivation of ⊥ from the assumption +A. Since + and − are force signs that don’t affect propositional content, RED is effectively a structural rule that, in a bilateralist framework, allows one to deny A given a derivation of ⊥ from the assumption +A. It could be objected that RED is a form of negation introduction in disguise (Murzi and Hjortland 2009, p. 486). But the point would not be well taken. For while the denial force sign in RED could be interpreted as an external negation, it doesn’t follow from this that it is to be so interpreted (Incurvati and Smith 2010, pp. 9–10).

Now let Ncp+ be the result of closing Ncp under  / -I and  / -E. To give the reader a feel for the new system, we prove two classical principles. We first prove the Excluded Middle:

Example 7

(Excluded middle) [derivation not reproduced in this version]
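The derivation in Example 7 is not reproduced here; one way it might go, in LaTeX notation, uses ∨-Ip, ¬-I, and applications of the assumed rules (a label such as (A/⊥)¹ marks an application of an assumed rule; the discharge indices are mine):

\[
\frac{\dfrac{\dfrac{\dfrac{[A]^{3}}{\bot}\,(A/\bot)^{1}}{\neg A}\,\neg\text{-I},3}{\bot}\,(\neg A/\bot)^{2}}{A \lor \neg A}\,\lor\text{-I}_{p},1,2
\]

The two assumed rules A / ⊥ and ¬A / ⊥ are discharged by the final step of ∨-Ip, so the proof ends with an introduction rule, as the Fundamental Assumption requires.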

We then prove Peirce’s Law in rule form, only making use of rules for → (and structural rules):

Example 8

(Peirce’s rule) [derivation not reproduced in this version]
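The derivation in Example 8 is likewise not reproduced; a sketch of one way to prove the rule ((A→B)→A) / A using only the rules for →, CRhl, and  / -I (in LaTeX, as a linear listing with my own step numbering) is:

\begin{align*}
&1.\ (A \to B) \to A && \text{assumption, discharged at step 10 by } /\text{-I}\\
&2.\ A\,/\,\bot && \text{assumed rule, discharged at step 9 by } \mathrm{CR}_{hl}\\
&3.\ A && \text{assumption, discharged at step 6 by } \to\text{-I}\\
&4.\ \bot && \text{from 3, by the assumed rule } A\,/\,\bot\\
&5.\ B && \text{from 4, by } \mathrm{CR}_{hl} \text{ with vacuous discharge of } B\,/\,\bot\\
&6.\ A \to B && \text{from 3–5, by } \to\text{-I}\\
&7.\ A && \text{from 1 and 6, by } \to\text{-E}\\
&8.\ \bot && \text{from 7, by the assumed rule } A\,/\,\bot\\
&9.\ A && \text{from 2–8, by } \mathrm{CR}_{hl}\\
&10.\ ((A \to B) \to A)\,/\,A && \text{from 1–9, by } /\text{-I}
\end{align*}

Note that step 5 needs no separate ex falso quodlibet: as observed in footnote 19, EFQ is just CRhl with vacuous discharge.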

The next section shows that Ncp+ is not only harmonious, but also satisfies the more demanding requirement of separability.

Normalization for Ncp+

Following (and generalising) Prawitz (1965), we prove normalization and subformula property theorems for Ncp+. The subformula property theorem entails the separability property as an immediate corollary. First, we define Ncp+.

Definition 9

Formulae of Ncp+ are built up from atoms and from the standard binary connectives ∧, ∨, →, and the unary connective ¬. Absurdity (⊥) is a logical ‘punctuation sign’, and hence not an atom. The rules for ∧, →, and ¬ are the standard ones: ∧-I, ∧-E, →-I, →-E, ¬-I, ¬-E. The rules for ∨ are non-standard: ∨-Ip and ∨-Ep. There are three structural rules: CRhl,  / -I, and  / -E.

Definition 10

Objects of Ncp+ are divided into levels. Atomic formulae and compound formulae of the form ¬A, A∧B, A∨B, and A→B are of level 0. Rules of the form A / B are of level 1. Rules of the form (A / B) / C or A / (B / C) are of level 2. And so on.

I use Greek letters γ, δ (possibly with subscripts) as metavariables ranging over formula occurrences, occurrences of ⊥, and rule occurrences. We then prove in three easy steps that Ncp+ really gives us classical propositional logic.

Fact 11

The operational rules of Ncp+ are pure.

Lemma 12

The standard disjunction rules ∨-I and ∨-E are interderivable with ∨-Ip and ∨-Ep, given CR.

Proof: left as an exercise to the reader (the proof is essentially already given in Sect. 3).

Lemma 13

CRhl and CR are interderivable in minimal logic.

Proof: It is enough to observe that ¬A and the rule A / ⊥ are interderivable: A / ⊥ can be derived from ¬A, and conversely (both derivations are sketched below).
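A sketch of the two derivations, in LaTeX notation (the discharge indices are mine):

\[
\frac{\dfrac{\neg A\qquad [A]^{1}}{\bot}\,\neg\text{-E}}{(A\,/\,\bot)}\,/\text{-I},1
\qquad\qquad
\frac{\dfrac{(A\,/\,\bot)\qquad [A]^{1}}{\bot}\,/\text{-E}}{\neg A}\,\neg\text{-I},1
\]

Given this interderivability, any application of CR can be simulated by CRhl and vice versa, which is what the lemma requires.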

Corollary 14

Ncp+ is a sound and complete axiomatisation of CPL.

Proof: this follows from Lemmas 12 and 13, given the observation that minimal logic together with CR yields a sound and complete axiomatisation of CPL.

Next, we define the notions of maximum rule, local peak, normal deduction, and subformula:

Definition 15

(Maximum formula) A maximum formula in Π is a formula occurrence in Π that is the consequence of an application of an I-rule or of a ⊥-rule (namely, CR, CRhl, or EFQ) and the major premise of an E-rule.

Definition 16

(Maximum rule) A maximum rule in Π is a rule occurrence in Π that is the consequence of an application of an I- rule and the major premise of an E-rule.

Definition 17

(Local peak) A local peak in Π is either a maximum formula or a maximum rule in Π.

Definition 18

(Normal deduction) A normal deduction is a deduction which contains no local peaks.

Definition 19

(Subformula) The notion of a subformula in Ncp+ is inductively defined by the following clauses:

  1. A is a subformula of A;

  2. A is a subformula of ¬A;

  3. A and B are subformulae of A / B;

  4. If B∧C, B∨C, or B→C is a subformula of γ (where γ may be a formula or a rule), then so are B and C.

We can now prove that every deduction in Ncp+ converts into a normal deduction. To this end, we first need to show that local peaks can always be removed.

Let Π be a derivation of E from Γ that contains a local peak γ that is the consequence of an application of an I-rule and the major premise of an application of an E-rule. Then, following Prawitz (1965, p. 36), we say that Π′ is a reduction of Π at γ if Π′ is obtained from Π by removing γ by an application of a reduction procedure. The reduction for our modified disjunction rules is as follows:

Definition 20

(∨-reduction) [figure not reproduced in this version]

The reduction for / has been introduced in Definition 6. The remaining conversion steps are standard (see Prawitz 1965, Chapter 2).

In our next step, we prove that we can restrict applications of CRhl to the case where its conclusion is atomic.

Theorem 21

(CRhl-restriction) Applications of CRhl can always be restricted to the case where the conclusion is atomic.

Proof

We generalise Prawitz’s original proof (Prawitz 1965, pp. 39–40) to the present case involving our higher-level rules for disjunction and the higher-level structural rule CRhl. Let Π be a deduction in Ncp+ of A from Γ in which the highest degree of a consequence of an application of CRhl is d, where d>0 and the degree of a formula is defined as the number of occurrences of logical operators in it (see Prawitz 1965, p. 16). Let F be a consequence of an application α of CRhl in Π such that its degree is d but no consequence of an application of CRhl in Π that stands above F is of degree greater than or equal to d. Then Π has the form displayed in the original figure (not reproduced in this version), where the indicated set of derivations is discharged by α, and F has one of the following forms: ¬A, A∧B, A→B, or A∨B.21 In the respective cases, we transform Π into derivations which either do not contain applications of CRhl or have consequences of applications of CRhl of degree less than d. [The transformations for negation, conjunction, and the conditional are not reproduced in this version.]

The case for disjunction can be dealt with similarly [transformation not reproduced in this version]. The new applications of CRhl (if any) have consequences of degree less than d. Hence, by successive applications of the above procedures, we finally obtain a deduction of A from Γ in which every consequence of every application of CRhl is atomic.

We now generalise Prawitz’s proof that his axiomatisation of CPL is normalisable. We begin with some definitions, largely following Prawitz (1965, p. 25 and ff; and p. 41).

Definition 22

(Top and end formulae) A top-formula in a formula-tree Π is a formula occurrence or occurrence of ⊥ that does not stand immediately below any formula occurrence or occurrence of ⊥ in Π. An end-formula in a formula-tree Π is a formula occurrence or occurrence of ⊥ that does not stand immediately above any formula occurrence or occurrence of ⊥ in Π.

Definition 23

(Top and end rules) A top-rule in a formula-tree Π is a rule occurrence that does not stand immediately below any formula occurrence or occurrence of ⊥ in Π. An end-rule in a formula-tree Π is a rule occurrence that does not stand immediately above any formula occurrence or occurrence of ⊥ in Π.

Definition 24

(Thread) A sequence γ1, γ2, …, γn of formula occurrences, occurrences of ⊥, or rule occurrences in a formula-tree Π is a thread in Π if (i) γ1 is a top-formula or a top-rule in Π, (ii) γi stands immediately above γi+1 in Π for each i<n, and (iii) γn is the end-formula or end-rule of Π. We say that γi stands above (below) γj if i<j (i>j).

Definition 25

(Subtree) If γ is a formula occurrence, an occurrence of ⊥, or a rule occurrence in the tree Π, the subtree of Π determined by γ is the tree obtained from Π by removing all formula occurrences, occurrences of ⊥, and rule occurrences except γ and the ones above γ.

Definition 26

(Side-connectedness) Let γ be a formula occurrence, an occurrence of ⊥, or a rule occurrence in Π, let (Π1, Π2, …, Πn / γ) be the subtree of Π determined by γ, and let γ1, γ2, …, γn be the end-formulae or end-rules of, respectively, Π1, Π2, …, Πn. We then say that γi is side-connected with γj, for any i, j ≤ n with i ≠ j.

Definition 27

(Branches) A branch in a deduction is the initial part γ1, γ2, …, γn of a thread in the deduction such that either (i) γn is the first formula occurrence in the thread that is the minor premise of an application of either →-E or ¬-E, or the formula occurrence or occurrence of ⊥ in the thread that is the minor premise of  / -E or a minor premise of ∨-Ep; or (ii) γn is the last formula occurrence of the thread (i.e. the end-formula of the deduction) if there is no such minor premise in the thread. A branch that is also a thread, and thus contains no minor premise of →-E, ¬-E,  / -E, or ∨-Ep, is a main branch.

Theorem 28

(Normalization) If Γ ⊢Ncp+ γ, then there is a normal deduction in Ncp+ of γ from Γ (where Γ is a possibly empty set of formulae or rules).

Proof

Let Π be a deduction in Ncp+ of γ that is as described in Theorem 21. Let the degree of a rule R be the number of occurrences of logical operators in R (recall, ⊥ is not a logical operator). Now let δ be a local peak in Π such that there is no other local peak in Π of higher degree than that of δ and such that local peaks in Π that stand above a formula occurrence side-connected with δ (if any) have lower degrees than δ. Let Π′ be a reduction of Π at δ. The new local peaks that may arise from this reduction are all of lower degree than that of δ. Moreover, Π′ is still as described above. Hence, by a finite number of reductions, we obtain a normal deduction of γ from Γ.22

Theorem 29

Let Π be a normal deduction in Ncp+, and let β = γ1, γ2, …, γn be a branch in Π. Then there is a formula occurrence, occurrence of ⊥, or rule occurrence γi, called the local valley in β, which separates two (possibly empty) parts of β, respectively called the E- and I-part of β, with the following properties:

  1. Each formula or rule occurrence γj in the E-part (i.e. j<i) is a major premise of an E-rule and contains γj+1 as a subformula.

  2. γi, provided i ≠ n, is a premise of an I-rule or of CRhl.

  3. Each formula γj in the I-part except the last one (i.e. i<j<n) is a premise of an I-rule and is a subformula of γj+1.

Proof

The formula or rule occurrences in β that are major premises of E-rules precede all formula occurrences or occurrences of ⊥ in β that are premises of I-rules or of CRhl. Otherwise, there would be a first formula or rule occurrence in β which is a major premise of an E-rule but succeeds a premise of an I-rule or of CRhl, and such a formula or rule occurrence would be a local peak, contrary to the assumption that Π is normal. Now let γi be the first formula occurrence or occurrence of ⊥ in β that is a premise of an I-rule or of CRhl or, if there is no such occurrence, let γi be γn. Then γi is a local valley as described in the theorem. Obviously, γi satisfies both 1. and 2. Moreover, every formula occurrence or occurrence of ⊥ γj such that i<j<n is a premise of an I-rule or of CRhl. However, the latter possibility is excluded, since a premise of CRhl is an occurrence of ⊥ and can be the consequence of an E-rule only. Hence, 3. is also satisfied.

Corollary 30

(Subformula property) Each formula occurring in a normal deduction Π of γ from Γ is a subformula of γ or of one of the formulae in Γ.

Prawitz (1965, pp. 42–43) proves this result for his own formalization of CPL, which includes the rules for ∧, →, and CR, and where ¬A is defined as A→⊥. In Prawitz’s system, the theorem holds for every formula in Π, ‘except for assumptions discharged by applications of CR and for occurrences of ⊥ that stand immediately below such assumptions’. Prawitz’s proof carries over to Ncp+, this time without exceptions. Informally, this can be shown by considering, in the new Ncp+ setting, the exceptions to Prawitz’s original theorem, viz. that (i) assumptions discharged by applications of CR and (ii) occurrences of ⊥ that stand immediately below such assumptions may not be subformulae of either γ or some of the formulae in Γ. Concerning (i), we notice that it is a consequence of Prawitz’s theorem that, if B / ⊥ is an assumption discharged by CRhl in a normal deduction of A from Γ, then B is a subformula of A or of one of the formulae in Γ. As for (ii), the problem disappears as soon as we treat ⊥ as a logical punctuation sign. For a fuller proof, we first order branches according to the following definition, still following and generalising Prawitz’s original proof.

Definition 31

(Ordering of branches) A main branch (i.e. a branch that ends with an end-formula of Π) has order 0. A branch that ends with a minor premise of an application of →-E,  / -E, ¬-E, or ∨-Ep is of order n+1 if the major premise of this application has order n.

We now prove Corollary 30 by induction on the order of branches.

Proof

Let Π be a normal deduction in Ncp+. We show that the corollary holds for all formula occurrences or occurrences of ⊥ in a branch of order p if it holds for formula occurrences in branches of order less than p. Let β be γ1, γ2, …, γn and let γi be the local valley of β. For γn the assertion is immediate: either γn = γ, or γn is a minor premise of an application of →-E, ¬-E, ∨-Ep, or  / -E with a major premise of the form either A→B, or ¬A, or A∨B, or A / B that belongs to a branch of order p−1. Hence, by Theorem 29, the corollary holds for all γj such that i<j<n. If γ1 is not discharged by an application of CRhl or ∨-Ip, then either γ1 ∈ Γ or γ1 is a formula A1 discharged by an application α of either →-I, ¬-I, or  / -I such that the consequence of α has the form either A1→B, or ¬A1, or A1 / B and belongs to the I-part of β or to some branch of order less than p. Hence, in this case, A1 is a subformula of the required kind, and, by Theorem 29, the same holds for all Aj such that j ≤ i. Finally, if γ1 is a rule discharged by an application of CRhl or of ∨-Ip, then γ1 is a minor premise of ∨-Ep, and so γ1 = γn; hence, also in the latter three cases, the proof is complete.

Theorem 32

(Separation property) Any normal deduction only consists of applications of the rules for the connectives occurring in the undischarged assumptions, if any, or in the conclusion, plus possibly CRhl.

Proof

This follows at once from Corollary 30, by inspection of the inference rules.

Objections and Replies

Recall, the intuitionist’s contention was that classical logic cannot be regimented in a proof-theoretically acceptable way: classical logic, the intuitionist complained, is bound to be inharmonious or inseparable. The foregoing formalization of classical logic, if acceptable at all, shows that this accusation is misplaced. Ncp+ provides a single-conclusion and assertion-based axiomatisation of CPL satisfying the orthodox inferentialist’s requirements of harmony, purity, and separability. The intuitionist’s error, classical inferentialists may diagnose, was to think that the extra deductive power enjoyed by negation, disjunction, and implication in classical logic had to be owed to their respective I- and E-rules. But, classical inferentialists may argue, this was a mistake: the extra deductive power essentially derives from a different (and richer) understanding of ⊥.

Intuitionists might object that the foregoing axiomatisation of classical logic, if proof-theoretically kosher, is incompatible with inferentialism. Rumfitt has recently made the point. As he puts it:

A set/formula sequent represents an actual argument, in which a reasoner passes from a set of premises to a conclusion. Hence the correctness of such a sequent can be related to the intuitive acceptability of the corresponding inferential passage. Where a speaker fails to reach such a conclusion, however, we do not have an inference; we merely have a list of statements. Accordingly, we cannot explain the correctness of a set/formula-or-empty sequent directly in terms of the intuitive acceptability of an inference. (Rumfitt 2017, p. 237)

The argument fails to convince, though. Consider, for instance, the rule of negation elimination, which allows one to infer ⊥ from ¬A and A, where ⊥ is interpreted as a logical punctuation sign, i.e. as the empty set. The rule correctly represents a plausible pattern of inference: upon deriving both A and ¬A, a rational agent stops her reasoning and examines instead which of the assumptions on which A and ¬A depend must be given up.

It may be insisted that, qua structural rule, CRhl, and hence classicality, has not been proof-theoretically justified. As Priest puts it:

[I]ntroduction and elimination rules are superimposed on structural inferential rules [...] and the question therefore arises as to how these are to be justified. (Priest 2006, p. 179)

However, a parallel argument would show that intuitionistic logic cannot be fully proof-theoretically justified either, since intuitionistically valid structural principles such as (say) weakening and contraction do not appear to be justifiable by means of proof-theoretic requirements such as harmony and separability. The inferentialist requirements of harmony, purity, and separability pertain (and have always pertained) to logical operators, and it is consistent with these requirements that structural rules be justified, or criticised, in non-proof-theoretic ways.

Intuitionists might retort that, although this may well be true, classical logicians need stronger structural assumptions, which, they may add, still makes classical logic epistemically suspect. But all that follows from this is that the proper intuitionist challenge to the classical logician is not a proof-theoretic one. Rather, it must be directed to the classicist’s extra structural assumptions. More precisely, in the foregoing framework, the challenge should be directed to the classicist’s logic of absurdity. Stephen Read makes the point, albeit in a slightly different context:

The constructivist can still mount a challenge to classical logic. But we now see where that challenge should be concentrated—and where it is misguided. The proper challenge is to Bivalence, and to the classical willingness to assert disjunctions, neither of whose disjuncts is separately justified [...]. (Read 2000, pp. 151–152)

In the present framework, the challenge should be mounted to the inferentialist’s willingness to infer A if the assumption that A leads to a dead end (less figuratively: the rule ‘From A, infer ⊥’) itself leads to a dead end (yields ⊥).

Conclusions

Dummett once wrote that the proof-theoretic deficiency of the classical rules for negation (in a standard SET-FMLA setting) is ‘a strong ground for suspicion that the supposed understanding [of classical negation] is spurious’ (Dummett 1991, p. 299). However, even conceding that the meaning and understanding of logical connectives are inexorably tied to the proof-theoretic demands of harmony, purity, and separability, Dummett’s conclusion is unwarranted: pace the intuitionist’s contention that classical logic is proof-theoretically defective, Ncp+ enjoys exactly the same proof-theoretic properties as the standard axiomatisations of intuitionistic logic. Moreover, the new rules for disjunction allow one to prove directly the Law of Excluded Middle, thus vindicating the inferentialist thought that ‘what is implicit in the totality of cases of the introduction-rule for a connective is that they exhaust the grounds for assertion of that specific conclusion’ (Read 2008, p. 289). Classical logic may well be shown to be defective—for instance, on the grounds that it is incompatible with an anti-realist metaphysics (see e.g. Wright 1992, Chapter 2), or that it does not accommodate the semantic and soritical paradoxes (see e.g. Field 2008). But, even assuming an orthodox inferentialist account of logical expressions, the grounds for restricting certain classical principles are not proof-theoretic.

Acknowledgements

Open access funding provided by Paris Lodron University of Salzburg.

Footnotes

1

I should stress at the outset that, even though the axiomatisation is novel, some of its main ingredients have been present in the literature for some time. For one thing, the De Morgan-like rules for disjunction to be introduced in Sect. 3 are already briefly discussed in Murzi (2010 Ch. 7, §4.12), Murzi and Steinberger (2013, p. 181, fn. 37), and, more recently, Prawitz (2015, p. 29) and Pereira and Rodriguez (2017, p. 1156). For another, the interpretation of classical reductio as a structural rule offered in Sect. 5 can already be found, in essence, in Schroeder-Heister’s dissertation, only available in German (see Schroeder-Heister 1981, §18, Absurdität als Grundbegriff, p. 241 and ff). Among other things, Schroeder-Heister also indicates how to modify his normalisation theorem for classical logic in order for it to apply to an axiomatisation of classical logic in which classical reductio is structural. The ideas and results presented in this paper were found independently of Schroeder-Heister, and the proof strategies adopted for proving the normalisation and separability results of Sect. 6 are imported from Prawitz rather than Schroeder-Heister. I am grateful to an anonymous reviewer for bringing his dissertation to my attention in February 2018.

2

See Popper (1947, p. 220), Kneale (1956, pp. 254–255), and Dummett (1991, p. 247).

3

We will give a precise and slightly more general definition of a maximum formula in Sect. 6 below.

4

For instance, a connective satisfying the standard I-rule for ∧ but only one of its E-rules would be intrinsically harmonious, and yet intuitively disharmonious: its E-rule would not allow one to infer from A∧B all that was required to introduce A∧B in the first place.

5

For an overview see Steinberger (2011b). Tennant’s conception of harmony is further discussed in Steinberger (2009b), Tennant (2010), and Steinberger (2011a).

6

Whether harmony is also a sufficient condition for logicality is a more delicate question. See Read (2000).

7

See also e.g. Hacking (1979, pp. 237–238) and Dummett (1991, pp. 217–218), and the discussion in Steinberger (2011b). For a recent critical discussion of the requirement of harmony, see Rumfitt (2017).

8

The requirement of purity is compatible with multiple occurrences of the same logical operator within the same rule.

9

See Prawitz (1977, p. 36), Dummett (1991, pp. 296–300) and Tennant (1997, pp. 308–321).

10

I owe this point to Dominic Gregory, to whom I am very much indebted.

11

See e.g. Dummett (1991, p. 299).

12

Sequent calculi axiomatisations of intuitionistic and classical logic are exactly alike, except that classical sequent calculi allow for sequents with multiple premises and multiple conclusions. In turn, such sequents can be plausibly interpreted as saying that one may not assert all the antecedents and deny all the succedents, where, again, assertion and denial are both seen as primitive speech acts (Restall 2005).

13

To my knowledge, these rules were first discussed in Murzi (2010, Ch. 7, §4.12) and Murzi and Steinberger (2013, p. 181, fn. 37). For a more recent discussion, see Prawitz (2015, p. 29) and Pereira and Rodriguez (2017, p. 1156).

14

Many thanks to an anonymous referee for suggesting this observation.

15

See Prawitz (1973, p. 243), Read (2000, p. 139), and Negri and von Plato (2001, p. 8).

16

In a recent paper, Lloyd Humberstone and David Makinson argue against justifications of intuitionistic logic based on proof-theoretic properties of basic I- and E-rules (what they call ‘elementary rules’), essentially on the assumption that any acceptable axiomatisation of logic ought to be assertion-based and SET-FMLA (Humberstone and Makinson 2011). As they show, on this assumption, it is not even possible to provide acceptable I-rules for ¬ and , let alone harmonious such rules. However, as they also observe, their results do not carry over to a SET-SET framework.

17

It is worth noticing that, as an added bonus, the empty-set interpretation of ⊥ also helps to solve Carnap’s so-called Categoricity Problem (see e.g. Carnap 1943), without resorting to multiple conclusions or rules for denial, and without postulating the existence of a necessarily false sentence, as in Garson (2013).

18

As a referee pointed out to me, a version of CRhl is discussed in Schroeder-Heister (1981, §18, p. 241 and ff).

19

Given negation introduction and negation elimination, ∨-Ip and ∨-Ep are equivalent to ∨-I and ∨-E, which we have already shown to be interderivable, given CRhl or some classically equivalent rule, with the standard I- and E-rules for disjunction. There is no need for ex falso quodlibet, which is just a special case of CRhl, if we are allowed vacuous discharge of assumptions.

20

Without parentheses,  / -I would allow one to invalidly infer A, and then B, from a derivation of B from A.

21

Prawitz’s original proof only covers the cases of ∧ and →, since, in his system, the other connectives are defined.

22

See Prawitz (1965, pp. 40–41). Notice that Prawitz’s Lemma on permutative reductions (see Prawitz 1965, pp. 49–51) need not be repeated here, since Ncp+ does not contain general elimination rules such as the standard rule of disjunction elimination.

This paper has long been in the making. I wish to thank audiences in Sheffield, Amsterdam, Canterbury, Bristol, and Munich for valuable comments on some of the materials presented herein. Many thanks to Bob Hale, Dominic Gregory, Ole Hjortland, Dick de Jong, David Makinson, Dag Prawitz, Stephen Read, Lorenzo Rossi, Ian Rumfitt, Florian Steinberger, and Neil Tennant, for discussion and feedback over the years, and to two anonymous referees for invaluable comments on previous drafts of this paper that led to substantial improvements. I finally wish to thank the Analysis Trust, the Alexander von Humboldt Foundation, the British Academy, and the FWF (Grant No. P2971-G24) for generous financial support during the time this paper was written. This paper is dedicated to the memory of my Ph.D. supervisor, Bob Hale.

References

  1. Bendall K. Natural deduction, separation, and the meaning of the logical operators. Journal of Philosophical Logic. 1978;7:245–276. doi: 10.1007/BF00245930.
  2. Boghossian P. Blind reasoning. Proceedings of the Aristotelian Society. 2003;77(1):225–248. doi: 10.1111/1467-8349.00110.
  3. Carnap R. Formalization of logic. Cambridge, MA: Harvard University Press; 1943.
  4. Cook R. Intuitionism reconsidered. In: Shapiro S, editor. The Oxford handbook of philosophy of mathematics and logic. Oxford: Oxford University Press; 2005. pp. 387–411.
  5. Dummett M. The logical basis of metaphysics. Harvard, MA: Harvard University Press; 1991.
  6. Field H. Saving truth from paradox. Oxford: Oxford University Press; 2008.
  7. Garson J. What logics mean. Cambridge: Cambridge University Press; 2013.
  8. Gentzen G. Collected papers. Amsterdam: North Holland; 1969.
  9. Hacking I. What is logic? Journal of Philosophy. 1979;76:285–319. doi: 10.2307/2025471.
  10. Humberstone L, Makinson D. Intuitionistic logic and elementary rules. Mind. 2011;120(480):1035–1051. doi: 10.1093/mind/fzr076.
  11. Incurvati L, Smith P. Rejection and valuations. Analysis. 2010;69(4):3–10. doi: 10.1093/analys/anp134.
  12. Kneale W. The province of logic. In: Lewis HD, editor. Contemporary British philosophy. London: George Allen and Unwin Ltd; 1956. pp. 237–261.
  13. Moriconi E, Tesconi L. On inversion principles. History and Philosophy of Logic. 2008;29(2):103–113. doi: 10.1080/01445340701830334.
  14. Murzi, J. (2010). Intuitionism and logical revision. Ph.D. thesis, University of Sheffield.
  15. Murzi J, Hjortland OT. Inferentialism and the categoricity problem: Reply to Raatikainen. Analysis. 2009;69(3):480–488. doi: 10.1093/analys/anp071.
  16. Murzi J, Steinberger F. Is knowledge of logic dispositional? Philosophical Studies. 2013;166:165–183. doi: 10.1007/s11098-012-0063-9.
  17. Negri S, von Plato J. Structural proof theory. Cambridge: Cambridge University Press; 2001.
  18. Parsons T. Assertion, denial and the liar paradox. Journal of Philosophical Logic. 1984;13:136–152. doi: 10.1007/BF00453019.
  19. Pereira LC, Rodriguez RO. Normalization, soundness and completeness for the propositional fragment of Prawitz’ ecumenical system. Revista Portuguesa de Filosofia. 2017;73(3/4):1153–1168. doi: 10.17990/RPF/2017_73_3_1153.
  20. Popper K. New foundations for logic. Mind. 1947;56(223):193–235. doi: 10.1093/mind/LVI.223.193.
  21. Prawitz D. Natural deduction. Stockholm: Almqvist and Wiksell; 1965.
  22. Prawitz D. Towards a foundation of a general proof theory. In: Suppes P, Henkin L, Joja A, Moisil GC, editors. Logic, methodology and the philosophy of science IV: Proceedings of the fourth international congress. Amsterdam: North Holland; 1973. pp. 225–250.
  23. Prawitz D. On the idea of a general proof theory. Synthese. 1974;27:63–77. doi: 10.1007/BF00660889.
  24. Prawitz D. Meaning and proofs: On the conflict between classical and intuitionistic logic. Theoria. 1977;43:1–40.
  25. Prawitz D. Remarks on some approaches to the concept of logical consequence. Synthese. 1985;62:153–171. doi: 10.1007/BF00486044.
  26. Prawitz D. Logical consequence: A constructivist view. In: Shapiro S, editor. The Oxford handbook of philosophy of mathematics and logic. Oxford: Oxford University Press; 2005. pp. 671–695.
  27. Prawitz D. Meaning approached via proofs. Synthese. 2006;148:507–524. doi: 10.1007/s11229-004-6295-2.
  28. Prawitz D. Classical versus intuitionistic logic. In: Why is this a proof? London: College Publications; 2015. pp. 15–32.
  29. Priest G. Doubt truth to be a liar. Oxford: Oxford University Press; 2006.
  30. Prior A. The runabout inference-ticket. Analysis. 1960;21(2):38–39. doi: 10.1093/analys/21.2.38.
  31. Read S. Harmony and autonomy in classical logic. Journal of Philosophical Logic. 2000;29:123–154. doi: 10.1023/A:1004787622057.
  32. Read S. Harmony and necessity. In: Dégremont C, Keiff L, Rückert H, editors. On dialogues, logics and other strange things. London: King’s College Publications; 2008. pp. 285–303.
  33. Restall G. Multiple conclusions. In: Hájek P, Valdés-Villanueva L, Westerståhl D, editors. Logic, methodology and the philosophy of science: Proceedings of the twelfth international congress. London: King’s College Publications; 2005. pp. 189–205.
  34. Rumfitt I. ‘Yes’ and ‘No’. Mind. 2000;109:781–824. doi: 10.1093/mind/109.436.781.
  35. Rumfitt I. Knowledge by deduction. Grazer Philosophische Studien. 2008;77:61–84. doi: 10.1163/18756735-90000844.
  36. Rumfitt I. Against harmony. In: Hale B, Wright C, editors. A companion to the philosophy of language. 2nd ed. London: Blackwell; 2017. pp. 225–249.
  37. Schroeder-Heister, P. (1981). Untersuchungen zur regellogischen Deutung von Aussagenverknüpfungen. Ph.D. thesis, University of Bonn.
  38. Schroeder-Heister P. A natural extension of natural deduction. Journal of Symbolic Logic. 1984;49:1284–1299. doi: 10.2307/2274279.
  39. Smiley T. Rejection. Analysis. 1996;56(1):1–9. doi: 10.1093/analys/56.1.1.
  40. Soames S. Philosophical analysis in the twentieth century: The age of meaning. Princeton: Princeton University Press; 2003.
  41. Steinberger, F. (2009a). Harmony and logical inferentialism. Ph.D. thesis, University of Cambridge, Cambridge.
  42. Steinberger F. Not so stable. Analysis. 2009;69:655–661. doi: 10.1093/analys/anp100.
  43. Steinberger F. Harmony in a sequent setting: A reply to Tennant. Analysis. 2011;71:273–280. doi: 10.1093/analys/anr022.
  44. Steinberger F. What harmony could and could not be. Australasian Journal of Philosophy. 2011;89(4):617–639. doi: 10.1080/00048402.2010.528781.
  45. Steinberger F. Why conclusions should remain single. Journal of Philosophical Logic. 2011;40(3):333–355. doi: 10.1007/s10992-010-9153-3.
  46. Tennant N. Anti-realism and logic. Oxford: Clarendon Press; 1987.
  47. Tennant N. The taming of the true. Oxford: Oxford University Press; 1997.
  48. Tennant N. Negation, absurdity and contrariety. In: Gabbay DM, Wansing H, editors. What is negation? Dordrecht: Kluwer Academic; 1999. pp. 199–222.
  49. Tennant N. Inferentialism, logicism, harmony, and a counterpoint. In: Miller A, editor. Essays for Crispin Wright: Logic, language and mathematics. Oxford: Oxford University Press; 2008.
  50. Tennant N. Harmony in a sequent setting. Analysis. 2010;70:463–468. doi: 10.1093/analys/anq026.
  51. Weir A. Classical harmony. Notre Dame Journal of Formal Logic. 1986;27(4):459–482. doi: 10.1305/ndjfl/1093636761.
  52. Wright C. Truth and objectivity. Cambridge, MA: Harvard University Press; 1992.
