Abstract

Many logicians now think that in order to give a uniform solution to the paradoxes of self-reference one must revise logic by dropping one of the usual structural rules. To date almost all such approaches have focused on dropping structural transitivity or structural contraction, and have largely overlooked or ignored the prospects for resolving the paradoxes by dropping structural reflexivity. Here we argue that we have paradox-independent grounds to be dubious of structural reflexivity, and present a way of resolving the paradoxes of self-reference by dropping structural reflexivity altogether; the resulting approach allows us to recapture classical reasoning as being enthymematic in a particular way.

1. A Crime Scene

Let us put on our deerstalkers and imagine that we are investigating a crime wave. We have good evidence to think that all (or at least most) of the crimes being perpetrated, due to their similarity of method, etc., have a common culprit. How should we go about determining the culprit? One plausible way of doing this is to investigate the various crime scenes and see which of our various suspects are present at each of them, and which have alibis. So we go off and we eliminate all the suspects who appear to have alibis, the red herrings that seemed likely culprits at the time, and we end up with a small handful of suspects who were present at every crime scene. Our colleagues are all convinced that the perpetrator must be one of two characters who turn up at every crime scene. All of them immediately disregard a third individual who is present at every crime scene and also has no alibi, claiming that ‘They’re a pillar of society’ and that ‘prosecuting them would land us in anarchy’. Yet the evidence at hand appears to make it just as likely that they’re the culprit as the two who are favoured (and being actively investigated) by our colleagues. Something has gone wrong.

I think a situation analogous to the above has occurred in current debates over non-classical (and in particular sub-structural) solutions to the paradoxes. There is a great variety of paradoxes sweeping the city—liar paradoxes, Curry paradoxes, validity Curries, Hinnion-Libert style paradoxes, and so on. All of them appear to have a common modus operandi. Investigating, most substructural theorists end up echoing the following sentiment expressed in Ripley (2014):

Who was at every one of the crime scenes? Using this method, logical vocabulary all comes out in the clear. It’s not negation; he was out of town when the curries happened. It’s not the conditional; she was nowhere near the liars (assuming, anyway, that negation isn’t just the conditional with a false moustache on). But truth comes out in the clear too: it’s got a solid alibi for the Russells, the Hinnion-Liberts, and the validity Curries (as well as knowers and Montagues). In fact, there are only two characters that turn up at all of the crime scenes: contraction and transitivity.

There is a so-called pillar of logical society who is also present at all of these crime scenes. A character who many think is perhaps more central to the notion of logical consequence than other structural principles like contraction and transitivity. The principle that, for all A, we have that A entails A—structural reflexivity. What I want to do here is to argue that structural reflexivity is more suspicious than many have thought, and that resolving the paradoxes by rejecting it doesn’t have to land us in the anarchy wrought by logical nihilism.

2. Substructural Approaches to Paradox

Before we get onto our core business of arguing against structural reflexivity let us begin by going over some details of substructural approaches to the paradoxes of self-reference in general. As is common, we’ll mostly focus on the liar paradox. To that end, suppose that we have a sentence in our language, call it λ, that is identical to ¬T〈λ〉, where 〈λ〉 is a name for the sentence λ and T is a truth predicate.[1] Informally speaking λ is a sentence which says of itself that it is not true. Such sentences can cause quite a lot of trouble in any system which is not prepared for them.

To see this, consider the following, very natural, rules for negation:
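$$\frac{\Gamma\succ A, \Delta}{\Gamma, \lnot A\succ\Delta}\;[\lnot\text{L}]\qquad\frac{\Gamma, A\succ\Delta}{\Gamma\succ\lnot A, \Delta}\;[\lnot\text{R}]$$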

These are the usual rules exhibiting the usual ‘flip-flop’ nature of negation, characterising what Hösli and Jäger (1994: 473) call the ‘static’ aspects of the meaning of classical negation (the structural rules being required to capture its ‘dynamic’ aspects). Let us also consider the following, very plausible, rules governing our truth predicate T:
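$$\frac{\Gamma, A\succ\Delta}{\Gamma, T〈A〉\succ\Delta}\;[T\text{L}]\qquad\frac{\Gamma\succ A, \Delta}{\Gamma\succ T〈A〉, \Delta}\;[T\text{R}]$$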

These rules seem to capture our intuitive understanding of the behaviour of ‘is true’, allowing us to derive the Tarski-biconditionals (that T〈A〉 if and only if A) in the presence of the usual rules for the conditional. As is well known, though, these rules are enough to get us into trouble. Consider the following derivation where λ is a sentence identical to ¬T〈λ〉:
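In outline (with λ identical to ¬T〈λ〉 throughout, and the derivation set out linearly):

(1) \(T〈\lambda〉\succ T〈\lambda〉\) by [Id]
(2) \(T〈\lambda〉, \lnot T〈\lambda〉\succ\) by [¬L] from (1)
(3) \(T〈\lambda〉, T〈\lambda〉\succ\) by [TL] from (2), since λ just is ¬T〈λ〉
(4) \(T〈\lambda〉\succ\) by [WL] from (3)
(5) \(\succ \lnot T〈\lambda〉, T〈\lambda〉\) by [¬R] from (1)
(6) \(\succ T〈\lambda〉, T〈\lambda〉\) by [TR] from (5)
(7) \(\succ T〈\lambda〉\) by [WR] from (6)
(8) \(\succ\) by [Cut] from (4) and (7)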

This is nothing more than the sequent calculus presentation of the standard liar reasoning. There are a number of dependencies involved in the above reasoning, though: not only the rules for negation and the truth predicate, but also the structural principles of contraction ([WL] and [WR] above), the rule of cut ([Cut]), as well as structural reflexivity ([Id]).

Most approaches to the paradoxes of self-reference have, at this point, convicted the principles governing negation (e.g., Beall 2009; Field 2008; Priest 2006) or the naive truth predicate (e.g., Halbach 2011; Scharp 2013) given above. Unfortunately, this will not do for a uniform solution to the paradoxes of self-reference, because we can get into trouble of a very similar kind without having to appeal to either of these items of vocabulary. For example, suppose that we want to introduce a naive validity predicate V(〈·〉, 〈·〉) into our language, governed by something like the rules below.[2]
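One natural formulation of such rules (a reconstruction in the style of Beall and Murzi 2013; the exact handling of the contexts is not essential here) is:

$$\frac{\Gamma\succ A, \Delta\qquad\Gamma', B\succ\Delta'}{\Gamma, \Gamma', V(〈A〉, 〈B〉)\succ\Delta, \Delta'}\;[V\text{L}]\qquad\frac{\Gamma, A\succ B, \Delta}{\Gamma\succ V(〈A〉, 〈B〉), \Delta}\;[V\text{R}]$$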

In [VR] we require that all the formulas in Γ and ∆ be V-logical in the sense of being either of the form V(x, y) or the negation, conjunction, disjunction or implication of V-logical formulas. Beall and Murzi (2013) show that, given rules like this and a sentence v which is identical to V(〈v〉, 〈⊥〉), we can derive \(\succ \bot\) using nothing but these rules and the structural rules.
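In outline, the derivation runs as follows (a linear sketch, with v identical to V(〈v〉, 〈⊥〉)):

(1) \(v\succ v\) by [Id]
(2) \(\bot\succ\bot\) by [Id]
(3) \(v, v\succ\bot\) by [VL] from (1) and (2), since V(〈v〉, 〈⊥〉) just is v
(4) \(v\succ\bot\) by [WL] from (3)
(5) \(\succ v\) by [VR] from (4), the antecedent v being V-logical
(6) \(\succ\bot\) by [Cut] from (4) and (5)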

As Beall and Murzi (2013) point out, this is essentially just Curry’s paradox, but with the work usually done by the conditional and a truth-predicate being done by the validity predicate. This points towards the main motivation usually put forward for substructural approaches to paradox: uniformity.[3] According to this line of thinking it just seems unprincipled to lay the blame for paradox on particular pieces of vocabulary, because no particular vocabulary seems to be required for paradox. While negation is involved in the liar there are structurally extremely similar paradoxes which don’t involve negation but instead use a conditional (like Curry’s paradox), or a validity predicate, or even principles about the nature of propositions (like the paradoxes in Hinnion & Libert 2003; Restall 2013). All of these paradoxes seem to share the same structure to such a degree that it would be implausible to think that they all had different sources. Of course, spelling out the precise nature of the structural similarity between the various paradoxes of self-reference is a non-trivial task, famously taken up in Priest (1994), the main battleground concerning whether the Curry-type paradoxes mentioned above are structurally similar to the liar and Russell’s paradoxes. We leave it to the discerning reader to make up their own mind on that particular point. If we admit that all of these paradoxes share a common source then they, of course, ought to have a common solution, and given that they share no vocabulary this cannot be a solution which convicts any particular piece (or pieces) of vocabulary. The central idea behind substructural approaches to the paradoxes, then, is to “grapple with the paradoxes where they live: in the basic features of argumentation” (Ripley 2015: 310).

Once we admit to ambitions of a uniform solution, and rule that particular pieces of vocabulary are innocent, then we are left with the three suspects listed in Figure 1.

Figure 1. Our Suspects
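In their standard formulations (with contraction labelled [WL] and [WR], as above) the three suspects are:

$$A\succ A\;[\text{Id}]\qquad\frac{\Gamma, A, A\succ\Delta}{\Gamma, A\succ\Delta}\;[\text{WL}]\qquad\frac{\Gamma\succ A, A, \Delta}{\Gamma\succ A, \Delta}\;[\text{WR}]\qquad\frac{\Gamma\succ A, \Delta\qquad\Gamma', A\succ\Delta'}{\Gamma, \Gamma'\succ\Delta, \Delta'}\;[\text{Cut}]$$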

Approaches which restrict or reject these three structural principles have all appeared in the literature. Approaches which drop transitivity have received sustained technical and philosophical defences in Weir (2005), Cobreros, Égré, Ripley, and van Rooij (2014), and Ripley (2013b; 2012);[4] similarly those which drop contraction have been defended in Caret and Weber (2015), Gris̆in (1982), Mares and Paoli (2014), Petersen (2000), Priest (2015), Shapiro (2015), and Zardini (2011). Approaches which reject structural reflexivity have received far less attention. The notable exceptions are Fjellstad (2015), which defends a logic which is non-reflexive and non-transitive in order to give a proper semantic account of Prior’s infamous connective ‘Tonk’, and Greenough (2001), which gives a philosophical defence of a non-reflexive logic, but which falls prey to various problems as pointed out in Read (2003).[5] What we will do here is provide a sustained philosophical defence of the viability and plausibility of non-reflexive approaches to the paradoxes of self-reference. In particular, the system I will defend here is that which results from the removal of structural reflexivity from a standard presentation of Gentzen’s multiple conclusion sequent calculus for classical logic (such as G1c from Troelstra and Schwichtenberg, 2000: 52) enriched with the above rules for truth. So that we have something to call such a system let us call it LKR.

3. Suspicions about Structural Reflexivity

One reason, I think, for the relative lack of attention to the prospects for rejecting structural reflexivity stems from thinking that it is an innocent logical principle. Surely, one might think, there is nothing sinister about thinking that A entails A. There are a number of reasons which can be given for doubting structural reflexivity which are (relatively) independent of concerns of paradox—reasons to doubt its validity which don’t amount to the claim that ‘dropping it saves us from paradox’. Dialectically these are not intended to be conclusive, but rather to illustrate that we already have reasons to doubt structural reflexivity. For example:

Truth in Virtue of: Suppose that we take seriously the idea, sometimes used in informally explaining the structure of logical consequence, that in a valid argument the conclusion is true in virtue of the premises being true, or equivalently that the truth of the conclusion is grounded in the truth of the premises. The ‘in virtue of’ relation is irreflexive[6] and so on this account logical consequence will inherit this irreflexivity. The resulting logic is also likely to deviate from classical logic in a number of other, undesirable, respects. For example, on this notion of consequence p ∧ q does not entail q, as the truth of q need not be grounded in the truth of p ∧ q (because, for example, q can be true while p ∧ q is false). In effect the kind of logic we are likely to get here is going to be something like the logic of ‘Strict Ground’ described in Correia (2014: 33–34).

Swyneshed on Insolubilia: Consider the following inference due to Roger Swyneshed (Spade 1979: 189):

(1) “The conclusion of this inference is false; therefore, the conclusion of this inference is false.”

According to Swyneshed and his contemporaries, the above inference is one with a false conclusion, and a true premise (see Yrjönsuuri, 2008: 598, for the details). This is quite sensitive to particular oddities of Swyneshed’s view of self-referential sentences, but let us suppose for a moment that we take Swyneshed’s view on board. Then this would be a case of an inference with a true premise and a false conclusion and thus, if logical consequence requires truth-preservation, a counterexample to structural reflexivity. Swyneshed himself, as it happens, took this to show that truth-preservation was not necessary for logical consequence, a point which has also been made for very different reasons in Field (2008: 284–286).

Heterogeneous Logics: In Humberstone (1988) heterogeneous logics are defined as logics for which premises and conclusions are taken from different (formal) languages. To take a simple example, consider cases of ‘cross-linguistic entailment’ where premises are taken from one language, French say, and conclusions from another, say English, and we say that an argument \(\Gamma\succ\Delta\) is valid whenever, if all the sentences in Γ (sentences of French) are true, then at least one of the sentences in ∆ (sentences of English) is true. In this case we will have valid arguments like that from ‘La voiture de mon oncle est blanche’ to ‘At least one car is white’, but we will not rule the argument from ‘La voiture de mon oncle est blanche’ to ‘La voiture de mon oncle est blanche’ as valid, as ‘La voiture de mon oncle est blanche’ is not a sentence of English (and thus not a true sentence of English). Despite its vividness, the above example might strike some readers as something of a cheat, given that in the above setup instances of structural reflexivity are not even guaranteed to be statable. Consider, then, a case where both premises and conclusions are taken from the same language, say English, but are assessed relative to different dialects: for concreteness let us have the premises assessed as true relative to Australian English and the conclusions relative to British English. Then the argument from ‘John lost a thong at the beach’ to ‘John lost a thong at the beach’ will count as a counterexample to structural reflexivity when John lost a flip-flop at the beach, but didn’t lose any swimwear.

Hopefully these examples are convincing enough to show that structural reflexivity is not as sacrosanct or innocent a principle as it at first seems—there are various, rather sensible, reasons to think that it has invalid instances. Of course, this then raises the spectre of what happens to logic if we drop structural reflexivity.

4. Whither Logic?

One might object that moving to a logic like LKR and dropping structural reflexivity is a bridge too far. After all, in such a logic there are no provable sequents, as we have nothing which we can start with! Surely this is throwing the baby out with the bathwater (and throwing out the bath along with it)! I think it is helpful here to consider a similar case. Suppose that you (incorrectly, I might add) thought that logic was only the study of logical truths, of determining which formulas are tautologies. That is, suppose that one were working in the logical framework where one identifies logics with sets of formulas (the framework FMLA from Humberstone, 2011). Then one would be moved to say that Strong Kleene logic—the logic according to which a sentence is a logical truth if it gets the value 1 on every valuation which assigns values in the set {1, i, 0} to the propositional atoms and calculates the truth-values of compounds according to the tables in Figure 2—is not a logic. Note, in particular, that every formula gets the value i on the valuation which assigns i to all propositional variables, as the tables always give the value i to compounds all of whose components have the value i.

Figure 2. Strong Kleene Matrices
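The behaviour of these tables can be checked mechanically. The following sketch is purely illustrative (the helper names `neg`, `conj`, `disj`, and `is_tautology` are ours, and `0.5` plays the role of the value i):

```python
from itertools import product

# Strong Kleene connectives over the values 1 (true), 0.5 (i), 0 (false)
def neg(a):     return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def is_tautology(formula, atoms):
    """True iff the formula takes the value 1 on every valuation."""
    return all(formula(dict(zip(atoms, vals))) == 1
               for vals in product([0, 0.5, 1], repeat=len(atoms)))

# Compounds of all-i components are themselves i ...
print(disj(0.5, neg(0.5)))       # 0.5: excluded middle gets the value i

# ... so no formula is a Strong Kleene tautology; excluded middle fails
lem = lambda v: disj(v['p'], neg(v['p']))
print(is_tautology(lem, ['p']))  # False
```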

Of course, as we well know, while Strong Kleene has no tautologies it does have many valid arguments. That is, if we move from the logical framework FMLA to the logical framework SET-FMLA or SET-SET there are many arguments where (at least one of their) conclusions get the value 1 whenever all of their premises do. What I want to propose here is that we make a similar move in the case of thinking about logics like LKR. While LKR does not have any valid sequents, it does have a great many valid metasequents, higher-order sequents which have sets of sequents as their premises and sequents as their conclusions. That is to say, while LKR and other irreflexive systems have no valid inferences they do have a great many valid metainferences. More concretely, a metasequent is a structure \(S\Rightarrow s\) where S is a set of sequents, and s is a sequent. A metasequent \(S\Rightarrow s\) is valid in LKR iff there is an LKR derivation whose endsequent is s and whose leaf sequents are all members of S. So, for example, while the sequent \(p, q \succ p \land q\) is not LKR derivable, the metasequent \(p\succ p,\ q\succ q\Rightarrow p, q \succ p\land q\) is. As just defined, LKR is a very weak logic of metasequents.[7] In particular, while metasequents like \(p\succ p,\ q\succ q\Rightarrow p, p\to q \succ q\) are valid, metasequents like \(p\succ q,\ p\succ q\to r\Rightarrow p\succ r\) and \(p\succ T〈q〉\Rightarrow p\succ q\), both of which correspond to (meta)rules of elimination, are not. One way of remedying this is to add inversion rules to LKR, stipulating that all of our left- and right-insertion rules are valid bottom up (as well as top down). So, for example, in addition to the standard left- and right-insertion rules for the conditional
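$$\frac{\Gamma\succ A, \Delta\qquad\Gamma, B\succ\Delta}{\Gamma, A\to B\succ\Delta}\;[\to\text{L}]\qquad\frac{\Gamma, A\succ B, \Delta}{\Gamma\succ A\to B, \Delta}\;[\to\text{R}]$$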

we also add their inverses—the same rules read from bottom to top—namely:
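$$\frac{\Gamma, A\to B\succ\Delta}{\Gamma\succ A, \Delta}\qquad\frac{\Gamma, A\to B\succ\Delta}{\Gamma, B\succ\Delta}\qquad\frac{\Gamma\succ A\to B, \Delta}{\Gamma, A\succ B, \Delta}$$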

In a system like G1c the above inversion rules are admissible, as is noted in Troelstra and Schwichtenberg (2000: 66). In dealing with metasequents we need to add these rules explicitly (as we do not have a derivation of the premises of the rule in all cases, such as those mentioned above). If we add the inverses of all the rules in LKR we get a significantly stronger system than LKR itself as far as its derivable metasequents are concerned. Let us call this stronger system LKR+. There is further reason to be interested in LKR+, though. There is good reason to think that the translation τ from sequents to formulas which translates the sequent \(A_1, \ldots, A_n\succ B_1, \ldots, B_m\) as the formula \((A_1 \land \ldots \land A_n) \supset (B_1 \lor \ldots \lor B_m)\) (where \(A\supset B\) is a metalinguistic abbreviation for \(\lnot A\lor B\)) allows us to show that a metasequent \(s_1, \ldots, s_n\Rightarrow s\) is valid in LKR+ iff the sequent \(\tau(s_1), \ldots, \tau(s_n)\succ \tau(s)\) is valid in Strong Kleene logic. For example, the LKR+-valid metasequent \(\succ p,\ p\succ q \Rightarrow \succ q\) corresponds to the Strong Kleene valid sequent \(p, p\supset q\succ q\), and the Strong Kleene invalid sequent \(\succ p\lor \lnot p\) corresponds to the LKR+-invalid metasequent \(\Rightarrow p\succ p\). A similar result is provable concerning the system which results from adding inverses to the rules of the logic ST of Ripley (2013a), as shown semantically in Barrio, Rosenblatt, and Tajer (2015) and via syntactic methods in Pynko (2010), relating derivability of metasequents in ST with derivability in Priest’s logic LP (Priest 1979). We leave it as an open question here whether the corresponding result holds for LKR+ and Strong Kleene logic, noting that if this were the case it would not only further demonstrate the strong duality between non-reflexive and non-transitive approaches to the paradoxes, but also show how deep the analogy alluded to above between non-reflexive approaches and Strong Kleene logic goes.
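The two examples just given can be checked directly; the sketch below is illustrative only (the helpers `impl` and `sk_valid`, and the use of `0.5` for the value i, are our own conventions):

```python
from itertools import product

def impl(a, b):
    """Material conditional A ⊃ B, i.e. ¬A ∨ B, in Strong Kleene."""
    return max(1 - a, b)

def sk_valid(premises, conclusions, atoms):
    """Strong Kleene validity: whenever every premise takes the value 1,
    at least one conclusion takes the value 1 as well."""
    for vals in product([0, 0.5, 1], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) == 1 for p in premises) and not any(c(v) == 1 for c in conclusions):
            return False
    return True

# p, p ⊃ q ≻ q (the image under τ of  ≻p, p≻q ⇒ ≻q) is SK valid
print(sk_valid([lambda v: v['p'], lambda v: impl(v['p'], v['q'])],
               [lambda v: v['q']], ['p', 'q']))               # True

# ≻ p ⊃ p (the image under τ of  ⇒ p≻p) is SK invalid: take v(p) = i
print(sk_valid([], [lambda v: impl(v['p'], v['p'])], ['p']))  # False
```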

All of this raises the obvious question of how we are to understand metasequents in the present irreflexive setting.

5. Understanding Non-Reflexive Consequence

Underlying much of the appeal of structural reflexivity (and, as noted below, transitivity) is an understanding of logical consequence as involving a kind of ‘preservation’ of some property (usually truth or epistemic warrant). This kind of intuition must be abandoned if we are dealing with a logic for which one of structural reflexivity or cut fails to hold. To see what’s going on consider the following remark due to Girard, Taylor, and Lafont (1989: 31), in which C represents the active formula in Cut and where we are concerned with the instance of Reflexivity with C on the left and right of \(\succ\):

The identity axiom [=[Id]] says that C (on the left) is stronger than C (on the right); this rule [ = [Cut]] states the converse truth, i.e. C (on the right) is stronger than C (on the left).

So we can see that rejecting Transitivity or Reflexivity results in a certain kind of asymmetry in how we ought to understand statements of consequence. Here we will present a reading of sequents \(\Gamma\succ\Delta\) for LKR inspired by the notion of q-consequence in Malinowski (2004; 2014). Let us read a sequent \(\Gamma\succ\Delta\) as telling us that if we do not reject all the members of Γ then we should accept some member of ∆. What does this reading of sequents tell us about the meaning of [Id] and [Cut]? On this account [Id] tells us that if you don’t reject C then you should accept it. So reflexivity will fail for any sentence C which one should neither reject nor accept. Similarly, [Cut] tells us that accepting C precludes rejecting it, and a failure of transitivity will occur if there are any sentences which we should both accept and reject—something which is ruled out on the present account.[8] This reading sits quite naturally with the other rules of LKR. For example, consider [¬R]:

What this rule tells us is that if we are in a position where we do not reject all the members of Γ, don’t accept the members of ∆, and don’t accept ¬A then we’re equally in a position where we don’t reject all the members of Γ, don’t accept the members of ∆, and don’t reject A. That is to say, not accepting ¬A precludes rejecting A. Similarly, [¬L] tells us that not rejecting ¬A precludes accepting A.
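This reading can be made concrete with a toy model. In the sketch below (illustrative only; the representation of stances and the helper `sequent_holds` are our own) a stance assigns each sentence one of ‘accept’, ‘reject’, or ‘neither’, and we read ‘do not reject all the members of Γ’ distributively, as rejecting none of them:

```python
def sequent_holds(stance, gamma, delta):
    """Γ ≻ Δ on the q-consequence-style reading: if no member of Γ
    is rejected, then some member of Δ must be accepted."""
    if any(stance[s] == 'reject' for s in gamma):
        return True  # the proviso fails, so the sequent is not violated
    return any(stance[s] == 'accept' for s in delta)

# [Id] fails exactly for sentences on which we take no stance:
print(sequent_holds({'C': 'neither'}, ['C'], ['C']))  # False
print(sequent_holds({'C': 'accept'},  ['C'], ['C']))  # True
print(sequent_holds({'C': 'reject'},  ['C'], ['C']))  # True
```

On this model a sentence that is neither accepted nor rejected is a counterexample to [Id], exactly as the reading above predicts.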

Given this understanding of how we should read individual sequents we are now faced with explaining metasequents. Mostly, in this setting, we are interested in how we should understand metasequents which have the following form:

$$P_1\succ P_1, \ldots, P_{n}\succ P_{n}\Rightarrow \Gamma\succ\Delta$$

My proposal is that we read metasequents of this form as telling us that if we take a stance on each of the \(P_i\)s, then if we don’t reject all the members of Γ then we should accept some member of ∆, where to ‘take a stance on A’ is to either accept A or to reject A.

The intuition behind this reading is to think of metasequents as making explicit the fact that in judging a sequent valid we are taking its components to be the kinds of statements towards which it is permissible to take certain attitudes. This allows us to recapture full classical reasoning as being enthymematic, involving a suppressed assumption that the statements involved are ones which we either accept or reject.

To make this more explicit note that if every formula in Γ ∪ ∆ is either accepted or rejected then the sequent \(\Gamma\succ\Delta\) can be read as telling us that if we accept all the members of Γ then we should accept some member of ∆. Consider, then, the case where {P1, . . . , Pn} are all the atomic formulas which appear in Γ ∪ ∆. Then we can read the metasequent

$$P_1\succ P_1, \ldots, P_{n}\succ P_{n}\Rightarrow \Gamma\succ\Delta$$

as telling us that, conditional on the assumption that we have accepted or rejected each of the \(P_i\)s, if we accept all the members of Γ then we should accept some member of ∆. So, returning to the example we had at the end of the previous section, the (valid) metasequent \(p\succ p, q\succ q\Rightarrow p, q \succ p\land q\) tells us that, conditional on the assumption that we have either accepted or rejected p and q, if we accept p along with q then we should accept p ∧ q.

6. Revisiting the Paradoxes

So in the light of the above interpretation of non-reflexive consequence afforded by metasequents, what should we say about the paradoxes? In particular, what we are after is some kind of analysis of what is going on in paradoxical reasoning, and what this analysis tells us about the status of liar sentences and their kin. For example, according to Zardini (2011: 504), what is wrong with the above liar reasoning is that we’ve contracted on the sentence T〈λ〉 which is ‘unstable’ in a distinctly metaphysical way. Thus we fault a particular part of the liar reasoning on the basis of particular features of paradoxical sentences. On the present account, though, there is nothing wrong per se with the reasoning which engenders paradox, but rather with its starting point. That is to say, what is wrong is that we’ve taken paradoxical sentences to be ones which we ought to either accept or reject. When we make this, usually implicit, presupposition explicit we can see that this is where the real fault in the paradoxical reasoning lies. Our account of paradoxicality bears some resemblance to the following three proposals in the literature.

6.1. Quietism

Usually in talking about the status of paradoxical sentences one opens oneself up to ‘revenge’ worries, which take whatever resources one deploys in analysing paradoxicality and use them to create further paradoxical sentences. On the present approach, though, we have said nothing about the status of paradoxical sentences, but rather about what one should (or more importantly, should not) do with them. As such our analysis of the paradoxes is a distinctly quietist one.

Our proposal bears some resemblance to recent quietist proposals in Horsten (2009) and Tennant (2015). Tennant’s rather subtle proposal is part of his general inferentialist account of meaning, and builds on the analysis of paradoxicality he proposed in Tennant (1982). According to Tennant some reasoning is paradoxical if, when presented in natural deduction form, any attempt to normalize it results in a reduction sequence which loops. This gives Tennant his account of paradoxicality. So while we have a normal form proof of \(\succ\lnot T〈\lambda〉\) and a normal form proof of \(\lnot T〈\lambda〉\succ\bot\) we cannot ‘paste’ these two derivations together to get a normal form proof of \(\succ\bot\). In essence we have a failure of [Cut], Tennant’s preferred logic being non-transitive. According to Tennant the norm for assertion is having a warrant, and one has a warrant for a claim A just in case one has a truthmaker for A (e.g., a normalizable proof of A) with no undercutters (i.e., no normal form disproof of A) (Tennant 1982: 578). On Tennant’s view, then, we cannot assert ¬T〈λ〉—we have a truthmaker for it, namely the normal form proof of \(\succ\lnot T〈\lambda〉\), but we also have an undercutter, a normal form disproof of ¬T〈λ〉 (our normal form proof of \(\lnot T〈\lambda〉\succ\bot\)). More importantly, though, one is in a similar position with respect to the claim that the liar is determinate (i.e., T〈λ〉 ∨ ¬T〈λ〉). So on Tennant’s view, paradoxical sentences are indeterminate, but inexpressibly so on pain of inconsistency.

On the view outlined above we can also say more about paradoxicality. Say that a sentence A is paradoxical iff accepting or rejecting A results in triviality. That is to say, a sentence A is paradoxical iff for all Γ, ∆ we have that \(A\succ A\Rightarrow \Gamma\succ\Delta\).[9]

6.2. Greenough on ‘Supposition Aptness’

This account of paradoxicality bears some resemblance to Greenough’s (2001) notion of a sentence failing to be ‘supposition apt’. According to Greenough (2001: 123) a sentence A fails to be supposition apt iff for all B we can derive \(\succ A\to B\) and \(\succ\lnot (A\to B)\) in something like the system G1c. As is noted in Read (2003), this test makes every sentence fail to be supposition apt so long as the liar is formulable, as we can simply run the above liar reasoning to conclude the empty sequent \(\succ\) and then get \(\succ A\to B\) and \(\succ \lnot(A\to B)\) by applications of structural weakening. The rationale behind the test, though, was the idea that for a sentence to be supposition apt it has to be discriminating in the consequences we are able to draw from its mere supposition, and it fails to be supposition apt when it is indiscriminate in the consequences which we can draw from its mere supposition. Making use of the higher-order sequent machinery which we have been using here, and thinking of supposition aptitude as failing to be paradoxical in the present sense, then allows one to rescue the view described in Greenough (2001) from the problems noted by Read.

6.3. ‘No-Proposition’ Views

The present view is reminiscent of solutions to the paradoxes which claim that paradoxical expressions fail to express propositions, such as that in Armour-Garb and Woodbridge (2014). Both no-proposition solutions and the present solution aim to resolve paradoxes by, in some way, dissolving them. In the case of no-proposition solutions this usually comes about by pointing out that paradoxical sentences don’t express propositions, and so are not alethically assessable—in the case of Armour-Garb and Woodbridge (2014), this means that the truth rules are valid only for sentences A which express propositions. On the present approach, though, the pathological nature of the liar sentence is located not in the sentence itself (which is a perfectly fine and meaningful sentence), but rather in our taking the kinds of attitudes towards it which are distinctive of logical consequence—problems only arise when we take a stance on sentences like λ, and thus can use them in reasoning. Rather than saying that paradoxical sentences are semantically defective, according to the present non-reflexive approach such sentences are instead in a sense logically defective, and it is by staying neutral (indeed silent) on their semantic status that we skirt around the usual strengthened-liar style revenge worries.

7. Curry’s Revenge?

This notion of paradoxicality does not open up the doors in any of the usual ways to a revenge paradox. But one might worry that one is lurking in the wings. In particular, one might be concerned with the fact that, while LKR+ invalidates [Id], our metasequent framework does validate its higher-order analogue, namely \(\Gamma\succ\Delta \Rightarrow\Gamma\succ\Delta\). One might be concerned, then, that the framework outlined above is susceptible to some variety of higher-order or ‘external’ revenge paradox. In particular, one might be concerned that if we attempt to add a validity predicate to our language which captures metasequent validity then we will end up being unable to block the validity Curry. It turns out that this is the case, as we can present a simplified version of the external validity Curry from Wansing and Priest (2015). In particular, suppose that we introduce an external validity predicate Bew with the following rules.

Then if we have a sentence γ such that \(\gamma = Bew(〈\succ\gamma〉, 〈\succ\bot〉)\), we can give the following informal metasequent derivation, which makes use of the fact that metasequent validity in LKR+ is reflexive, transitive (which we annotate with [Tr]), and contracts ([Contr]).

First we will need the following derivation, which we will call d:

Now using d we can derive \(\Rightarrow\succ\bot\) as follows:

The final step in the above derivation follows from the transitivity of metasequent validity. So, given the usual rules for ⊥, adding an external validity predicate like Bew to our language results in triviality. In a sense this should be no surprise, given that it seems likely that the logic governing our metasequent separator is Strong Kleene logic, a logic which cannot accommodate a naive validity predicate.

There are two ways to react to this situation. First, one could be a quietist about external validity, arguing that claims about what follows conditional on a certain pattern of acceptance and rejection cannot, and ought not, be represented in the object language. On this approach a great many arguments are valid, but inexpressibly so on pain of triviality, validity only being discussable from a distant metaphorical shore. In many ways this is the position which many non-substructural logics are in, being unable to fully express a validity predicate in their language. Such a position can be made palatable (perhaps by instituting some kind of hierarchy of validity predicates), but it is definitely one which we want to avoid if we are able. A better approach, then, would be to engage in a more full-blooded rejection of structural reflexivity, rejecting it not just at the level of sequents, but also at the level of metasequents. This would require us to give a more direct explanation of metasequent validity which was not parasitic on the standard notion of a sequent calculus derivation, requiring us to work directly with metasequents in a manner reminiscent of the systems discussed in Wansing and Priest (2015) and von Kutschera (1968).

8. Conclusion

We have judged structural reflexivity to be innocent for far too long. Much as other authors have argued regarding structural contraction and transitivity, we have always had ample reason to be suspicious of this innocuous-seeming principle, and so it is no surprise that, once we dig around, such a shady character turns out to be implicated in the paradoxes of self-reference. Moreover, abandoning structural reflexivity allows us to acknowledge that there is nothing wrong, per se, with the reasoning involved in the paradoxes. Rather, what is wrong is the implicit assumption that we ought to accept or reject the sentences which give rise to paradoxical behaviour. Making this implicit assumption explicit also opens the door to admitting paradoxical sentences into our language while at the same time fully recapturing classical reasoning as enthymematic.

There is still much which we do not know about non-reflexive approaches to paradox. For example:

  • I have said very little about the clear and obvious relationships between non-reflexive and non-transitive approaches to the paradoxes of self-reference. As is noted in passing in Ripley (2013a), these two approaches seem to be natural duals, a sentiment also expressed and discussed at length in Hösli and Jäger (1994) and Frankowski (2004b). Having a deeper understanding of this duality will help us to understand what is really at stake in taking one substructural approach to the paradoxes over another. In light of the results in Barrio et al. (2015) and French and Ripley (2015) it may be that there is less opposition between the various substructural approaches to paradox than first thought. One obvious avenue of investigation suggested in Section 4 is to determine to what extent we can give results connecting LKR+ and Strong Kleene logic, analogous to those given in Barrio et al. (2015) connecting ST and LP.[10]
  • In this paper we did not give an explicit consistency proof for the systems LKR and LKR+ (considered as logics of metasequents), and moreover did not concern ourselves much with the model theory of these systems. It would be desirable to have a model-theoretic grip on these two logics.
  • As was mentioned at the end of Section 7, the most desirable way to deal with the external validity Curry is to engage in a more full-blooded rejection of structural reflexivity. To do this will require working with metasequents directly, via a metasequent calculus in the vein of the various higher-order systems discussed in Wansing and Priest (2015) and von Kutschera (1968), rather than indirectly as we have here.
  • One natural way of interpreting the Kripke fixed-point construction gives rise to a non-reflexive logic where we read a sequent \(\Gamma\succ\Delta\) as telling us that either some member of Γ is grounded and false, or some member of ∆ is grounded and true, this being the q-consequence relation for Strong Kleene logic described in Malinowski (2014). Does thinking in this way allow us to better understand the notion of semantic grounding, and the options afforded by different approaches to it? In a similar vein, does this suggest a route to understanding the inner logic of the validity predicate described in Meadows (2014)?

Much remains to be done. What I hope to have shown, though, is that structural reflexivity, despite being a supposed pillar of (logical) society, may very well be the culprit behind the spree of paradox that has been sweeping the city, and ought at least to be brought in for some serious questioning.

9. Acknowledgements

Many thanks to Dave Ripley, Greg Restall, Andreas Fjellstad, Ole Hjortland, Leon Geerdink, Gillian Russell, Johannes Stern, Shawn Standefer, Heinrich Wansing, and two anonymous referees for helpful discussions and comments on this material.

References

Notes:

    1. Notational Preliminaries: I’ll use uppercase Roman letters A, B, C, . . . as schematic letters for formulas in our language, and uppercase Greek letters Γ, ∆, . . . as schematic for multi-sets of formulas (sets where we’re also concerned with the number of times a sentence appears). Throughout, I will use multiple-conclusion sequents of the form \(\Gamma\succ\Delta\), with ‘\(\succ\)’ as our sequent separator. I will use 〈A〉 to denote the distinguished name for A, and assume that our language is rich enough to provide enough self-reference to get us into trouble, remaining agnostic as to how such names are generated. The interested reader should consult Ripley (2012: 355–356) for some suggestions as to how we could handle such names without arithmetic or a theory of syntax explicitly in the background.

    2. Here we give the rules for a naive validity predicate given in Zardini (2013; 2014). We do this primarily because the presentation given in Beall and Murzi (2013), which most authors follow, includes a principle VD (\(= A, V(〈A〉, 〈B〉) \succ B\)), which combines the above left-introduction rule for validity with appeals to structural reflexivity, and as such most presentations of the validity Curry contain hidden appeals to structural reflexivity. Given that most people in the literature on paradoxes do not question structural reflexivity, this is unsurprising, but it is something which needs to be kept in mind when we are working in a setting where precisely this is in question. An anonymous referee also points out that another reason to prefer these rules over those given by Beall and Murzi is that they allow us to derive statements about metainferences, such as that \(V (〈A〉, 〈B〉) \succ V(〈A〉, 〈B\lor C〉)\).
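    To make the hidden appeal explicit: assuming a Zardini-style left rule for V of the form ‘from \(\Gamma\succ\Delta, A\) and \(B, \Gamma'\succ\Delta'\) infer \(\Gamma, V(〈A〉, 〈B〉), \Gamma'\succ\Delta, \Delta'\)’ (this formulation is my assumption; the paper's own statement of the rule appears earlier in the text), VD is derivable only by way of two instances of [Id]:

    \[
    \frac{A \succ A \qquad B \succ B}{A,\ V(〈A〉, 〈B〉) \succ B}\,[V\!L]
    \]

    In a setting where [Id] is in question, then, VD cannot simply be taken for granted.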

    3. For a more detailed defence of this justification for substructural approaches to the paradoxes of self-reference the reader should consult Section 3 of Ripley (2015).

    4. The approach in Tennant (1982; 2015) is also, in many ways, one which drops unrestricted structural transitivity.

    5. Schroeder-Heister (2012: 205) suggests that a restriction on structural reflexivity similar to that argued for by Greenough will allow one to block paradox, motivating it on inferentialist grounds. Schroeder-Heister’s claim is that if one takes seriously the idea that the operational rules for a connective determine the meaning of the connectives, then one should only allow instances of [Id] which cannot be derived via operational rules. The idea here is that we ought only to allow instances of [Id] where there is no more specific way in which we could have introduced A using our rules. As Schroeder-Heister notes, this is reflected in the common practice of restricting [Id] to cases where A is an atomic predication, as usually proof-theorists are interested in settings where atomic predications cannot be introduced via left- or right-introduction rules, as they can be in the presence of the rules for the truth predicate, for example.

    6. The irreflexivity of the ‘in virtue of’ or ‘grounding’ relation was arguably first noted by Bolzano in Section 204 of his Wissenschaftslehre. Bolzano (1973: 272) contains an English translation of the relevant section.

    7. I thank an anonymous referee for suggesting this line of inquiry.

    8. This situation is structurally similar to that in Ripley (2013a: 154) if we read sequents in terms of tolerant assertion and denial. Note that in order to get natural failures of [Cut] we would need to consider a reading of sequents inspired by the p-consequence operations of Frankowski (2004a).

    9. Similarly, we can say that a set of sentences S = {A1 , . . . , An} is jointly paradoxical iff for all Γ, ∆ we have that \(A_{1}\succ A_{1}, \ldots, A_{n}\succ A_{n}\Rightarrow \Gamma\succ\Delta\), and (in order to avoid any set S which merely includes a paradoxical sentence counting as jointly paradoxical) no proper subset of S is jointly paradoxical.
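    For illustration (on the assumption that the single-sentence case of this definition matches the paper's earlier definition of paradoxicality), taking S = {λ} for a liar-like sentence λ, the condition reduces to the claim that the [Id] instance for λ on its own entails any sequent whatsoever:

    \[
    \lambda \succ \lambda \;\Rightarrow\; \Gamma \succ \Delta \quad \text{for all } \Gamma, \Delta.
    \]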

    10. Footnote 10 of Barrio et al. (2015) might be particularly helpful in this regard if we can show that LKR+ is semantically complete with respect to TS-models.