We introduce \(\textsf{CLASS}\), a session-typed, higher-order, core language that supports concurrent computation with shared linear state. We believe that \(\textsf{CLASS}\) is the first proposal for a foundational language able to flexibly express realistic concurrent programming idioms, with a type system ensuring all the following three key properties: \(\textsf{CLASS}\) programs never misuse or leak stateful resources or memory, they never deadlock, and they always terminate. \(\textsf{CLASS}\) owes these strong properties to a propositions-as-types foundation based on Linear Logic, which we conservatively extend with logically motivated constructs for shareable affine state. We illustrate \(\textsf{CLASS}\) expressiveness with several examples involving memory-efficient linked data structures, sharing of resources with linear usage protocols, and sophisticated thread synchronisation, which may be type-checked with a perhaps surprisingly light type annotation burden.


1 Introduction

Stateful programming involving concurrency and shared state plays a prominent role in modern software development, but, in practice, getting concurrent code right is still quite hard for most developers. Typical sources of “bugs” include resource leaks (forgetting to release unused memory or close a socket), violation of resource state preconditions (writing to a closed file or sending out-of-order messages), races (broken data invariants, erratic sharing of resources), deadlocks (indefinite waits for lock release or incoming messages), livelocks, and even general non-termination. Fifty years ago, Hoare noted [40]: “Parallel programs are particularly prone to time-dependent errors, which either cannot be detected by program testing nor by run-time checks. It is therefore very important that a high-level language designed for this purpose should provide complete security against time-dependent errors by means of a compile-time check”. It is no surprise that finding ways to approximate such an admittedly very ambitious goal remains, to this day, the object of intense research.

In this paper, we approach this challenge by leveraging the propositions-as-types (PaT) paradigm towards the realm of concurrency and shared state. PaT is known to offer a unifying framework connecting logic, computation, and programming languages. Since the seminal work of Curry and Howard [42], it has been a prolific structuring concept for designing and reasoning about programming languages (see [82]). Remarkably, languages derived within PaT intrinsically satisfy crucial properties: type preservation (since reduction corresponds to cut-reduction), confluence (since computation corresponds to proof simplification), deadlock freedom (as a consequence of cut-elimination), and livelock freedom / termination (as a consequence of strong normalisation).

Although PaT has a traditional focus on functional computation, the emergence of linear logic has progressively motivated interpretations of stateful/resourceful computation [1, 2, 12, 14, 78], eventually leading to the discovery of tight correspondences between session types and linear logic [22, 27, 81]. These systems already capture aspects of state change, namely in the sequential execution of session protocols, thus raising the question of whether such approaches could be extended to express notions of shared mutable state, subject to interference, as found in typical imperative and concurrent programs. Recently, this challenge was addressed by several works [9, 64, 67]. In particular, [67] developed a first basic shared state model enjoying all the aforementioned strong properties of PaT. However, although [67] supports a higher-order shareable store for pure values of replicated type, it forbids linear objects, such as stateful processes or data structures with in-place update, from being stored and shared as in languages like Java or Rust, and as in the \(\textsf{CLASS}\) core language we introduce herein.

In this work, we develop a novel, more fundamental approach to shared state and PaT, and introduce \(\textsf{CLASS}\), a typed, higher-order, session-based core language that supports general concurrent computation with dynamically allocated shared linear (more precisely, affine) state. We believe that \(\textsf{CLASS}\) is the first proposal for a foundational language able to flexibly express realistic concurrent programming idioms, while ensuring all the following three key properties by static typing: \(\textsf{CLASS}\) programs never misuse or leak stateful resources or memory, they never deadlock, and they always terminate.

Despite the strength of its type system, \(\textsf{CLASS}\) expressiveness and effectiveness substantially overcome the limitations of related works, as we show with compelling program examples that can be algorithmically typed for memory safety and dead- and live-lock freedom with a perhaps surprisingly light type annotation burden. \(\textsf{CLASS}\) owes these strong properties to its PaT foundation based on Second-Order Linear Logic, already known to capture the polymorphic session calculus and the linear System F [74], which we conservatively extend with novel logically motivated constructs for shareable affine state, also based on DiLL co-exponentials [35, 67], to which we give here a different, more general and fundamental interpretation.

1.1 Overview

A main novelty and source of \(\textsf{CLASS}\)’s expressiveness, flexibility and strong meta-theoretical properties resides in its mechanism for shared state composition. It is instructive to overview this mechanism in the context of the basic composition and interaction principles of the fundamental linear logic interpretations [22, 27, 81]. Our computational model is structured around processes that interact via binary sessions, the basic composition rules being mix and cut.

figure c

The mix rule types the independent composition of processes P and Q, which do not share any free names and run side-by-side without interacting. This is captured by the implicit disjointness of their linear typing contexts \(\varDelta _1\) and \(\varDelta _2\), declaring the types of their interaction channels. Interactive composition is expressed by the cut rule, which connects exactly two processes P and Q through a single linear session x with dual typed endpoints (x : A and \(x:\overline{A}\)), following Abramsky’s idea of “cut as interactive composition” [1].

Intuitively, duality of endpoint (session) types ensures that all interactions between P and Q on x always match: when P sends, Q receives; when Q offers, P chooses; and likewise for all types. Notice that sharing a single channel x between the threads P and Q is important to ensure acyclicity of proof structures, and cut-elimination/deadlock absence. But P and Q may use an arbitrary number of linear channels, in \(\varDelta _1,\varDelta _2\), to also compose with other processes.
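For reference, mix and cut have the following standard shape in a dyadic presentation. This is a schematic reconstruction: we write \(P \parallel Q\) for independent parallel composition and \(\textbf{cut}\,\{P \;|x{:}A|\; Q\}\) for interactive composition on session x, which need not coincide with the concrete \(\textsf{CLASS}\) syntax.

\[
\frac{P \vdash \varDelta_1;\varGamma \qquad Q \vdash \varDelta_2;\varGamma}{P \parallel Q \vdash \varDelta_1,\varDelta_2;\varGamma}\;[\mathrm{Tmix}]
\qquad
\frac{P \vdash \varDelta_1, x{:}A;\varGamma \qquad Q \vdash \varDelta_2, x{:}\overline{A};\varGamma}{\textbf{cut}\,\{P \;|x{:}A|\; Q\} \vdash \varDelta_1,\varDelta_2;\varGamma}\;[\mathrm{Tcut}]
\]

In both rules the linear contexts \(\varDelta_1\) and \(\varDelta_2\) are disjoint, while the unrestricted context \(\varGamma\) is shared between the premises.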

Shared composition in session types is available for replicated “server” objects, typed by the linear logic exponential type bang !A. Contraction of the dual exponential type why-not \(?\overline{A}\) allows an unbounded number of usages of such a replicated server object to be introduced in client processes. In the dyadic presentation of linear logic (cf. [5, 11]), contraction is expressed by moving ?-typed names into the unrestricted context \(\varGamma \), with the \(\mathrm {[T?]}\) rule.

figure e

Names in \(\varGamma \) may be used unrestrictedly; each call (typed by \(\mathrm {[Tcall]}\)) spawns a fresh copy of the server body at type y : A, to be used by the client at type \(y:\overline{A}\), in a linear binary session. By the typing rule for !A (promotion), such a copy does not depend on linear resources. Thus, interaction with replicated objects as captured by the exponentials !A and ?A implements a copy semantics, where each call obtains a new private stateless copy of the same object.

In this work, we introduce a third composition mechanism, allowing processes to concurrently share mutex memory cells, storing linear state. Mutex memory cells and their usages are typed respectively by a pair of dual modalities and , whose logical rules are motivated by Differential Linear Logic (DiLL) [35], in particular cocontraction, expressed by the type rule [Tsh].

figure h

While sharing of replicated objects corresponds to contraction of ?A types, shared usage of mutex cells corresponds to cocontraction of types. Apart from the explicit use of [Tsh], the type system ensures that memory cells are always used linearly. The shared usage is free in the conclusion of the typing rule; therefore, a memory cell may be shared by an arbitrary number of processes, by nested iterated use of cocontraction.

Moreover, cocontraction also ensures that the composed concurrent processes share only a single mutex cell (just like [Tcut] w.r.t. binary sessions). This constraint comes from the linear logic discipline, and it is important to ensure deadlock freedom. As discussed in the Concluding Remarks, this does not hinder \(\textsf{CLASS}\) expressiveness - e.g., a single mutex cell may act as a gateway to further bundles of shared state, organised in resource hierarchies, as our examples illustrate - and even suggests convenient concurrent programming structuring techniques.

To access a mutex memory cell in its (unlocked) full state, typed by , the client uses a take operation. Take waits for acquiring the cell lock and reads its contents. The cell then transitions to the (locked) empty state, typed by . The taking client becomes the sole responsible for filling back the cell contents, using a put operation. This will restore the cell to the full state, releasing its lock, and making it accessible to other concurrent threads waiting to take it. Our mutex memory cell object is thus akin to a behaviourally typed incarnation of Concurrent Haskell MVars [45] or Rust \(\mathsf {std{:\,\!:}sync{:\,\!:}Mutex}\) objects [46].

To ensure safe releasing of a memory cell, its contents are required to be of affine type \(\wedge A\). Affine objects are well-behaved disposable values that, when discarded, safely dispose of all the resources they hereditarily refer to, as ensured by the linear logic typing.

We illustrate the introduced concepts with a simple example, where two concurrent threads compete to turn on an initially off flag, but only one may win. The flag iteratively announces its state to the client with either \(\#\textsf{Off}\) or \(\#\textsf{On}\). If the state is off, the client must select \(\#\textsf{turnOn}\); if the state is on, it will remain on. Process \(\textsf{flag}(f)\) implements the flag (at name f) in the off state, and process \(\textsf{on}(f)\) in the on state, defined thus

figure m

The flag object is typed with the (linear) usage protocol defined by the coinductive type \(\textsf{Flag}\) below, such that \(\textsf{flag}(f)\vdash f: \textsf{Flag} \) and \(\textsf{on}(f)\vdash f: \textsf{Flag}\)

figure n

We now consider a scenario where a flag object is shared among two concurrent clients via a mutex memory cell c, initially storing an off flag of type \(\wedge \textsf{Flag}\).

figure o

When running \(\textsf{main}()\), exactly one of the threads (executing the same code, just with a different id) will turn the flag on and win; the other will lose. Notice that both threads drop their usage of the memory cell c using release, which corresponds to DiLL coweakening [35].
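For intuition only, the following Haskell sketch mimics this scenario with an MVar playing the role of the mutex cell, takeMVar/putMVar standing in for take/put. It is an informal analogue under these assumptions, not a translation of the \(\textsf{CLASS}\) code.

  import Control.Concurrent
  import Control.Concurrent.MVar
  import Control.Monad (forM_)

  -- An informal MVar analogue of the shared flag: each client takes the cell,
  -- wins if the flag is still off, and puts the (now on) flag back, releasing
  -- the lock so that the other client may proceed.
  main :: IO ()
  main = do
    cell <- newMVar False                    -- the flag, initially off
    done <- newEmptyMVar
    forM_ [1, 2 :: Int] $ \i -> forkIO $ do
      st <- takeMVar cell                    -- take: lock the cell, read the flag
      putStrLn (if st then "client " ++ show i ++ " lost"
                      else "client " ++ show i ++ " wins")
      putMVar cell True                      -- put: the flag is on from now on
      putMVar done ()
    forM_ [1, 2 :: Int] $ \_ -> takeMVar done

Exactly one client observes the flag off and prints "wins", mirroring the \(\textsf{main}()\) scenario above.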

When considering a new language, in particular one with a static typing discipline, it is necessary to argue about its expressiveness, and to develop a perception of how naturally programs get past its typing rules and of how types help in structuring programs. In this paper, we approach these concerns by showcasing many interesting examples that challenge the expressiveness of the \(\textsf{CLASS}\) language and type system on realistic concurrent programming scenarios. We have developed many more examples, distributed with our implementation [68], combining imperative, higher-order functional, and session-based programming styles. For all these programs, strong guarantees of memory safety, deadlock-freedom, termination, and absence of “dynamic bugs”, even in the presence of blocking primitives and higher-order state, are compositionally certified by our lightweight type discipline based on Propositions-as-Types and Linear Logic.

1.2 Outline and Contributions

We believe that \(\textsf{CLASS}\) is the first proposal for a foundational language able to flexibly express realistic concurrent programming idioms while ensuring by typing three key properties: \(\textsf{CLASS}\) programs never misuse or leak stateful resources or memory, they never deadlock, and they always terminate.

In Section 2 we formally present the core language \(\textsf{CLASS}\), its type system and operational semantics. Our model builds on the propositions-as-types approach to session-based concurrency [22, 27, 80], extending Second-Order Classical Linear Logic with inductive/coinductive types, affine types, and novel primitives for shareable first-class mutex reference cells for linear state.

In Section 3 we state and prove type preservation (Theorem 1), progress (Theorem 2), which implies deadlock-freedom, and strong normalisation (Theorem 3), which also implies livelock absence. Our proof uses a logical relations argument, extended with an interesting technique to handle shared state interference, which we believe is exploited here for the first time.

Given the strong properties of its type system, it is of course very important to substantiate our claims about \(\textsf{CLASS}\) expressiveness. In Section 4 we illustrate the expressiveness of \(\textsf{CLASS}\) language and type system by going through a series of compelling examples. Namely, we discuss a general technique for sharing linear protocols, a shareable linked list with update in-place, a shareable buffered channel, using a linked list with pointers to tail and head nodes, and executing send and receive operations in O(1) time; the dining philosophers, illustrating techniques that rely on our type structure to encode resource acquisition hierarchies; a generic barrier for n threads; and a Hoare style monitor with await/notify conditions, where our implementation of the condition’s process queue is supported by a dynamic linked data structure, as in real systems code.

Section 5 discusses related work. Section 6 offers concluding remarks and suggests further research. Complete definitions and detailed proofs of all results are provided in [65].

2 The Core Language and its Type System

We present the core language, type system, and operational semantics of \(\textsf{CLASS}\). The language is based on a PaT correspondence with Linear Logic, so terms of the language correspond to proof rules. We start with types and duality.

Definition 1 (Types)

Types \(A, B\) of \(\textsf{CLASS}\) are defined by

figure p

Types in the first two rows correspond to Second-Order Classical Linear Logic, extended with inductive/coinductive types (\(\mu , \nu \)). Types comprise variables (X), units (\({\textbf{1}}\), \(\bot \)), multiplicatives (\(\otimes \), ⅋), additives (\(\oplus \), \(\& \)), exponentials (!, ?) and quantifiers (\(\exists \), \(\forall \)). The third row extends basic types with affine (\(\wedge , \vee \)) and new modalities ( ) to type shared affine state. Duality is the involution operation \(A \mapsto \overline{A}\) on types, corresponding to Linear Logic negation, defined by

figure t
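For reference, the standard Classical Linear Logic clauses read as follows. This is a schematic reconstruction; the clauses for the state modalities, which dualise cell types against the corresponding usage types, are omitted here.

\[
\overline{{\textbf{1}}} = \bot \quad
\overline{\bot} = {\textbf{1}} \quad
\overline{A \otimes B} = \overline{A} \;⅋\; \overline{B} \quad
\overline{A \;⅋\; B} = \overline{A} \otimes \overline{B} \quad
\overline{A \oplus B} = \overline{A} \,\&\, \overline{B} \quad
\overline{A \,\&\, B} = \overline{A} \oplus \overline{B}
\]
\[
\overline{!A} = ?\overline{A} \quad
\overline{?A} = !\overline{A} \quad
\overline{\exists X.A} = \forall X.\overline{A} \quad
\overline{\forall X.A} = \exists X.\overline{A} \quad
\overline{\mu X.A} = \nu X.\overline{A} \quad
\overline{\nu X.A} = \mu X.\overline{A} \quad
\overline{\wedge A} = \vee \overline{A} \quad
\overline{\vee A} = \wedge \overline{A}
\]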

Duality captures symmetry in process interaction, as manifest in the cut rule. In our system, typing judgements have the form \(P \vdash _\eta \varDelta ; \varGamma \). The typing context \(\varDelta ;\varGamma \) is dyadic [4, 15, 22, 63], where \(\varDelta \) is handled linearly and \(\varGamma \) is unrestricted; both \(\varDelta \) and \(\varGamma \) assign types to names. The index \(\eta \) is a finite map that holds coinduction hypotheses, used to type corecursive processes, as detailed later.

Definition 2

The typing rules of \(\textsf{CLASS}\) are presented in Figs. 1 to 5.

The type system corresponds, via propositions-as-types [22, 27, 80], to Second-Order Classical Linear Logic (Fig. 1) with inductive/coinductive types (Fig. 2), affinity (Fig. 3) and extended with constructs for shared mutable state (Figs. 4 - 5). The basic composition rules are [Tmix] and [Tcut], which correspond to mix and cut of Linear Logic, respectively. [Tmix] types a parallel composition , where P and Q run in parallel without interfering. On the other hand, [Tcut] types linear interactive composition : processes P and Q run concurrently and communicate through a private linear session x, session endpoints being typed by dual types \(A/\overline{A}\). When the cut type annotation does not play any role, we may omit it and write . In examples, for readability, we use and instead of and , respectively.

Regarding the basic process constructs [19, 22, 27, 80]: \(\otimes \)/⅋ type send and receive; \(\oplus \)/\(\& \) type choice and offer (in examples we use labelled choice); !/? type replicated servers and their invocation; and \(\forall /\exists \) type receive and send of types, implementing polymorphic processes.

Fig. 1. Typing Rules I: Second-Order CLL.

Fig. 2. Typing Rules II: Induction and Coinduction.

Fig. 3. Typing Rules III: Affinity.

Coinductive types are introduced by rule [Tcorec]. It types corecursive processes , with parameters \(z,\boldsymbol{w}\) bound in P, that are instantiated with the arguments \(x, \boldsymbol{y}\) (free in the process term). By convention, the coinductive behaviour, of type \(\nu Y. \ A\), of a corecursive process is always offered in the first argument z. According to [Tcorec], to type the body P of a corecursive process, the map \(\eta \) is extended with a coinductive hypothesis binding the process variable X to the typing context \(\varDelta , z:Y; \varGamma \), so that when typing the body P of the corecursion we can appeal to X, which intuitively stands for P itself, and recover its typing invariant. Crucially, the type variable Y is free only in z : A. This causes corecursive calls to be always applied to names \(z'\) that hereditarily descend from the initial corecursive argument z, a necessary condition for strong normalisation (Theorem 3), and morally corresponds to only allowing corecursive calls on “smaller” argument sessions (of inductive type).

Rule [Tvar] types a corecursive call \(X(z, \boldsymbol{w})\) by looking up in \(\eta \) for the corresponding binding and renaming the parameters with the arguments of the call. Inductive and coinductive types are explicitly unfolded with [T\(\mu \)] and [T\(\nu \)].

To simplify the presentation in program examples, we omit explicit unfolding actions, and write inductive and coinductive type definitions with equations of the form and instead of \(A = \mu X. \ f(X)\) and \(B = \nu X. \ f(X)\), respectively. Similarly, we write corecursive process definitions as \(Q(x, \boldsymbol{y}) =f(Q(-))\) instead of , while of course respecting the constraints imposed by typing rules [Tvar] and [Tcorec].

Affinity. Affinity is important to model discardable linear resources, and plays an important role in \(\textsf{CLASS}\). An affine session can either be used as a linear session or discarded. The typing rules for the affine modalities are in Fig. 3. Affine sessions are introduced by rule [Taffine], which promotes a linear session a : A to an affine session \(a:\wedge A\). It types , which provides an affine session at a and continues as P, and follows the structure of a standard promotion rule.

A session a may be promoted to affine if it only depends on resources that can be disposed, i.e. resources that satisfy some form of weakening capability, namely: coaffine sessions \(b_i\) of type \(\vee B_i\), that can be discarded; full cell usages \(c_i\) of type with , that can be released; and unrestricted sessions in \(\varGamma \), which are implicitly ?-typed. The dependencies of an affine object on coaffine or full cell objects are explicitly annotated as \(\boldsymbol{b},\boldsymbol{c}\) in the process term, to instrument the operational semantics, but we often omit them and simply write .

The coaffine endpoint \(\vee A\) of an affine session, dual of \(\wedge \overline{A}\), has two operations: use and discard. Rule [Tuse] types a process that uses a coaffine session a and continues as Q; it is a dereliction rule. [Tdiscard] types the process that discards a coaffine session a; it is a weakening rule.

Shared Mutable State. Shared state is introduced in \(\textsf{CLASS}\) by typed constructs that model mutex memory cells, together with associated cell operations allowing their use by client code, defined by the typing rules in Fig. 4.

At any moment a cell may be either full or empty, akin to the MVars of Concurrent Haskell [45]. A full cell on c, written , is typed by rule [Tcell]. Such a cell stores an affine session of type \(\wedge A\), implemented at a by P. All objects stored in cells are required to be affine, so that memory cells may always be safely disposed without causing memory leaks. An empty cell on c, of type , and written , is typed by rule [Tempty].

Fig. 4. Typing Rules IV: Reference Cells.

Client processes manipulate cells via take, put and release operations. These operations apply to names of cell usage types - (full cell usage) and (empty cell usage) - which are dual types of and , respectively. At any given moment, a client thread owning a -typed usage to a cell may execute a take operation, typed by rule [Ttake]. The take operation waits to acquire the cell mutex c, and reads its contents into parameter a, the linear (actually coaffine, of type \(\vee A\)) usage for the object stored in the cell; the cell becomes empty, and execution continues as Q.

It is the responsibility of the taking thread to put some value back in the empty cell, thus releasing the lock and causing the cell to transition to the full state. The put operation is typed by [Tput]; the stored object a, implemented by \(Q_1\), is required to be affine, as specified in the premise \(a:\wedge \overline{A}\).

Hence a cell flips from full to empty and back; [Ttake] uses the cell c at type, and its continuation (in the premise) at type; symmetrically, [Tput] uses the cell c at type, and its continuation (in the premise) at type.

The release operation allows a thread to manifestly drop its cell usage c. Release is typed by [Trelease] (cf. coweakening [35]); a usage may only be released in the unlocked state . When, for some cell c, all the owning threads release their usages, which eventually happens in well-typed programs, the cell c gets disposed of, and its (affine) contents safely discarded.

Our memory cells are linear objects, with a linear mutable payload, which are never duplicated by reduction or conversion rules. However, in \(\textsf{CLASS}\), multiple cell usages may be shared between concurrent threads, which compete to take and use the cell in interleaved critical sections. Such aliased usages may be passed around and duplicated dynamically, changing the sharing topology at runtime.

Sharing of cell usages is logically expressed in our system by the typing rules in Fig. 5. Cocontraction, introduced in Differential Linear Logic (DiLL) [35], allows finite multisets of linear resources to safely interact in cut-reduction, resolving concurrent sharing into nondeterminism, as required here to soundly model memory cells and their linear concurrent usages. Rule [Tsh] interprets cocontraction with the construct , and types the sharing of the cell usage c between the concurrent threads P and Q.

Fig. 5. Typing Rules V: State Sharing.

Contrary to cut, the share construct is not a binding operator for c. The shared usage is free in the conclusion of the typing rule, permitting c to be shared among an arbitrary number of threads, by nested iterated use of [Tsh]. In [Tsh], P and Q share only the single mutex cell c, since the linear context is split multiplicatively, just like [Tcut] w.r.t. binary sessions. This condition comes from the DiLL typing discipline, and is important to ensure deadlock freedom.

While [Tsh] types the sharing of a full (unlocked) cell usage, the symmetric rules [TshL] and [TshR] type the sharing of an empty (locked) cell usage. We may verify that, for every cell c in a well-typed process, at most one unguarded operation on c may be using the empty-usage type; all the remaining unguarded operations on c must be using the full-usage type. This implies that, at runtime, only one thread may own the lock for a given (necessarily empty) cell and execute a put to it, which will bring the cell back to full and release its lock; the other threads must be either attempting to take, or releasing the reference.

Working together, the sharing typing rules ensure that, in any well-typed cell sharing tree, at most one thread at any time may be actively using a cell (in the locked empty state) and putting to it, thus guaranteeing mutual exclusion, while satisfying Progress (Theorem 2), which in turn ensures deadlock absence, even in the presence of the crucially blocking behaviour of the take operation.

2.1 Operational Semantics

We now define \(\textsf{CLASS}\) operational semantics, which is given by a structural precongruence relation \(\le \) that captures static relations on processes, essentially rearranging them, and a reduction relation \(\rightarrow \) that captures process interaction.

Definition 3

(\(P \equiv Q\) and \(P \le Q\)). Structural congruence \(\equiv \) is the least congruence on processes closed under \(\alpha \)-conversion and the \(\equiv \)-rules in Fig. 6. Structural precongruence \(\le \) is the least precongruence on processes including \(\equiv \) and closed under \(\alpha \)-conversion and the \(\le \)-rules in Fig. 6.

The basic rules of \(\equiv \) essentially reflect the expected static laws, along the lines of the structural congruences / conversions in [22, 80]. The binary operators forwarder, cut and share are commutative ([comm]). The set of processes modulo \(\equiv \) is a commutative monoid with binary operation given by parallel composition and identity given by inaction ([par]). Any two static constructs commute, as expressed by the laws [CM]-[ShC!]. Furthermore, we can distribute the unrestricted cut over all the static constructs as expressed by law [D-C!X], where \(*\) stands for either a mix, linear or unrestricted cut or a share.

Fig. 6. Structural congruence \(P \equiv Q\) and precongruence \(P \le Q\).

The commuting conversions [ShTake] and [ShPut] allow take and put operations on cell usages to commute with a share construct. Rule [ShTake] picks the take that occurs on the left argument; however, since share is commutative, a right-biased version of [ShTake] is admissible. Using [ShTake], any of the two possible interleavings for two concurrent takes may be nondeterministically picked via \(\le \). Indeed, we express \(\le \) as a precongruence because it introduces nondeterminism, and does not express a behavioural equivalence as \(\equiv \) does. N.B.: Although one could easily formulate a confluent version of the \(\textsf{CLASS}\) semantics, using explicit sums as in [13, 35, 65, 66], we prefer in this paper to focus on the expressiveness of \(\textsf{CLASS}\) as a programming language and on its deadlock and livelock absence properties, adopting a nondeterministic reduction relation.

In [ShPut] only a put, in the -typed premise of [TshL], may be propagated up and eventually update the cell, causing it to transit back to the full state. Hence, take operations originating in the typed premise of [TshR] will be blocked, waiting until such (unique) put propagation occurs. Algebraically, rule [ShRel] expresses that the release operation is the identity for share composition; we orient it as a precongruence to ensure type preservation.

Definition 4

(Reduction \(\rightarrow \)). Reduction \(\rightarrow \) is defined by the rules of Fig. 7.

We let \(\xrightarrow []{*}\) stand for the reflexive-transitive closure of \(\rightarrow \). Reduction includes the set of principal cut conversions, i.e. the redexes for each pair of interacting constructs. It is closed by structural precongruence ([\(\le \)]) and in rule [cong] we consider that \(\mathcal C\) is a static context, i.e. a process context in which the hole is covered only by the static constructs mix, cut and share.

Fig. 7. Reduction \(P \rightarrow Q\).

Operationally, the forwarding behaviour is implemented by name substitution [23] ([fwd]). All the other conversions apply to a principal cut between two dual actions. Reduction rules for the basic session constructs that interpret Second Order Linear Logic and recursion are the expected ones [22, 27, 81], along predictable lines. For readability, we omit the type declarations in the cuts, as they do not actually play any role in reduction.

We comment on the rules concerning affinity. The interaction between an affine session and a use operation is defined by reduction rule [\(\wedge \vee \)u], where a cut on \(a:\wedge A\) between and reduces to a cut on a : A between the continuations P and Q. The reduction between an affine session and a discard operation is defined by [\(\wedge \vee \)d]. A cut between and reduces to a mix-composition of discards (for the coaffine sessions \(\boldsymbol{b}\)) and releases (for the cell usages \(\boldsymbol{c}\)), cf. [6, 20]. In the corner case where \(\boldsymbol{b}\) and \(\boldsymbol{c}\) are empty, the right-hand side of [\(\wedge \vee \)d] simply degenerates to inaction (the identity of mix).

The reductions for the mutable state operations are fairly self-explanatory. In rule [ ], a cut between a full mutex cell and a release operation reduces to a process that discards the affine cell contents, cf. rule [\(\wedge \vee \)d]. In rule [ ], a cut on between a full cell and a take operation reduces to a process with two cuts, both composed with the continuation \(\{a/a'\}Q\) of the take. The outer cut on \(a:\wedge A\) composes with the stored affine session, which was successfully acquired by the take operation. The inner cut on composes with the reference cell c, which has become empty in the reductum. Finally, in rule [ ], a cut on session between an empty cell and a put operation reduces to a cut on session between a full cell, which now stores the session that was put, and the continuation of the put process. Notice that the locking/unlocking behaviour of cells is simply modelled by rewriting of the process terms, from cell to empty and back, as typical in process calculi.

3 Type Safety and Strong Normalisation

In this section we state and give proof sketches for our main results of type safety and strong normalisation. Full proofs may be found in [65].

Type Preservation. The semantics of \(\textsf{CLASS}\) is defined by a set of precongruence \(\le \) and reduction \(\rightarrow \) rules on process terms. Theorem 1 shows that these relations preserve typing, and gives substance to our PaT approach, showing that every \(\le \) and \(\rightarrow \) rule corresponds to a conversion on type derivations/proofs.

Theorem 1 (Type Preservation)

Suppose \(P \vdash _\eta \varDelta ; \varGamma \). (1) If \(P \le Q\), then \(Q \vdash _\eta \varDelta ; \varGamma \). (2) If \(P \rightarrow Q\), then \(Q \vdash _\eta \varDelta ; \varGamma \).

Proof

By induction on derivations for \(P \le Q\) (resp. \(P \rightarrow Q\)), we verify that all the rules of \(\le \) (Def. 3) (resp. \(\rightarrow \) (Def. 4)) are type preserving.

Progress. We prove the progress property for well-typed \(\textsf{CLASS}\) processes. The following notion of a live process is useful. A process P is live if and only if \(P = \mathcal C[Q]\), for some static context \(\mathcal C\) (the hole lies within the scope of static constructs mix, cut and share), where Q is an active process (a process with a topmost action prefix, such as a receive or a take, or a forwarder). We first show that a live well-typed process either reduces or offers an interaction with its environment on a free name. The following observability predicate (cf. [70]) characterises the interactions of a process with its environment.

Definition 5

(\(P \downarrow _{x}\)). The predicate \(P \downarrow _{x}\) is defined by rules of Fig. 8.

The predicate \(P \downarrow _{x}\) holds if P offers an immediate interaction (unguarded action) on the free name x. We can observe the subject of an action (rule [act]) and both names x, y of a forwarder. The definition of \(P \downarrow _{x}\) is closed under \(\le \) and propagates observations over the various static operators. Cut-bound names are not free, hence cannot be observed. Share propagates all the observations x for which \(x \ne y\); moreover, by applying the \(\le \) rules [ShTake], [ShRel] or [ShPut] via [\(\le \)], an interaction on x may be observed. We have

Lemma 1 (Liveness)

Let \(P \vdash _\emptyset \varDelta ; \varGamma \) be live. Either \(P \downarrow _{x}\) or P reduces.

Proof

(Sketch) By induction on a derivation for \(P \vdash _\emptyset \varDelta ; \varGamma \), along the lines of [27]. To handle case [Tcut]: both \(P_1\) and \(P_2\) are live, since both type with a nonempty linear typing context, hence we can apply the induction hypothesis (i.h.) to both premises of [Tcut]: either (i) one of \(P_1\) and \(P_2\) reduces or (ii) both \(P_1 \downarrow _{x_1}\) and \(P_2 \downarrow _{x_2}\). If (i), then P reduces. Case (ii) follows because, crucially, \(P_1\) and \(P_2\) synchronise through a single private session y: either \(x_1 \ne y\) or \(x_2 \ne y\), in which case we can observe either \(x_1\) or \(x_2\); or \(x_1 = x_2 = y\), in which case we can trigger a reduction, by applying \(\le \) rules to P in order to exhibit a principal cut. For case [Tsh]: since \(P_1\) and \(P_2\) are live, we apply the i.h. to both premises. The interesting case occurs when \(P_1 \downarrow _{x_1}\) and \(P_2 \downarrow _{x_2}\). Cocontraction implies that \(P_1\) and \(P_2\) share the single usage y, so if \(x_1 \ne y\) or \(x_2 \ne y\), we have either \(P \downarrow _{x_1}\) or \(P \downarrow _{x_2}\). If both \(x_1 = x_2 = y\), then we derive \(P \downarrow _{y}\): the observation corresponds to either a take or a release operation on y, which we commute up with [ShTake] or [ShRel]. For [TshL], we apply the i.h. to the premise \(P_1\), which types with an empty usage on y. If \(P_1 \downarrow _{y}\), then \(P \downarrow _{y}\), the observation corresponding to a put operation on y, which we commute up with [ShPut]. Symmetrically for [TshR].

Fig. 8. Observability Predicate \(P \downarrow _{x}\).

Theorem 2 (Progress)

Let \(P \vdash _\emptyset \emptyset ; \emptyset \) be a live process. Then, P reduces.

Proof

Follows from Lemma 1 since \(\textsf{fn}(P) =\emptyset \).

Remarkably, our proof of Theorem 2 leverages deep properties of Linear Logic, in particular the structure of the linear cut and co-contraction, allowing us to prove deadlock absence, even in a language with primitives exhibiting blocking behaviour, avoiding the use of extra mechanisms [10, 25, 31, 33, 47, 48, 76].

Strong Normalisation. Establishing strong normalisation (SN) for concurrent process calculi is usually fairly challenging, particularly in the presence of name passing, recursion and higher-order shared state [16, 32, 49, 69, 83]. For example, with reference cells one may express general recursion with Landin’s knot and, in general, circular chains of references that may lead to divergence. However, our linear type system uses primitive recursion and corecursion, and excludes cyclic dependencies through state or session-based interaction, allowing strong normalisation, and therefore livelock absence, to hold. Our proof relies on defining suitable linear logical relations, cf. [21, 62, 72], adapted to Classical Linear Logic [1, 8, 38], and crucially relies on a notion of reducibility up to interference, which imposes stronger properties on the interpretation of the state modalities and allows the inductive proof of the Fundamental Lemma 2 to go through in the usual way. To this end, we extend our basic language with auxiliary constructs and , which denote memory cells subject to interference from concurrent writers, allowed to take terms from the set \(S \subseteq \{P \;|\;P \vdash _\eta a: \wedge A\}\). The intuition is that a take on the cell may always read any object from S, due to interference. We also consider the additional (nondeterministic) reduction rules (1)-(3), where in (1) and (2) we assume \(P\in S\).

figure ch

In this section, we thus consider reduction \(P\rightarrow Q\) to be the relation defined in Fig. 7, extended with these rules. When a take or a release interacts with , an arbitrary element P from the set S may be picked (rules (1) and (2)). In rule (3), a put interacts with the empty interference cell, causing it to evolve to the corresponding full one. The following notion is also useful. A process P is S-preserving on x if or , and

  • if and \(Q \in S\), then is S-preserving on x.

  • if , then \(P_1 \in S\) and \(P_2\) is S-preserving on x.

A set of processes T is S-preserving on x if and only if, for all \(P \in T\), P is S-preserving on x. Intuitively, a process P that uses a cell x is S-preserving on x if it only puts values from S in the cell x. The notion of S-preservation, parametric on any S, makes explicit the conditions needed for safe interaction with a memory cell, subject to interference, while ensuring a state invariant S on the cell contents. We now introduce the logical predicate.

Definition 6

(Logical Predicate \(\llbracket x:A \rrbracket _\sigma \)). By induction on the type A, we define the sets \(\llbracket x:A \rrbracket _\sigma \) as shown in Fig. 9, such that and are \(\llbracket -:\wedge \overline{A} \rrbracket \)-preserving on x. The definition is direct for the positive types A; for negative types, it is given by orthogonality.

The definition relies on Girard’s notion of orthogonality  [37]. Duality promotes succinctness in our definition: for negative types A, \(\llbracket x:A \rrbracket _\sigma \) is defined as the orthogonal of the predicate for its dual \(\overline{A}\) (positive) type. To handle polymorphic and inductive types, the logical predicate is indexed by a map \(\sigma \) that assigns reducibility candidates \(R[x:A]\) to type variables. A reducibility candidate \(R[x:A]\) is any set S of processes \(P \vdash _\emptyset x:A\) such that P is SN and \(S=S^{\bot \bot }\). We let \(\mathcal R[-:A]\) be the set of all reducibility candidates \(R[x:A]\) for some name x. The definition relies on a congruence relation \(\approx \) extending \(\le \) with a complete set of commuting conversions, along standard lines [22, 27, 80]. It essentially plays the role of the labelled transition system in the proof of strong normalisation given in [62].
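Schematically, and glossing over the interference-sensitive refinements for the state modalities (the official clauses are those of Fig. 9), the orthogonal of a set S of processes offering x:A may be read as

\[
S^{\bot} \;=\; \{\, Q \vdash _\emptyset x:\overline{A} \;\mid\; \text{the cut composition of } P \text{ and } Q \text{ on } x \text{ is SN, for every } P \in S \,\},
\]

so that a reducibility candidate is a biorthogonally closed set of SN processes, as stated above.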

We extend the logical predicate to typing judgements \(P \vdash _\eta \varDelta ; \varGamma \) by universal closure over the typing context and \(\sigma \).

Definition 7

(Extended Logical Predicate \(\mathcal L \llbracket \vdash _\eta \varDelta ; \varGamma \rrbracket _\sigma \)). We define \(\mathcal L \llbracket \vdash _\eta \varDelta ; \varGamma \rrbracket _\sigma \) inductively on \(\varDelta , \varGamma \) and \(\eta \) as the set of processes \(P \vdash _\eta \varDelta ; \varGamma \) s.t.

figure cv

We now state the Fundamental Lemma (Lemma 2), from which Theorem 3 follows.

Fig. 9. Logical Predicate \(\llbracket x:A \rrbracket _\sigma \).

Lemma 2 (Fundamental Lemma)

If \(P \vdash _\eta \varDelta ; \varGamma \), then \(P \in \mathcal L \llbracket \vdash _\eta \varDelta ; \varGamma \rrbracket _\sigma \).

Proof

(Sketch) By induction on \(P \vdash _\eta \varDelta ; \varGamma \). For cases [Tcell] and [Tempty], we show that and respectively simulate (where \(P \in S\)) and , when composed with any usages that are S-preserving on c. To handle one of the most challenging cases, [Tsh], we prove, for all S and all processes \(P_1\) and \(P_2\) that are S-preserving on x, that (1) is simulated by (2). This allows us to infer that if (2) is SN, then so is (1). When \(S=\llbracket a:\wedge \overline{A} \rrbracket _\sigma \), the i.h. yields SN, hence we conclude that (2) is SN. Similarly for [TshL], [TshR].

Theorem 3 (Strong Normalisation)

If \(P \vdash _\emptyset \emptyset ;\emptyset \), then P is SN.

4 Typeful Concurrent Programming in CLASS

In this section, we discuss the expressiveness of \(\textsf{CLASS}\)’s type system, going through a sequence of illustrative realistic concurrent programming idioms.

Sharing a Linear Session. Our first example illustrates how objects subject to a linear usage protocol and satisfying an invariant may be shared among multiple concurrent clients by serialising linear usages with a mutex cell, alternating ownership between the cell and the clients at the invariant state, a commonly used discipline to implement and reason about resource sharing (see, e.g., [9, 17, 39]). We illustrate with a basic toggle switch with two states - \(\textsf{On}\) and \(\textsf{Off}\) - whose resource invariant is the state \(\textsf{Off}\), and with two operations \(\mathsf {\# turnOn}\) and \(\mathsf {\# turnOff}\) that must be executed in strict linear sequence (Fig. 10). The toggle protocol, defined by type \(\textsf{Off}\), offers the single option \(\#\textsf{turnOn}\), after which it evolves to \(\textsf{On}\). Conversely, type \(\textsf{On}\) offers the single option \(\#\textsf{turnOff}\), after which it evolves to an affine \(\textsf{Off}\). The toggle process at t is defined by two mutually corecursive processes \(\textsf{on}(t)\) and \(\textsf{off}(t)\), which define the expected behaviour and comply with types \(\textsf{On}\) and \(\textsf{Off}\).

Fig. 10. Sharing a Linear Toggle Switch.

Process \(\textsf{main}()\) introduces a mutex cell c storing an affine toggle object at the invariant type \(\wedge \textsf{Off}\). It then shares it with two concurrent clients; each acquires the toggle at the invariant type and uses the linear protocol independently. After their linear interaction, they put the toggle back; the type system ensures that this can only happen when the invariant (given by the cell type) holds. When they are done, both clients release their respective usages of c, which ultimately leads to the cell being deallocated and the (affine) toggle being discarded.

We have also developed \(\textsf{CLASS}\) code for a generic (polymorphic) wrapper factory that, for any affine corecursive protocol, generates a wrapper to a general invariant-based sharing interface.

Fig. 11. A Linked List with an Append In-Place Operation.

Linked Lists, Update In-Place. In this example, we show how inductive/coinductive types combine harmoniously with the \(\textsf{CLASS}\) state modalities to type linked data structures with memory-efficient in-place updates. Specifically, we show how to code a linked list, parametric on the type A of its affine values, with an update-in-place append (Fig. 11). An object of type \(\textsf{SList}(A)\) is a (full) cell storing a \(\textsf{List}(A)\) object. An object of type \(\textsf{List}(A)\) is a session that either selects \(\#\textsf{Null}\) (the list is empty), in which case it closes; or selects \(\#\textsf{Next}\), in which case it sends an affine session \(\wedge A\) representing the head element and continues as the tail \(\textsf{SList}(A)\). Process \(\textsf{nil}(l)\) defines an empty list at l, and process \(\textsf{cnext}(a,c,l)\) constructs a nonempty list l with head a and tail c. For example, a list with elements a, b stored at is represented

figure de

Process \( \textsf{append}(c,l', c') \vdash c:\overline{\textsf{SList}(A)}, l':\overline{\textsf{List}(A)}, c':\textsf{SList}(A) \) produces on \(c'\) the result of appending \(l'\) (in place) to c. It takes the list l stored in c and then performs case analysis on l. If l selects \(\#\textsf{Null}\), it simply replaces the previous null node of c by \(l'\) and forwards the updated cell c to the output \(c'\). This corresponds to the recursion base case, in which the list l is empty.

If l selects \(\#\textsf{Next}\), in which case l has at least one element, we receive at l the node element \(a:\vee \overline{A}\), corecursively append \(l'\) to the tail \(l:\overline{\textsf{SList}(A)}\), and put back in c the element a and the tail x “returned” by the call. Notice that x is exactly the tail cell (by forwarding), which was passed along linearly. Remarkably, the \(\textsf{append}(c,l', c')\) operation just defined may be safely applied concurrently to the same shared linked list, with the final result being the correct one (some serialisation of the appends), without deadlocks or livelocks. It is also interesting to see how the type system forbids a list from being appended to itself.
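For intuition only, here is an informal Haskell analogue (not a translation of the \(\textsf{CLASS}\) code) in which the mutex cell is played by an MVar and append follows the same take/case/put discipline:

  import Control.Concurrent.MVar

  -- An informal analogue of SList(A): a mutex cell holding a list node whose
  -- tail is again a cell, so that updates happen in place.
  data List a  = Null | Next a (SList a)
  type SList a = MVar (List a)

  -- Append the node l' to the list stored in c, in place.
  append :: SList a -> List a -> IO ()
  append c l' = do
    l <- takeMVar c                          -- take: lock the cell and read its node
    case l of
      Null      -> putMVar c l'              -- base case: replace the Null node by l'
      Next a tl -> do append tl l'           -- recurse on the tail cell
                      putMVar c (Next a tl)  -- put the node back, releasing the lock

As in the description above, locks are always acquired following the list order, so concurrent appends serialise along the structure without deadlocking.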

We have also developed many other in-place operations on linked data structures, such as insertion sort, and other kinds of linked structures such as queues and binary search trees. In the next example, we discuss a shared queue ADT with a fine-grained locking discipline and O(1) enqueue and dequeue operations.

A Concurrent Shareable Buffered Channel. We illustrate increased degrees of sharing in a mutable data structure with several references pointing to different parts of it; how the \(\textsf{CLASS}\) type system may express interfaces that offer different client views for using a stateful object; and the use of polymorphism to implement information hiding, ensuring that client code can never break the representation invariants of stateful ADTs, which is particularly challenging when aliasing and sharing are involved.

More concretely, we consider a shareable buffered channel (Fig. 12), and provide a realistic and efficient implementation [56] based on a message queue represented by a linked list with update-in-place (cf. Section 4 above) and two independent pointers: one to the head of the list, used for receiving, and another to the tail, used for sending. The operations are executed in O(1) time. Moreover, we provide a typing with two separate send and receive views, which may be used by an arbitrary number of concurrent clients. In particular, when the list is nonempty, both send and receive run in true concurrency (asynchronously), without blocking each other, thanks to fine-grained locking.

Fig. 12. A Concurrent Shareable Buffered Channel.

The buffered channel type \(\textsf{BChan}(M)\), where M is the type of messages, offers two views: \(\textsf{SendT}(M)\) and \(\textsf{RecvT}(M)\), interfaces for sender and receiver endpoint clients. These views are exposed with a par (⅋), since they share an underlying resourceful structure. In fact, they could not be exported using a tensor (\(\otimes \)); it is interesting to notice how the type system imposes these constraints, important to ensure deadlock freedom. The representation type of both views is (see Section 4), hidden behind the SV and RV existential types [29, 58]; sending clients use a cell storing a reference to the tail node of the queue; receiving clients use a cell storing a reference to the head node of the queue.

Clients use the buffer through references of abstract types SV and RV and replicated menus \(!\textsf{MenuS}(M,SV)\) and \(!\textsf{MenuR}(M,RV)\). Both menus export the options \(\#\textsf{Share}\) and \(\#\textsf{Free}\) to allow sharing and release of the views. To send, a client selects \(\#\textsf{Send}\), sends its handle (of opaque type SV) and the message to send, and receives the (linear) handle back. In this implementation, receive is non-blocking, so operation \(\#\textsf{Recv}\) returns a \(\textsf{Maybe}(\wedge M)\) value: the client receives \(\#\textsf{Nothing}\) if the buffer is empty, or \(\#\textsf{Just}\) followed by a message a otherwise. Later in this section we discuss the implementation, in \(\textsf{CLASS}\), of (Hoare-style) monitors with conditions, which would allow a blocking receive to be implemented.

Process \(\textsf{msend}(me)\) implements the \(\#\textsf{Send}\) “method”. It first receives the sending view handle (of concrete type Rep), which is a cell with the \( tailptr \), and the message a to be sent. Then, a new cell \(c'\) containing nil(l) is created, and the current tail of the list c is updated with a new node storing a and pointing to \(c'\). Finally, the \( tailptr \) cell is updated to point to the new tail node \(c'\) of the linked list.
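For intuition only, an informal Haskell analogue of this design (essentially the classic MVar-based channel, not a translation of the \(\textsf{CLASS}\) code) keeps separate mutex cells for the read and write pointers into a linked stream of node cells:

  import Control.Concurrent.MVar

  type Stream a = MVar (Item a)        -- a node cell: empty until a message arrives
  data Item a   = Item a (Stream a)    -- a message plus the pointer to the next node
  data BChan a  = BChan (MVar (Stream a)) (MVar (Stream a))   -- (head ptr, tail ptr)

  newBChan :: IO (BChan a)
  newBChan = do
    hole <- newEmptyMVar
    BChan <$> newMVar hole <*> newMVar hole

  -- O(1) send: take the tail pointer, fill the current hole, install a fresh hole.
  send :: BChan a -> a -> IO ()
  send (BChan _ tailPtr) a = do
    newHole <- newEmptyMVar
    oldHole <- takeMVar tailPtr
    putMVar oldHole (Item a newHole)
    putMVar tailPtr newHole

  -- O(1) receive; blocking here, whereas the paper's #Recv returns a Maybe instead.
  recv :: BChan a -> IO a
  recv (BChan headPtr _) = do
    front <- takeMVar headPtr
    Item a rest <- takeMVar front
    putMVar headPtr rest
    pure a

As in the text, send only touches the tail pointer and receive only the head pointer, so when the queue is nonempty they proceed without blocking each other.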

Fig. 13. The Dining Philosophers.

Dining Philosophers. A resource hierarchy solution for the dining philosophers problem [34] requires forks to be acquired in a fixed order. We “encode” such an order in \(\textsf{CLASS}\) with an explicit (necessarily) acyclic structure, which informs the type system about the code’s safety. This allows us to define a correct implementation that satisfies deadlock freedom by pure linear logic typing. More concretely, we organise the forks in a linked chain defined by the inductive types and .

Any fork in the chain may be shared by an arbitrary number of philosophers; cocontraction ensures that philosophers cannot communicate between themselves via any other channel, so all synchronisation must happen via the chained forks. Furthermore, the chain can be resized and grow unboundedly to accommodate an arbitrary number of philosophers. If a philosopher successfully takes a fork \(f_i\), he can then take any fork \(f_j\), with \(i < j\); crucially, he must follow the path dictated by the chain, hence cannot acquire forks \(f_j\) with \(j < i\). In Fig. 13 we define the \(\textsf{eat}\) operation, which allows each philosopher \(P_i\), with \(0 \le i < k-1\), to eat: it acquires two consecutive forks in the chain. We also define \(\textsf{eat2}\), the specific eating operation for the symmetry breaker \(P_{k-1}\): it acquires the first fork and traverses the chain to acquire the last, with \(\textsf{takeLast}(n, x) \vdash n:\overline{\textsf{Fork}}, x:\textsf{Fork} \otimes {\textbf{1}}\).
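For intuition only, the ordering discipline can be mimicked in Haskell with one MVar per fork, each philosopher grabbing its lower-indexed fork first (an informal analogue, not a translation of the \(\textsf{CLASS}\) code):

  import Control.Concurrent
  import Control.Concurrent.MVar
  import Control.Monad (forM_, forever, replicateM)

  -- Philosopher i eats by acquiring two forks, always lower index first.
  eat :: Int -> MVar () -> MVar () -> IO ()
  eat i first second = do
    takeMVar first
    takeMVar second
    putStrLn ("philosopher " ++ show i ++ " eats")
    putMVar second ()
    putMVar first ()

  main :: IO ()
  main = do
    let k = 5
    forks <- replicateM k (newMVar ())
    forM_ [0 .. k - 1] $ \i -> forkIO $ forever $
      if i < k - 1
        then eat i (forks !! i) (forks !! (i + 1))   -- follow the chain order
        else eat i (forks !! 0) (forks !! (k - 1))   -- symmetry breaker: fork 0 first
    threadDelay 1000000                              -- let the philosophers run briefly

The fixed chain order plays the role of the resource hierarchy: forks are always acquired left-to-right, so no cyclic wait can arise.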

Fig. 14. A Barrier for N Threads.

A Barrier for N Threads. We describe in Fig. 14 a \(\textsf{CLASS}\) implementation of a simple barrier, parametric on the number N of threads to synchronise. We find it interesting to model the “real” code shown in the Rust reference page for \(\mathsf {std{:\,\!:}sync{:\,\!:}Mutex}\) [46]. The code uses if-then-else and primitive integers, as offered in our implementation, which could also be defined as idioms in \(\textsf{CLASS}\). We represent a barrier by a mutex cell storing a pair consisting of an integer n, holding the number of threads that have not yet reached the barrier, and a stack s of waiting threads, each represented by a session of affine type \(\wedge \bot \) (so they will be safely aborted if at least one thread fails to reach the barrier).

The type \(\textsf{Barrier}\) of the barrier is , where \(\textsf{BState} \triangleq \textsf{Int} \otimes \wedge \textsf{List}(\wedge \bot )\). The barrier is initialised with \(n=N\) threads and an empty stack, so that the invariant \(n+depth(s)=N\) holds during execution. Each \(\textsf{thread}(c;i)\) acquires the barrier c and checks whether it is the last thread to reach the barrier (if \(n == 1\)): in this case, it awakes all the waiting threads (\(\textsf{awakeAll}(w_s)\)) and resets the barrier. Otherwise, it updates the barrier by decrementing n and pushing its continuation onto the stack (the continuation for thread i just prints “finished”). The process \(\textsf{main}() \vdash \emptyset \) creates a new barrier c and spawns N threads, each labelled by a unique id i. Again, our type system statically ensures that the code does not deadlock or livelock.
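For intuition only, the same protocol can be mimicked in Haskell with an MVar holding the pair of the count and the stack of blocked waiters (an informal analogue, not a translation of the \(\textsf{CLASS}\) code):

  import Control.Concurrent
  import Control.Concurrent.MVar
  import Control.Monad (forM_, replicateM_)

  -- A barrier cell stores (threads still to arrive, stack of blocked waiters).
  type Barrier = MVar (Int, [MVar ()])

  newBarrier :: Int -> IO Barrier
  newBarrier n = newMVar (n, [])

  await :: Barrier -> IO ()
  await b = do
    (n, waiters) <- takeMVar b
    if n == 1
      then do forM_ waiters (\w -> putMVar w ())   -- last arrival: awake everyone
              putMVar b (length waiters + 1, [])   -- reset the barrier
      else do w <- newEmptyMVar
              putMVar b (n - 1, w : waiters)       -- register, unlock, and block
              takeMVar w

  main :: IO ()
  main = do
    let nThreads = 4
    b    <- newBarrier nThreads
    done <- newEmptyMVar
    forM_ [1 .. nThreads] $ \i -> forkIO $ do
      putStrLn ("thread " ++ show i ++ " reached the barrier")
      await b
      putStrLn ("thread " ++ show i ++ " finished")
      putMVar done ()
    replicateM_ nThreads (takeMVar done)

The invariant from the text, count plus number of waiters equals N, is maintained across every await call, including the reset performed by the last arrival.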

Fig. 15. Implementing a Counter Monitor with Await / Notify.

A Hoare Style Monitor. A Hoare-style monitor is a well-known and powerful programming abstraction [39], allowing concurrent operations on shared data to be coordinated in a sound way, so that the data always satisfies a correctness invariant. The key idea is that concurrent client threads use the monitor lock to access the protected state in mutual exclusion, but may also wait (via an await primitive) inside the monitor until the state satisfies specific (pre-)conditions, while transferring state ownership to other threads potentially responsible for establishing such conditions and announcing them (via a notify primitive).

We discuss a \(\textsf{CLASS}\) implementation of a monitor, sketching the main components and how they are typed (Fig. 15). We consider a counter with value n, with increment \(\#\textsf{Inc}\) and decrement \(\#\textsf{Dec}\) operations, and subject to the invariant \(n\ge 0\). The type of the counter \(\textsf{CounterI}\) exposes two separate, coinductively defined, client interfaces \( \textsf{DecI}\) and \(\textsf{IncI}\) for decrementing and incrementing.

While the \(\#\textsf{Inc}\) operation is synchronous, the \(\#\textsf{Dec}\) operation is always called asynchronously by passing a continuation (of type \(\textsf{ContDec}\)). This allows decrementers to wait inside the monitor for condition \(\textsf{NZ}\) (\(n>0\)) when \(n=0\). The condition \(\textsf{NZ}\) is represented by a wait queue of type \(\textsf{WaitQ}\). The representation type of the monitor (\(\textsf{Rep}\)) holds the counter value and the wait queue. Each node in the wait queue stores information, of type \(\textsf{ContDecW}\), for the waiting thread. Each such \(\textsf{ContDecW}\) object stores (1) the pending action on the internal monitor state (of type \(\wedge \textsf{Rep} \multimap \wedge \textsf{Rep})\), to be executed after await returns, and (2) a callback to the continuation provided by the external client in the asynchronous call (of type \(\textsf{DecI} \multimap \bot \)).

The \(\textsf{awaitNZ}(m,n,w,cc)\) process implements the monitor wait operation, used in the \(\#\textsf{Dec}\) operation. It receives the (empty) cell usage m to the monitor state, the integer value n (where \(n=0\)), a reference w to the wait queue, and the continuation cc; it then pushes a new node onto the queue, puts the monitor state back, unlocking the cell m, and releases m. The \(\textsf{incloop}(iv,m)\) process implements the counter \(\textsf{IncI}\) interface. The call to \(\textsf{notifyNZ}(m,s,m')\) after incrementing n will cause a waiting \(\textsf{DecI}\) thread (if any) to be awoken, and continue by applying the pending action to the \(\textsf{Rep}\) state s, in which \(n>0\) holds, before passing the updated state \(m'\) to the \(\textsf{incloop}\) recursive call. Affinity plays a key role, allowing all data structures, including waiting continuations, to be safely discarded at the end of any computation. We have only shown some code snippets here; the complete code is available in the \(\textsf{CLASS}\) distribution.
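For intuition only, the overall await/notify discipline can be mimicked in Haskell with an MVar holding the counter value and the wait queue, handing an increment directly to a blocked decrementer when one is queued (an informal analogue, not a translation of the \(\textsf{CLASS}\) code):

  import Control.Concurrent.MVar

  -- The monitor state: the counter value together with the queue of blocked
  -- decrementers, each waiting on a private MVar; invariant: value >= 0.
  type Counter = MVar (Int, [MVar ()])

  newCounter :: IO Counter
  newCounter = newMVar (0, [])

  inc :: Counter -> IO ()
  inc c = do
    (n, waiters) <- takeMVar c
    case waiters of
      []       -> putMVar c (n + 1, [])     -- nobody waiting: just increment
      (w : ws) -> do putMVar c (n, ws)      -- notify: hand the unit to a waiter
                     putMVar w ()

  dec :: Counter -> IO ()
  dec c = do
    (n, waiters) <- takeMVar c
    if n > 0
      then putMVar c (n - 1, waiters)
      else do w <- newEmptyMVar             -- await: enqueue and block
              putMVar c (n, waiters ++ [w])
              takeMVar w

Handing the unit directly to a dequeued waiter avoids lost wakeups and keeps the invariant \(n \ge 0\) at all times.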

Our examples illustrate how our system types non-trivial concurrent code, akin to real systems-level code, involving higher-order state and rich sharing and ownership transfer patterns, while ensuring deadlock and livelock freedom and memory safety. Our typing of sharing imposes that only a single bundle of linear resources may be shared by two independent threads. As our examples show, code can often be structured in that way, so that bundles of many linear resources may be safely shared by monitor-like structures, exposing informative typed interfaces.

The feasibility of \(\textsf{CLASS}\) is corroborated by our implementation [68] of a fully-fledged type checker and interpreter, developed in Java (\({\sim } 15\)k), and packaged with an extensive \(\textsf{CLASS}\) library of code and test suites (\({\sim }10\)k), including all the examples in this paper. Type checking is decidable in polynomial time, using minimal type annotations, only on cut-bound names and function parameters; the multiplicative rules are handled by lazy context splitting (cf. [41]). The type checker ensures that corecursive calls are made on a session hereditarily descended from the corecursion parameter, a condition motivated by our SN result (Theorem 3). But we also support an unsafe corecursion mode, in which this check is turned off, to type programs defined by general corecursion.
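For intuition only, lazy context splitting can be sketched as follows: the checker threads the whole linear context through the first premise of a multiplicative rule and passes the leftovers to the second, instead of guessing the split up front. This is a toy fragment with hypothetical names, not the actual \(\textsf{CLASS}\) checker:

  import qualified Data.Set as S

  type Name = String
  type Ctx  = S.Set Name                      -- the linear context (names in scope)

  data Proc = Nil | Use Name | Par Proc Proc  -- a toy process fragment

  -- check returns the names left unused, so that composition rules can thread
  -- leftovers from the first component into the second instead of guessing a split.
  check :: Proc -> Ctx -> Maybe Ctx
  check Nil       ctx = Just ctx
  check (Use x)   ctx
    | x `S.member` ctx = Just (S.delete x ctx)   -- consume x linearly
    | otherwise        = Nothing                 -- x unavailable: reject
  check (Par p q) ctx  = check p ctx >>= check q -- leftovers flow into q

At the top level, one additionally checks that the leftover context is empty, so that every linear name is used exactly once.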

The type checker supports useful type inference and reconstruction abilities. The interpreter uses the java.util.concurrent.* package [53], relying on primitives such as fine-grained locks and condition variables to emulate the synchronous interactions of \(\textsf{CLASS}\) sessions, and a cached thread pool to manage the life cycle of short-lived threads. Cell deallocation is implemented by reference counting, incremented on each share and decremented on each release. Forwarding redirects the clients of a shared cell through a chain of forwarding pointers (cf. [9]).

5 Related Work

Many resource-aware logics and type systems to tame shared state and interference have been proposed [3, 17, 18, 24, 44, 57, 60, 61, 77]. These systems apply some form of linearity and/or affinity to resourceful programming [30, 75] and to model failures/exceptions [20, 28, 36, 52, 59]. In \(\textsf{CLASS}\), linearity allows us to control state sharing, whereas affinity is useful to ensure memory safety and to represent safely finalizable or abortable computations. The hereditary session-discarding behaviour of affine sessions, modelled by rule [\(\wedge \vee \)d], is also present in other works, e.g. [6, 20, 59].

\(\textsf{CLASS}\) builds on top of the PaT correspondence with Linear Logic [22, 27, 80], the logical principles for the state modalities being inspired by DiLL [35]. Recent works [7, 9, 10, 43, 50, 64, 67] also address the problem of sharing and nondeterminism in the setting of session-based PaT. In [67], reference cells may only store replicated sessions (of type !A), thus cannot refer to linear entities such as other cells or linear sessions, and hence cannot represent many realistic programming idioms that \(\textsf{CLASS}\) does (see Section 4). Accommodating linear state in a pure PaT setting is thus addressed in this work with a novel, more fundamental approach. Furthermore, in [67], recursion is obtained via a System F-style encoding [79], which cannot model inductive stateful structures with in-place updates as we do with the \(\textsf{CLASS}\) native inductive/coinductive types.

The take/put operations of \(\textsf{CLASS}\) relate to Concurrent Haskell MVars [45] and to the acquire/release operations of the manifest sharing session-typed language \(\textsf{SILL}_S\) [9, 10]. Sharing in \(\textsf{SILL}_S\) is based on shift modalities to move from shared to linear mode and back, and on contraction principles to alias shared sessions. In \(\textsf{CLASS}\) we explore DiLL modalities and cocontraction principles [35] to express sharing of linear state and put/take protocols of mutex memory cells of invariant type. The work [10] ensures deadlock-freedom by relying on programmer-provided partial orders on events [26, 33, 55], whereas in \(\textsf{CLASS}\) deadlock-freedom follows by the same simple and general inductive argument as the corresponding result in, e.g., [22], thanks to the logical character of the new proof rules (DiLL cocontraction, which enjoys cut-elimination). The work [64] introduces the language \(\textsf{CSLL}\), extending linear logic with coexponentials that support a notion of shared state, with an approach quite different from ours. \(\textsf{CSLL}\) does not claim the ability to naturally express shared linked data structures with update in-place and fine-grained locking, as \(\textsf{CLASS}\) does. Nevertheless, it is natural to define in \(\textsf{CLASS}\) sessions exporting weakening, sharing and dereliction capabilities for linear behaviours, as in our shared buffer example.

Recently, the work [43] develops \(\lambda _\textsf{lock}\), a substructurally typed \(\lambda \)-calculus with higher-order locks, which enjoys deadlock-freedom by imposing a set of high-level principles that guarantee acyclicity of the lock-sharing topologies, and which follow in \(\textsf{CLASS}\) as a consequence of its logically motivated type system and DiLL’s cocontraction. This work also extends \(\lambda _{\textsf{lock}}\) with partial orders, with which a resource can be shared by more than two concurrent threads. None of the models in [9, 10, 43, 64] addresses livelock absence or memory safety, as \(\textsf{CLASS}\) does.

As far as we are aware, \(\textsf{CLASS}\) is a first proposal integrating shared state and recursion in a language based on PaT and Linear Logic, while guaranteeing strong normalisation. Least/greatest fixed points in Linear Logic were studied in [8], which inspired the development of recursion in [54, 73]; our treatment of recursion draws inspiration from [73]. Several works exploit the technique of logical relations to establish strong normalisation for concurrent process calculi [1, 16, 62, 69, 83]. The work [16] proves strong normalisation for a language with higher-order store using a type and effect system that stratifies memory into regions so as to preclude circularities. Interestingly, in \(\textsf{CLASS}\) such stratification is implicitly guaranteed by the acyclicity inherent to Linear Logic. Linear logical relations were studied in [21, 62, 72, 74]. In this work we recast and extend the technique to Classical Linear Logic, exploring orthogonality [1, 8, 38], and demonstrate, using a specially devised technique of interference-sensitive reducibility, how logical relations scale to accommodate shared state.

6 Concluding Remarks

We have introduced \(\textsf{CLASS}\), a session-based language founded on a propositions-as-types interpretation of Second-Order Classical Linear Logic, extended with recursion, affine types, first-class mutex cells and shared linear state. We believe that \(\textsf{CLASS}\) is the first proposal of a language of its kind to provide the following three strong properties by static typing: well-typed \(\textsf{CLASS}\) programs enjoy progress, hence never deadlock, do not leak memory and always terminate.

\(\textsf{CLASS}\) metatheoretical properties are obtained in a compositional and modular way, by leveraging the key features of propositions-as-types, from which the operational semantics and type system also emerge. In \(\textsf{CLASS}\), types and processes have a consistent proof-theoretical behaviour: typed program constructs correspond exactly to proof rules, with a proper compositional semantics via logical relations (Section 3). Programs are composed by plugging basic constructs with the cut rule, and all interaction principles are captured by principal cut reductions that act locally in proofs/type derivations (Def. 4). We also obtain an algebraic system, based on proof simplification, to reason about program (observational) equivalence, due to confluence (cf. [65]).

Besides the foundational relevance of our work, we also argued how \(\textsf{CLASS}\) can cleanly express realistic concurrent higher-order programming idioms, with many compelling examples. Any type system introduces conservative restrictions on its language, but we believe that \(\textsf{CLASS}\) offers an interesting balance between the strong properties it ensures by typing and its expressiveness. In fact, we find the \(\textsf{CLASS}\) type system helpful in guiding the development of safe concurrent idioms, with a fairly light type annotation burden. As future work, we would like to investigate several possible refinements of the \(\textsf{CLASS}\) type discipline, namely allowing finer-grained resource-access policies to be expressed, and exploring the integration of dependent and refinement types [51, 71], enhancing the logical expressiveness of the basic type system.