Article

On the Properties of the Reaction Counts Chemical Master Equation

Vikram Sunkara 1,2
1 Computational Medicine, Zuse Institute Berlin, 14195 Berlin, Germany
2 Department of Mathematics and Computer Science, Freie Universität Berlin, 14195 Berlin, Germany
Entropy 2019, 21(6), 607; https://doi.org/10.3390/e21060607
Submission received: 28 February 2019 / Revised: 10 June 2019 / Accepted: 13 June 2019 / Published: 19 June 2019

Abstract: The reaction counts chemical master equation (CME) is a high-dimensional variant of the classical population counts CME. In the reaction counts CME setting, we count the reactions which have fired over time rather than monitoring the population state over time. Since a reaction either fires or does not, the transitions of the reaction counts CME only step forward. Typically, a system has more reactions than species; as a result, the reaction counts CME is higher in dimension but simpler in dynamics. In this work, we revisit the reaction counts CME framework and its key theoretical results. We then extend the theory by exploiting the forward stepping feature of the reaction counts, decomposing the state space into independent continuous-time Markov chains (CTMC). We extend the reaction counts CME theory to derive analytical forms and estimates for the CTMC decomposition of the CME. This new theory gives new insights into solving hitting time, rare event, and a priori domain construction problems.

1. Introduction

Continuous-time Markov chains (CTMC) accurately capture the dynamics of a broad range of biochemical reaction systems. CTMCs come in two flavours: discrete state space, where the chain jumps from one state to another; and continuous state space, where the state evolves continuously yet non-differentiably. Each type has an intuitive interpretation: discrete state spaces describe the populations (counts) of the chemicals in a system, while continuous state spaces describe the concentrations of the chemicals [1,2]. It is tempting to regard concentrations simply as scaled populations, which would mean that all systems could be mapped into the continuous setting and solved there. Mathematically, however, this is not the case. It was shown that for large populations the discrete CTMC and the continuous CTMC are equivalent [3,4]. If the large population assumption is not satisfied, the discrete state space system has to be studied in its own right.
The probability distribution of a CTMC over a discrete state space in the biochemical context is found by solving the chemical master equation (CME). We first present the CME and then describe its components. The CME is written as follows,
$$\frac{\partial p(Z(t)=z)}{\partial t} = \sum_{n=1}^{N_r} a_n(z - \nu_n)\, p(Z(t) = z - \nu_n) \;-\; \sum_{n=1}^{N_r} a_n(z)\, p(Z(t) = z). \tag{1}$$
In the equation above, $Z(t)$ is a CTMC over the state space $\Omega \subseteq \mathbb{N}_0^{N_s}$. Each state is a vector of $N_s \in \mathbb{N}$ non-negative integers describing the populations of the species at time $t \ge 0$. Starting from the left-hand side, the equation states that the change in the probability of being in a state $z \in \Omega$ at time $t$ is given by the sum of two terms: the first term describes the probability flowing into the state $z$, and the second term describes the probability flowing out of the state $z$. There are $N_r \in \mathbb{N}$ reactions perturbing the population; the terms $\nu_n$ are referred to as stoichiometric vectors, and they capture the net change in population when the $n$th reaction fires. Hence, subtracting the stoichiometric vectors $\nu_n$ from $z$ gives us the set of all previous states which might transition into $z$ and "contribute" probability. The last piece of information in the equation is the rates at which the reactions fire; these are encapsulated in the propensity functions $\{a_n : \Omega \to \mathbb{R}_+,\ \text{for } n = 1, \dots, N_r\}$. In summary, the CME describes the change in the probability of observing a CTMC in a certain state, which equals the sum of the probability transitioning into the state minus the sum of the probability leaving the state.
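To make the components of Equation (1) concrete, the following is a minimal sketch (in Python; the birth-death system, the rate constants, and the truncation bound are our own illustrative assumptions, not part of the original presentation) of how the right-hand side of the CME is evaluated on a truncated state space:

```python
import numpy as np

# Birth-death system: reaction 1 is a birth (stoichiometry +1),
# reaction 2 is a death (stoichiometry -1). Rates are assumed values.
c_b, c_d = 1.0, 0.1
nu = [+1, -1]                                   # stoichiometric vectors (one species)
a = [lambda z: c_b, lambda z: c_d * z]          # propensity functions a_n(z)

def cme_rhs(p, z_max):
    """Evaluate dp/dt of the CME (1) on the truncated state space {0, ..., z_max}."""
    dp = np.zeros_like(p)
    for z in range(z_max + 1):
        for n in range(2):
            z_prev = z - nu[n]                  # predecessor state feeding z via reaction n
            if 0 <= z_prev <= z_max:
                dp[z] += a[n](z_prev) * p[z_prev]   # probability flowing into z
            dp[z] -= a[n](z) * p[z]                 # probability flowing out of z
    return dp

p0 = np.zeros(50); p0[0] = 1.0                  # point mass on population 0
print(cme_rhs(p0, 49)[:3])                      # [-1.0, 1.0, 0.0]
```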
Even though the dynamics of the derivative are fairly simple, solving the CME is a hard problem [5,6,7,8,9,10,11,12]. A special case of the CME which is truly unyielding to approximation is when a system is close to its population boundary (for example, close to zero). In this scenario, the state transitions around a state near the boundary cannot be summarised by the average transitions out of the state. This impedes most approximation methods, and even when a method is successful, its computational effort is equivalent to that of solving the Finite State Projection for the same error. An approach which has been proposed in various contexts to simplify the dynamics is to study the number of reactions fired rather than the population counts [6,13,14,15,16,17]. Because reactions cannot "un-fire", the state transitions only move forward. Furthermore, given that the system's starting population is known, the species counts in any state can be reconstructed from the reaction counts.
In this work, we reintroduce the reaction counts variant of the CME. We then explore the theoretical and structural results which emerge from its forward stepping process, and show how the reaction counts CME can be used to construct solutions to the classical CME. Lastly, we decompose the state space of the CME into independent CTMCs and give analytical forms and estimates with which to calculate their probabilities. The purpose of this work is to gain intuition and explore structural properties of the CME; the motivation is therefore theory rather than application. In light of that, we omit a discussion of numerical methods in this paper and leave this aspect to future research.

2. Reaction Counts CME

2.1. Formulation

We denote the state space of the reaction counts CME by $\Lambda \subseteq \mathbb{N}_0^{N_r}$. Each state is a vector of non-negative integers of length $N_r$, and each element of the vector represents the number of times its corresponding reaction has fired. In the CME setting, the change in state after a reaction fires is given by the stoichiometric vector, which we denoted earlier by $\nu_n$, for $n \in \{1, \dots, N_r\}$; this vector quantifies the net change in the populations after the reaction has fired. Analogously, in the reaction counts CME setting, a change in state indicates that a reaction has fired and its corresponding reaction count is incremented by one. Therefore, the stoichiometric vector of the $n$th reaction is simply the identity vector $\mathbb{1}_n$; the vector is zero in all but the $n$th position, where it is one.
We bridge the species counts CME and the reaction counts CME with the mapping $\Gamma : \Omega \times \Lambda \to \Omega$, that is, for $(x_0, r) \in \Omega \times \Lambda$,
$$\Gamma(x_0, r) := x_0 + [\nu_1, \nu_2, \dots, \nu_{N_r}]\, r = x_0 + \sum_{n=1}^{N_r} \nu_n\, r_n. \tag{2}$$
The mapping above links the reaction counts CME to the species counts CME by stating that, given the starting state $x_0$ and the reactions which have fired, the current state in the species counts is the starting species state plus the sum of the stoichiometries of all the reactions which have fired. It is important to note that the map $\Gamma$ maps into the species state space $\Omega$; in most cases, $\Lambda$ is of higher dimension than $\Omega$, hence, $\Gamma$ is seldom injective. For our purpose, we only need the pull-back map of $\Gamma$. We denote the pull-back map as $\Gamma^{-1} : \Omega \times \Omega \to \mathcal{P}(\Lambda)$, where for $(x_0, x) \in \Omega \times \Omega$,
$$\Gamma^{-1}(x_0, x) := \{ r \in \Lambda : \Gamma(x_0, r) = x \}. \tag{3}$$
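As a quick illustration of the pull-back map, the sketch below (Python; the birth-death stoichiometries and the enumeration bound `r_max` are our own assumptions for the example) enumerates $\Gamma^{-1}(x_0, x)$ over a truncated reaction counts space:

```python
import itertools
import numpy as np

nu = np.array([[+1], [-1]])       # stoichiometric vectors of birth and death (one species)

def Gamma(x0, r):
    """Map reaction counts r to species counts: x0 + sum_n nu_n * r_n, as in Equation (2)."""
    return x0 + nu.T @ r

def Gamma_inv(x0, x, r_max=10):
    """Enumerate all reaction count states r with entries <= r_max which map to x."""
    return [r for r in itertools.product(range(r_max + 1), repeat=len(nu))
            if np.all(Gamma(x0, np.array(r)) == x)]

# All ways of arriving at population 1 from x0 = 0 within at most 10 firings per reaction:
print(Gamma_inv(np.array([0]), np.array([1])))  # [(1, 0), (2, 1), ..., (10, 9)]
```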
With the mapping between Λ and Ω established, we can now inherit the propensity function from the species counts over to the reaction counts. For n = 1 , , N r , the reaction counts propensity of the nth reaction is given by,
$$\alpha_n(x_0, r) := \begin{cases} a_n\big(\Gamma(x_0, r)\big) & \text{if } \Gamma(x_0, r) \in \Omega, \\ 0 & \text{otherwise.} \end{cases} \tag{4}$$
With the state space, stoichiometry, and propensities established in the reaction counts setting, we are ready to formulate the reaction counts based Kurtz process. For $x_0 \in \Omega$, let $(R_{x_0}(t))_{t \ge 0}$ be the reaction counts Kurtz process,
$$R_{x_0}(t) := \sum_{n=1}^{N_r} \mathbb{1}_n\, \mathcal{P}_n\!\left( \int_0^t \alpha_n\big(x_0, R_{x_0}(s)\big)\, ds \right), \tag{5}$$
where the $\mathcal{P}_n$ denote independent unit-rate Poisson processes.
The corresponding reaction counts CME for the process above is given by,
$$\frac{d\, p(R_{x_0}(t) = r)}{dt} = \sum_{n=1}^{N_r} \alpha_n(x_0, r - \mathbb{1}_n)\, p(R_{x_0}(t) = r - \mathbb{1}_n) \;-\; \sum_{n=1}^{N_r} \alpha_n(x_0, r)\, p(R_{x_0}(t) = r), \tag{6}$$
for all $r \in \Lambda$. The initial condition for the reaction counts CME is a point mass on the origin,
$$p(R_{x_0}(0) = r) = \begin{cases} 1 & \text{if } r = \mathbf{0}, \\ 0 & \text{otherwise.} \end{cases} \tag{7}$$
At first glance, the reaction counts CME is simpler than its species counterpart: in the reaction counts case the process only moves forward, which gives rise to simple forward propagating dynamics. Furthermore, given that solutions to the reaction counts CME exist, a simple application of the push forward measure yields:
$$p(X(t) = x) = \sum_{r \in \Gamma^{-1}(X(0),\, x)} p\big(R_{X(0)}(t) = r\big). \tag{8}$$
The relationship given above is the critical motivation for studying the reaction counts CME. In principle, if structures and results can be attained in the reaction counts setting, then, using $\Gamma$, these results can be mapped into the species setting. The simplest example of this is the proof of the existence of solutions of the CME for finite time. We now show that the reaction counts CME has analytical solutions; then, using the push forward measure in Equation (8), we prove that solutions of the CME exist for a broad range of propensity functions.

2.2. Analytical Solutions of the Reaction Counts CME

Firstly, we establish the notation and assumptions needed to present the results. For brevity, we denote the sum of all propensities $\alpha_n$ by:
$$\alpha(\cdot, r) := \sum_{n=1}^{N_r} \alpha_n(\cdot, r). \tag{9}$$
The reaction counts CME in Equation (6) is a differential-difference equation; hence, by arranging the probability states in a vector $u(t) := \big(p(R_{x_0}(t) = r)\big)_{r \in \Lambda}$, we reduce solving the reaction counts CME to solving the ODE,
$$\frac{d u(t)}{dt} = A^* u(t).$$
For $r, l \in \Lambda$ with $r \ne l$, the matrix $A^*$ has the properties $[A^*]_{r,r} \le 0$ and $[A^*]_{r,l} \ge 0$, with $\sum_{l \in \Lambda} [A^*]_{l,r} = 0$. We will prove that $A^*$ is the generator of a one-parameter semigroup. Before this, let us consider a simple two-reaction system to gain some intuition into the different components of the reaction counts setting.
Example 1.
Let us consider the birth-death process in the context of the reaction counts CME. In the species counts setting, the birth-death process is given by,
$$Z(t) = x_0 + \mathcal{P}_1\!\left(\int_0^t c_b\, ds\right) - \mathcal{P}_2\!\left(\int_0^t c_d\, Z(s)\, ds\right),$$
where $x_0 \in \mathbb{N}_0$ is the starting population, and the birth and death rates are denoted by $c_b$ and $c_d$, respectively. We can translate this process into the reaction counts setting using the mapping,
$$\Gamma_{x_0}(r_1, r_2) = x_0 + r_1 - r_2.$$
We restrict the reaction counts state space $\Lambda \subseteq \mathbb{N}_0 \times \mathbb{N}_0$ to contain only states which yield non-negative values under $\Gamma_{x_0}$. Substituting the mapping $\Gamma_{x_0}$ and the process $Z(t)$ into Equations (2) to (5) leads to the reaction counts birth-death process formulation:
$$R_{x_0}(t) = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \mathcal{P}_1\!\left(\int_0^t c_b\, ds\right) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \mathcal{P}_2\!\left(\int_0^t c_d\, \Gamma_{x_0}\big((R_{x_0}(s))_1, (R_{x_0}(s))_2\big)\, ds\right).$$
The relationship between the reaction counts state space and the species counts state space is visualised in Figure 1A. The colours in the figure show how the species counts state space partitions the reaction counts state space. To demonstrate the forward moving structure of the reaction counts CME, we derive the generator restricted to the first six states of the state space:
$$\frac{d u(t)}{dt} = A^* u(t),$$
where,
$$u(t) = \big(p(R_{x_0}(t) = (0,0)),\; p(R_{x_0}(t) = (1,0)),\; p(R_{x_0}(t) = (0,1)),\; \dots \big)^T,$$
and, with the rows and columns ordered as $(0,0), (1,0), (0,1), (2,0), (1,1), (0,2)$,
$$A^* = \begin{pmatrix}
-(c_b + c_d x_0) & 0 & 0 & 0 & 0 & 0 \\
c_b & -(c_b + c_d (x_0 + 1)) & 0 & 0 & 0 & 0 \\
c_d x_0 & 0 & -(c_b + c_d (x_0 - 1)) & 0 & 0 & 0 \\
0 & c_b & 0 & -(c_b + c_d (x_0 + 2)) & 0 & 0 \\
0 & c_d (x_0 + 1) & c_b & 0 & -(c_b + c_d x_0) & 0 \\
0 & 0 & c_d (x_0 - 1) & 0 & 0 & -(c_b + c_d (x_0 - 2))
\end{pmatrix}.$$
The initial condition is the identity vector with the total probability mass on state $(0,0)$. We can observe that, due to the forward moving nature of the reaction counts setting, $A^*$ is a lower triangular matrix. In Figure 1B,C, we present a birth-death process with initial state $x_0 = 0$, birth rate $c_b = 1.0$, and death rate $c_d = 0.1$. Specifically, we show the probability distributions of the first three reaction counts states which correspond to state $1$ in the species counts setting. Figure 1B shows that the probability distribution of being in state $1$ in the species counts CME setting bounds the corresponding reaction counts CME distributions. In Figure 1C, we progressively add the first three reaction counts states corresponding to state $1$, showing that the reaction counts CME approximates the species counts CME from below.
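The example can be reproduced numerically. The sketch below (Python with SciPy's `expm`; the truncation of $\Lambda$ to fewer than 30 firings per reaction is our own choice) assembles the reaction counts generator for the birth-death system and evaluates the push forward relation of Equation (8) for species state $1$:

```python
import numpy as np
from scipy.linalg import expm

c_b, c_d, x0, r_max, t = 1.0, 0.1, 0, 30, 5.0

# Truncated reaction counts state space: keep states with non-negative population.
states = [(r1, r2) for r1 in range(r_max) for r2 in range(r_max)
          if x0 + r1 - r2 >= 0]
idx = {r: i for i, r in enumerate(states)}

A = np.zeros((len(states), len(states)))
for r, i in idx.items():
    pop = x0 + r[0] - r[1]                      # Gamma_{x0}(r)
    for a_n, step in ((c_b, (1, 0)), (c_d * pop, (0, 1))):
        A[i, i] -= a_n                          # outflow on the diagonal
        succ = (r[0] + step[0], r[1] + step[1])
        if succ in idx:
            A[idx[succ], i] += a_n              # forward stepping inflow only

p = expm(A * t) @ np.eye(len(states))[idx[(0, 0)]]   # point mass on the origin
# Push forward measure (8): sum over Gamma^{-1}(x0, 1).
print(sum(p[idx[r]] for r in states if x0 + r[0] - r[1] == 1))
```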
Theorem 1.
(Sunkara 09 [15]) Given a system with $N_s$ species and $N_r$ reactions at a starting state $x_0 \in \Omega$. If $A^*$ is the generator of the reaction counts CME (6), then:
1.
There exists a permutation matrix $P$ such that $P^T A^* P$ is lower triangular;
2.
The spectrum of $A^*$ is the set:
$$\mathrm{spec}(A^*) = \left\{ -\sum_{n=1}^{N_r} \alpha_n(x_0, r) \;:\; r \in \Lambda \right\}.$$
The property of the reaction counts setting that the process only moves forward aids in showing that the generator can be rewritten as a lower triangular matrix. The spectrum of a lower triangular matrix is given by its diagonal elements; in our case, the diagonal elements are simply the negative sums of the outgoing propensities. This reveals that the spectrum of the generator $A^*$ lies in the non-positive real numbers. Then, by the spectral mapping theorem, solutions to the reaction counts CME exist. We can further exploit this "forward stepping" structure to write the analytical solutions for the probability at each state in $\Lambda$.
Proposition 1.
(Sunkara 09 [15]) Given a system with $N_s$ species and $N_r$ reactions at a starting state $x_0 \in \Omega$. Let $C \in (0,1]$. If $(R_{x_0}(t))_{t \ge 0}$ is the reaction counts Kurtz process (5) with state space $\Lambda$, then the solution to (6) at $t > 0$ is given by:
$$p(R_{x_0}(t) = r) = e^{-\alpha(x_0, r)\, t}\, C,$$
for $r = \mathbf{0}$, and,
$$p(R_{x_0}(t) = r) = \sum_{n=1}^{N_r} \alpha_n(x_0, r - \mathbb{1}_n) \int_0^t e^{-\alpha(x_0, r)(t - s)}\, p(R_{x_0}(s) = r - \mathbb{1}_n)\, ds,$$
for $r \in \Lambda \setminus \{\mathbf{0}\}$.
Combining Proposition 1 with the push forward measure in Equation (8), we see that at finite time $t$ the solutions to the CME (1) exist. Unfortunately, since the reaction counts CME cannot reach a stationary state (in cases where the state space $\Lambda$ is not finite), the existence of stationary solutions for a generalised CME is still an open problem. Now that the solutions of the reaction counts CME have been established, we can probe further into its properties. Firstly, we explore how the state space of the reaction counts CME has a natural hierarchical partitioning, and how this partitioning can be used to build a sequence of approximate sub-processes.
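To illustrate Proposition 1 computationally, the following sketch (Python; the trapezoidal time grid and the birth-death propensities are our own choices, and the recursion is a direct transcription of the convolution formula) builds $p(R_{x_0}(t) = r)$ state by state:

```python
import numpy as np

c_b, c_d, x0 = 1.0, 0.1, 0
ts = np.linspace(0.0, 5.0, 2001)               # time grid for the convolutions
dt = ts[1] - ts[0]

def alpha_n(r, n):
    pop = x0 + r[0] - r[1]
    if pop < 0:
        return 0.0
    return c_b if n == 0 else c_d * pop

def alpha(r):                                  # total outflow, Equation (9)
    return alpha_n(r, 0) + alpha_n(r, 1)

probs = {(0, 0): np.exp(-alpha((0, 0)) * ts)}  # Proposition 1 with r = 0 and C = 1

def p_of(r):
    """p(R(t) = r) on the grid ts, via the convolution formula of Proposition 1."""
    if r in probs:
        return probs[r]
    total = np.zeros_like(ts)
    for n, step in enumerate(((1, 0), (0, 1))):
        prev = (r[0] - step[0], r[1] - step[1])
        if min(prev) < 0 or alpha_n(prev, n) == 0.0:
            continue
        p_prev = p_of(prev)
        for i, t in enumerate(ts):             # trapezoidal rule for the integral
            kern = np.exp(-alpha(r) * (t - ts[:i + 1])) * p_prev[:i + 1]
            total[i] += alpha_n(prev, n) * (kern.sum() - 0.5 * (kern[0] + kern[-1])) * dt
    probs[r] = total
    return total

print(p_of((1, 0))[-1])    # probability that exactly one birth has fired by t = 5
```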

3. Partitioning the State Space

Definition 1.
For $m \in \mathbb{N}$, we define,
$$\Lambda_m := \{ r \in \Lambda : \|r\|_1 = m \},$$
to be the set of all states which are reachable in $m$ steps from the origin $\mathbf{0}$. The state space $\Lambda$ naturally partitions into the non-intersecting subsets,
$$\Lambda = \Lambda_0 \cup \Lambda_1 \cup \Lambda_2 \cup \cdots.$$
If we consider truncating this partition progressively, we find that this is equivalent to the Finite State Projection with an $N$-step domain expander [9,18].
Lemma 1.
For $x_0 \in \Omega$, let $(R_{x_0}(t))_{t \ge 0}$ be the reaction counts Kurtz process (5). Then for any $m \in \mathbb{N}$,
$$\sum_{r_1 \in \Lambda_m} \sum_{n=1}^{N_r} \alpha_n(x_0, r_1) \int_0^t p(R_{x_0}(s) = r_1)\, ds = \sum_{\tilde{m} = m+1}^{\infty} \sum_{r_2 \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r_2). \tag{14}$$
Proof. 
Fix $m \in \mathbb{N}$. Let $(R_{x_0,m}(t))_{t \ge 0}$ be the same process as $(R_{x_0}(t))_{t \ge 0}$, but with the restriction that all propensities of states $r \in \bigcup_{\tilde{m} = m+1}^{\infty} \Lambda_{\tilde{m}}$ are set to zero. Since in the reaction counts case states only transition forward, we have that,
$$p(R_{x_0}(t) = r) = p(R_{x_0,m}(t) = r), \quad \text{for all } r \in \bigcup_{\tilde{m} = 0}^{m} \Lambda_{\tilde{m}}. \tag{15}$$
That is, the probabilities of both processes are the same for states leading up to the states in $\Lambda_m$. However, since the process $R_{x_0,m}(t)$ does not evolve past $\Lambda_m$, by the conservation of probability we have that,
$$\sum_{\tilde{m} = 0}^{m} \sum_{r \in \Lambda_{\tilde{m}}} p(R_{x_0,m}(t) = r) + \sum_{r_1 \in \Lambda_m} \sum_{n=1}^{N_r} \alpha_n(x_0, r_1) \int_0^t p(R_{x_0,m}(s) = r_1)\, ds = 1.$$
Substituting in Equation (15) for the states which appear before $\Lambda_{m+1}$ reduces the above expression to,
$$\sum_{\tilde{m} = 0}^{m} \sum_{r \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r) + \sum_{r_1 \in \Lambda_m} \sum_{n=1}^{N_r} \alpha_n(x_0, r_1) \int_0^t p(R_{x_0}(s) = r_1)\, ds = 1. \tag{17}$$
By the conservation of probability,
$$\sum_{\tilde{m} = 0}^{\infty} \sum_{r_2 \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r_2) = 1;$$
hence, substituting this into the right-hand side of Equation (17) and subtracting the like terms gives,
$$\sum_{r_1 \in \Lambda_m} \sum_{n=1}^{N_r} \alpha_n(x_0, r_1) \int_0^t p(R_{x_0}(s) = r_1)\, ds = \sum_{\tilde{m} = m+1}^{\infty} \sum_{r_2 \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r_2).$$
 ☐
Lemma 1 is an alternative formulation of the principle behind the Finite State Projection method. We extend this result and show that for a desired error $\varepsilon \in (0,1)$, there exists a subset of the state space $\Lambda$ which produces an approximation with the desired error.
Theorem 2.
For $x_0 \in \Omega$, let $(R_{x_0}(t))_{t \ge 0}$ be the reaction counts Kurtz process (5). For $\varepsilon > 0$ there exists $m \in \mathbb{N}$ such that,
$$\sum_{\tilde{m} = m+1}^{\infty} \sum_{r \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r) < \varepsilon.$$
Proof. 
The proof is an extension of Lemma 1. We define,
$$\phi_m := \sum_{r_1 \in \Lambda_m} \sum_{n=1}^{N_r} \alpha_n(x_0, r_1) \int_0^t p(R_{x_0}(s) = r_1)\, ds,$$
and,
$$\psi_{m+1} := \sum_{\tilde{m} = m+1}^{\infty} \sum_{r_2 \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r_2),$$
the left- and right-hand terms of Equation (14), respectively. Then for $m \in \mathbb{N}$, Lemma 1 can be reformulated to state,
$$\phi_m = \psi_{m+1};$$
separating the $\Lambda_{m+1}$ states from $\psi_{m+1}$ gives us,
$$\psi_{m+1} = \sum_{r \in \Lambda_{m+1}} p(R_{x_0}(t) = r) + \underbrace{\sum_{\tilde{m} = m+2}^{\infty} \sum_{r_2 \in \Lambda_{\tilde{m}}} p(R_{x_0}(t) = r_2)}_{=:\, \psi_{m+2}} = \sum_{r \in \Lambda_{m+1}} p(R_{x_0}(t) = r) + \psi_{m+2}.$$
Rearranging the equality gives $\psi_{m+2} - \psi_{m+1} \le 0$; that is, $\psi$ is monotonically decreasing. By definition, $\psi$ has the properties that $\psi_1 = 1 - p(R_{x_0}(t) = \mathbf{0})$ and $\lim_{m \to \infty} \psi_m = 0$. Hence, there exists an $m$ such that $\psi_m < \varepsilon$. ☐
Theorem 2 is an alternative proof of the fact that, when studying the transient dynamics (finite time) of the CME to an arbitrary precision, one can always find a finite state space onto which to project the CME. The key structure we used to prove this is that the state space of reaction counts has a natural partitioning and that the probability only flows forward. This gave us the monotonicity needed to prove that the state space truncation of the CME is well-posed. In the next section, we further decompose each state in the reaction counts state space into paths over the state space.

4. Paths

Definition 2.
For $m \in \{2, 3, \dots\}$, a vector $g = (g_1, \dots, g_m) \in \Lambda^m$ is said to be an admissible path if for every index $i \in \{2, \dots, m\}$ there exists an $n \in \{1, \dots, N_r\}$ such that $g_i = g_{i-1} + \mathbb{1}_n$, where $\mathbb{1}_n$ is the identity vector with one in the $n$th position.
Definition 3.
For $m \in \{2, 3, \dots\}$, we denote the set of all admissible paths of length $m$ by,
$$\gamma_m := \{ g \in \Lambda^m : g \text{ is admissible} \}.$$
Definition 4.
For a point $r \in \Lambda$, we denote the set of all admissible paths of length $\|r\|_1 + 1$ which start at the origin $\mathbf{0}$ and end at the state $r$ by,
$$G(r) := \{ g \in \gamma_{\|r\|_1 + 1} : g_1 = \mathbf{0} \text{ and } g_{\|r\|_1 + 1} = r \}.$$
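A minimal sketch (Python; the recursion over predecessor states is our own construction, and for simplicity we only require the reaction counts to stay non-negative, leaving additional admissibility checks, such as non-negative populations, to the reader) for enumerating $G(r)$ in a two-reaction setting:

```python
def paths_to(r):
    """Enumerate G(r): all admissible paths from the origin (0, 0) to r."""
    if r == (0, 0):
        return [[(0, 0)]]
    paths = []
    for step in ((1, 0), (0, 1)):              # identity vectors of the two reactions
        prev = (r[0] - step[0], r[1] - step[1])
        if min(prev) >= 0:                     # predecessor must lie in Lambda
            paths += [p + [r] for p in paths_to(prev)]
    return paths

for g in paths_to((1, 2)):
    print(g)        # three admissible paths, each of length ||r||_1 + 1 = 4
```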

4.1. Path Chains

We define a Markov chain over a path in the reaction counts state space. The chain must mimic a "realisation" over the reaction counts state space. Using Proposition 1, we define a path chain over an admissible path in the reaction counts state space. It is important to note that a path chain is a continuous-time Markov chain (CTMC) [19] in which all reactions which transition the state off the path of the chain are accrued into a sink state. For brevity, we remove this component from the notation and only describe the states of interest (see Figure 2). Also, given that all chains are from the same reaction counts setting, we omit explicitly stating $x_0$ in the propensity functions. We now define a path chain.
Definition 5.
For $m \in \mathbb{N}$, $g \in \gamma_m$, and $C \in (0,1]$, we define a path chain to be the stochastic process $(X_{g,C}(t))_{t \ge 0}$, with state space $\{g\} \cup \{\text{sink}\}$ and the probability distribution given by,
$$p(X_{g,C}(t) = g_1) := e^{-\alpha(g_1)\, t}\, C,$$
$$p(X_{g,C}(t) = g_k) := \beta(g_{k-1}, g_k) \int_0^t e^{-\alpha(g_k)(t - s)}\, p(X_{g,C}(s) = g_{k-1})\, ds,$$
for $k \in \{2, \dots, m\}$. Then, to conserve probability,
$$p(X_{g,C}(t) = \text{sink}) := 1 - \sum_{g_k \in g} p(X_{g,C}(t) = g_k).$$
The propensity function $\alpha$ is inherited from Equation (9), and $\beta(g_k, g_{k+1}) := \alpha_n(g_k)$, where $n$ is the index of the reaction which transitions the state $g_k$ to $g_{k+1}$.
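A path chain can be solved directly as a small linear ODE system. The sketch below (Python with SciPy's `expm`; the birth-death rates, starting state, and example path are our own illustrative choices) assembles the generator depicted in Figure 2 for a given admissible path:

```python
import numpy as np
from scipy.linalg import expm

c_b, c_d, x0 = 1.0, 0.1, 2

def alphas(r):
    """Propensities (birth, death) at reaction count state r, inherited via Gamma."""
    pop = x0 + r[0] - r[1]
    return np.array([c_b, c_d * max(pop, 0)])

def path_chain_generator(g):
    """Generator over the states of g plus a sink state (last index), per Definition 5."""
    m = len(g)
    A = np.zeros((m + 1, m + 1))
    for k in range(m):
        a = alphas(g[k])
        A[k, k] = -a.sum()                     # total outflow alpha(g_k)
        if k + 1 < m:
            n = int(np.argmax(np.subtract(g[k + 1], g[k])))   # on-path reaction index
            A[k + 1, k] = a[n]                 # beta(g_k, g_{k+1})
            A[m, k] = a.sum() - a[n]           # off-path flow into the sink
        else:
            A[m, k] = a.sum()                  # the last state also leaks to the sink
    return A

g = [(0, 0), (1, 0), (1, 1), (1, 2), (1, 3)]
A = path_chain_generator(g)
p = expm(A * 10.0) @ np.r_[1.0, np.zeros(len(g))]   # C = 1, start in g_1
print(p[len(g) - 1])                                # p(X_{g,1}(10) = g_m)
```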
Before stating the new proposition, let us recall the decompositions performed so far. Firstly, we showed that every state in the species state space can be decomposed into multiple reaction states in the reaction counts setting; that is, a population state can be decomposed into all the different ways in which it can be visited. We then ventured further and showed that a path on the species counts state space corresponds to a path with only forward stepping transitions in the reaction counts state space. Now, we extend this further by showing that any state in the reaction counts state space can be decomposed into the sum of all independent path chains which start at the origin and end in that state.
Proposition 2.
For $x_0 \in \Omega$, let $(R_{x_0}(t))_{t \ge 0}$ be the reaction counts Kurtz process (5). For $r \in \Lambda \setminus \{\mathbf{0}\}$, let
$$G(r) := \{ g \in \gamma_{\|r\|_1 + 1} : g_1 = \mathbf{0} \text{ and } g_{\|r\|_1 + 1} = r \},$$
be the set of all admissible paths of length $\|r\|_1 + 1$ which start at $\mathbf{0}$ and end at $r$ (Definition 4). Then,
$$p(R_{x_0}(t) = r) = \sum_{g \in G(r)} p(X_{g,1}(t) = r). \tag{20}$$
Proof. 
The proposition will be proved using mathematical induction. Firstly, we show the base case: every state which is reachable in one step from the origin satisfies Equation (20). Let $r = \mathbb{1}_n$, with $n \in \{1, \dots, N_r\}$. Applying Proposition 1, we know that,
$$p(R_{x_0}(t) = \mathbb{1}_n) = \alpha_n(\mathbf{0}) \int_0^t e^{-\alpha(\mathbb{1}_n)(t - s)}\, p(R_{x_0}(s) = \mathbf{0})\, ds.$$
Now we consider the path chains leading to $\mathbb{1}_n$. The state can only be reached in one way; hence, we have that $G(\mathbb{1}_n) = \{ g = (\mathbf{0}, \mathbb{1}_n) \}$. Calculating the probability of the evolution of a path chain (Definition 5) over $g \in G(\mathbb{1}_n)$ gives us,
$$p(X_{g,1}(t) = \mathbb{1}_n) := \beta(\mathbf{0}, \mathbb{1}_n) \int_0^t e^{-\alpha(\mathbb{1}_n)(t - s)}\, p(X_{g,1}(s) = \mathbf{0})\, ds.$$
Given that the two equations above match, we can conclude that for $r \in \Lambda_1$,
$$p(R_{x_0}(t) = r) = \sum_{g \in G(r)} p(X_{g,1}(t) = r).$$
Now we perform the inductive step. Fix $m > 1$. Assume that for $\check{r} \in \Lambda_m$,
$$p(R_{x_0}(t) = \check{r}) = \sum_{g \in G(\check{r})} p(X_{g,1}(t) = \check{r}). \tag{23}$$
Then for $r \in \Lambda_{m+1}$, Proposition 1 states that,
$$p(R_{x_0}(t) = r) = \sum_{n=1}^{N_r} \alpha_n(r - \mathbb{1}_n) \int_0^t e^{-\alpha(r)(t - s)}\, p(R_{x_0}(s) = r - \mathbb{1}_n)\, ds;$$
using the inductive assumption, Equation (23), on the right-hand side inside the integral,
$$= \sum_{n=1}^{N_r} \alpha_n(r - \mathbb{1}_n) \int_0^t e^{-\alpha(r)(t - s)} \sum_{g \in G(r - \mathbb{1}_n)} p(X_{g,1}(s) = r - \mathbb{1}_n)\, ds.$$
Rewriting the propensities in the path propensity notation gives,
$$= \sum_{n=1}^{N_r} \beta(r - \mathbb{1}_n, r) \int_0^t e^{-\alpha(r)(t - s)} \sum_{g \in G(r - \mathbb{1}_n)} p(X_{g,1}(s) = r - \mathbb{1}_n)\, ds = \sum_{n=1}^{N_r} \sum_{g \in G(r - \mathbb{1}_n)} \beta(r - \mathbb{1}_n, r) \int_0^t e^{-\alpha(r)(t - s)}\, p(X_{g,1}(s) = r - \mathbb{1}_n)\, ds.$$
Since all paths in $G(r)$ are paths from the set $\bigcup_{n=1}^{N_r} G(r - \mathbb{1}_n)$ with $r$ appended as the last state, the two summations reduce to give,
$$= \sum_{\hat{g} \in G(r)} \beta(\hat{g}_{m+1}, r) \int_0^t e^{-\alpha(r)(t - s)}\, p(X_{\hat{g},1}(s) = \hat{g}_{m+1})\, ds.$$
By Definition 5, the term above captures all the paths in $G(r)$; therefore,
$$= \sum_{\hat{g} \in G(r)} p(X_{\hat{g},1}(t) = r).$$
We have shown that if Equation (23) holds for $m$, then Equation (23) also holds for $m + 1$. Hence, by mathematical induction, for any $r \in \Lambda$,
$$p(R_{x_0}(t) = r) = \sum_{g \in G(r)} p(X_{g,1}(t) = r).$$
 ☐
We have shown that each state in the reaction counts CME can be decomposed into the framework of "path chains" that we have defined. In essence, we have shown that the path chains are trajectories of the reaction counts Kurtz process. Even though at first glance the result is not a startling revelation, the real novelty lies in how we derived the probability distributions for these trajectories. By carefully decomposing the state space and concurrently deriving the corresponding probability distributions, we were able to give an analytical form for the probability of individual trajectories of the Kurtz process. Using the probability distribution given over the path chains in Definition 5, we can derive the classical stochastic simulation algorithm [20]. Using the path chains framework we have introduced so far, we can build realisations for applications such as hitting time problems, rare event problems, and observable problems. In the following section, we build further on the properties of path chains by proposing a method for estimating the path probabilities of trajectories.
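Proposition 2 can be checked numerically by combining the two sketches above: summing the solved path chains over $G(r)$ should reproduce the reaction counts probability. Here `paths_to` and `path_chain_generator` are the illustrative helpers defined in the earlier sketches, with the same assumed birth-death parameters:

```python
# Reusing paths_to() and path_chain_generator() from the sketches above.
r, t = (1, 2), 10.0
total = 0.0
for g in paths_to(r):
    A = path_chain_generator(g)
    p = expm(A * t) @ np.r_[1.0, np.zeros(len(g))]
    total += p[len(g) - 1]        # p(X_{g,1}(t) = r), summed over G(r)
print(total)                      # should agree with p(R_{x0}(t) = (1, 2))
```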

4.2. Gated- and Un-Gated Path Chains

Given a path, or trajectory, we define the path probability to be the probability distribution of the last state in the chain. We can use Definition 5 to compute the probability of any path analytically. However, we can exploit the properties of path chains further and derive some simple estimates for the path probability. We borrow the notion of gating and un-gating introduced by Sunkara (2013) [21], and show how these can be used to calculate estimates for the path probabilities.
The concept of gating is mathematically tedious to analyse; however, the idea is fairly simple. Given a chain $g$ of length $m$, gating this chain at position $j < m$ means that we set the propensity of leaving state $g_j$ to zero. The probability then simply flows into state $g_j$ and remains there (hence the term "gating"). We can then "un-gate" the state $g_j$: we reset time to zero, set the initial probability at state $g_j$ to the probability accumulated during the gating, and then allow the process to continue. The reason we are interested in this notion of gating and un-gating is that, for path chains, gating and un-gating gives an upper bound for the original path probability. We now prove that this is indeed the case.
Definition 6.
For an admissible path $g \in \gamma_m$, we define the truncated path $g^j := (g_j, g_{j+1}, \dots, g_m)$, where the first $j - 1$ terms of the path are discarded.
Definition 7.
Fix $\Delta t < \infty$ and $x_0 \in \Omega$. Let $g \in \gamma_m$ be a path of length $m$. We define $(X_{g,C}(t))_{t \in [0, \Delta t]}$ to be a path chain over $g$ on the time interval $[0, \Delta t]$. Then for $j \in \{2, \dots, m\}$ we define an un-gated path chain over $g^j$ to be given by:
$$p(X_{g^j, C_j}(t) = g_k) := \beta(g_{k-1}, g_k) \int_0^t e^{-\alpha(g_k)(t - s)}\, p(X_{g^j, C_j}(s) = g_{k-1})\, ds,$$
for $k \in \{j + 1, \dots, m\}$, and,
$$p(X_{g^j, C_j}(t) = g_j) := e^{-\alpha(g_j)\, t}\, C_j,$$
where,
$$C_j := \beta(g_{j-1}, g_j) \int_0^{\Delta t} p(X_{g,C}(s) = g_{j-1})\, ds.$$
To conserve probability,
$$p(X_{g^j, C_j}(t) = \text{sink}) := 1 - \sum_{g_k \in g^j} p(X_{g^j, C_j}(t) = g_k).$$
We see that the un-gated path chain is a normal path chain with the initial probability reset to the total probability which has flowed into the gated state (see Figure 3). We now show that gating and un-gating over a path gives an upper bound for the path probability. First, we consider the case of a single gating and un-gating at the position $g_2$.
Lemma 2.
Fix $g \in \gamma_m$ and $\Delta t > 0$. Given the two path chains $(X_{g,C}(t))_{t \in [0,\Delta t]}$ and $(X_{g^2, C_2}(t))_{t \in [0,\Delta t]}$, it follows that for all $t \in [0, \Delta t]$,
$$\int_0^t p(X_{g,C}(s) = g_2)\, ds \le \int_0^t p(X_{g^2, C_2}(s) = g_2)\, ds, \tag{25}$$
and for $j \in \{3, \dots, m\}$,
$$p(X_{g,C}(t) = g_j) \le p(X_{g^2, C_2}(t) = g_j).$$
Proof. 
We break down the proof into two steps. First we investigate the bound formed by gating at the second state, and then we prove the bound over the remaining states in the chain. We begin by showing that for $t \in [0, \Delta t]$,
$$\int_0^t p(X_{g,C}(s) = g_2)\, ds \le \int_0^t p(X_{g^2, C_2}(s) = g_2)\, ds.$$
From the definition of the path chain (Definition 5), the solution for the probability of being in state $g_2$ at time $t$ is given by,
$$p(X_{g,C}(t) = g_2) = \frac{C\, \beta(g_1, g_2)}{\alpha(g_1) - \alpha(g_2)} \left( e^{-\alpha(g_2)\, t} - e^{-\alpha(g_1)\, t} \right). \tag{26}$$
Using the solution above, we can calculate the probability which would accumulate in $g_2$ if the chain were gated at $g_2$,
$$C_2 := \frac{C\, \beta(g_1, g_2)}{\alpha(g_1)} \left( 1 - e^{-\alpha(g_1)\, \Delta t} \right).$$
Substituting the above terms into the definition of the probability of being in state $g_2$ at time $t$ in the un-gated path chain $(X_{g^2, C_2}(t))_{t \in [0,\Delta t]}$ gives us,
$$p(X_{g^2, C_2}(t) = g_2) = \frac{C\, \beta(g_1, g_2)}{\alpha(g_1)} \left( 1 - e^{-\alpha(g_1)\, \Delta t} \right) e^{-\alpha(g_2)\, t}. \tag{28}$$
Taking the integral of Equations (26) and (28) over the interval $[0, t] \subseteq [0, \Delta t]$ gives us,
$$\int_0^t p(X_{g,C}(s) = g_2)\, ds = \frac{C\, \beta(g_1, g_2)}{(\alpha(g_1) - \alpha(g_2))\, \alpha(g_1)\, \alpha(g_2)} \left( \alpha(g_2)\big(e^{-\alpha(g_1)\, t} - 1\big) - \alpha(g_1)\big(e^{-\alpha(g_2)\, t} - 1\big) \right),$$
$$\int_0^t p(X_{g^2, C_2}(s) = g_2)\, ds = \frac{C\, \beta(g_1, g_2)}{\alpha(g_1)\, \alpha(g_2)} \left( 1 - e^{-\alpha(g_1)\, \Delta t} \right) \left( 1 - e^{-\alpha(g_2)\, t} \right).$$
To simplify the notation, let us define,
$$\hat{\phi}(t) := \int_0^t p(X_{g^2, C_2}(s) = g_2)\, ds \quad \text{and} \quad \phi(t) := \int_0^t p(X_{g,C}(s) = g_2)\, ds.$$
Then the difference between $\hat{\phi}(t)$ and $\phi(t)$ reduces to,
$$\hat{\phi}(t) - \phi(t) = \frac{C\, \beta(g_1, g_2)}{\alpha(g_1)\, \alpha(g_2)} \left( 1 - e^{-\alpha(g_1)\, \Delta t} \right) \left( 1 - e^{-\alpha(g_2)\, t} \right) - \frac{C\, \beta(g_1, g_2)}{(\alpha(g_1) - \alpha(g_2))\, \alpha(g_1)\, \alpha(g_2)} \left( \alpha(g_2)\big(e^{-\alpha(g_1)\, t} - 1\big) - \alpha(g_1)\big(e^{-\alpha(g_2)\, t} - 1\big) \right).$$
Since $e^{-\alpha(g_1)\, \Delta t} \le e^{-\alpha(g_1)\, t}$ for all $t \in [0, \Delta t]$, we attain the lower bound,
$$\hat{\phi}(t) - \phi(t) \ge \frac{C\, \beta(g_1, g_2)}{\alpha(g_1)\, \alpha(g_2)} \left( 1 - e^{-\alpha(g_1)\, t} \right) \left( 1 - e^{-\alpha(g_2)\, t} \right) - \frac{C\, \beta(g_1, g_2)}{(\alpha(g_1) - \alpha(g_2))\, \alpha(g_1)\, \alpha(g_2)} \left( \alpha(g_2)\big(e^{-\alpha(g_1)\, t} - 1\big) - \alpha(g_1)\big(e^{-\alpha(g_2)\, t} - 1\big) \right). \tag{32}$$
We show that the right-hand side of the inequality above is always non-negative. Simplifying, we define the right-hand side of Equation (32) as the function,
$$\Psi(g_1, g_2, t) := C\, \beta(g_1, g_2)\, \frac{\alpha(g_2)\, e^{-\alpha(g_2)\, t} - \alpha(g_1)\, e^{-\alpha(g_1)\, t} + \big(\alpha(g_1) - \alpha(g_2)\big)\, e^{-(\alpha(g_1) + \alpha(g_2))\, t}}{\big(\alpha(g_1) - \alpha(g_2)\big)\, \alpha(g_1)\, \alpha(g_2)}. \tag{33}$$
We notice that the function $\Psi$ has the symmetry property,
$$\Psi(g_1, g_2, t) = \Psi(g_2, g_1, t).$$
Firstly, in the case $\alpha(g_1) = \alpha(g_2) =: a$, a direct computation of the right-hand side of Equation (32) gives $C\, \beta(g_1, g_2)\, e^{-a t}\big(e^{-a t} - 1 + a t\big)/a^2$, which is non-negative since $e^{-x} \ge 1 - x$. Using the symmetry of $\Psi$, we only need to consider the case $\alpha(g_1) < \alpha(g_2)$. Then, rewriting the numerator of Equation (33),
$$\alpha(g_2)\, e^{-\alpha(g_2)\, t} - \alpha(g_1)\, e^{-\alpha(g_1)\, t} + \big(\alpha(g_1) - \alpha(g_2)\big)\, e^{-(\alpha(g_1) + \alpha(g_2))\, t} = \alpha(g_2)\, e^{-\alpha(g_2)\, t}\big(1 - e^{-\alpha(g_1)\, t}\big) - \alpha(g_1)\, e^{-\alpha(g_1)\, t}\big(1 - e^{-\alpha(g_2)\, t}\big),$$
and noting that the map $a \mapsto a\, e^{-a t}/(1 - e^{-a t}) = a/(e^{a t} - 1)$ is monotonically decreasing in $a$ for $t > 0$, we see that the numerator is non-positive whenever $\alpha(g_1) < \alpha(g_2)$, while the denominator $(\alpha(g_1) - \alpha(g_2))\, \alpha(g_1)\, \alpha(g_2)$ is negative. Hence, $\Psi(g_1, g_2, t) \ge 0$; that is, the right-hand side of Equation (32) is non-negative, and it follows that for $t \in [0, \Delta t]$,
$$\int_0^t p(X_{g,C}(s) = g_2)\, ds \le \int_0^t p(X_{g^2, C_2}(s) = g_2)\, ds.$$
We have shown that more probability flows out in the un-gated chain than in the original path chain. We now show that this phenomenon continues all the way down the chain, which we prove using mathematical induction. First, let us consider the base case $j = 3$. By the definition of path chains,
$$p(X_{g,C}(t) = g_3) - p(X_{g^2, C_2}(t) = g_3) = \beta(g_2, g_3) \int_0^t e^{-\alpha(g_3)(t - s)} \big( p(X_{g,C}(s) = g_2) - p(X_{g^2, C_2}(s) = g_2) \big)\, ds \le \beta(g_2, g_3) \int_0^t \big( p(X_{g,C}(s) = g_2) - p(X_{g^2, C_2}(s) = g_2) \big)\, ds;$$
applying condition (25) gives us that,
$$p(X_{g,C}(t) = g_3) - p(X_{g^2, C_2}(t) = g_3) \le 0.$$
Hence, for $t \in [0, \Delta t]$,
$$p(X_{g,C}(t) = g_3) \le p(X_{g^2, C_2}(t) = g_3).$$
We now build the inductive step. Fix $k \in \{3, \dots, m - 1\}$ and assume that, for all $t \in [0, \Delta t]$,
$$p(X_{g,C}(t) = g_k) \le p(X_{g^2, C_2}(t) = g_k). \tag{34}$$
Then by the definition of path chains,
$$p(X_{g,C}(t) = g_{k+1}) - p(X_{g^2, C_2}(t) = g_{k+1}) = \beta(g_k, g_{k+1}) \int_0^t e^{-\alpha(g_{k+1})(t - s)} \big( p(X_{g,C}(s) = g_k) - p(X_{g^2, C_2}(s) = g_k) \big)\, ds \le \beta(g_k, g_{k+1}) \int_0^t \big( p(X_{g,C}(s) = g_k) - p(X_{g^2, C_2}(s) = g_k) \big)\, ds;$$
applying the inductive assumption, Equation (34), gives us that,
$$p(X_{g,C}(t) = g_{k+1}) - p(X_{g^2, C_2}(t) = g_{k+1}) \le 0.$$
Hence, by mathematical induction, for $j \in \{3, \dots, m\}$ and $t \in [0, \Delta t]$, the probabilities of the un-gated chain bound the probabilities on the original path chain, that is,
$$p(X_{g,C}(t) = g_j) \le p(X_{g^2, C_2}(t) = g_j).$$
 ☐
The intuition behind Lemma 2 is that once we gate and un-gate a state, the un-gated chain carries more probability than the regular path chain beyond the gated state. Since the dynamics of the path chain beyond the gated state are not changed, we would expect that the un-gated chain (having more probability) is an upper bound for the original path chain. The proof, as seen above, is admittedly tedious. We now consider a cascade of gating and un-gating; that is, we gate and un-gate as we transition through the path chain. We prove that this gives an upper bound for the path probabilities.

4.3. Cascade of Gating and Un-Gating

We now set up a cascade of gating and un-gating at every state of the path chain. We achieve this by redefining the starting probability of each un-gating. Previously, an un-gated path chain at position $j$ had an initial probability of,
$$C_j := \beta(g_{j-1}, g_j) \int_0^{\Delta t} p(X_{g,C}(s) = g_{j-1})\, ds.$$
We now introduce a recursive initial probability,
$$\tilde{C}_j := \beta(g_{j-1}, g_j) \int_0^{\Delta t} p(X_{g^{j-1}, \tilde{C}_{j-1}}(s) = g_{j-1})\, ds,$$
for $j \in \{2, \dots, m - 1\}$, with $\tilde{C}_1 := C$. Using this new recursive initial probability, we build a sequence of path chains; by gating and un-gating at each transition of the path chain, we can construct an upper bound for the path probability.
Theorem 3.
Fix $g \in \gamma_m$ and $\Delta t > 0$. Let $(X_{g,C}(t))_{t \in [0, \Delta t]}$ be a path chain over $g$. We recursively define the set of un-gated chains $\{ (X_{g^j, \tilde{C}_j}(t))_{t \in [0, \Delta t]} : j \in \{2, \dots, m - 2\} \}$. Then for $k \in \{3, \dots, m - 1\}$,
$$\int_0^{\Delta t} p(X_{g,C}(s) = g_k)\, ds \le \int_0^{\Delta t} p(X_{g^{k-1}, \tilde{C}_{k-1}}(s) = g_k)\, ds.$$
Proof. 
We prove this result by repeatedly applying Lemma 2. Fix $k \in \{3, \dots, m - 1\}$; then by the first clause of Lemma 2 we have that,
$$\int_0^{\Delta t} p(X_{g^{k-1}, \tilde{C}_{k-1}}(s) = g_k)\, ds \ge \int_0^{\Delta t} p(X_{g^{k-2}, \tilde{C}_{k-2}}(s) = g_k)\, ds;$$
since $g_k$ is the third state of the chain $g^{k-2}$, we can apply the second clause of Lemma 2 to get,
$$\ge \int_0^{\Delta t} p(X_{g^{k-3}, \tilde{C}_{k-3}}(s) = g_k)\, ds;$$
stepping backward through the chain using Lemma 2, we reduce down to,
$$\ge \int_0^{\Delta t} p(X_{g,C}(s) = g_k)\, ds.$$
 ☐
In summary, by gating and un-gating along a path chain, we can easily construct estimates for the probability of the trajectories of interest. We now consider an example in which we build a path chain for the birth-death process introduced in Example 1, to show how the gating and un-gating steps are computed.
Example 2.
Let us consider the birth-death process introduced in Example 1, with the parameters $x_0 = 2$, $c_b = 1.0$, and $c_d = 0.1$. We will now construct the path probability of the chain:
$$g = \big( (0,0),\ (1,0),\ (1,1),\ (1,2),\ (1,3) \big).$$
The generator of this path chain, with the rows and columns ordered as $(0,0), (1,0), (1,1), (1,2), (1,3), \text{sink}$, is given by,
$$A^* = \begin{pmatrix}
-(c_b + 2 c_d) & 0 & 0 & 0 & 0 & 0 \\
c_b & -(c_b + 3 c_d) & 0 & 0 & 0 & 0 \\
0 & 3 c_d & -(c_b + 2 c_d) & 0 & 0 & 0 \\
0 & 0 & 2 c_d & -(c_b + c_d) & 0 & 0 \\
0 & 0 & 0 & c_d & 0 & 0 \\
2 c_d & c_b & c_b & c_b & 0 & 0
\end{pmatrix}.$$
We can solve for $p(X_{g,1}(t) = (1,3))$ by simply taking the matrix exponential at time $t$ and applying it to the respective initial probability. Now we construct the gating and un-gating approximation. This approximation is calculated using the following recursion: let $m$ be the number of states in the chain $g$; then,
$$u_n(t) = \frac{\beta(g_{n-1}, g_n)}{\alpha(g_{n-1})} \left( 1 - e^{-\alpha(g_{n-1})\, t} \right) u_{n-1}(t),$$
for $n = 2, \dots, m$, with $u_1 = 1.0$. The functions $\alpha$ and $\beta$ are as given in Definition 5. Hence, $u_5(t)$ is the gated and un-gated estimate of the probability $p(X_{g,1}(t) = (1,3))$. In Figure 4A, we plot the true path probability, $p(X_{g,1}(t) = (1,3))$, and the upper bound for the path probability, $u_5(t)$, over the time interval $t \in [0, 10]$ seconds. The graph shows that $u_5(t)$ is an upper bound for $p(X_{g,1}(t) = (1,3))$ at every time point. It is interesting to observe that the error between the two functions decreases over time; however, it is not clear whether the difference converges to zero as time goes to infinity. To test whether this behaviour can be observed in other parameter settings, we plot the analytical path probability and its upper bound for three different parameter settings in Figure 4B. We observe the same effect in all three presented cases. This raises the question of whether this observation is a structural property or only an artefact of the specific model considered here.
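The following sketch (Python with SciPy's `expm`; our own transcription of the example, keeping the final state absorbing as in the generator above) computes both the path probability and the recursion $u_n(t)$ at a few time points:

```python
import numpy as np
from scipy.linalg import expm

c_b, c_d, x0 = 1.0, 0.1, 2
g = [(0, 0), (1, 0), (1, 1), (1, 2), (1, 3)]

pops = [x0 + r1 - r2 for r1, r2 in g]           # populations along the path
alph = [c_b + c_d * p for p in pops]            # total outflow alpha(g_k)
beta = [c_b if g[k + 1][0] > g[k][0] else c_d * pops[k]
        for k in range(len(g) - 1)]             # on-path propensities beta

m = len(g)
A = np.zeros((m + 1, m + 1))                    # the last index is the sink
for k in range(m - 1):
    A[k, k] = -alph[k]
    A[k + 1, k] = beta[k]                       # beta(g_k, g_{k+1})
    A[m, k] = alph[k] - beta[k]                 # off-path leakage into the sink
# The final state (1, 3) is kept absorbing, matching the generator in the example.

for t in (1.0, 5.0, 10.0):
    p = expm(A * t) @ np.r_[1.0, np.zeros(m)]
    u = 1.0
    for k in range(m - 1):                      # the recursion u_n, n = 2, ..., m
        u *= beta[k] / alph[k] * (1.0 - np.exp(-alph[k] * t))
    print(t, p[m - 1], u)                       # u_5(t) should dominate the path probability
```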

5. Conclusions

We began by formulating the reaction counts CME and showing that its forward stepping characteristic yields analytical solutions. Then, with a simple application of the push forward measure, we could prove the existence of solutions of the CME for finite time. We further decomposed the reaction counts state space into independent CTMCs and gave analytical forms and estimates for their path probabilities. Hence, we have derived an analytical theory for the trajectories which arise from the solutions of the CME. This theory can be used for computing observable estimates of the underlying CTMCs. We can also use the paths to estimate hitting times and rare event probabilities. A natural future direction would be to investigate whether the current framework of time-independent propensities can be translated to time-dependent propensities. An even more challenging task would be, in the context of parameter inference, to reverse time on the trajectories and cascade probability backwards.

Funding

V. Sunkara was supported by the BMBF (Germany) project PrevOp-OVERLOAD, grant number 01EC1408H, and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy—The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). The publication costs were funded by the German Research Foundation and the Open Access Publication Fund of Freie Universität Berlin.

Acknowledgments

The author would like to thank the Biocomputing group at FU Berlin for all their stimulating discussions and constant encouragement.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Wilkinson, D.J. Stochastic Modelling for Systems Biology; Mathematical and Computational Biology Series; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006.
2. Van Kampen, N.G. Stochastic Processes in Physics and Chemistry, 3rd ed.; North Holland: Amsterdam, The Netherlands, 2007.
3. Kurtz, T.G. Strong approximation theorems for density dependent Markov chains. Stoch. Process. Their Appl. 1978, 6, 223–240.
4. Grima, R.; Thomas, P.; Straube, A.V. How accurate are the nonlinear chemical Fokker-Planck and chemical Langevin equations? J. Chem. Phys. 2011, 135, 084103.
5. Higham, D.J. Modeling and Simulating Chemical Reactions. SIAM Rev. 2008, 50, 347–368.
6. Hegland, M.; Hellander, A.; Lötstedt, P. Sparse grids and hybrid methods for the chemical master equation. BIT Numer. Math. 2008, 48, 265–283.
7. Engblom, S. Spectral approximation of solutions to the chemical master equation. J. Comput. Appl. Math. 2009, 229, 208–221.
8. Jahnke, T.; Udrescu, T. Solving chemical master equations by adaptive wavelet compression. J. Comput. Phys. 2010, 229, 5724–5741.
9. Sunkara, V.; Hegland, M. An Optimal Finite State Projection Method. Procedia Comput. Sci. 2010, 1, 1579–1586.
10. Kazeev, V.; Khammash, M.; Nip, M.; Schwab, C. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains. PLoS Comput. Biol. 2014, 10, e1003359.
11. Schnoerr, D.; Sanguinetti, G.; Grima, R. Approximation and inference methods for stochastic biochemical kinetics—A tutorial review. J. Phys. A Math. Theor. 2017, 50, 093001.
12. Vlysidis, M.; Kaznessis, Y. Solving Stochastic Reaction Networks with Maximum Entropy Lagrange Multipliers. Entropy 2018, 20, 700.
13. Haseltine, E.L.; Rawlings, J.B. Approximate simulation of coupled fast and slow reactions for stochastic chemical kinetics. J. Chem. Phys. 2002, 117, 6959–6969.
14. Goutsias, J. Quasiequilibrium approximation of fast reaction kinetics in stochastic biochemical systems. J. Chem. Phys. 2005, 122, 184102.
15. Sunkara, V. The chemical master equation with respect to reaction counts. In Proceedings of the 18th World IMACS Congress and MODSIM09 International Congress on Modelling and Simulation, Cairns, Australia, 13–17 July 2009; Volume 1, pp. 2377–2383.
16. Menz, S.; Latorre, J.; Schütte, C.; Huisinga, W. Hybrid Stochastic–Deterministic Solution of the Chemical Master Equation. Multiscale Model. Simul. 2012, 10, 1232–1262.
17. Black, A.J.; Ross, J.V. Computation of epidemic final size distributions. J. Theor. Biol. 2015, 367, 159–165.
18. Khammash, M.; Munsky, B. The finite state projection algorithm for the solution of the chemical master equation. J. Chem. Phys. 2006, 124, 1–12.
19. Norris, J.R. Markov Chains; Cambridge University Press: Cambridge, UK, 1997.
20. Gillespie, D.T. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 1977, 81, 2340–2361.
21. Sunkara, V. Analysis and Numerics of the Chemical Master Equation. Ph.D. Thesis, The Australian National University, Canberra, Australia, 2013.
Figure 1. (A) Cartoon showing the mapping of the reaction counts birth-death process to the species counts birth-death process. (B) An evaluation of the birth-death process for the parameters $(x_0 = 0, c_b = 1.0, c_d = 0.1)$. The plot shows the distribution of the species having population one over time, $p(Z(t) = 1)$, and the distributions of the reaction counts states $(1,0)$, $(2,1)$, and $(3,2)$ at time $t$. (C) The partial sums $p(R_0(t) = (1,0))$, $p(R_0(t) = (1,0)) + p(R_0(t) = (2,1))$, and $p(R_0(t) = (1,0)) + p(R_0(t) = (2,1)) + p(R_0(t) = (3,2))$. The distribution of the species counts chemical master equation (CME) in state $1$ at time $t$ is approached from below by the sum of the probabilities of the reaction counts states which result in being in state $1$.
Figure 2. The cartoon above depicts the generator of a path chain, with $\alpha(\cdot)$ being the total outward propensity and $\beta(\cdot,\cdot)$ the propensity to transition to the next state in the chain. All reactions leading away from the chain are directed into the sink state.
Figure 3. The cartoon above depicts the cascade of gating being performed on a path chain. When a state is gated, the propensities leaving the state are set to zero (depicted with a red cross). When the state is un-gated, the propensities are reintroduced.
Figure 4. (A) Graph of the path probability $p(X_{g,1}(t) = (1,3))$ and the upper bound of the path probability $u_5(t)$ for the time interval $t \in [0, 10]$. (B) Case 1: $(c_b = 1.0, c_d = 0.15)$; Case 2: $(c_b = 1.0, c_d = 0.2)$; Case 3: $(c_b = 2.0, c_d = 0.15)$. "Analytical" refers to the path probability and "approximation" refers to the upper bound of the path probability.
