Licensed under the Creative Commons Attribution-NonCommercial license, http://creativecommons.org/licenses/by-nc/3.0/. Please share and remix noncommercially, mentioning its origin.
The models we’ve worked with so far have all been deterministic. It’s important to realize that this assumption is unrealistic: stochasticity, both environmental and demographic, is an ever-present feature of real systems. Here we expand our modeling toolkit to include some methods for studying stochastic versions of compartmental models. In particular, we introduce the concept of birth-death processes and Gillespie’s direct method for simulating such processes.
First, we aim to understand the theory of birth-death processes in general. In the section above, we studied a model that was deterministic, continuous in time, and continuous in the state variables. Here, by contrast, we keep time continuous but let the state variables take integer values and change stochastically.
At this point, there are still a number of different directions we could go. We will make two assumptions that, together with the assumption of demographic stochasticity, uniquely determine a whole class of models. First, we assume that the epidemic is a Markov chain. A Markov chain is a stochastic process with the property that the future state of the system depends only on the present state of the system and is conditionally independent of all past states. This is known as the memoryless property. Second, we assume that changes in the state variables (increments and decrements) occur one at a time. That is, we cannot have two individuals simultaneously undergoing a “transition”, where “transition” refers to any change in the state variables (birth, death, conversion between classes, etc.). Any time an individual undergoes such a transition, we will call it an “event”. Accordingly, this kind of modeling is sometimes referred to as “event-driven simulation”. For historical reasons, a continuous-time Markov chain with increments and decrements of one is known as a birth-death process. (In general, a Markov chain with integer-valued increments and decrements is known as a jump process.)
[Note: An aside about birth-death processes: notation varies considerably from author to author, even when the authors are referring to exactly the same stochastic process. In particular, the computational literature uses a notation borrowed from chemistry, writing transitions as chemical “reactions”, as we do below.]
To illustrate the approach, we’ll start with the simple closed stochastic SI epidemic. Because the population is closed (no births, deaths, or migration), we represent the total population size as a constant, N = X + Y.
If we can determine what the sequence of “inter-event times” is, then we have fully specified the trajectory of the epidemic, for we know that at each of those times the number of susceptibles decreases by one and the number of infecteds increases by one. So, what are the inter-event times? The memoryless property of the continuous time Markov chain requires that the time between events is independent of the time between any other set of events, and, moreover, that if we were to investigate the process at any point in time between events the time to the next event would be independent of the time elapsed since the previous event. This defines the birth-death process as a kind of Poisson process. There is only one distribution for the inter-event times that has this property, the exponential distribution. Since we know how to simulate exponentially distributed random variables, we just simulate the sequence of event times and make our increments and decrements accordingly. This approach is known as Gillespie’s direct method.
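Before implementing the epidemic model itself, a quick numerical check of the memoryless property may be helpful. The following sketch is purely illustrative (the rate value is an arbitrary assumption): it verifies that, conditional on having already waited some time, the remaining exponential waiting time has the same mean as the unconditional waiting time.
set.seed(1) # for reproducibility
rate <- 2 # an arbitrary event rate (assumption)
tau <- rexp(n=100000,rate=rate) # exponential waiting times
## condition on having already waited 0.5 time units
remaining <- tau[tau>0.5]-0.5
## the conditional mean matches the unconditional mean, 1/rate
c(unconditional=mean(tau),conditional=mean(remaining),theory=1/rate)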
Let’s implement Gillespie’s direct method for an SI epidemic with demographic stochasticity. It’s sometimes convenient to express a model such as this using chemical equations. In this case, there is just one “reaction”, infection, which converts one susceptible and one infected into two infecteds (X + Y → 2Y) and occurs at rate βXY/N.
Here’s a function implementing the Gillespie direct method. Note that it just takes one step at a time and depends on another function which actually computes that step.
SI.simul <- function (x, params, nstep) {
  ## set up an array to store results
  output <- array(dim=c(nstep+1,3))
  ## name the variables in the array
  colnames(output) <- c("time","X","Y")
  output[1,] <- x # initial condition
  ## iterate the model for nstep events
  for (k in 1:nstep) {
    ## update x and store result
    output[k+1,] <- x <- SI.onestep(x,params)
  }
  as.data.frame(output)
}
Here’s the function that takes the step.
SI.onestep <- function (x, params) {
  ## the second element of x is number of susceptibles X
  X <- x[2]
  ## the third element of x is number of infecteds Y
  Y <- x[3]
  ## the rate at which infection events occur
  event.rate <- params["beta"]*X*Y/(X+Y)
  ## how much time do we wait for this event?
  tau <- rexp(n=1,rate=event.rate)
  ## the new state: time advances by tau, one susceptible becomes infected
  c(tau=x[1]+tau,X=X-1,Y=Y+1)
}
Using the same parameters as before, we run some simulations and plot. Note that because time is continuous and the process is stochastic, the number of events that occur in a specified amount of time will vary from run to run. Rather than running each simulation for a fixed amount of time, we therefore run it for a pre-set number of events.
set.seed(38499583) # make results repeatable
nsims <- 10 # number of simulations to run
pop.size <- 200 # size of the population
Y0 <- 2 # initial number infected
nstep <- pop.size-Y0 # run until everyone infected
xstart <- c(time=0,X=(pop.size-Y0),Y=Y0) # initial conditions
params <- c(beta=60,gamma=365/13) # parameters (R0=2.1)
simdat <- vector(mode='list',length=nsims) # to store simulated data
for (k in 1:nsims) {
  simdat[[k]] <- SI.simul(xstart,params,nstep)
}
trange <- range(sapply(simdat,function(x)range(x$time)))
yrange <- range(sapply(simdat,function(x)range(x$Y)))
plot(trange,yrange,type='n',xlab='time',ylab="Y",bty='l')
for (k in 1:nsims)
  lines(Y~time,data=simdat[[k]],col=k,type='o',pch=16)
Simulate the stochastic SI model using Gillespie’s direct method. Experiment with the initial number of infecteds (Y0).
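For example, one might loop over several starting values, reusing pop.size, params, and SI.simul from the chunks above (a minimal sketch; the particular Y0 values are arbitrary choices):
Y0vals <- c(1,2,5,10) # assumed values of Y0 to try
for (Y0 in Y0vals) {
  xstart <- c(time=0,X=pop.size-Y0,Y=Y0)
  sim <- SI.simul(xstart,params,nstep=pop.size-Y0)
  ## e.g., report the time at which the last susceptible is infected
  print(c(Y0=Y0,final.time=tail(sim$time,1)))
}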
The techniques we’ll take up next will allow us to go beyond the
simple SI model to models with an arbitrary number of compartments. For
concreteness, let’s look at a stochastic SIR epidemic with demography.
In particular, we have six kinds of event: birth (at rate μN), infection (at rate βXY/N), recovery (at rate γY), and the death of a susceptible, infected, or recovered individual (at rates μX, μY, and μZ, respectively).
Since the transition processes are independent, we can calculate a “total event rate” as the sum of the individual rates. That is, the waiting time to the next event, regardless of what kind it is, is exponentially distributed, with rate equal to the sum of the rates of the six individual processes. Once we know when an event will occur, the next step is to determine which event it is. This is easy, since each event must occur with probability proportional to its rate: for example, if the infection rate makes up half of the total rate, then, on average, half of the events will be infections. We therefore randomly choose which event occurs, selecting each transition in proportion to its contribution to the total rate.
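Before looking at the full implementation, the following stand-alone fragment sketches just this selection step, with made-up rates used purely for illustration:
set.seed(2) # for reproducibility
rates <- c(birth=0.4,infection=1.2,recovery=0.8) # made-up rates, for illustration only
total.rate <- sum(rates)
## waiting time to the next event of any kind
tau <- rexp(n=1,rate=total.rate)
## which kind of event is it? choose in proportion to the rates
event <- sample(names(rates),size=1,prob=rates/total.rate)
list(tau=tau,event=event)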
Let’s see how to implement this in R. As before, we first define a one-step function.
SIR.onestep <- function (x, params) {
  X <- x[2]
  Y <- x[3]
  Z <- x[4]
  N <- X+Y+Z
  beta <- params["beta"]
  mu <- params["mu"]
  gamma <- params["gamma"]
  ## each individual rate
  rates <- c(
    birth=mu*N,
    infection=beta*X*Y/N,
    recovery=gamma*Y,
    sdeath=mu*X,
    ideath=mu*Y,
    rdeath=mu*Z
  )
  ## what changes with each event?
  transitions <- list(
    birth=c(1,0,0),
    infection=c(-1,1,0),
    recovery=c(0,-1,1),
    sdeath=c(-1,0,0),
    ideath=c(0,-1,0),
    rdeath=c(0,0,-1)
  )
  ## total event rate
  total.rate <- sum(rates)
  ## if no event can ever occur, push time to infinity and stop
  if (total.rate==0) {
    x[1] <- Inf
    return(x)
  }
  ## waiting time
  tau <- rexp(n=1,rate=total.rate)
  ## which event occurs?
  event <- sample.int(n=6,size=1,prob=rates/total.rate)
  x+c(tau,transitions[[event]])
}
Also as before, we set parameters and loop through the process. We do things slightly differently here: we’ll stop a simulation if ever we run out of infectives. For safety’s sake, however, we’ll put an upper bound on the amount of work we’ll do.
SIR.simul <- function (x, params, maxstep = 10000) {
  output <- array(dim=c(maxstep+1,4))
  colnames(output) <- names(x)
  output[1,] <- x
  k <- 1
  ## loop until either k > maxstep or
  ## there are no more infectives
  while ((k <= maxstep) && (x["Y"] > 0)) {
    k <- k+1
    output[k,] <- x <- SIR.onestep(x,params)
  }
  as.data.frame(output[1:k,])
}
Now let’s repeat for 10 runs and plot.
set.seed(56856583)
nsims <- 10
xstart <- c(time=0,X=392,Y=8,Z=0) #initial conditions
params <- c(mu=0.02,beta=60,gamma=365/13) #parameters
require(plyr)
simdat <- rdply(
  nsims,
  SIR.simul(xstart,params)
)
head(simdat)
plot(Y~time,data=simdat,type='n')
d_ply(simdat,".n",function(x)lines(Y~time,data=x,col=.n))
Check out the simdat data frame created by the above code. Use class, head, tail, str, and plot to examine it.
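For instance, assuming the simdat object from the chunk above is still in your workspace, something like the following would do:
class(simdat) # an ordinary data frame
head(simdat) # first few rows; .n indexes the simulation run
tail(simdat) # last few rows
str(simdat) # column names and types
plot(simdat) # pairwise scatterplots of all columns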
Simulate the stochastic SIR model using Gillespie’s direct method. As before, experiment with the initial number of infecteds (the Y component of xstart).
An alternative implementation of the code above returns the results at specified time-points.
SIR.simul.alt <- function (x, params, times) {
  output <- array(dim=c(length(times),length(x)),
                  dimnames=list(NULL,names(x)))
  output[1,] <- x
  t <- times[1]
  for (k in seq.int(from=2,to=length(times))) {
    ## simulate events until we pass the next reporting time,
    ## remembering the state that held just before we crossed it
    while (t < times[k]) {
      prev <- x
      x <- SIR.onestep(x,params)
      t <- x[1]
    }
    ## record the state that held at times[k]
    output[k,] <- prev
    output[k,1] <- times[k]
  }
  as.data.frame(output)
}
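It might be used along the following lines (a minimal sketch; the weekly reporting times are an arbitrary choice):
xstart <- c(time=0,X=392,Y=8,Z=0)
params <- c(mu=0.02,beta=60,gamma=365/13)
times <- seq(from=0,to=0.5,by=1/52) # weekly reporting times (an assumption)
out <- SIR.simul.alt(xstart,params,times)
plot(Y~time,data=out,type='o')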
From the deterministic models, we know that R0 = 1 is the critical threshold: an epidemic can take off only when R0 exceeds 1. The following codes perform many simulations at each of several values of R0 and record the final size of each simulated epidemic.
R0vals <- c(0.5,3) # R0 values to explore
xstart <- c(time=0,X=392,Y=8,Z=0) # initial conditions
nsims <- 100 # number of simulations per R0 value
simdat <- array(dim=c(length(R0vals),nsims))
for (k in seq_along(R0vals)) {
  R0 <- R0vals[k]
  params <- c(mu=1/60,gamma=365/13,beta=R0*365/13)
  simdat[k,] <- replicate(n=nsims,
    {
      sim <- SIR.simul(xstart,params)
      tail(sim$Z,1) # final number recovered = epidemic final size
    }
  )
}
We can plot these distributions using histograms.
binwidth <- 10
popsize <- sum(xstart[-1])
breaks <- seq(from=0,to=popsize,by=binwidth)
hists <- apply(simdat,1,hist,breaks=breaks,
               probability=TRUE,plot=FALSE)
midpoints <- hists[[1]]$mids
counts <- sapply(hists,function(x)x$counts)
prob <- sapply(hists,function(x)x$density)
barplot(height=t(prob),width=binwidth,names=midpoints,
        beside=TRUE,col=seq_along(R0vals),
        xlab="epidemic final size")
legend("top",legend=R0vals,fill=seq_along(R0vals),
       title=expression(R[0]),bty='n')
Just for fun, let’s make a similar plot using plyr, reshape2, and ggplot2.
require(ggplot2)
require(reshape2)
rownames(simdat) <- R0vals
simdat2 <- melt(simdat,
                varnames=c("R0","rep"),
                value.name="finalsize")
ggplot(data=simdat2,mapping=aes(x=finalsize,group=R0))+
  geom_histogram(binwidth=10,color=NA,position='dodge')+
  facet_grid(R0~.,labeller=label_bquote(R[0]==.(x)))
Use the codes above to explore the final size in a neighborhood of the critical threshold. How do the results from the deterministic and stochastic models differ?
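One way to begin (a sketch only; the particular R0 values are assumptions) is to rerun the final-size experiment above with a grid of R0 values closer to 1:
R0vals <- c(0.8,0.9,1,1.1,1.2,1.5) # assumed values near the threshold
nsims <- 100
simdat <- array(dim=c(length(R0vals),nsims))
for (k in seq_along(R0vals)) {
  params <- c(mu=1/60,gamma=365/13,beta=R0vals[k]*365/13)
  simdat[k,] <- replicate(n=nsims,tail(SIR.simul(xstart,params)$Z,1))
}
rowMeans(simdat) # mean final size at each value of R0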
Use the stochastic simulator to generate some “data” for known parameter values.
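For example, one might do something along these lines (purely illustrative; the parameter values and weekly observation times are assumptions, not prescribed by the exercise):
true.params <- c(mu=0.02,beta=60,gamma=365/13) # assumed "known" parameter values
xstart <- c(time=0,X=392,Y=8,Z=0)
obs.times <- seq(from=0,to=1,by=1/52) # weekly observation times (assumed)
dat <- SIR.simul.alt(xstart,true.params,obs.times) # the simulated "data"
head(dat)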
The Gillespie algorithm gives us a way to generate exact realizations when stochasticity is purely demographic. Very frequently, we wish to consider models with environmental stochasticity or we are thinking about large populations, for which the Gillespie algorithm is impractically slow. The stochastic differential equation (SDE) formalism is a convenient approximation under these circumstances. It represents a more coarse-grained approach to the epidemiological dynamics that can nevertheless capture many important features at relatively modest computational cost.
Here, we’ll examine an SDE formulation of the SIR model with demography and implement some codes that allow us to simulate realizations of this model. When working with compartmental models, the mathematics becomes more convenient if we think first in terms of the fluxes between the compartments. A diagram of the SIR model is shown below, with emphasis on the fluxes. Associated with each flux is a counting process, i.e., a nondecreasing, integer-valued stochastic process. In particular, for any pair of compartments X, Y, let N_{XY}(t) count the number of individuals that have moved from compartment X to compartment Y up to time t.
[Figure: diagram of the SIR model, with emphasis on the fluxes between compartments.]
To formulate the SDE version of the SIR model, we write stochastic differential equations for the fluxes. For example, we might propose that, over a small interval of duration dt, each flux increment dN_{XY} is approximately Gaussian with mean r_{XY} dt and variance r_{XY} dt, where r_{XY} is the corresponding transition rate from the deterministic model; equivalently, dN_{XY} = r_{XY} dt + sqrt(r_{XY}) dW_{XY}, with independent Brownian motions W_{XY}. Stepping the system forward through small increments of time according to these equations is known as the Euler-Maruyama method, implemented by the following general-purpose function.
eulmar <- function (func, xstart, times, params, dt) {
  out <- array(
    dim=c(length(times),length(xstart)),
    dimnames=list(NULL,names(xstart))
  )
  out[1,] <- x <- xstart
  t <- times[1]
  for (k in seq.int(from=2,to=length(times))) {
    while (t < times[k]) {
      dx <- func(t,x,params,dt)
      x <- x+dx
      t <- t+dt
    }
    out[k,] <- x
  }
  as.data.frame(cbind(time=times,out))
}
In the eulmar function, it is assumed that func is a function that takes one Euler-Maruyama step of size dt. Here’s an example of such a function for the SIR model.
sir.eulerstep <- function (t, x, params, dt) {
  N <- sum(x)
  ## mean of each flux increment over an interval of length dt:
  ## birth, infection, recovery, and death from each of S, I, R
  means <- c(
    params["mu"]*N,
    params["beta"]*x[1]*x[2]/N,
    params["gamma"]*x[2],
    params["mu"]*x
  )*dt
  ## Gaussian flux increments, with variance equal to the mean
  dn <- rnorm(n=6,mean=means,sd=sqrt(means))
  ## net change in X, Y, Z
  c(
    dn[1]-dn[2]-dn[4],
    dn[2]-dn[3]-dn[5],
    dn[3]-dn[6]
  )
}
set.seed(238785319)
xstart <- c(X=392,Y=8,Z=0) # initial conditions
params <- c(mu=0.02,beta=50,gamma=365/13) # parameters
times <- seq(from=0,to=0.5,by=1/365)
x <- eulmar(func=sir.eulerstep,
            xstart=xstart,params=params,
            times=times,
            dt=0.001)
plot(Y~time,data=x,type='o')
One shortcoming of the scheme above is that nothing prevents the Gaussian increments from driving the state variables negative. In this case, it will be useful to transform the state variables to the log scale. To derive the increments of the transformed variables, we must use the Itô formula, which supplies the second-order correction terms appearing in the code below:
log.sir.eulerstep <- function (t, x, params, dt) {
  ## x arrives on the log scale; back-transform to get counts
  x <- exp(x)
  N <- sum(x)
  means <- c(
    params["mu"]*N,
    params["beta"]*x[1]*x[2]/N,
    params["gamma"]*x[2],
    params["mu"]*x
  )*dt
  ## variance of each flux increment equals its mean
  v <- means
  dn <- rnorm(n=6,mean=means,sd=sqrt(means))
  ## changes in log X, log Y, log Z, with Ito correction terms
  c(
    (dn[1]-dn[2]-dn[4])/x[1]-sum(v[c(1,2,4)])/2/x[1]/x[1],
    (dn[2]-dn[3]-dn[5])/x[2]-sum(v[c(2,3,5)])/2/x[2]/x[2],
    (dn[3]-dn[6])/x[3]-sum(v[c(3,6)])/2/x[3]/x[3]
  )
}
set.seed(23785319)
xstart <- c(X=392,Y=8,Z=1) # note Z=1: we will take logs, and log(0) is -Inf
params <- c(mu=0.02,beta=50,gamma=365/13)
times <- seq(from=0,to=0.5,by=1/365)
x <- eulmar(func=log.sir.eulerstep,
            xstart=log(xstart),
            params=params,
            times=times,
            dt=0.001)
## back-transform to the natural scale
within(
  x,
  {
    X <- exp(X)
    Y <- exp(Y)
    Z <- exp(Z)
  }
) -> x
plot(Y~time,data=x,type='o')
require(plyr)
simdat <- rdply(
  20,
  within(
    eulmar(func=log.sir.eulerstep,
           xstart=log(xstart),
           params=params,
           times=times,
           dt=0.001),
    {
      X <- exp(X)
      Y <- exp(Y)
      Z <- exp(Z)
    }
  )
)
plot(Y~time,data=simdat,type='n')
d_ply(simdat,".n",function(x)lines(Y~time,data=x,col=.n))