JAGS

JAGS stands for Just Another Gibbs Sampler. It can be invoked directly from R. The usefulness of this program (as well as other probabilistic programming tools such as WinBUGS, NIMBLE, and Stan) is that it allows users to focus on specifying the probabilistic model rather than on the computation (e.g., writing and debugging sampler code). One inevitable drawback is that users may rely on this tool without fully grasping the implications of their assumptions, sometimes specifying models that are nonsensical. Another problem is that, when something breaks inside the black box, you are stuck with error messages that are often incomprehensible. Finally, JAGS can be very slow to run when you have a lot of data. Anyway, here is a nice webpage that explains a bit how to implement different models in JAGS.

JAGS is a stand-alone program that you will need to download from this website. To allow R to communicate with JAGS, we will rely on the R package “jagsUI”, so make sure it has been installed and loaded into R.
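
For instance, a minimal sketch of this setup step (assuming JAGS itself has already been downloaded and installed on your machine):

install.packages("jagsUI") #only needed once
library(jagsUI)            #needed in every new R session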

Statistical model

To illustrate how to use JAGS, let’s go back to our running example. Recall that we are assuming: \[x_i \sim N(\mu,\sigma^2)\] \[\mu \sim N(m_0,v^2_0)\] \[\frac{1}{\sigma^2} \sim Gamma(a,b)\]

where

\[m_0=0\] \[v^2_0 = 100\] \[a=b=0.1\]
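
As a quick check on how vague these priors are, recall that a \(Gamma(a,b)\) distribution has mean \(a/b\) and variance \(a/b^2\), so with \(a=b=0.1\):

\[E\left[\frac{1}{\sigma^2}\right] = \frac{0.1}{0.1} = 1 \qquad Var\left[\frac{1}{\sigma^2}\right] = \frac{0.1}{0.1^2} = 10\]

In other words, the prior on the precision is centered on 1 but very spread out, and the \(N(0,100)\) prior on \(\mu\) is similarly diffuse.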

Model specification in JAGS

Whenever using JAGS, we will rely on two files. The first is the JAGS file, which contains the specification of the statistical model. This is stored in a file called “normal_model.R”. Here is its content:

model{
    #likelihood: all observations share the same mean and precision
    for (i in 1:N){
        X[i]~dnorm(mu,prec)
    }
    #priors on the two unknown parameters
    mu ~ dnorm(0,1/100)
    prec ~ dgamma(0.1,0.1)
}
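
The model file is plain text, so you can create it in any text editor. Alternatively, a small sketch of how to write the same specification to disk from within R:

#write the JAGS model specification to a file called "normal_model.R"
model.code <- "
model{
    for (i in 1:N){
        X[i]~dnorm(mu,prec)
    }
    mu ~ dnorm(0,1/100)
    prec ~ dgamma(0.1,0.1)
}"
writeLines(model.code, "normal_model.R")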

Notice that:

  • We are trying to estimate two parameters, “mu” and “prec”.
  • Keep in mind that, in JAGS, the normal distribution is parameterized using the mean (“mu”) and the precision (i.e., 1/sig2), not the standard deviation as in R (see the short sketch after this list).
  • The line within the loop is the description of the likelihood, which simply states that each \(x_i\), for i=1,…,N, comes from a normal distribution with the same parameters “mu” and “prec”.
  • The code outside the loop describes our priors.
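
To see what this parameterization implies, here is a small sketch (the numbers are purely illustrative) of the correspondence between JAGS’s dnorm(mu,prec) and R’s rnorm(), which uses the standard deviation:

prec=0.0625                        #a precision of 1/16...
sd.equiv=1/sqrt(prec)              #...corresponds to a standard deviation of 4
x=rnorm(100000,mean=3,sd=sd.equiv) #what JAGS means by dnorm(3,0.0625)
1/var(x)                           #should be close to 0.0625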

Notice that with JAGS it is very simple to experiment with different priors or likelihoods. In contrast, if we want to derive the FCDs (full conditional distributions) to create our customized Gibbs sampler from scratch, we have to be very careful with the choice of the likelihood-prior pair to ensure conjugacy. Later on, as part of developing our Gibbs sampler from scratch, we will explore how to use the Metropolis-Hastings algorithm to generate samples from FCDs for which we do not have a closed-form expression (i.e., FCDs that do not correspond to any standard distribution).
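
For instance, here is a sketch of how the model file could be changed to place a uniform prior directly on the standard deviation instead of a gamma prior on the precision (the bounds 0 and 100 are illustrative choices, not part of our running example):

model{
    for (i in 1:N){
        X[i]~dnorm(mu,prec)
    }
    mu ~ dnorm(0,1/100)
    sigma ~ dunif(0,100)     #prior now placed directly on the standard deviation
    prec <- 1/(sigma*sigma)  #precision is a deterministic function of sigma
}

A change like this would force us to re-derive the FCDs in a from-scratch Gibbs sampler, but in JAGS it is all that is needed.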

Running JAGS from within R

The other file is an R file that puts the data into a format suitable for JAGS, runs JAGS, and examines its output. Here it is:

rm(list=ls(all=TRUE)) 
library(jagsUI)

#generate simulated data
true.mu=3
true.sd=4
X=rnorm(1000,mean=true.mu,sd=true.sd)

# MCMC settings 
ni <- 15000 #number of iterations
nt <- 50    #interval to thin 
nb <- 5000  #number of iterations to discard as burn-in
nc <- 3     #number of chains

#set parameters to monitor
params=c("mu","prec")

#create function that sets the initial values
inits=function(){
  list(mu=rnorm(1,0,10), prec=rgamma(1,1,1))
}

#run model
setwd('U:/uf/courses/bayesian course/rmarkdown') #folder that contains "normal_model.R"
model2=jags(model.file="normal_model.R", inits=inits,
           parameters.to.save=params,data=list(X=X,N=length(X)),n.chains=nc,
           n.burnin=nb,n.iter=ni,n.thin=nt,DIC=TRUE)
model2 #the Rhat results suggest that the algorithm has converged
## JAGS output for model 'normal_model.R', generated by jagsUI.
## Estimates based on 3 chains of 15000 iterations,
## adaptation = 100 iterations (sufficient),
## burn-in = 5000 iterations and thin rate = 50,
## yielding 600 total samples from the joint posterior. 
## MCMC ran for 0.014 minutes at time 2024-02-15 09:08:03.160588.
## 
##              mean    sd     2.5%      50%    97.5% overlap0 f  Rhat n.eff
## mu          2.979 0.131    2.732    2.976    3.229    FALSE 1 1.003   518
## prec        0.059 0.002    0.054    0.059    0.064    FALSE 1 1.001   600
## deviance 5672.060 1.728 5670.217 5671.528 5676.634    FALSE 1 1.009   310
## 
## Successful convergence based on Rhat values (all < 1.1). 
## Rhat is the potential scale reduction factor (at convergence, Rhat=1). 
## For each parameter, n.eff is a crude measure of effective sample size. 
## 
## overlap0 checks if 0 falls in the parameter's 95% credible interval.
## f is the proportion of the posterior with the same sign as the mean;
## i.e., our confidence that the parameter is positive or negative.
## 
## DIC info: (pD = var(deviance)/2) 
## pD = 1.5 and DIC = 5673.549 
## DIC is an estimate of expected predictive error (lower is better).
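
Beyond the printed summary, jagsUI also provides plotting utilities for visually inspecting convergence. A minimal sketch (assuming the traceplot() function and the plot() method that jagsUI supplies for its output objects):

#trace plots for the monitored parameters, one panel per parameter
traceplot(model2, parameters=c("mu","prec"))

#trace and density plots combined
plot(model2)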

Examining JAGS results

Let’s see if the algorithm has been able to retrieve the true parameter values “true.mu” and “true.sd”. Because JAGS estimates the precision, we examine 1/“prec” and compare it to the true variance (i.e., “true.sd” squared). The results shown below suggest that the algorithm has successfully estimated both parameters.

#inference on mu
mu=model2$sims.list$mu #posterior samples of mu (all chains combined)
par(mfrow=c(2,1),mar=c(4,4,1,1))
plot(mu,type='l',xlab='Iterations',ylab='mu') #trace plot
abline(h=true.mu,col='blue') #true value

plot(density(mu),type='l',xlab='',main='mu') #posterior density
z=quantile(mu,c(0.025,0.975)) #95% credible interval
abline(v=z,col='red',lty=3)
abline(v=mean(mu)) #posterior mean
abline(v=true.mu,col='blue') #true value

#inference on sig2 (variance = 1/precision)
sig2=1/model2$sims.list$prec #convert posterior samples of the precision into variances
par(mfrow=c(2,1),mar=rep(4,4))
plot(sig2,type='l',xlab='Iterations',ylab='sig2') #trace plot
abline(h=true.sd^2,col='blue') #true value

plot(density(sig2),type='l',xlab='',main='sig2') #posterior density
z=quantile(sig2,c(0.025,0.975)) #95% credible interval
abline(v=z,col='red',lty=3)
abline(v=mean(sig2)) #posterior mean
abline(v=true.sd^2,col='blue') #true value
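
To complement these plots, we can also summarize the posterior samples numerically. A quick sketch:

#posterior means and 95% credible intervals
mean(mu); quantile(mu,c(0.025,0.975))     #compare to true.mu = 3
mean(sig2); quantile(sig2,c(0.025,0.975)) #compare to true.sd^2 = 16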


