Basic area-level model

The basic area-level model (Fay and Herriot 1979; Rao and Molina 2015) is given by \[ y_i \mid \theta_i \stackrel{\mathrm{ind}}{\sim} {\cal N} (\theta_i, \psi_i) \,, \qquad \theta_i = \beta' x_i + v_i \,, \] where \(i\) runs from 1 to \(m\), the number of areas, \(\beta\) is a vector of regression coefficients for given covariates \(x_i\), and \(v_i \stackrel{\mathrm{iid}}{\sim} {\cal N} (0, \sigma_v^2)\) are random area effects. For each area a direct estimate \(y_i\) is available with known sampling variance \(\psi_i\); note that the \(y_i\) are independent but not identically distributed, since the variances differ over areas.

First we generate some data according to this model:

m <- 75L  # number of areas
df <- data.frame(
  area=1:m,      # area indicator
  x=runif(m)     # covariate
)
v <- rnorm(m, sd=0.5)    # true area effects
theta <- 1 + 3*df$x + v  # quantity of interest
psi <- runif(m, 0.5, 2) / sample(1:25, m, replace=TRUE)  # given variances
df$y <- rnorm(m, theta, sqrt(psi))
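
As a quick check (not part of the model code), we can plot the generated direct estimates against the true quantities of interest; they should scatter around the identity line:

plot(theta, df$y, xlab="true theta", ylab="direct estimate y")
abline(0, 1)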

A sampler function for a model with a regression component and a random intercept is created by

library(mcmcsae)
model <- y ~ reg(~ 1 + x, name="beta") + gen(factor = ~iid(area), name="v")
sampler <- create_sampler(
  model,
  family=f_gaussian(var.prior=pr_fixed(1), var.vec = ~ psi),
  linpred="fitted", data=df
)

The meaning of the arguments used is as follows:

- the first argument is the model formula: a regression component reg() containing an intercept and the covariate x, plus a generic random effect component gen() with independent effects over the levels of area;
- family=f_gaussian(...) specifies a Gaussian sampling distribution; var.prior=pr_fixed(1) fixes the variance scale parameter at 1, so that the variances supplied through var.vec = ~ psi are treated as the given variances \(\psi_i\);
- linpred="fitted" means that the full linear predictor, here \(\theta_i\), is computed for each area;
- data is the data frame holding the variables referenced in the model specification.
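
If the variances were known only up to a common scale factor, that factor could instead be estimated by leaving var.prior at its package default; a hypothetical variant of the call above:

sampler_scaled <- create_sampler(
  model,
  family=f_gaussian(var.vec = ~ psi),  # variance scale factor estimated (default prior)
  linpred="fitted", data=df
)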

An MCMC simulation using this sampler function is then carried out as follows:

sim <- MCMCsim(sampler, store.all=TRUE, verbose=FALSE)
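
The call above uses the default number of chains and iterations. A longer run can be requested explicitly; a sketch assuming the documented arguments n.chain, n.iter, burnin and thin (output not shown):

# longer run: 4 chains of 5000 iterations after a burn-in of 500, thinned by 2
sim_long <- MCMCsim(sampler, n.chain=4, n.iter=5000, burnin=500, thin=2,
                    store.all=TRUE, verbose=FALSE)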

A summary of the results is obtained by

(summ <- summary(sim))
## llh_ :
##       Mean  SD t-value  MCSE q0.05  q0.5 q0.95 n_eff R_hat
## llh_ -19.1 5.9   -3.24 0.122 -29.1 -18.9 -9.95  2357     1
## 
## linpred_ :
##     Mean    SD t-value    MCSE q0.05  q0.5 q0.95 n_eff R_hat
## 1  3.430 0.205   16.77 0.00380 3.101 3.426  3.77  2898 1.000
## 2  3.005 0.295   10.19 0.00555 2.513 2.998  3.49  2827 0.999
## 3  4.062 0.326   12.45 0.00597 3.521 4.064  4.60  2982 1.000
## 4  1.151 0.210    5.49 0.00389 0.806 1.154  1.49  2912 0.999
## 5  3.613 0.246   14.67 0.00470 3.207 3.620  4.02  2751 0.999
## 6  4.564 0.187   24.47 0.00341 4.260 4.562  4.88  3000 0.999
## 7  1.611 0.202    7.96 0.00369 1.269 1.613  1.94  3000 1.002
## 8  3.155 0.244   12.94 0.00457 2.755 3.155  3.55  2844 1.000
## 9  2.297 0.258    8.92 0.00494 1.885 2.290  2.73  2715 1.000
## 10 0.666 0.288    2.31 0.00527 0.195 0.665  1.15  3000 1.000
## ... 65 elements suppressed ...
## 
## beta :
##              Mean    SD t-value    MCSE q0.05 q0.5 q0.95 n_eff R_hat
## (Intercept) 0.921 0.127    7.27 0.00231 0.717 0.92  1.13  3000     1
## x           3.227 0.207   15.55 0.00379 2.879 3.23  3.57  3000     1
## 
## v_sigma :
##          Mean     SD t-value    MCSE q0.05  q0.5 q0.95 n_eff R_hat
## v_sigma 0.425 0.0529    8.04 0.00124 0.344 0.422 0.516  1805     1
## 
## v :
##       Mean    SD t-value    MCSE   q0.05    q0.5   q0.95 n_eff R_hat
## 1   0.4513 0.212   2.133 0.00394  0.1022  0.4530  0.8033  2886 1.000
## 2  -0.2645 0.295  -0.895 0.00549 -0.7679 -0.2641  0.2259  2893 1.000
## 3  -0.0761 0.322  -0.237 0.00587 -0.6125 -0.0741  0.4568  3000 0.999
## 4  -0.4413 0.222  -1.987 0.00405 -0.8139 -0.4401 -0.0809  3000 1.000
## 5  -0.4071 0.252  -1.614 0.00468 -0.8260 -0.4069 -0.0038  2911 0.999
## 6   0.7106 0.204   3.477 0.00373  0.3798  0.7120  1.0413  3000 1.000
## 7   0.3162 0.218   1.452 0.00398 -0.0532  0.3203  0.6814  3000 1.000
## 8   0.1390 0.248   0.560 0.00464 -0.2621  0.1434  0.5382  2865 1.000
## 9  -0.1029 0.258  -0.398 0.00492 -0.5115 -0.1067  0.3269  2757 1.000
## 10 -0.3339 0.291  -1.146 0.00549 -0.8066 -0.3360  0.1495  2818 1.000
## ... 65 elements suppressed ...

In this example we can compare the model parameter estimates with the ‘true’ parameter values used to generate the data. The next plots compare the estimated with the ‘true’ random effects, and the model-based estimates with the ‘true’ estimands. In the latter plot, the original ‘direct’ estimates are added as red triangles.

plot(v, summ$v[, "Mean"], xlab="true v", ylab="posterior mean"); abline(0, 1)
plot(theta, summ$linpred_[, "Mean"], xlab="true theta", ylab="estimated"); abline(0, 1)
points(theta, df$y, col=2, pch=2)
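
The agreement can also be checked numerically, using only quantities computed above: the posterior means of the regression coefficients should be close to the generating values 1 and 3, and the model-based estimates should have a smaller mean squared error with respect to the true \(\theta_i\) than the direct estimates.

# regression coefficients versus the values used to generate the data
cbind(true=c(1, 3), estimated=summ$beta[, "Mean"])
# mean squared errors w.r.t. the true theta: model-based versus direct
c(model  = mean((summ$linpred_[, "Mean"] - theta)^2),
  direct = mean((df$y - theta)^2))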

We can compute model selection measures DIC and WAIC by

compute_DIC(sim)
##      DIC    p_DIC 
## 86.95802 48.68899
compute_WAIC(sim, show.progress=FALSE)
##    WAIC1  p_WAIC1    WAIC2  p_WAIC2 
## 59.20655 20.93311 81.55569 32.10768
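
These measures are mainly useful for comparing models. For instance, one could fit the same model without the random area effects and compare DIC values; the model including v should then come out with the lower DIC. A sketch, output not shown:

# regression component only, no random area effects
sampler0 <- create_sampler(
  y ~ reg(~ 1 + x, name="beta"),
  family=f_gaussian(var.prior=pr_fixed(1), var.vec = ~ psi),
  linpred="fitted", data=df
)
sim0 <- MCMCsim(sampler0, store.all=TRUE, verbose=FALSE)
compute_DIC(sim0)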

Posterior means of residuals can be extracted from the simulation output using the residuals method. Here is a plot of (posterior means of) residuals against the covariate \(x\):

plot(df$x, residuals(sim, mean.only=TRUE), xlab="x", ylab="residual"); abline(h=0)
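
Similarly, the residuals can be plotted against the model-based estimates obtained earlier, to check for any remaining structure:

plot(summ$linpred_[, "Mean"], residuals(sim, mean.only=TRUE),
     xlab="model-based estimate", ylab="residual"); abline(h=0)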

A linear predictor in a linear model can be expressed as a weighted sum of the response values. If we set compute.weights=TRUE, such weights are computed for all linear predictors specified in the linpred argument. In this case this means that a set of weights is computed for each area.

sampler <- create_sampler(
  model,
  family=f_gaussian(var.prior=pr_fixed(1), var.vec = ~ psi),
  linpred="fitted", data=df, compute.weights=TRUE
)
sim <- MCMCsim(sampler, store.all=TRUE, verbose=FALSE)

Now the weights method returns a matrix of weights, in this case a 75 \(\times\) 75 matrix with elements \(w_{ij}\) holding the weight of direct estimate \(i\) in the linear predictor for area \(j\). To verify that the weights applied to the direct estimates yield the model-based estimates, we plot them against each other. Also shown is a plot of the weight of each area's direct estimate in the predictor for that same area, i.e. the diagonal of the weight matrix, against the variance of the direct estimate.

plot(summ$linpred_[, "Mean"], crossprod(weights(sim), df$y),
     xlab="estimate", ylab="weighted average")
abline(0, 1)
plot(psi, diag(weights(sim)), xlab="variance psi", ylab="weight")
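
The weight matrix can also be inspected directly; the weighted combinations should reproduce the model-based estimates, though only up to Monte Carlo error, since summ was computed from the earlier simulation run:

dim(weights(sim))  # 75 x 75: one column of weights per area
summary(as.vector(crossprod(weights(sim), df$y)) - summ$linpred_[, "Mean"])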

References

Fay, R. E., and R. A. Herriot. 1979. “Estimates of Income for Small Places: An Application of James-Stein Procedures to Census Data.” Journal of the American Statistical Association 74 (366): 269–77.
Rao, J. N. K., and I. Molina. 2015. Small Area Estimation. John Wiley & Sons.