Posterior inference for STAR linear model
Usage
blm_star(
  y,
  X,
  X_test = NULL,
  transformation = "np",
  y_max = Inf,
  prior = "gprior",
  use_MCMC = TRUE,
  nsave = 1000,
  nburn = 1000,
  nskip = 0,
  psi = NULL,
  compute_marg = FALSE
)
Arguments
- y: n x 1 vector of observed counts
- X: n x p matrix of predictors
- X_test: n0 x p matrix of predictors for test data
- transformation: transformation to use for the latent process; must be one of
  - "identity" (identity transformation)
  - "log" (log transformation)
  - "sqrt" (square root transformation)
  - "np" (nonparametric transformation estimated from the empirical CDF)
  - "pois" (transformation for moment-matched marginal Poisson CDF)
  - "neg-bin" (transformation for moment-matched marginal Negative Binomial CDF)
  - "box-cox" (Box-Cox transformation with learned parameter)
  - "ispline" (transformation modeled as an unknown, monotone function using I-splines)
  - "bnp" (Bayesian nonparametric transformation using the Bayesian bootstrap)
- y_max: a fixed and known upper bound for all observations; default is Inf
- prior: prior to use for the latent linear regression; currently implemented options are "gprior", "horseshoe", and "ridge"
- use_MCMC: logical; if TRUE, use the Gibbs sampler; if FALSE, use the exact Monte Carlo sampler (default is TRUE)
- nsave: number of MCMC iterations to save (or Monte Carlo samples to draw if use_MCMC=FALSE)
- nburn: number of MCMC iterations to discard
- nskip: number of MCMC iterations to skip between saved iterations, i.e., save every (nskip + 1)th draw
- psi: prior variance for the g-prior
- compute_marg: logical; if TRUE, compute and return the marginal likelihood (only available with the exact sampler, i.e., use_MCMC=FALSE)
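As an illustrative sketch of these arguments (the values are hypothetical, not recommendations), a call that changes the prior and the MCMC settings might look like:

# Hypothetical settings: horseshoe prior, longer chain, thinning by 3
fit_hs = blm_star(y = y, X = X,
                  transformation = "np",
                  prior = "horseshoe",
                  nsave = 2000, nburn = 2000, nskip = 2)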
Value
a list with at least the following elements:

- coefficients: the posterior mean of the regression coefficients
- post.beta: posterior draws of the regression coefficients
- post.pred: draws from the posterior predictive distribution of y
- post.log.like.point: draws of the log-likelihood for each of the n observations
- WAIC: Widely Applicable (Watanabe-Akaike) Information Criterion
- p_waic: effective number of parameters based on WAIC

If test points are passed in, then the list will also have post.predtest, which contains draws from the posterior predictive distribution at the test points.
Other elements may be present depending on the choice of prior, transformation, and sampling approach.
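For illustration, a sketch of obtaining test-point predictions; the construction of the held-out matrix here is hypothetical:

# Hypothetical held-out design matrix with the same columns as X
X_test = X[1:10, , drop = FALSE]
fit_test = blm_star(y = y, X = X, X_test = X_test)
dim(fit_test$post.predtest)  # draws x n0 test points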
Details
STAR defines a count-valued probability model by (1) specifying a Gaussian model for continuous *latent* data and (2) connecting the latent data to the observed data via a *transformation and rounding* operation. Here, the continuous latent data model is a linear regression.
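For intuition, here is a minimal sketch of the transformation-and-rounding construction, assuming the 'log' transformation and floor rounding; the simulated design and coefficients are hypothetical:

# Minimal sketch of the STAR construction (assumed: g = log, rounding = floor)
n = 100
X_sim = cbind(1, rnorm(n))          # simple design: intercept + one predictor
beta = c(1, 0.5)
z_star = X_sim %*% beta + rnorm(n)  # continuous latent data (linear regression)
y_sim = floor(exp(z_star))          # invert g, then round down to a count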
There are several options for the transformation. First, the transformation can belong to the *Box-Cox* family, which includes the known transformations 'identity', 'log', and 'sqrt', as well as a version in which the Box-Cox parameter is inferred within the MCMC sampler ('box-cox'). Second, the transformation can be estimated (before model fitting) using the empirical distribution of the data y. Options in this case include the empirical cumulative distribution function (CDF), which is fully nonparametric ('np'), or the parametric alternatives based on Poisson ('pois') or Negative Binomial ('neg-bin') distributions. For the parametric distributions, the parameters of the distribution are estimated using moments (means and variances) of y. The distribution-based transformations approximately preserve the mean and variance of the count data y on the latent data scale, which lends interpretability to the model parameters. Lastly, the transformation can be modeled using the Bayesian bootstrap ('bnp'), which is a Bayesian nonparametric model and incorporates the uncertainty about the transformation into posterior and predictive inference.
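As a sketch, candidate transformations can be compared on the same data via the returned WAIC; the choice of candidates here is arbitrary:

# Fit the same latent linear model under two transformations
fit_np   = blm_star(y = y, X = X, transformation = "np")
fit_pois = blm_star(y = y, X = X, transformation = "pois")
c(np = fit_np$WAIC, pois = fit_pois$WAIC)  # smaller WAIC is preferred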
The Monte Carlo sampler (use_MCMC=FALSE) produces direct, discrete, and joint draws from the posterior distribution and the posterior predictive distribution of the linear regression model with a g-prior.
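For example, a sketch of the exact sampler under the g-prior, also requesting the marginal likelihood (which is available only in this setting):

fit_mc = blm_star(y = y, X = X,
                  prior = "gprior",
                  use_MCMC = FALSE,
                  compute_marg = TRUE)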
Note
The 'bnp' transformation is slower than the other transformations because of the way the TruncatedNormal sampler must be updated as the lower and upper limits change (due to the sampling of g). Thus, computational improvements are likely available.
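A sketch of a 'bnp' fit; wrapping the call in system.time is merely one way to gauge the extra cost:

# Expect this to run noticeably longer than the default transformation
system.time(fit_bnp <- blm_star(y = y, X = X, transformation = "bnp"))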
Examples
# \donttest{
# Simulate data with count-valued response y:
sim_dat = simulate_nb_lm(n = 100, p = 5)
y = sim_dat$y; X = sim_dat$X
# Fit the Bayesian STAR linear model:
fit = blm_star(y = y, X = X)
#> [1] "Burn-In Period"
#> [1] "Starting sampling"
#> [1] "0 seconds remaining"
#> [1] "Total time: 1 seconds"
# What is included:
names(fit)
#> [1] "coefficients" "post.beta" "post.pred"
#> [4] "post.sigma" "post.log.like.point" "WAIC"
#> [7] "p_waic"
# Posterior mean of each coefficient:
coef(fit)
#> beta1 beta2 beta3 beta4 beta5
#> 0.08370276 0.29644099 0.41728039 -0.09148244 -0.06807578
# WAIC:
fit$WAIC
#> [1] 325.1182
# MCMC diagnostics:
plot(as.ts(fit$post.beta))
# Posterior predictive check:
hist(apply(fit$post.pred, 1, function(x) mean(x == 0)),
     main = 'Proportion of Zeros', xlab = '')
abline(v = mean(y == 0), lwd = 4, col = 'blue')
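# A hypothetical extension (not in the original example): 95% posterior
# predictive intervals per observation, assuming post.pred is draws x n:
pi95 = t(apply(fit$post.pred, 2, quantile, probs = c(0.025, 0.975)))
head(pi95)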
# }