BAYES MODELLING OF JOINT MEAN AND COVARIANCE STRUCTURE AND VARIABLE SELECTION USING MCMC

  • Jiaming Shen

Student thesis: PhD

Abstract

This thesis presents a joint modelling scheme for the mean and covariance structure of longitudinal data from the Bayesian perspective. Modelling covariance matrices is generally difficult for two reasons: high dimensionality and the positive-definiteness constraint. Based on a modified Cholesky decomposition (MCD) of the covariance matrix, we propose to model the mean, the generalised autoregressive parameters and the innovation variances resulting from the MCD simultaneously, in terms of linear regression models. We consider not only parameter estimation but also variable selection within the Bayesian framework. Specifically, a Markov chain Monte Carlo (MCMC) sampling strategy is adopted, with the Gibbs sampler, the Metropolis-Hastings (MH) algorithm and the slice sampler used to draw random samples from the posterior distributions of the model parameters. Bayesian variable selection through shrinkage priors is also investigated. We demonstrate our methodologies on both simulated and real data using R. Compared with existing Bayesian methods, the proposed methods provide more reliable results.

Chapter 2 gives a systematic introduction to the Bayesian version of the joint mean-covariance model (JMCM).

In Chapter 3 we propose algorithms to sample the innovation variance coefficients of the JMCM efficiently. To this end, we first construct auxiliary variables that transform the intractable posterior distribution into one of known form, so that the approach becomes a full Gibbs algorithm. We also propose an adaptive MH algorithm that obtains a fast-mixing Markov chain by adapting the covariance matrix of the random-walk proposal. In simulation studies we compare the full Gibbs and adaptive MH algorithms with the approximation-proposal MH algorithm widely used in previous work. The full Gibbs algorithm is faster than the approximation-proposal method in low-dimensional cases and, in addition, is not sensitive to the choice of initial values. As the dimension of the parameter vector increases, the full Gibbs and adaptive MH algorithms perform better in terms of mean squared error (MSE) and acceptance rate. We also apply both approaches to real data studies.

In Chapter 4 we explore indicator-based Bayesian variable selection; the indicator method, however, suffers from several issues and is not ideal.

To overcome the difficulties encountered in Chapter 4, Chapter 5 uses the horseshoe prior to obtain sparse Bayesian estimates. We show that the shrinkage profile for the innovation variances differs from that of the well-studied normal means problem, and we propose a modified version of the horseshoe prior adapted to it. Simulation studies show that, under sparse settings, a vague normal prior performs similarly to the horseshoe prior in the mean model when p < n, whereas for the innovation variance model the horseshoe prior outperforms the vague normal prior even when p < n.
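
As a minimal R sketch of the decomposition underlying the model (illustrative, not the thesis's code): the MCD factors a covariance matrix Sigma as T Sigma T' = D, where T is unit lower-triangular and carries the negative generalised autoregressive parameters below its diagonal, and the diagonal of D holds the innovation variances.

```r
# Modified Cholesky decomposition: unit lower-triangular T and diagonal D
# with T %*% Sigma %*% t(T) = D; the below-diagonal entries of -T are the
# generalised autoregressive parameters, diag(D) the innovation variances.
mcd <- function(Sigma) {
  C <- t(chol(Sigma))             # chol() returns the upper factor; transpose it
  d <- diag(C)
  T_mat <- diag(d) %*% solve(C)   # unit lower-triangular (ones on the diagonal)
  list(T = T_mat, D = diag(d^2), phi = -T_mat[lower.tri(T_mat)])
}

Sigma <- toeplitz(0.5^(0:3))      # example 4 x 4 AR(1)-type covariance
fit <- mcd(Sigma)
max(abs(fit$T %*% Sigma %*% t(fit$T) - fit$D))  # ~ 1e-16: decomposition holds
```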
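
The adaptive MH algorithm of Chapter 3 tunes the random-walk proposal covariance from the chain's own history. A generic Haario-style sketch, in which the target log-posterior and tuning constants are placeholders rather than the thesis's model:

```r
# Adaptive random-walk MH: the proposal covariance is rescaled from the
# empirical covariance of the draws so far (Haario-style sketch).
adapt_mh <- function(log_post, init, n_iter = 5000, adapt_start = 500) {
  p <- length(init)
  draws <- matrix(NA_real_, n_iter, p)
  cur <- init
  cur_lp <- log_post(cur)
  prop_cov <- diag(p) * 0.1       # fixed proposal during the burn-in phase
  for (t in seq_len(n_iter)) {
    if (t > adapt_start) {        # adapt the proposal from the chain history
      prop_cov <- 2.38^2 / p * cov(draws[1:(t - 1), , drop = FALSE]) +
        1e-6 * diag(p)            # small jitter keeps the proposal non-singular
    }
    cand <- cur + drop(rnorm(p) %*% chol(prop_cov))
    cand_lp <- log_post(cand)
    if (log(runif(1)) < cand_lp - cur_lp) { cur <- cand; cur_lp <- cand_lp }
    draws[t, ] <- cur
  }
  draws
}

# e.g. a standard bivariate normal target, started away from the mode:
out <- adapt_mh(function(b) -0.5 * sum(b^2), init = c(3, -3))
```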
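
Indicator-based selection, as explored in Chapter 4, is commonly formulated as a spike-and-slab mixture with a binary inclusion indicator per coefficient. A single Gibbs update in a simplified normal-means setting (the conjugate setup and names are illustrative; the thesis's specification may differ):

```r
# One Gibbs scan for indicator-based (spike-and-slab) selection:
# gamma_j = 1 includes beta_j with a N(0, slab) prior, gamma_j = 0 sets it to 0.
update_indicators <- function(beta_hat, se, gamma, slab = 10, w = 0.5) {
  for (j in seq_along(beta_hat)) {
    # marginal likelihood of beta_hat[j] under the spike vs. the slab
    m0 <- dnorm(beta_hat[j], 0, se[j])
    m1 <- dnorm(beta_hat[j], 0, sqrt(se[j]^2 + slab))
    gamma[j] <- rbinom(1, 1, w * m1 / (w * m1 + (1 - w) * m0))
  }
  gamma
}
```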
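
For the horseshoe prior used in Chapter 5, the half-Cauchy scales admit an inverse-gamma auxiliary-variable representation (Makalic and Schmidt, 2016) that gives closed-form Gibbs updates. A normal-means sketch of the standard, unmodified horseshoe:

```r
# Horseshoe on normal means: y_j ~ N(beta_j, 1), beta_j ~ N(0, lambda_j^2 tau^2),
# lambda_j, tau ~ C+(0, 1), via inverse-gamma auxiliaries nu_j and xi.
horseshoe_gibbs <- function(y, n_iter = 2000) {
  p <- length(y)
  lambda2 <- rep(1, p); tau2 <- 1; nu <- rep(1, p); xi <- 1
  beta_draws <- matrix(NA_real_, n_iter, p)
  for (t in seq_len(n_iter)) {
    k <- lambda2 * tau2 / (lambda2 * tau2 + 1)   # per-coordinate shrinkage weight
    beta <- rnorm(p, k * y, sqrt(k))
    lambda2 <- 1 / rgamma(p, 1, rate = 1 / nu + beta^2 / (2 * tau2))
    nu      <- 1 / rgamma(p, 1, rate = 1 + 1 / lambda2)
    tau2    <- 1 / rgamma(1, (p + 1) / 2,
                          rate = 1 / xi + sum(beta^2 / lambda2) / 2)
    xi      <- 1 / rgamma(1, 1, rate = 1 + 1 / tau2)
    beta_draws[t, ] <- beta
  }
  beta_draws
}
```
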
Date of Award: 1 Aug 2022
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Alexander Donev & Jianxin Pan

Keywords

  • shrinkage prior
  • Longitudinal data analysis
  • Bayesian analysis
  • Joint mean covariance model
  • MCMC
