Published by De Gruyter March 24, 2021

Bayesian approaches to variable selection: a comparative study from practical perspectives

  • Zihang Lu and Wendy Lou

Abstract

In many clinical studies, researchers are interested in parsimonious models that simultaneously achieve consistent variable selection and optimal prediction. Such parsimonious models facilitate meaningful biological interpretation and scientific discovery. Variable selection via Bayesian inference has advanced considerably in recent years. Despite its growing popularity, there is limited practical guidance on implementing these Bayesian approaches and evaluating their comparative performance on clinical datasets. In this paper, we review several commonly used Bayesian approaches to variable selection, with emphasis on application and implementation through R software. These approaches can be roughly grouped into four classes: Bayesian model selection, spike-and-slab priors, shrinkage priors, and hybrids of the spike-and-slab and shrinkage priors. To evaluate their variable selection performance under various scenarios, we compare these four classes of approaches using real and simulated datasets. The results provide practical guidance to researchers interested in applying Bayesian approaches to variable selection.
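To make the four classes concrete, the sketch below fits three of them to simulated sparse data using the CRAN packages BAS (Bayesian model selection and averaging), BoomSpikeSlab (spike-and-slab priors), and horseshoe (shrinkage priors). This is a minimal illustration, not the paper's own code: the simulated design, MCMC lengths, and the 0.5 inclusion cutoff are illustrative assumptions, and argument names and defaults may vary across package versions.

```r
# Illustrative sketch (not the paper's code): three of the four classes of
# Bayesian variable selection applied to the same simulated sparse data.
library(BAS)            # Bayesian model selection / model averaging
library(BoomSpikeSlab)  # spike-and-slab priors
library(horseshoe)      # horseshoe shrinkage prior

set.seed(123)
n <- 100; p <- 10
X <- matrix(rnorm(n * p), n, p)
beta <- c(2, -1.5, 1, rep(0, p - 3))      # sparse truth: 3 signals, 7 nulls
y <- as.vector(X %*% beta + rnorm(n))
df <- data.frame(y = y, X)                # columns y, X1, ..., X10

# Bayesian model selection: explore the model space and report marginal
# posterior inclusion probabilities (PIPs) for each predictor.
fit_bas <- bas.lm(y ~ ., data = df, modelprior = uniform())
round(fit_bas$probne0, 2)

# Spike-and-slab: coefficient draws are exactly zero when a variable is
# excluded, so the PIP is the posterior frequency of nonzero draws.
fit_ss <- lm.spike(y ~ ., data = df, niter = 5000)
pip <- colMeans(fit_ss$beta[-(1:1000), -1] != 0)  # drop burn-in and intercept
which(pip > 0.5)

# Shrinkage (horseshoe): select variables whose marginal posterior credible
# interval excludes zero.
fit_hs <- horseshoe(y, X, method.tau = "halfCauchy", method.sigma = "Jeffreys")
which(HS.var.select(fit_hs, y, method = "intervals") == 1)
```

The 0.5 cutoff on posterior inclusion probabilities corresponds to the median probability model, a common default for turning PIPs into a selected variable set.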


Corresponding author: Zihang Lu, Department of Public Health Sciences, Queen’s University, Kingston, Ontario, Canada

Acknowledgments

This research was supported by funding from a Canadian Institutes of Health Research Doctoral Award (Frederick Banting and Charles Best Canada Graduate Scholarship) to the first author while he was studying at the University of Toronto. The authors would like to thank the reviewers for their comments and suggestions, which have significantly improved the quality of the manuscript.

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: Canadian Institutes of Health Research Doctoral Award.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.


Supplementary Material

The online version of this article offers supplementary material (https://doi.org/10.1515/ijb-2020-0130).


Received: 2020-02-11
Revised: 2020-11-21
Accepted: 2021-02-27
Published Online: 2021-03-24

© 2021 Walter de Gruyter GmbH, Berlin/Boston
