Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions
Springer Science & Business Media, 2012/12/06 - 156 pages

This book provides a unified introduction to a variety of computational algorithms for likelihood and Bayesian inference. In this second edition, I have attempted to expand the treatment of many of the techniques discussed, as well as include important topics such as the Metropolis algorithm and methods for assessing the convergence of a Markov chain algorithm. Prerequisites for this book include an understanding of mathematical statistics at the level of Bickel and Doksum (1977), some understanding of the Bayesian approach as in Box and Tiao (1973), experience with conditional inference at the level of Cox and Snell (1989), and exposure to statistical models as found in McCullagh and Nelder (1989). I have chosen not to present the proofs of convergence or rates of convergence, since these proofs may require substantial background in Markov chain theory which is beyond the scope of this book. However, references to these proofs are given. There has been an explosion of papers in the area of Markov chain Monte Carlo in the last five years. I have attempted to identify key references, though due to the volatility of the field some work may have been missed.
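The Metropolis algorithm mentioned above can be illustrated with a minimal sketch. This is not the book's implementation, only a generic random-walk Metropolis sampler targeting an unnormalized standard-normal density; the function names and step size are illustrative choices.

```python
# Minimal random-walk Metropolis sampler (illustrative sketch).
# Target: an unnormalized N(0, 1) density, worked on the log scale
# for numerical stability.
import math
import random

def log_target(x):
    # log of the unnormalized standard-normal density
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)        # symmetric proposal
        log_ratio = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_ratio:     # accept with prob min(1, ratio)
            x = proposal
        samples.append(x)                          # on rejection, repeat current x
    return samples

draws = metropolis(20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Under the equilibrium distribution the draws should have mean near 0 and variance near 1; assessing whether the chain has actually reached equilibrium is the convergence question the preface refers to.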