In real-world problems we are invariably faced with making decisions under uncertainty (see also the chapter by R. Korsan in this volume). A statistical paradigm then becomes essential for extracting information from observed data and using it to improve our knowledge about the world (inference), thus guiding us in the decision problem at hand. For a Bayesian, the underlying interpretation of probability is a subjective one, referring to a personal degree of belief. The rules of probability calculus are used to examine how prior beliefs are transformed into posterior beliefs by incorporating the information in the data.

The sampling model is a “window” [see Poirier (1988)] through which the researcher views the world. Here we only consider cases where such a model is parameterized by a parameter vector *θ* of finite dimension. A Bayesian then focuses on inference about *θ* (treated as a random variable) given the observed data *Y* (treated as fixed), summarized in the posterior density *p*(*θ*|*Y*). The observations in *Y* thus define a mapping from the prior *p*(*θ*) to the posterior *p*(*θ*|*Y*). The posterior distribution can also be used to integrate out the parameters when we are interested in forecasting future values, say *Ỹ*, leading to the post-sample predictive density

*p*(*Ỹ*|*Y*) = ∫ *p*(*Ỹ*|*Y*, *θ*) *p*(*θ*|*Y*) d*θ*,

where *p*(*Ỹ*|*Y*, *θ*) is obtained from the sampling model.
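As a minimal sketch of the prior-to-posterior mapping and the predictive density, consider a hypothetical Beta–Bernoulli example (not drawn from the text): a Beta prior on a success probability *θ*, Bernoulli data *Y*, and the predictive probability of a future observation obtained by averaging *p*(*Ỹ*|*θ*) over posterior draws of *θ*. The data and prior hyperparameters below are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical Bernoulli data Y: 7 successes in 10 trials (illustrative).
data = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
n, s = len(data), sum(data)

# Beta(a, b) prior on theta; by conjugacy the posterior p(theta | Y)
# is Beta(a + s, b + n - s).
a, b = 1.0, 1.0
a_post, b_post = a + s, b + (n - s)

# Exact posterior mean of theta.
post_mean = a_post / (a_post + b_post)

# Predictive probability p(y_tilde = 1 | Y) = integral of
# p(y_tilde = 1 | theta) p(theta | Y) d theta, approximated by a
# Monte Carlo average over posterior draws of theta.
draws = [random.betavariate(a_post, b_post) for _ in range(100_000)]
pred_mc = sum(draws) / len(draws)

print(round(post_mean, 3))  # exact posterior mean: 8/12 ≈ 0.667
print(round(pred_mc, 3))    # Monte Carlo estimate, close to the exact value
```

Here the integral over *θ* is tractable in closed form (the predictive probability equals the posterior mean), which makes the Monte Carlo approximation easy to check; in richer models only the simulation-based route is available.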