Showing 1 to 10 of 192 matching Articles
By
Merwe, Abrie J.; Hugo, Johan
2 Citations
Statistical intervals, properly calculated from sample data, are likely to be substantially more informative to decision makers than a point estimate alone, and are often of paramount interest to practitioners and thus management (and are usually a great deal more meaningful than statistical significance or hypothesis tests). Wolfinger (1998, J Qual Technol 36:162–170) presented a simulation-based approach for determining Bayesian tolerance intervals in a balanced one-way random effects model. In this note the theory and results of Wolfinger are extended to the balanced two-factor nested random effects model. The example illustrates the flexibility and unique features of the Bayesian simulation method for the construction of tolerance intervals.
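The simulation idea can be sketched compactly: each posterior draw of the mean and variance components implies a candidate content interval, and the Bayesian tolerance limits are outer quantiles of those endpoints across draws. A minimal one-way sketch, where the posterior draws are hypothetical stand-ins for actual MCMC output (not the paper's two-factor nested extension):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical posterior draws for the grand mean and the two variance
# components; these stand in for actual MCMC output from the fitted model.
mu     = rng.normal(10.0, 0.2, 5000)
sig2_b = rng.gamma(3.0, 0.5, 5000)   # between-group variance draws
sig2_e = rng.gamma(5.0, 0.3, 5000)   # error variance draws

content, cred = 0.95, 0.95           # interval content and credibility level
z = norm.ppf((1 + content) / 2)
half = z * np.sqrt(sig2_b + sig2_e)  # per-draw half-width of the content interval

# Bayesian two-sided tolerance limits: outer quantiles, across posterior
# draws, of the per-draw interval endpoints (mu - half, mu + half).
lower = np.quantile(mu - half, 1 - cred)
upper = np.quantile(mu + half, cred)
print(lower, upper)
```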
By
Nadar, Mustafa; Papadopoulos, Alexander; Kızılaslan, Fatih
23 Citations
In this paper we review some results that have been derived on record values for some well-known probability density functions and, based on m records from Kumaraswamy’s distribution, we obtain estimators for the two parameters and the future sth record value. These estimates are derived using the maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are assumed to be random variables, and estimators for the parameters and for the future sth record value are obtained, when m past record values have been observed, using the well-known squared error loss (SEL) function and a linear exponential (LINEX) loss function. The findings are illustrated with actual and computer-generated data.
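The two loss functions lead to different Bayes estimates from the same posterior: under SEL the estimate is the posterior mean, while under LINEX loss with shape a it is d = −(1/a) log E[exp(−aθ)]. A short sketch with hypothetical posterior draws standing in for the actual record-value posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of a parameter theta (a stand-in for the
# posterior in the record-value model of the paper).
theta = rng.gamma(4.0, 0.5, 20000)

# Squared-error-loss (SEL) Bayes estimate: the posterior mean.
d_sel = theta.mean()

# LINEX-loss Bayes estimate: d = -(1/a) * log E[exp(-a * theta)].
# Positive a penalises over-estimation more heavily than under-estimation,
# so the LINEX estimate sits below the posterior mean.
a = 1.0
d_linex = -np.log(np.mean(np.exp(-a * theta))) / a
print(d_sel, d_linex)
```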
By
Nyman, Henrik; Xiong, Jie; Pensar, Johan; Corander, Jukka
2 Citations
An inductive probabilistic classification rule must generally obey the principles of Bayesian predictive inference, such that all observed and unobserved stochastic quantities are jointly modeled and the parameter uncertainty is fully acknowledged through the posterior predictive distribution. Several such rules have been recently considered and their asymptotic behavior has been characterized under the assumption that the observed features or variables used for building a classifier are conditionally independent given a simultaneous labeling of both the training samples and those from an unknown origin. Here we extend the theoretical results to predictive classifiers acknowledging feature dependencies either through graphical models or sparser alternatives defined as stratified graphical models. We show through experimentation with both synthetic and real data that the predictive classifiers encoding dependencies have the potential to substantially improve classification accuracy compared with both standard discriminative classifiers and the predictive classifiers based on solely conditionally independent features. In most of our experiments stratified graphical models show an advantage over ordinary graphical models.
By
Consonni, Guido; Gutiérrez-Peña, Eduardo; Veronese, Piero
7 Citations
Suppose we entertain Bayesian inference under a collection of models. This requires assigning a corresponding collection of prior distributions, one for each model’s parameter space. In this paper we address the issue of relating priors across models, and provide both a conceptual and a pragmatic justification for this task. Specifically, we consider the notion of “compatible” priors across models, and discuss and compare several strategies to construct such distributions. To explicate the issues involved, we refer to a specific problem, namely, testing the Hardy–Weinberg Equilibrium model, for which we provide a detailed analysis using Bayes factors.
By
Kubokawa, T.; Marchand, É.; Strawderman, W. E.
1 Citation
We consider a class of mixture models for positive continuous data and the estimation of an underlying parameter θ of the mixing distribution. With a unified approach, we obtain classes of estimators dominating an unbiased estimator under squared error loss, which include smooth estimators. Applications include estimating noncentrality parameters of chi-square and F-distributions, as well as ρ^{2}/(1 − ρ^{2}), where ρ is a multivariate correlation coefficient in a multivariate normal setup. Finally, the findings are extended to situations where there exists a lower bound constraint on θ.
By
Miladinovic, Branko; Tsokos, Chris P.
Extreme value distributions are increasingly being applied in biomedical literature to model unusual behavior or rare events. Two popular methods that are used to estimate the location and scale parameters of the type I extreme value (or Gumbel) distribution, namely, the empirical distribution function and the method of moments, are not optimal, especially for small samples. Additionally, even with the more robust maximum likelihood method, it is difficult to make inferences regarding outcomes based on estimates of location and scale parameters alone. Quantile modeling has been advocated in statistical literature as an intuitive and comprehensive approach to inferential statistics. We derive Bayesian estimates of the Gumbel quantile function by utilizing the Jeffreys noninformative prior and Lindley approximation procedure. The advantage of this approach is that it utilizes information on the prior distribution of parameters, while making minimal impact on the estimated posterior distribution. The Bayesian and maximum likelihood estimates are compared using numerical simulation. Numerical results indicate that Bayesian quantile estimates are closer to the true quantiles than their maximum likelihood counterparts. We illustrate the method by applying the estimates to published extreme data from the analysis of streak artifacts on computed tomography (CT) images.
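The object being estimated is the Gumbel quantile function Q(p) = μ − β ln(−ln p). As a baseline, the maximum likelihood plug-in estimate (the comparator in the paper, not the Bayesian/Lindley procedure itself) can be sketched as:

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(2)

# Synthetic sample from a Gumbel (type I extreme value) distribution
# with known location and scale, used here in place of real CT data.
loc_true, scale_true = 5.0, 2.0
x = gumbel_r.rvs(loc=loc_true, scale=scale_true, size=2000, random_state=rng)

# Maximum likelihood fit, then plug the estimates into the quantile
# function Q(p) = loc - scale * log(-log(p)).
loc_hat, scale_hat = gumbel_r.fit(x)
p = 0.9
q_hat = loc_hat - scale_hat * np.log(-np.log(p))
print(loc_hat, scale_hat, q_hat)
```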
By
Zens, Gregor
A method for implicit variable selection in mixture-of-experts frameworks is proposed. We introduce a prior structure where information is taken from a set of independent covariates. Robust class membership predictors are identified using a normal-gamma prior. The resulting model setup is used in a finite mixture of Bernoulli distributions to find homogeneous clusters of women in Mozambique based on their information sources on HIV. Fully Bayesian inference is carried out via the implementation of a Gibbs sampler.
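A Gibbs sampler for a finite Bernoulli mixture alternates between sampling cluster labels given parameters and sampling parameters from their conjugate conditionals. The sketch below uses synthetic data and plain Beta/Dirichlet priors, not the paper's normal-gamma prior structure or covariate model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic binary data from a two-component Bernoulli mixture
# (a hypothetical stand-in for the survey data in the paper).
n, d = 400, 5
z_true = rng.random(n) < 0.5
probs = np.where(z_true[:, None], 0.8, 0.2)
X = (rng.random((n, d)) < probs).astype(int)

K = 2
theta = rng.random((K, d))        # per-cluster success probabilities
pi = np.full(K, 1.0 / K)          # mixture weights

for it in range(200):
    # 1. Sample cluster labels given parameters (categorical posterior,
    #    computed in log space for numerical stability).
    logp = (np.log(pi)
            + X @ np.log(theta).T
            + (1 - X) @ np.log(1 - theta).T)
    logp -= logp.max(axis=1, keepdims=True)
    w = np.exp(logp)
    w /= w.sum(axis=1, keepdims=True)
    z = (rng.random(n) < w[:, 1]).astype(int)   # shortcut valid for K = 2

    # 2. Sample parameters given labels from conjugate conditionals:
    #    Beta for each theta[k], Dirichlet for the weights.
    for k in range(K):
        Xk = X[z == k]
        theta[k] = rng.beta(1 + Xk.sum(axis=0), 1 + len(Xk) - Xk.sum(axis=0))
    counts = np.bincount(z, minlength=K)
    pi = rng.dirichlet(1 + counts)

# Sorted per-cluster mean success probabilities (sorting absorbs
# possible label switching between the two components).
m = np.sort(theta.mean(axis=1))
print(m)
```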
By
Ghosh, Sujit K.; Ebrahimi, Nader
4 Citations
Change-point hazard rate models arise, for example, in applying “burn-in” techniques to screen defective items and in studying times until undesirable side effects occur in clinical trials. The classical approach develops estimates of model parameters, with particular interest in the threshold or change-point parameter, but exclusively in terms of asymptotic properties. Such asymptotics can be poor for the small to moderate sample sizes often encountered in practice. We propose a Bayesian approach, avoiding asymptotics, to provide more reliable inference conditional only upon the data actually observed. The Bayesian models can be fitted using simulation methods. We develop a very general formulation of the model but also look at special cases which offer particularly simple fitting. We illustrate with an application involving failure times of electrical insulation.
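In the simplest special case the hazard is piecewise constant with a single change point: h(t) = λ₁ for t < τ and λ₂ for t ≥ τ. The sketch below simulates such data and locates τ by a classical profile-likelihood grid search; this is the asymptotics-based baseline, shown only to fix the model, not the paper's Bayesian procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical piecewise-constant hazard with one change point tau:
# h(t) = lam1 for t < tau, lam2 for t >= tau.
lam1, lam2, tau_true = 2.0, 0.5, 1.0

# Simulate failure times by inverting the cumulative hazard H(T),
# since H(T) is unit-exponential for any continuous survival time.
u = rng.exponential(size=2000)
t = np.where(u < lam1 * tau_true,
             u / lam1,
             tau_true + (u - lam1 * tau_true) / lam2)

def loglik(tau):
    """Profile log-likelihood: plug in the MLEs of lam1, lam2 given tau."""
    d1 = (t < tau).sum()
    d2 = len(t) - d1
    e1 = np.minimum(t, tau).sum()          # total exposure before tau
    e2 = np.maximum(t - tau, 0).sum()      # total exposure after tau
    l1, l2 = d1 / e1, d2 / e2
    return d1 * np.log(l1) - l1 * e1 + d2 * np.log(l2) - l2 * e2

grid = np.linspace(0.2, 3.0, 281)
tau_hat = grid[np.argmax([loglik(g) for g in grid])]
print(tau_hat)
```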
By
Moreno, Elías; Martínez, Carmen; Vázquez-Polo, Francisco-José
Selecting a statistical model from a set of competing models is a central issue in the scientific task, and the Bayesian approach to model selection is based on the posterior model distribution, a quantification of the updated uncertainty on the entertained models. We present a Bayesian procedure for choosing between the Poisson and the geometric families and prove that the procedure is consistent with rate $$O(a^{n})$$, $$a>1$$, where a is a function of the parameter of the true model. An extension of this procedure to the multiple testing of the Poisson against negative binomial distributions with r successes for $$r=1,\ldots ,L$$ is also proved to be consistent with exponential rate. For small sample sizes, a simulation study indicates that model selection between the above distributions is made with large uncertainty when sampling from a specific subset of distributions. This difficulty is, however, mitigated by the large consistency rate of the procedure.
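For conjugate priors the marginal likelihoods of both families are available in closed form, so a Bayes factor can be computed directly. The sketch below uses simple Gamma and Beta priors as stand-ins for the paper's own prior construction:

```python
import numpy as np
from scipy.special import gammaln, betaln

rng = np.random.default_rng(5)
x = rng.poisson(3.0, size=50)      # sample actually drawn from a Poisson
n, S = len(x), int(x.sum())

# Log marginal likelihood under Poisson(lam) with a Gamma(a, b) prior:
# integrating e^{-n lam} lam^S / prod(x_i!) against the prior gives
# b^a / Gamma(a) * Gamma(a + S) / (b + n)^{a + S} / prod(x_i!).
a, b = 1.0, 1.0
log_m_pois = (a * np.log(b) - gammaln(a)
              + gammaln(a + S) - (a + S) * np.log(b + n)
              - gammaln(x + 1).sum())

# Log marginal likelihood under Geometric(p) on {0, 1, 2, ...} with a
# Beta(1, 1) prior: the likelihood p^n (1 - p)^S integrates to B(n+1, S+1).
log_m_geom = betaln(1 + n, 1 + S) - betaln(1, 1)

log_BF = log_m_pois - log_m_geom   # > 0 favours the Poisson family
print(log_BF)
```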
By
Campos, Cassio P.; Zhang, Lei; Tong, Yan; Ji, Qiang
4 Citations
This paper explores the application of semi-qualitative probabilistic networks (SQPNs), which combine numeric and qualitative information, to computer vision problems. Our version of SQPN allows qualitative influences and imprecise probability measures using intervals. We describe an Imprecise Dirichlet model for parameter learning and an iterative algorithm for evaluating posterior probabilities, maximum a posteriori configurations and most probable explanations. Experiments on facial expression recognition and image segmentation problems are performed using real data.
