By
Chilès, Jean-Paul; Lantuéjoul, Christian
3 Citations
Prediction here refers to the behavior of a regionalized variable: the average ozone concentration in April 2004 in Paris, the maximum lead concentration on an industrial site, the recoverable reserves of an orebody, the breakthrough time from a source of pollution to a target, etc. Dedicating a whole chapter of a book in honor of Georges Matheron to prediction by conditional simulation is somewhat paradoxical. Indeed, performing simulations requires strong assumptions, whereas Matheron did his utmost to weaken the prerequisites of the prediction methods he developed. Accordingly, he never used them with the aim of predicting, and they represented a marginal part of his activity. The turning bands method, for example, is presented very briefly in a technical report on the Radon transform, to illustrate the one-to-one mapping between d-dimensional isotropic covariances and unidimensional covariances^{1} [44]. As for the technique of conditioning by kriging, it is nowhere to be found in Matheron’s entire published works, as he merely regarded it as an immediate consequence of the orthogonality of the kriging estimator and the kriging error.
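The conditioning-by-kriging identity alluded to in the last sentence, adding the kriged data field to the residual between a non-conditional simulation and its own kriging, can be sketched numerically. Below is a minimal illustration with simple kriging of a 1-D Gaussian field; the grid, covariance model, and data values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D grid and an exponential covariance (illustrative choices)
x = np.linspace(0.0, 1.0, 50)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)

# non-conditional Gaussian simulation Z_s via Cholesky factorization
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))
z_s = L @ rng.standard_normal(len(x))

# pretend data: field values observed at a few grid nodes
idx = np.array([5, 25, 45])
data = np.array([1.0, -0.5, 0.8])

# simple-kriging weight matrix, applicable to any values at idx
K = cov[np.ix_(idx, idx)]
k = cov[:, idx]
w = k @ np.linalg.inv(K)           # shape (n_grid, n_data)

# conditional simulation: Z_cs = Z* + (Z_s - Z_s*)
z_star = w @ data                  # kriging of the real data
zs_star = w @ z_s[idx]             # kriging of the simulated values
z_cs = z_star + (z_s - zs_star)
```

Because simple kriging interpolates exactly, `z_cs` honors the data at the observation nodes while keeping the spatial variability of the non-conditional simulation elsewhere.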
By
Pitou, Cynthia; Diatta, Jean
Textual information extraction is a challenging issue in information retrieval. Two main approaches are commonly distinguished: texture-based and region-based. In this paper, we propose a method guided by quadtree decomposition. The principle of the method is to recursively decompose regions of a document image into four equal regions, starting from the image of the whole document. At each step of the decomposition process, an OCR engine is used for retrieving a given piece of textual information from the obtained regions. Experiments on real invoice data provide promising results.
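The recursive decomposition described in the abstract can be sketched as follows. The OCR engine is replaced here by a hypothetical predicate `run_ocr` (any callable answering "was the target text found in this region?"); region coordinates and the stopping size are illustrative choices, not the paper's parameters.

```python
# Recursive quadtree search over a document image: split each region
# into four equal quadrants and query each one with the OCR predicate.

def quadtree_search(region, run_ocr, min_size=32):
    """region = (x, y, w, h); returns all regions where OCR found the target."""
    x, y, w, h = region
    hits = []
    if run_ocr(region):
        hits.append(region)
    if w // 2 < min_size or h // 2 < min_size:
        return hits                     # quadrants would be too small to OCR
    hw, hh = w // 2, h // 2
    for qx, qy in [(x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)]:
        hits += quadtree_search((qx, qy, hw, hh), run_ocr, min_size)
    return hits

# toy usage: the "OCR" succeeds only in the top-left corner of a 256x256 page
found = quadtree_search((0, 0, 256, 256), lambda r: r[0] < 64 and r[1] < 64)
```

A practical variant would prune quadrants whose parent region yielded no OCR hit; the exhaustive version above keeps the sketch short.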
By
Navarro, F.; Saumard, A.
Many interesting functional bases, such as piecewise polynomials or wavelets, are examples of localized bases. We investigate the optimality of V-fold cross-validation and a variant called V-fold penalization in the context of the selection of linear models generated by localized bases in a heteroscedastic framework. It appears that while V-fold cross-validation is not asymptotically optimal when V is fixed, the V-fold penalization procedure is optimal. Simulation studies are also presented.
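As an illustration of the procedure being compared, V-fold cross-validation over models spanned by a localized basis can be sketched with piecewise-constant (histogram) models in a heteroscedastic regression. The basis, noise law, and grid of model dimensions below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# regression data on [0, 1] with heteroscedastic noise
n = 400
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + (0.1 + 0.4 * x) * rng.standard_normal(n)

def hist_fit_predict(x_tr, y_tr, x_te, d):
    """Least-squares fit on a histogram basis with d equal bins
    (a simple localized basis: piecewise-constant functions)."""
    bins = np.minimum((x_tr * d).astype(int), d - 1)
    means = np.array([y_tr[bins == j].mean() if np.any(bins == j) else 0.0
                      for j in range(d)])
    te_bins = np.minimum((x_te * d).astype(int), d - 1)
    return means[te_bins]

def vfold_risk(x, y, d, V=5):
    """V-fold cross-validated squared risk of the d-bin model."""
    folds = np.arange(len(x)) % V
    risks = []
    for v in range(V):
        tr, te = folds != v, folds == v
        pred = hist_fit_predict(x[tr], y[tr], x[te], d)
        risks.append(np.mean((y[te] - pred) ** 2))
    return np.mean(risks)

dims = range(1, 40)
d_hat = min(dims, key=lambda d: vfold_risk(x, y, d))   # selected dimension
```

The V-fold penalization variant studied in the paper replaces the held-out risk by an empirical risk plus a V-fold-based penalty; the data-splitting skeleton is the same.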
By
Doukhan, Paul
This chapter aims at describing stationary sequences generated from independent identically distributed samples
$$(\xi _n)_{n\in \mathbb {Z}}$$
. Most of the material in this chapter is specific to this monograph, so we do not provide a global reference. However, Rosenblatt (Stationary processes and random fields. Birkhäuser, Boston, 1985) provides an excellent approach to modelling. Generalized linear models are presented in Kedem and Fokianos (Regression models for time series analysis. Wiley, Hoboken, 2002). The Markov case has drawn much attention; see Duflo (Random iterative models. Springer, New York, 1996), and for example Douc et al. (Nonlinear time series: theory, methods, and applications with R examples. CRC Press, Chapman & Hall, Boca Raton, 2015) for the estimation of such Markov models. Many statistical models will be constructed in this way. The organization follows the order from natural extensions of linearity to more general settings. From linear processes it is natural to build polynomial models or their limits. Then we consider more general Bernoulli shift models to define recurrence equations beyond the standard Markov setting.
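As a concrete instance of the "natural extension of linearity" mentioned above, the stationary solution of a linear recursion is a causal Bernoulli shift of the i.i.d. innovations. A small simulation (the coefficient and sample sizes are arbitrary choices) checks its stationary moments against the known closed forms.

```python
import numpy as np

rng = np.random.default_rng(2)

# A causal Bernoulli shift: X_n = H(xi_n, xi_{n-1}, ...) with
# H(u_0, u_1, ...) = sum_j a**j * u_j, i.e. the stationary solution
# of the linear recursion X_n = a * X_{n-1} + xi_n for |a| < 1.
a = 0.7
burn_in, n = 500, 2000
xi = rng.standard_normal(burn_in + n)

x = np.empty(burn_in + n)
x[0] = xi[0]
for t in range(1, burn_in + n):
    x[t] = a * x[t - 1] + xi[t]
x = x[burn_in:]                    # discard burn-in to approach stationarity

# stationary moments: Var X = 1/(1 - a**2), lag-1 autocorrelation = a
var_hat = x.var()
rho1_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
```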
By
Tallur, B.; Nicolas, J.
Summary
It is needless to emphasize the importance of the classification of protein sequences in molecular biology. Various methods of classification are currently used by biologists (Landès et al. 1992), but most of them require the sequences to be pre-aligned (and thus to be of equal length) using one of the several multiple alignment algorithms available, so as to make the site-by-site comparison of sequences possible. Two LLA-based approaches for classifying pre-aligned sequences were already proposed (Lerman et al. 1994a), whose results compared favourably with most currently used methods. The first approach made use of the “preordonnance” coding and the second one, the idea of “significant windows”. New directions of research leading to a clustering method free from this somewhat strong constraint were also suggested by the authors. The present paper gives an account of the recent developments of our research, consisting of a new method that, thanks to the “significant windows” approach, gets round the sequence comparison problem faced when dealing with unaligned sequences.
By
Robert, Christian P.; Casella, George
2 Citations
Until the advent of powerful and accessible computing methods, the experimenter was often confronted with a difficult choice: either describe an accurate model of a phenomenon, which would usually preclude the computation of explicit answers, or choose a standard model that would allow this computation but might not be a close representation of a realistic model. This dilemma is present in many branches of statistical applications, for example, in electrical engineering, aeronautics, biology, networks, and astronomy. To use realistic models, researchers in these disciplines have often developed original approaches for model fitting that are customized for their own problems. (This is particularly true of physicists, the originators of Markov chain Monte Carlo methods.) Traditional methods of analysis, such as the usual numerical analysis techniques, are not well adapted to such settings.
By
Chernoyarov, O. V.; Dachian, S.; Kutoyants, Yu. A.
2 Citations
This work is devoted to the problem of estimating the localization of a Poisson source. The observations are inhomogeneous Poisson processes registered by more than three detectors on the plane. We study the behavior of the Bayes estimators in the asymptotics of large intensities. It is supposed that the intensity functions of the signals arriving at the detectors have a cusp-type singularity. We show the consistency, limit distributions, convergence of moments, and asymptotic efficiency of these estimators.
By
Wahrendorf, J.; Weber, E.
Summary
Ordinal variables, such as disease stage, age group, histological differentiation, and the like, arise very frequently when data are obtained from disease registries or similar surveys, but also in planned studies. McCullagh (1980), in an extensive paper, showed by examples that statistical methods allow such data to be analyzed from different angles, yielding different kinds of conclusions. In doing so, the sampling situation at hand and the conclusions it permits must be considered very carefully. Using an observational study as an example, the possible uses of various, in part newer, methods are demonstrated and the different possibilities of interpretation are discussed, e.g., the test for trend of Armitage (1955), the generalized logit transformation, and the relative risk with respect to one response class. Furthermore, the question of the consistency of the association pattern (Wahrendorf 1980) is pursued with the help of further methods. It turns out that all the various relevant questions ultimately have one goal, namely answering the question of the extent of an interaction between the ordinal variables.
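The Armitage (1955) trend test mentioned above is simple to state: with ordered group scores s_i, r_i cases out of n_i subjects, and overall case proportion p, the statistic T = Σ s_i (r_i − n_i p) is standardized by its variance under the null hypothesis of no trend. A minimal sketch with toy numbers:

```python
import math

def armitage_trend(cases, totals, scores):
    """Cochran-Armitage test for trend in proportions across ordered
    groups (Armitage 1955). Returns the standard-normal statistic Z."""
    N = sum(totals)
    p = sum(cases) / N
    t = sum(s * (r - n * p) for s, r, n in zip(scores, cases, totals))
    var = p * (1 - p) * (sum(n * s * s for n, s in zip(totals, scores))
                         - sum(n * s for n, s in zip(totals, scores)) ** 2 / N)
    return t / math.sqrt(var)

# toy example: proportions rising over three ordered categories
z = armitage_trend(cases=[10, 20, 30], totals=[100, 100, 100],
                   scores=[0, 1, 2])   # clearly significant upward trend
```

The choice of equally spaced scores is conventional but not forced; unequal scores encode a different alternative of interest.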
By
Alvares, Danilo; Armero, Carmen; Forte, Anabel; Chopin, Nicolas
Longitudinal modelling is common in biostatistical research. In some studies, it becomes necessary to update posterior distributions as new data arrive in order to perform the inferential process online. In such situations, using the posterior distribution as the prior distribution in a new application of Bayes’ theorem is sensible. However, the analytic form of the posterior distribution is not always available; often we only have an approximate sample from it, which makes the process “not so easy”. Equivalent inferences could be obtained through a Bayesian inferential process based on the combined set of old and new data. Nevertheless, this is not always a viable alternative, because it may be computationally very costly in terms of both time and resources. This work exploits the dynamic character of sequential Monte Carlo methods for “static” setups in the framework of longitudinal modelling scenarios. We apply this methodology to real data through a random-intercept model.
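The simplest SMC-style update implied by the abstract: treat the available posterior sample as particles, reweight each particle by the likelihood of the new data, then resample. The sketch below uses a conjugate normal-mean toy model (all numbers invented) so the exact updated posterior is available for comparison; a real longitudinal application would add move steps to fight particle degeneracy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Conjugate toy model: theta ~ N(0, 1), data y_i ~ N(theta, 1).
# Stage 1: exact posterior sample given the old data.
y_old = np.array([0.5, 1.2, 0.8])
n1 = len(y_old)
post_var1 = 1.0 / (1.0 + n1)
post_mean1 = post_var1 * y_old.sum()
particles = rng.normal(post_mean1, np.sqrt(post_var1), size=20000)

# Stage 2 (SMC-style update): reweight by the new-data likelihood,
# then resample to get an approximate updated posterior sample.
y_new = np.array([1.5, 0.9])
logw = sum(-0.5 * (y - particles) ** 2 for y in y_new)
w = np.exp(logw - logw.max())
w /= w.sum()
idx = rng.choice(len(particles), size=len(particles), p=w)
updated = particles[idx]

# exact posterior given all data, for comparison
n = n1 + len(y_new)
exact_var = 1.0 / (1.0 + n)
exact_mean = exact_var * (y_old.sum() + y_new.sum())
```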
By
Blair, Aaron; Hohenadel, Karin; Demers, Paul; Marrett, Loraine; Straif, Kurt
A number of workplace exposures are known to cause cancer. In fact, the workplace has been a major source of information regarding the causes of cancer. In most countries, there are sizable public and private efforts to control occupational exposures in order to minimize disease risks. Despite these considerable and appropriate preventive efforts, there is relatively little information on their effectiveness. The few studies available do indicate that controlling occupational exposures leads to a reduction in cancer risk. However, details regarding this reduction, e.g., time-dependent changes in risk following intervention and potential confounding and effect modification from other occupational exposures or personal habits, are largely lacking. Such information is needed to identify and characterize successful exposure-reduction approaches and to reduce the cancer burden on the working population in a timely manner.
