By Dufner, Julius; Jensen, Uwe; Schumacher, Erich
Abstract
SAS (Statistical Analysis System), which from 1975 to 1980 was still purely statistical software, has since been expanded into a comprehensive software system for the management, analysis, and presentation of data.
By Besbeas, Panagiotis; Borysiewicz, Rachel S.; Morgan, Byron J. T.
18 Citations
A challenge for integrated population methods is to examine the extent to which different surveys that measure different demographic features for a given species are compatible. Do the different pieces of the jigsaw fit together? One convenient way of proceeding is to generate a likelihood for census data using the Kalman filter, which is then suitably combined with other likelihoods that might arise from independent studies of mortality, fecundity, and so forth. The combined likelihood may then be used for inference. Typically the underlying model for the census data is a state-space model, and capture–recapture methods of various kinds are used to construct the additional likelihoods. In this paper we provide a brief review of the approach; we present a new way to start the Kalman filter, designed specifically for ecological processes; we investigate the effect of breakdown of the independence assumption; we show how the Kalman filter may be used to incorporate density dependence; and we consider the effect of introducing heterogeneity in the state-space model.
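The census likelihood described in this abstract rests on the prediction-error decomposition computed by the Kalman filter. A minimal sketch follows, assuming a univariate local-level state-space model with illustrative parameter names (`T`, `Z`, `Q`, `H`); it is not the authors' actual population model, only the generic recursion they build on.

```python
import math

def kalman_loglik(y, T, Z, Q, H, a0, P0):
    """Gaussian log-likelihood of a univariate series y under a linear
    state-space model, via the prediction-error decomposition.

    State:       a_t = T * a_{t-1} + eta_t,  eta_t ~ N(0, Q)
    Observation: y_t = Z * a_t     + eps_t,  eps_t ~ N(0, H)
    """
    a, P = a0, P0
    loglik = 0.0
    for yt in y:
        # one-step prediction of the state and its variance
        a = T * a
        P = T * P * T + Q
        # prediction error and its variance
        v = yt - Z * a
        F = Z * P * Z + H
        loglik += -0.5 * (math.log(2 * math.pi * F) + v * v / F)
        # measurement update via the Kalman gain
        K = P * Z / F
        a = a + K * v
        P = (1.0 - K * Z) * P
    return loglik
```

Combining likelihoods as the abstract describes then amounts to summing this census log-likelihood with, say, a ring-recovery or capture–recapture log-likelihood and maximizing the total jointly over the shared demographic parameters.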
By Chuang-Stein, Christy; Kirby, Simon
In this chapter, we compare successive trials designed and conducted to assess the efficacy of a new drug to a series of diagnostic tests. The condition to diagnose is whether the new drug has a clinically meaningful efficacious effect. This comparison offers us the opportunity to apply properties pertaining to diagnostic tests discussed in Chap. 3 to clinical trials. Building on the results in Chap. 3, we discuss why replication is such a critically important concept in drug development and show why replication is not as easy as some might have hoped. We end the chapter by highlighting the difference between statistical power and the probability of a positive trial. This last point becomes more important as a new drug moves through the various development stages, as will be illustrated in Chap. 9.
By Hornung, Roman
The ordinal forest method is a random forest-based prediction method for ordinal response variables. Ordinal forests allow prediction using both low-dimensional and high-dimensional covariate data and can additionally be used to rank covariates with respect to their importance for prediction. An extensive comparison study reveals that ordinal forests tend to outperform competitors in terms of prediction performance. Moreover, it is seen that the covariate importance measure currently used by ordinal forests discriminates influential covariates from noise covariates at least as well as the measures used by competitors. Several further important properties of the ordinal forest algorithm are studied in additional investigations. The rationale underlying ordinal forests, namely using optimized score values in place of the class values of the ordinal response variable, is in principle applicable to any regression method for continuous outcomes beyond the random forests considered in the ordinal forest method.
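The score-based rationale mentioned in the abstract's last sentence can be illustrated with a toy sketch: ordinal class labels are replaced by continuous scores, a continuous regressor is fitted to those scores, and each prediction is mapped back to the class with the nearest score. The 1-nearest-neighbour regressor and the score values below are illustrative stand-ins, not the actual ordinal forest algorithm.

```python
def fit_predict_ordinal(X_train, y_train, X_test, scores):
    """Ordinal prediction via regression on class scores: replace each
    ordinal class label by its score, fit a continuous regressor (here a
    1-nearest-neighbour stub standing in for the random forest), and map
    continuous predictions back to the class with the nearest score."""
    z_train = [scores[c] for c in y_train]  # class label -> score value

    def predict_one(x):
        # 1-NN regression: predicted score is that of the closest
        # training point (squared Euclidean distance)
        i = min(range(len(X_train)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(X_train[j], x)))
        z_hat = z_train[i]
        # nearest score defines the predicted ordinal class
        return min(range(len(scores)), key=lambda k: abs(scores[k] - z_hat))

    return [predict_one(x) for x in X_test]
```

Because only the score-to-class mapping is ordinal-specific, the same wrapper could in principle be placed around any continuous-outcome regressor, which is exactly the generality the abstract points out.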
By Heck, André
2 Citations
In the book Databases & Online Data in Astronomy (Albrecht & Egret 1991), the chapter entitled Astronomical Directories (Heck, 1991) dealt essentially with classical publications on paper: directories of organizations, lists of electronic addresses, and so on. Some files were starting to become retrievable electronically. In Sect. 8 (Future Trends) of that chapter, we looked forward to using the online and networking facilities that have since become a daily reality.
By Pitou, Cynthia; Diatta, Jean
Textual information extraction is a challenging issue in Information Retrieval. Two main approaches are commonly distinguished: texture-based and region-based. In this paper, we propose a method guided by quadtree decomposition. The principle of the method is to recursively decompose regions of a document image into four equal regions, starting from the image of the whole document. At each step of the decomposition process, an OCR engine is used to retrieve a given piece of textual information from the obtained regions. Experiments on real invoice data provide promising results.
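The recursive decomposition the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions: regions are `(x, y, width, height)` tuples, and the `find_text` callback is a hypothetical stand-in for running the OCR engine on a region and checking for the target text.

```python
def quadtree_search(region, find_text, max_depth):
    """Recursively split a rectangular region (x, y, w, h) into four
    equal quadrants, starting from the whole-document region; at each
    step, run the OCR/extraction callback on the current region and
    collect every region in which the target text was found."""
    x, y, w, h = region
    hits = []
    if find_text(region):
        hits.append(region)
    if max_depth == 0 or w < 2 or h < 2:
        return hits  # stop: depth budget spent or region too small
    hw, hh = w // 2, h // 2
    # the four equal quadrants of the current region
    for qx, qy in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
        hits.extend(quadtree_search((qx, qy, hw, hh), find_text, max_depth - 1))
    return hits
```

Deeper recursion narrows the hits down to ever-smaller regions that still contain the sought text, which is what localizes a field on an invoice image.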
By Meintanis, Simos G.; Allison, James; Santana, Leonard
2 Citations
We investigate the finite-sample properties of certain procedures which employ the novel notion of the probability weighted empirical characteristic function. The procedures considered are: (1) testing for symmetry in regression, (2) testing for multivariate normality with independent observations, and (3) testing for multivariate normality of random effects in mixed models. Along with the new tests, alternative methods based on the ordinary empirical characteristic function, as well as other better-known procedures, are implemented for the purpose of comparison.
By Ubøe, Jan
In this chapter we will look at situations that occur when we study two discrete random variables at the same time. One example of that sort could be that X is the price of a stock, while Y is the number of shares of that stock sold each day. In many cases there can be more or less strong relations between the two variables, and it is important to be able to decide to what extent such relations are present. The theory has much in common with the theory in Chap. 1. In this case we need to take into account that different outcomes are not in general equally probable.
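The stock-price example in this abstract can be made concrete with a joint probability mass function. The numbers below are made up for illustration; the covariance E[XY] − E[X]E[Y] is one standard first measure of how strongly the two variables are related.

```python
# Hypothetical joint pmf of a stock price X and daily volume Y,
# stored as {(x, y): P(X = x, Y = y)}; the probabilities are invented.
pmf = {(10, 100): 0.3, (10, 200): 0.1,
       (12, 100): 0.2, (12, 200): 0.4}

def marginal(pmf, axis):
    """Marginal pmf of X (axis=0) or Y (axis=1): sum out the other variable."""
    out = {}
    for pair, p in pmf.items():
        out[pair[axis]] = out.get(pair[axis], 0.0) + p
    return out

def covariance(pmf):
    """Cov(X, Y) = E[XY] - E[X] * E[Y]."""
    ex = sum(x * p for (x, y), p in pmf.items())
    ey = sum(y * p for (x, y), p in pmf.items())
    exy = sum(x * y * p for (x, y), p in pmf.items())
    return exy - ex * ey
```

Here the outcomes are not equally probable, exactly as the chapter notes, and a positive covariance would indicate that higher prices tend to coincide with higher traded volume.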
