## Cyclostationary Feature Detection Based Spectrum Sensing Technique of Cognitive Radio in Nakagami-m Fading Environment

### Computational Intelligence in Data Mining - Volume 2 (2015-01-01) 32: 209-219 , January 01, 2015

The main function of a cognitive radio network is to sense a spectrum band and check whether the primary user is present at a given place. One of the most efficient spectrum sensing techniques is cyclostationary feature detection. Although its computational complexity is very high, cyclostationary feature detection remains effective when the noise level is unknown. Here, the spectral correlation function (SCF) of the received signal is determined. In this paper, the SCF is calculated using Hanning, Hamming and Kaiser windows, and we have also determined the cyclic periodogram of the input signal. In each case, we have considered Nakagami-*m* channel fading. A likelihood ratio test has been performed by varying the parameters of the channel fading distribution. Finally, the effects of the windows have been studied in the numerical section.
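As a rough sketch of the cyclic-periodogram idea described above (the helper name is ours, with a Hanning window standing in for the three windows the paper compares), one can correlate the windowed spectrum with a frequency-shifted copy of itself; at cyclic frequency zero this reduces to the ordinary periodogram:

```python
import numpy as np

def cyclic_periodogram(x, alpha_bin, window):
    """Crude cyclic-periodogram estimate at cyclic-frequency bin `alpha_bin`:
    correlate the windowed spectrum with a copy of itself shifted by
    `alpha_bin` FFT bins.  (Function name is ours, not the paper's.)"""
    X = np.fft.fft(x * window)
    return X * np.conj(np.roll(X, -alpha_bin)) / len(x)

n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 0.125 * t)   # a pure carrier has strong cyclic features
scf0 = cyclic_periodogram(x, 0, np.hanning(n))  # alpha = 0: ordinary periodogram
```

Swapping `np.hanning` for `np.hamming` or `np.kaiser` reproduces the window comparison in spirit.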

## On local power properties of the LR, Wald, score and gradient tests in nonlinear mixed-effects models

### Annals of the Institute of Statistical Mathematics (2015-10-01) 67: 885-895 , October 01, 2015

The local powers of some tests under the presence of a parameter vector, $$\varvec{\omega }$$ say, that is orthogonal to the remaining parameters are studied in this paper. We show that some of the coefficients that define the local powers of the tests remain unchanged regardless of whether $$\varvec{\omega }$$ is known or needs to be estimated, whereas the others can be written as the sum of two terms, the first of which being the corresponding term obtained as if $$\varvec{\omega }$$ were known, and the second, an additional term yielded by the fact that $$\varvec{\omega }$$ is unknown. We apply our general result in the class of nonlinear mixed-effects models and compare the local powers of the tests in this class of models.

## The impact of misspecification of residual error or correlation structure on the type I error rate for covariate inclusion

### Journal of Pharmacokinetics and Pharmacodynamics (2009-02-01) 36: 81-99 , February 01, 2009

It has been shown that when using the FOCE method in NONMEM, the likelihood ratio test (LRT) can be sensitive to the use of an inappropriate estimation method in that ignoring an existing η–ε interaction leads to actual significance levels for type I errors being higher than the nominal levels. The objective of this study was to assess through simulations the LRT sensitivity to various types of residual error model misspecifications in both continuous and categorical data. The study contained two parts, simulations based on continuous and categorical data. Data sets containing 250 individuals with up to 24 observations per individual were simulated multiple times (1000) with different types of residual error models for the continuous data and different strength of correlation between observations for the categorical data. The data sets were analyzed using either the correct or a simpler (incorrect) model with or without addition of a covariate. The type I error rate of inclusion of the non-informative covariate on the 5% level was calculated as the number of runs where the drop in the objective function value (OFV) was larger than 3.84 when the covariate relationship was included in the model using the correct or the incorrect model. The difference in OFV between the model with the correct and the incorrect structure was also calculated as a measure of the residual error model misspecification. For continuous data the FOCE method was used in most cases (with interaction when appropriate). The Laplacian estimation method was used for one of the continuous models and for categorical data. The results showed that the residual error model misspecifications when the erroneous model was used were pronounced, as indicated by the OFV being substantially higher than for the corresponding correct models. 
The significance levels of the LRT with the incorrect model were appropriate in all cases except when (serial) correlations between observations were ignored (continuous and categorical data) and when the η–ε interaction was ignored (continuous data, as previously shown). When correlation was ignored, the type I error rates were shown to be sensitive to the correlation strength, the number of observations per individual and the magnitude of the inter-individual variability on clearance. We conclude that the LRT appears robust in all tested cases except when (serial) correlations between observations or the η–ε interaction are ignored.
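The covariate-inclusion criterion used above, a drop in OFV larger than 3.84 (the 5% chi-square critical value with 1 df), can be sketched generically; the OFV numbers below are made up for illustration:

```python
from scipy.stats import chi2

def keep_covariate(ofv_without, ofv_with, df=1, alpha=0.05):
    """Likelihood ratio test for covariate inclusion: keep the covariate if
    the drop in OFV (-2 log-likelihood) exceeds the chi-square critical
    value -- 3.84 for one extra parameter at the 5% level."""
    return (ofv_without - ofv_with) > chi2.ppf(1 - alpha, df)

# Hypothetical OFVs: a drop of 5.2 exceeds 3.84; a drop of 1.0 does not.
kept = keep_covariate(1502.7, 1497.5)
dropped = keep_covariate(1500.0, 1499.0)
```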

## Adjusted Confidence Intervals for a Bounded Parameter

### Behavior Genetics (2012-11-01) 42: 886-898 , November 01, 2012

It is well known that the regular likelihood ratio test of a bounded parameter is not valid if the boundary value is being tested. This is the case for testing the null value of a scalar variance component. Although an adjusted test of a variance component has been suggested to account for the effect of its lower bound of zero, no adjustment of its interval estimate has ever been proposed. If left unadjusted, the confidence interval of the variance may still contain zero when the adjusted test rejects the null hypothesis of a zero variance, leading to conflicting conclusions. In this research, we propose two ways to adjust the confidence interval of a parameter subject to a lower bound, one based on the Wald test and the other on the likelihood ratio test. Both are compatible with the adjusted test and are parametrization-invariant. A simulation study and two examples are given in the framework of ACDE models in twin studies.
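For a scalar variance component tested at its lower bound, the standard boundary adjustment (a 50:50 mixture of a point mass at zero and a chi-square with 1 df, which halves the naive p-value) can be sketched as follows; this is the generic adjustment, not necessarily the paper's exact proposal:

```python
from scipy.stats import chi2

def adjusted_pvalue(lr_stat):
    """Adjusted p-value for testing a variance component at its lower bound
    of zero: the null distribution of the LR statistic is a 50:50 mixture of
    a point mass at 0 and chi-square(1), so the naive p-value is halved."""
    return 0.5 * chi2.sf(lr_stat, df=1)

p = adjusted_pvalue(2.706)   # about 0.05, since 2.706 is the 90% point of chi-square(1)
```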

## Methods to test for equality of two normal distributions

### Statistical Methods & Applications (2016-01-29): 1-19 , January 29, 2016

Statistical tests for two independent samples under the assumption of normality are applied routinely by most practitioners of statistics. Likewise, presumably every introductory course in statistics treats some statistical procedures for two independent normal samples. Often, the classical two-sample model with equal variances is introduced, emphasizing that a test for equality of the expected values is also a test for equality of both distributions, which is the actual goal. In a second step, the assumption of equal variances is usually discarded. The two-sample *t* test with Welch correction and the *F* test for equality of variances are introduced. The first is treated solely as a test for equality of central location, and the second as a test for equality of scatter. Typically, there is no discussion of whether and to what extent testing for equality of the underlying normal distributions is possible, which is quite unsatisfactory given the motivation and treatment of the situation with equal variances. It is the aim of this article to investigate the problem of testing for equality of two normal distributions, and to do so using knowledge and methods accessible to statistical practitioners as well as to students in an introductory statistics course. The power of the different tests discussed in the article is examined empirically. Finally, we apply the tests to several real data sets to illustrate their performance. In particular, we consider several data sets arising from intelligence tests, since there is a large body of research supporting the existence of sex differences in mean scores or in variability in specific cognitive abilities.
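The two tests named above are readily available; a minimal SciPy sketch (with simulated IQ-like data, since the article's data sets are not reproduced here) runs the Welch *t* test for the means and a two-sided *F* test for the variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(100, 15, size=60)   # simulated IQ-like scores, group 1
y = rng.normal(100, 15, size=60)   # group 2, same distribution here

# Welch two-sample t test for equality of the means (unequal variances).
t_stat, p_mean = stats.ttest_ind(x, y, equal_var=False)

# Two-sided F test for equality of the variances.
f = np.var(x, ddof=1) / np.var(y, ddof=1)
dfx, dfy = len(x) - 1, len(y) - 1
p_var = 2 * min(stats.f.sf(f, dfx, dfy), stats.f.cdf(f, dfx, dfy))
```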

## The effect of rare variants on inflation of the test statistics in case–control analyses

### BMC Bioinformatics (2015-02-20) 16: 1-5 , February 20, 2015

### Background

The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself, particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data.

### Results

We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with fewer than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency.

### Conclusions

In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
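The median-based inflation measure described in the Background can be sketched as follows (the function name is ours; under a well-behaved null the ratio should be close to 1):

```python
import numpy as np
from scipy.stats import chi2

def inflation_factor(stats_1df):
    """Ratio of the observed median of 1-df association test statistics to
    the expected chi-square(1) median (about 0.455); values well above 1
    suggest inflation.  (Function name is ours.)"""
    return np.median(stats_1df) / chi2.ppf(0.5, df=1)

rng = np.random.default_rng(0)
null_stats = rng.chisquare(1, size=100_000)   # a well-behaved null
lam = inflation_factor(null_stats)            # should be close to 1
```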

## A Random Effects Model for Binary Mixture Toxicity Experiments

### Journal of Agricultural, Biological, and Environmental Statistics (2010-12-01) 15: 562-577 , December 01, 2010

We suggest a simple isobole analysis for binary mixture toxicity experiments. The analysis is based on estimated logarithmic effect concentrations and their corresponding standard errors. The suggested model allows for synergism/antagonism and incorporates within-mixture variation as well as between-mixture variation in a random effects model. The likelihood ratio test for the hypothesis of concentration addition (CA) is examined, in particular its small sample properties. We study two datasets on the joint effect of acifluorfen versus diquat and glyphosate versus mechlorprop, respectively, on the growth of the aquatic macrophyte *Lemna minor*.

## Maximum Likelihood

### Computational Toxicology (2013-01-01) 930: 581-595 , January 01, 2013

The maximum likelihood method is a popular statistical inferential procedure widely used in many areas to obtain the estimates of the unknown parameters of a population of interest. This chapter gives a brief description of the important concepts underlying the maximum likelihood method, the definition of the key components, the basic theory of the method, and the properties of the resulting estimates. Confidence interval and likelihood ratio test are also introduced. Finally, a few examples of applications are given to illustrate how to derive maximum likelihood estimates in practice. A list of references to relevant papers and software for a further understanding of the method and its implementation is provided.
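As a minimal illustration of the chapter's topic, the normal model has closed-form maximum likelihood estimates: the sample mean and the 1/n sample variance.

```python
import numpy as np

# Normal-model MLE: maximizing the log-likelihood gives the sample mean
# and the biased (1/n) sample variance in closed form.
rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=10_000)   # simulated data, true mu = 5, sigma^2 = 4
mu_hat = x.mean()
sigma2_hat = ((x - mu_hat) ** 2).mean()
```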

## A likelihood ratio test of a homoscedastic normal mixture against a heteroscedastic normal mixture

### Statistics and Computing (2008-09-01) 18: 233-240 , September 01, 2008

It is generally assumed that the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution, versus the alternative that data arise from a heteroscedastic normal mixture distribution, has an asymptotic *χ*^{2} reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models, under some regularity conditions. When the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative, simulations show that the *χ*^{2} reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or a normal mixture with unequal variances.
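A generic parametric bootstrap LRT of the kind recommended for small and medium samples can be sketched as below; to stay self-contained, the toy plug-ins test a fixed variance in a plain normal model (closed-form fits) rather than refitting normal mixtures:

```python
import numpy as np

def bootstrap_lrt_pvalue(data, fit_null, fit_alt, simulate_null, stat, B=199, seed=0):
    """Generic parametric bootstrap LRT: simulate B datasets from the fitted
    null model, refit both models on each, and compare the observed statistic
    to the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    obs = stat(data, fit_null(data), fit_alt(data))
    boot = [stat(d, fit_null(d), fit_alt(d))
            for d in (simulate_null(fit_null(data), len(data), rng) for _ in range(B))]
    return (1 + sum(b >= obs for b in boot)) / (B + 1)

# Toy plug-ins (not the paper's mixture fits): H0: sigma^2 = 1 against a free
# variance in a normal model, where both fits have closed forms.
fit_null = lambda d: (d.mean(), 1.0)
fit_alt = lambda d: (d.mean(), d.var())
lr_stat = lambda d, th0, th1: len(d) * (th1[1] - 1.0 - np.log(th1[1]))
sim_null = lambda th, n, rng: rng.normal(th[0], 1.0, n)

data = np.random.default_rng(42).normal(0.0, 1.0, 100)
p = bootstrap_lrt_pvalue(data, fit_null, fit_alt, sim_null, lr_stat)
```

For the mixture setting in the paper, `fit_null`/`fit_alt` would be EM fits of the equal-variance and free-variance mixtures, with `stat` the usual −2 log likelihood ratio.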

## Inference for comparing a multinomial distribution with a known standard

### Statistical Papers (2012-08-01) 53: 775-788 , August 01, 2012

We propose a new type of stochastic ordering which imposes a monotone tendency in differences between one multinomial probability and a known standard one. An estimation procedure is proposed for the constrained maximum likelihood estimate, and the asymptotic null distribution is then derived for the likelihood ratio test statistic for testing equality of two multinomial distributions against the new stochastic ordering. An alternative test based on the Neyman modified minimum chi-square estimator is also discussed. These tests are illustrated with a set of heart disease data.

## A proof of independent Bartlett correctability of nested likelihood ratio tests

### Annals of the Institute of Statistical Mathematics (1996-12-01) 48: 603-620 , December 01, 1996

It is well known that the likelihood ratio statistic is Bartlett correctable. We consider the decomposition of a likelihood ratio statistic into 1-degree-of-freedom components based on a sequence of nested hypotheses. We give a proof of the fact that the component likelihood ratio statistics are distributed mutually independently up to the order *O(1/n)* and that each component is independently Bartlett correctable. This was implicit in Lawley (1956, *Biometrika*, *43*, 295–303) and proved in Bickel and Ghosh (1990, *Ann. Statist.*, *18*, 1070–1090) using a Bayes method. We present a more direct frequentist proof.

## The power of bootstrap tests of cointegration rank

### Computational Statistics (2013-12-01) 28: 2719-2748 , December 01, 2013

Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test therefore estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.

## Bartlett Corrections and Bootstrap Testing Inference

### An Introduction to Bartlett Correction and Bias Reduction (2014-05-09): 13-43 , May 09, 2014

This chapter introduces the Bartlett correction to the likelihood ratio test statistic. The likelihood ratio test typically employs critical values that are only asymptotically correct and, as a consequence, size distortions arise. The null distribution of the Bartlett-corrected test statistic is typically better approximated by the limiting distribution than that of the corresponding unmodified test statistic. The correction reduces the test error rate; that is, size discrepancies of the corrected test vanish at a faster rate. We also show how to estimate the exact null distribution of a test statistic using data resampling (bootstrap). It is noteworthy that the bootstrap can be used to estimate the Bartlett correction factor.
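The correction itself is a simple rescaling: divide the LR statistic by an estimate of its null mean over its degrees of freedom. A sketch with made-up numbers:

```python
from scipy.stats import chi2

def bartlett_corrected(lr_stat, expected_lr, df):
    """Bartlett correction: rescale the LR statistic so that its null mean
    matches the chi-square mean (df).  E[LR] under the null comes from an
    analytic expansion or, as the chapter notes, from the bootstrap."""
    return lr_stat * df / expected_lr

# Made-up numbers: with E[LR] = 1.25 for a 1-df test, a raw statistic of 4.0
# shrinks to 3.2 before comparison with the chi-square(1) critical value.
corrected = bartlett_corrected(4.0, 1.25, 1)
```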

## Tests of independence in a bivariate exponential distribution

### Metrika (2005-02-01) 61: 47-62 , February 01, 2005

The problem of testing independence in a two-component series system is considered. The joint distribution of component lifetimes is modeled by the Pickands bivariate exponential distribution, which includes the widely used Marshall–Olkin distribution and Gumbel's type II distribution. The case of identical components is first addressed. The uniformly most powerful unbiased (UMPU) test and the likelihood ratio test are obtained. It is shown that, in spite of a nuisance parameter, the UMPU test is unconditional, and this test turns out to be the same as the likelihood ratio test. The case of nonidentical components is also addressed and both UMPU and likelihood ratio tests are obtained. A UMPU test is obtained to test the identical nature of the components, and extensions to the type II censoring scheme and multiple-component systems are also discussed. Some modifications to account for the difference in parameters under test and use conditions are also discussed.

## A likelihood ratio test for bimodality in two-component mixtures with application to regional income distribution in the EU

### AStA Advances in Statistical Analysis (2008-02-01) 92: 57-69 , February 01, 2008

We propose a parametric test for bimodality based on the likelihood principle by using two-component mixtures. The test uses explicit characterizations of the modal structure of such mixtures in terms of their parameters. Examples include the univariate and multivariate normal distributions and the von Mises distribution. We present the asymptotic distribution of the proposed test and analyze its finite sample performance in a simulation study. To illustrate our method, we use mixtures to investigate the modal structure of the cross-sectional distribution of per capita log GDP across EU regions from 1977 to 1993. Although these mixtures clearly have two components over the whole time period, the resulting distributions evolve from bimodality toward unimodality at the end of the 1970s.

## Numerical comparison between powers of maximum likelihood and analysis of variance methods for QTL detection in progeny test designs: the case of monogenic inheritance

### Theoretical and Applied Genetics (1995-01-01) 90: 65-72 , January 01, 1995

Simulations are used to compare four statistics for the detection of a quantitative trait locus (QTL) in daughter and grand-daughter designs as defined by Soller and Genizi (1978) and Weller et al. (1990): (1) the Fisher test of a linear model including a marker effect within sire or grand-sire effect; (2) the likelihood ratio test of a segregation analysis without the information given by the marker; (3) the likelihood ratio test of a segregation analysis considering the information from the marker; and (4) the lod score which is the likelihood ratio test of absence of linkage between the marker and the QTL. In all cases the two segregation analyses are more powerful for QTL detection than are either the linear method or the lod score. The differences in power are generally limited but may be significant (in a ratio of 1 to 3 or 4) when the QTL has a small effect (0.2 standard deviations) and is not closely linked to the marker (recombination rate of 20% or more).

## A new bivariate Gamma distribution generated from functional scale parameter with application to drought data

### Stochastic Environmental Research and Risk Assessment (2013-07-01) 27: 1039-1054 , July 01, 2013

Univariate and bivariate Gamma distributions are among the most widely used distributions in hydrological statistical modeling and applications. This article presents the construction of a new bivariate Gamma distribution generated from a functional scale parameter. The utilization of the proposed bivariate Gamma distribution for drought modeling is described by deriving the exact distribution of the inter-arrival time and the proportion of drought, along with their moments, assuming that both the lengths of drought duration (X) and non-drought duration (Y) follow this bivariate Gamma distribution. The model parameters are estimated by the maximum likelihood method and by an objective Bayesian analysis using the Jeffreys prior and a Markov chain Monte Carlo method. These methods are applied to a real drought dataset from the State of Colorado, USA.

## Modified inference about the mean of the exponential distribution using moving extreme ranked set sampling

### Statistical Papers (2009-03-01) 50: 249-259 , March 01, 2009

The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for making inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). The MLE and LRT cannot be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) is considered, and this modified estimator is used to modify the LRT to obtain a test in closed form for testing a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for testing a simple hypothesis versus a simple hypothesis, giving a test in closed form for testing a simple hypothesis against one-sided alternatives. It appears that the modified estimator is a good competitor of the MLE, and the modified tests are good competitors of the LRT under MERSS and simple random sampling (SRS).

## Radioactive Target Detection Using Wireless Sensor Network

### Computer, Informatics, Cybernetics and Applications (2012-01-01) 107: 293-301 , January 01, 2012

The detection of radioactive targets has recently become more important for public safety and national security. Using the physical laws governing nuclear radiation isotopes, this chapter proposes a statistical method for wireless sensor network data to detect and locate a hidden nuclear target in a large study area. The method assumes that multiple radiation detectors are used as sensor nodes in a wireless sensor network. Radiation counts are observed by the sensors, each composed of a radiation signal plus a radiation background. By considering the physical properties of the radiation signal and background, the proposed method can simultaneously detect and locate the radioactive target in the area. Our simulation results show that the proposed method is effective and efficient in detecting and locating the nuclear radioactive target. This research will have wide applications to nuclear safety and security problems.

## Testing the compatibility of constraints for parameters of a geodetic adjustment model

### Journal of Geodesy (2013-06-01) 87: 555-566 , June 01, 2013

Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line.

## The estimation and inference on the equal ratios of means to standard deviations of normal populations

### Statistical Papers (2015-02-01) 56: 157-165 , February 01, 2015

This paper considers estimation and hypothesis testing on the equality of the ratios of the means to standard deviations of several normal populations with different sample sizes. We propose an iterative algorithm to find the maximum likelihood estimates (MLEs) of the normal population means and standard deviations when the ratios of the means to the standard deviations are equal. A bootstrap method is introduced to determine the critical value in the likelihood ratio test. The simulation studies indicate that the proposed hypothesis test performs well.

## Accuracy and Power of the Likelihood Ratio Test for Comparing Evolutionary Rates Among Genes

### Journal of Molecular Evolution (2005-04-01) 60: 426-433 , April 01, 2005

Sequences for multiple protein-coding genes are now commonly available from several, often closely related species. These data sets offer intriguing opportunities to test hypotheses regarding whether different types of genes evolve under different selective pressures. Although maximum likelihood (ML) models of codon substitution that are suitable for such analyses have been developed, little is known about the statistical properties of these tests. We use a previously developed fixed-sites model and computer simulations to examine the accuracy and power of the likelihood ratio test (LRT) in comparing the nonsynonymous-to-synonymous substitution rate ratio (ω = dN/dS) between two genes. Our results show that the LRT applied to fixed-sites models may be inaccurate in some cases when setting significance thresholds using a χ^{2} approximation. Instead, we use a parametric bootstrap to describe the distribution of the LRT statistic for fixed-sites models and examine the power of the test as a function of sampling variables and properties of the genes under study. We find that the power of the test is high (>80%) even when sampling few taxa (e.g., six species) if sequences are sufficiently diverged and the test is largely unaffected by the tree topology used to simulate data. Our simulations show fixed-sites models are suitable for comparing substitution parameters among genes evolving under even strong evolutionary constraint (ω ≈ 0.05), although relative rate differences of 25% or less may be difficult to detect.

## Adaptive Evolution of Scorpion Sodium Channel Toxins

### Journal of Molecular Evolution (2004-02-01) 58: 145-153 , February 01, 2004

Gene duplication followed by positive Darwinian selection is an important evolutionary event at the molecular level, by which a gene can gain new functions. Such an event might have occurred in the evolution of scorpion sodium channel toxin genes (α- and β-groups). To test this hypothesis, a robust statistical method from Yang and co-workers based on the estimation of the nonsynonymous-to-synonymous rate ratio (ω = *d*_{N}/*d*_{S}) was performed. The results provide clear statistical evidence for adaptive molecular evolution of scorpion α- and β-toxin genes. A good match between the positively selected sites (evolutionary epitopes) and the putative bioactive surface (functional epitopes) indicates that these sites are most likely involved in functional recognition of sodium channels. Our results also shed light on the importance of the B-loop in the functional diversification of scorpion α- and β-toxins.

## Coupon collector’s problems with statistical applications to rankings

### Annals of the Institute of Statistical Mathematics (2013-06-01) 65: 571-587 , June 01, 2013

Some new exact distributions for coupon collector’s waiting time problems are given based on a generalized Pólya urn sampling. In particular, the usual Pólya urn sampling generates an exchangeable random sequence; in this case, an alternative derivation of the distribution is also obtained from de Finetti’s theorem. In coupon collector’s waiting time problems with $$m$$ kinds of coupons, the observed order of the $$m$$ kinds of coupons corresponds uniquely to a permutation of $$m$$ letters. Using this property, a statistical model on the permutation group of $$m$$ letters is proposed for analyzing ranked data. In the model, since the parameters represent the proportions of the $$m$$ kinds of coupons, the observed ranking can be intuitively understood. Some examples of statistical inference are also given.
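The classical waiting-time calculation underlying these problems is easy to sketch for equally likely coupons (the exchangeable/Pólya generalizations in the paper are not reproduced here):

```python
import numpy as np

def expected_waiting_time(m):
    """Expected number of draws to see all m equally likely coupons: m * H_m."""
    return m * sum(1.0 / k for k in range(1, m + 1))

def simulate_waiting_time(m, rng):
    """Draw uniformly until every one of the m coupon kinds has appeared."""
    seen, draws = set(), 0
    while len(seen) < m:
        seen.add(int(rng.integers(m)))
        draws += 1
    return draws

rng = np.random.default_rng(1)
mean_sim = np.mean([simulate_waiting_time(6, rng) for _ in range(2000)])
# expected_waiting_time(6) = 6 * (1 + 1/2 + ... + 1/6) = 14.7
```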

## Approximations to most powerful invariant tests for multinormality against some irregular alternatives

### TEST (2010-05-01) 19: 113-130 , May 01, 2010

The problem of testing multinormality against some alternatives invariant with respect to a subgroup of affine transformations is studied. Using the Laplace expansion for integrals, some approximations to the most powerful invariant (MPI) tests are derived. The cases of bivariate exponential and bivariate uniform alternatives are studied in detail, whereas higher-dimensional extensions are only outlined. Those alternatives are irregular and need a special treatment because of the dependence of their supports on the unknown parameters. It is shown that likelihood ratio (LR) tests are asymptotically equivalent to the MPI tests. A Monte Carlo simulation study shows that the powers of the LR tests are very close to the powers of the MPI tests, even in small samples.

## GLMLE: graph-limit enabled fast computation for fitting exponential random graph models to large social networks

### Social Network Analysis and Mining (2015-03-06) 5: 1-19 , March 06, 2015

Large networks, as a form of big data, have received an increasing amount of attention in data science. This is especially true of large social networks, which are reaching sizes of hundreds of millions, with daily interactions on the scale of billions. Analyzing and modeling these data to understand the connectivity and dynamics of large networks is therefore important in a wide range of scientific fields. Among popular models, exponential random graph models (ERGMs) have been developed to study these complex networks by directly modeling network structures and features. ERGMs, however, are hard to scale to large networks because maximum likelihood estimation of their parameters can be very difficult, due to the unknown normalizing constant. Alternative strategies based on Markov chain Monte Carlo (MCMC) draw samples to approximate the likelihood, which is then maximized to obtain the maximum likelihood estimators (MLE). These strategies have poor convergence due to model degeneracy issues and cannot be used on large networks. Chatterjee et al. (Ann Stat 41:2428–2461, 2013) propose a new theoretical framework for estimating the parameters of ERGMs by approximating the normalizing constant using an emerging tool in graph theory: graph limits. In this paper, we construct a complete computational procedure built upon their results, with practical innovations, that is fast and able to scale to large networks. More specifically, we evaluate the likelihood via simple function approximation of the corresponding ERGM’s graph limit and iteratively maximize the likelihood to obtain the MLE. We also discuss methods of conducting likelihood ratio tests for ERGMs, as well as related issues. Through simulation studies and real data analysis of two large social networks, we show that our new method outperforms the MCMC-based method, especially when the network size is large (more than 100 nodes).
One limitation of our approach, inherited from the result of Chatterjee et al. (Ann Stat 41:2428–2461, 2013), is that it works only for sequences of graphs with a positive limiting density, i.e., dense graphs.

## Local multiplicity adjustments for spatial cluster detection

### Environmental and Ecological Statistics (2010-03-01) 17: 55-71 , March 01, 2010

The spatial scan statistic is a widely applied tool for cluster detection. The spatial scan statistic evaluates the significance of a series of potential circular clusters using Monte Carlo simulation to account for the multiplicity of comparisons. In most settings, the extent of the multiplicity problem varies across the study region. For example, urban areas typically have many overlapping clusters, while rural areas have few. The spatial scan statistic does not account for these local variations in the multiplicity problem. We propose two new spatially-varying multiplicity adjustments for spatial cluster detection, one based on a nested Bonferroni adjustment and one based on local averaging. Geographic variations in power for the spatial scan statistic and the two new statistics are explored through simulation studies, and the methods are applied to both the well-known New York leukemia data and data from a case–control study of breast cancer in Wisconsin.

## Some Improved Tests for Multivariate One-Sided Hypotheses

### Metrika (2006-08-01) 64: 23-39 , August 01, 2006

Multivariate one-sided hypothesis-testing problems are very common in clinical trials with multiple endpoints. The likelihood ratio test (LRT) and union-intersection test (UIT) are widely used for testing such problems. It is argued that, for many important multivariate one-sided testing problems, the LRT and UIT fail to adapt to the presence of subregions of varying dimensionalities on the boundary of the null parameter space and thus give undesirable results. Several improved tests are proposed that do adapt to the varying dimensionalities and hence reflect the evidence provided by the data more accurately than the LRT and UIT. Moreover, the proposed tests are often less biased and more powerful than the LRT and UIT.

## Optimum simple step-stress plans using reliability estimate

### METRON (2012-08-01) 70: 109-120 , August 01, 2012

### Summary

This paper presents optimum simple step-stress plans under the log-logistic cumulative exposure model. The likelihood function of the model parameters is derived, from which the Fisher information matrix and the asymptotic variance of the reliability estimate are obtained. Optimum times for changing the stress level are obtained by minimizing the asymptotic variance of the reliability estimate at a specified time under the normal operating condition. Confidence intervals and hypothesis tests about the model parameters are also discussed and illustrated by an example.

## On runs of ones defined on a q-sequence of binary trials

### Metrika (2015-11-17): 1-24 , November 17, 2015

In a sequence of *n* binary ($$0{-}1$$) trials, with the probability of ones varying according to a geometric rule, we consider a random variable denoting the number of runs of ones of length at least equal to a fixed threshold *k*, $$1\le k\le n$$. Closed-form and recursive expressions are obtained for the probability mass function, generating functions and moments of this random variable. Statistical inference problems related to the probability of ones are examined by numerical techniques, and numerical results further illustrate the theoretical findings.
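The distribution described above is easy to check numerically. The sketch below (plain Python, illustrative only) takes the geometric rule to be P(X_j = 1) = p·q^(j−1), which is an assumption about the specific rule, and estimates the probability mass function of the number of runs of ones of length at least *k* by Monte Carlo.

```python
import random

def count_runs(seq, k):
    """Count runs of ones of length at least k in a 0-1 sequence."""
    count, run = 0, 0
    for x in seq:
        if x == 1:
            run += 1
        else:
            if run >= k:
                count += 1
            run = 0
    if run >= k:          # a run ending at the last trial also counts
        count += 1
    return count

def simulate_pmf(n, k, p, q, reps=20_000, seed=1):
    """Monte Carlo estimate of the pmf of the number of runs of ones of
    length >= k, with P(X_j = 1) = p * q**(j-1) (assumed geometric rule)."""
    rng = random.Random(seed)
    freq = {}
    for _ in range(reps):
        seq = [1 if rng.random() < p * q ** j else 0 for j in range(n)]
        r = count_runs(seq, k)
        freq[r] = freq.get(r, 0) + 1
    return {r: c / reps for r, c in sorted(freq.items())}

print(simulate_pmf(n=10, k=2, p=0.9, q=0.8))
```

Such a simulation gives a direct check on closed-form or recursive expressions for the pmf.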

## Testing the Order of Markov Dependence in DNA Sequences

### Methodology and Computing in Applied Probability (2011-03-01) 13: 59-74 , March 01, 2011

DNA or protein sequences are usually modeled as probabilistic phenomena. The simplest model is built on the assumption that the nucleotides at the various sites are independently distributed. Usually, however, the type of nucleotide at one site depends on the type at another site, and therefore the DNA sequence is modeled as a Markov chain of random variables taking on the values A, G, C and T corresponding to the four nucleotides. First-order or higher-order Markov models provide a better fit to a DNA sequence. Motivated by this, the aim of this paper is to present and study a family of test statistics for testing the order of Markov dependence in DNA sequences. This new family includes the classical likelihood ratio test as a particular case. A simulation study is presented in order to find test statistics in this family with better behaviour than the likelihood ratio test.

## Survival analysis applied to proportion data: comparing mammography visits in high and low repeat rate facilities

### Health Services and Outcomes Research Methodology (2013-03-01) 13: 68-83 , March 01, 2013

We examined the relationship between the proportion of time spent in the mammography exam room relative to the total visit time and the rate of repeat visits across mammography facilities. We hypothesized that the means for these proportions would be greater among facilities with high repeat rates compared to those with low repeat rates. The beta distribution was used to model the proportions. However, not all of the proportions were fully observed: some were (right- or interval-) censored because the complete information (e.g., time of arrival at the facility) needed to calculate them was unavailable. Using a maximum likelihood approach, we estimated the mean proportion of appointment time spent on the actual exams. We utilized survival analysis methods, which typically are applied to data on time scales, to analyze these data restricted to the range (0, 1). We found that facilities with high repeat mammography rates on average had a greater proportion of their visits spent receiving the exams than the low repeat rate facilities (*p* = 0.0003). Women from low repeat rate facilities spent on average 45 % of their appointment time engaging in the exam; thus, these women averaged more time waiting for the exam to begin than receiving the exam. Conversely, those from high repeat rate facilities averaged more of their appointment time engaging in the mammographic exams. These findings were consistent with our hypothesis that women would have greater satisfaction with their mammography appointments—and thus be more likely to return for routine screening—in facilities where their appointments were more efficient.

## Prospects for a Statistical Theory of LC/TOFMS Data

### Journal of The American Society for Mass Spectrometry (2012-05-01) 23: 779-791 , May 01, 2012

The critical importance of employing sound statistical arguments when seeking to draw inferences from inexact measurements is well-established throughout the sciences. Yet fundamental statistical methods such as hypothesis testing can currently be applied to only a small subset of the data analytical problems encountered in LC/MS experiments. The means of inference that are more generally employed are based on a variety of heuristic techniques and a largely qualitative understanding of their behavior. In this article, we attempt to move towards a more formalized approach to the analysis of LC/TOFMS data by establishing some of the core concepts required for a detailed mathematical description of the data. Using arguments that are based on the fundamental workings of the instrument, we derive and validate a probability distribution that approximates that of the empirically obtained data and on the basis of which formal statistical tests can be constructed. Unlike many existing statistical models for MS data, the one presented here aims for rigor rather than generality. Consequently, the model is closely tailored to a particular type of TOF mass spectrometer although the general approach carries over to other instrument designs. Looking ahead, we argue that further improvements in our ability to characterize the data mathematically could enable us to address a wide range of data analytical problems in a statistically rigorous manner.

## One-sided tests in shared frailty models

### TEST (2008-05-01) 17: 69-82 , May 01, 2008

Tests for the presence of heterogeneity in frailty models use an alternative hypothesis in which the heterogeneity parameter is subject to an inequality constraint. As a result, the classical likelihood ratio asymptotic chi-square distribution theory is no longer valid. Our main result states the limiting distribution of the likelihood ratio and score statistic for the one-sided testing problem. The resulting distribution is a mixture of chi-square distributed random variables. The results are shown for gamma and positive stable frailty distributions, and hold when covariate information is present. A data example illustrates the tests. We also assess, in a simulation study, the performance of the tests regarding the significance level and power.
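In the simplest instance of the situation described above, a single variance-type parameter on the boundary with no constrained nuisance parameters, the limiting null law is the 50:50 mixture of a point mass at zero and a χ²(1) distribution. A generic sketch of how that mixture is used in practice (assuming `scipy` is available; this is not the paper's frailty-specific derivation):

```python
from scipy.stats import chi2

def boundary_lr_pvalue(lr_stat):
    """p-value for an LR statistic when a single variance-type parameter
    lies on the boundary under H0: the null law is the 50:50 mixture
    0.5*chi2(0) + 0.5*chi2(1), so p = 0.5 * P(chi2(1) > LR) for LR > 0."""
    if lr_stat <= 0:
        return 1.0
    return 0.5 * chi2.sf(lr_stat, df=1)

def boundary_lr_critical(alpha=0.05):
    """Critical value of the mixture at level alpha (requires alpha < 0.5)."""
    return chi2.ppf(1.0 - 2.0 * alpha, df=1)

print(boundary_lr_critical(0.05))  # about 2.71, versus the naive 3.84
```

Using the naive χ²(1) critical value here would make the test unnecessarily conservative.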

## Likelihood ratio tests for model selection of stochastic frontier models

### Journal of Productivity Analysis (2010-08-01) 34: 3-13 , August 01, 2010

An important issue when conducting stochastic frontier analysis is how to choose a proper parametric model, which includes choices of the functional form of the frontier function, the distributions of the composite errors, and also the exogenous variables. In this paper, we extend the likelihood ratio test of Vuong (Econometrica 57(2):307–333, 1989) and Takeuchi's model selection criterion (Suri-Kagaku (Math Sci) 153:12–18, 1976) to stochastic frontier models. The most attractive feature of this test is that it can not only be used for testing non-nested models, but also remains applicable even when the general model is misspecified. Finally, we demonstrate how to apply this test to the Indian farm data used by Battese and Coelli (J Prod Anal 3:153–169, 1992; Empir Econ 20(2):325–332, 1995) and Alvarez et al. (J Prod Anal 25:201–212, 2006).

## Likelihood-Based Inference and Finite-Sample Corrections: A Brief Overview

### An Introduction to Bartlett Correction and Bias Reduction (2014-05-09): 1-11 , May 09, 2014

This chapter introduces the likelihood function and estimation by maximum likelihood. Some important properties of maximum likelihood (ML) estimators are outlined. We also briefly present several important concepts that will be used throughout the book. Three asymptotic testing criteria are also introduced. The chapter also motivates the use of Bartlett and Bartlett-type corrections. The underlying idea is to transform the test statistic in such a way that its null distribution is better approximated by the reference $$\chi ^2$$ distribution. We also investigate the use of bias corrections. They are used to reduce systematic errors in the point estimation process. Finally, we motivate the use of a data resampling method: the bootstrap.
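The Bartlett idea of rescaling the LR statistic so that its null mean matches the χ² reference can be illustrated by simulation. The sketch below uses a textbook case, testing the rate of an exponential distribution; this example is an illustrative choice of ours, not one from the book, and the simulated null mean of the LR statistic plays the role of the Bartlett factor.

```python
import math
import random

def lr_exp_rate(sample, lam0):
    """LR statistic for H0: rate = lam0 with i.i.d. exponential data:
    LR = 2n * (lam0*xbar - 1 - log(lam0*xbar))."""
    xbar = sum(sample) / len(sample)
    t = lam0 * xbar
    return 2.0 * len(sample) * (t - 1.0 - math.log(t))

def bartlett_factor(n, lam0=1.0, reps=20_000, seed=7):
    """Estimate E[LR] under H0 by simulation; with df = 1 the target mean
    is 1, and dividing LR by this factor is a simulation-based
    Bartlett-type correction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        sample = [rng.expovariate(lam0) for _ in range(n)]
        total += lr_exp_rate(sample, lam0)
    return total / reps

c = bartlett_factor(n=10)
print(c)  # typically slightly above 1 for small n; LR / c better matches chi2(1)
```

The analytic corrections in the book replace this simulation step with an expansion of E[LR] to order 1/n.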

## Correlated evolution of fig size and color supports the dispersal syndromes hypothesis

### Oecologia (2008-07-01) 156: 783-796 , July 01, 2008

The influence of seed dispersers on the evolution of fruit traits remains controversial, largely because most studies have failed to account for phylogeny and/or have focused on conservative taxonomic levels. Under the hypothesis that fruit traits have evolved in response to different sets of selective pressures by disparate types of seed dispersers (the dispersal syndromes hypothesis), we test for two dispersal syndromes, defined as groups of fruit traits that appear together more often than expected by chance. (1) Bird syndrome fruits are brightly colored and small, because birds have acute color vision, and commonly swallow fruits whole. (2) Mammal syndrome fruits are dull-colored and larger on average than bird syndrome fruits, because mammals do not rely heavily on visual cues for finding fruits, and can eat fruits piecemeal. If, instead, phylogenetic inertia determines the co-occurrence of fruit size and color, we will observe that specific combinations of size and color evolved in a small number of ancestral species. We performed a comparative analysis of fruit traits for 64 species of *Ficus* (Moraceae), based on a phylogeny we constructed using nuclear ribosomal DNA. Using a concentrated changes test and assuming fruit color is an independent variable, we found that small-sized fruits evolve on branches with red and purple figs, as predicted by the dispersal syndromes hypothesis. When using diameter as the independent variable, results vary with the combination of algorithms used, which is discussed in detail. A likelihood ratio test confirms the pattern found with the concentrated changes test using color as the independent variable. These results support the dispersal syndromes hypothesis.

## Inference for the bivariate Birnbaum–Saunders lifetime regression model and associated inference

### Metrika (2015-10-01) 78: 853-872 , October 01, 2015

In this paper, we discuss a regression model based on the bivariate Birnbaum–Saunders distribution. We derive the maximum likelihood estimates of the model parameters and then develop associated inference. Next, we briefly describe likelihood-ratio tests for some hypotheses of interest as well as some interval estimation methods. Monte Carlo simulations are then carried out to examine the performance of the estimators as well as the interval estimation methods. Finally, a numerical data analysis is performed for illustrating all the inferential methods developed here.

## Estimating and testing generalized linear models under inequality restrictions

### Statistical Papers (1994-12-01) 35: 211-229 , December 01, 1994

We consider maximum likelihood estimation and likelihood ratio tests under inequality restrictions on the parameters. A special case is order restrictions, which may appear, for example, in connection with effects of an ordinal qualitative covariate. Our estimation approach is based on the principle of sequential quadratic programming, where the restricted estimate is computed iteratively and a quadratic optimization problem under inequality restrictions is solved in each iteration. Testing for inequality restrictions is based on the likelihood ratio principle. Under certain regularity assumptions the likelihood ratio test statistic is asymptotically distributed as a mixture of χ^{2} distributions, where the weights are a function of the restrictions and the information matrix. A major theoretical problem is that in general there is no unique least favourable point. We present some empirical findings on the finite-sample behaviour of the tests and apply the methods to examples from credit scoring and dentistry.
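The mixture weights just mentioned can be computed by simulation in simple cases. As an illustrative sketch (not the paper's sequential-quadratic-programming machinery), consider testing θ = 0 against θ ≥ 0 for a two-dimensional parameter with identity information matrix; the weight of χ²(j) is then the probability that exactly j coordinates of a standard normal vector are positive, giving (1/4, 1/2, 1/4).

```python
import random

def chibar_weights_orthant(dim=2, reps=20_000, seed=3):
    """Monte Carlo chi-bar-square weights for H0: theta = 0 vs theta >= 0
    with identity information: projecting a standard normal vector onto
    the positive orthant keeps its positive parts, so the weight of
    chi2(j) is the probability that exactly j coordinates are positive."""
    rng = random.Random(seed)
    counts = [0] * (dim + 1)
    for _ in range(reps):
        z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        counts[sum(1 for v in z if v > 0)] += 1
    return [c / reps for c in counts]

print(chibar_weights_orthant())  # close to [0.25, 0.5, 0.25]
```

With a non-identity information matrix the weights become orthant probabilities of a correlated normal vector, which the same simulation handles after a change of metric.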

## Fuzzy hypothesis testing with vague data using likelihood ratio test

### Soft Computing (2013-09-01) 17: 1629-1641 , September 01, 2013

Hypothesis testing is one of the most significant facets of statistical inference, which like other situations in the real world is definitely affected by uncertain conditions. The aim of this paper is to develop hypothesis testing based on likelihood ratio test in fuzzy environment, where it is supposed that both hypotheses under study and sample data are fuzzy. The main idea is to employ Zadeh’s extension principle. In this regard, a pair of non-linear programming problems is exploited toward obtaining membership function of likelihood ratio test statistic. Afterwards, the membership function is compared with critical value of the test in order to assess acceptability of the fuzzy null hypothesis under consideration. In this step, two distinct procedures are applied. In the first procedure, a ranking method for fuzzy numbers is utilized to make an absolute decision about acceptability of fuzzy null hypothesis. From a different point of view, in the second procedure, membership degrees of fuzzy null hypothesis acceptance and rejection are first derived using resolution identity and then, a relative decision is made on fuzzy null hypothesis acceptance or rejection based on some arbitrary decision rules. Flexibility of the proposed approach in testing fuzzy hypothesis with vague data is presented using some numerical examples.

## Detection Performance and Risk Stratification Using a Model-Based Shape Index Characterizing Heart Rate Turbulence

### Annals of Biomedical Engineering (2010-10-01) 38: 3173-3184 , October 01, 2010

A detection–theoretic approach to quantify heart rate turbulence (HRT) following a ventricular premature beat is proposed and validated using an extended integral pulse frequency modulation (IPFM) model which accounts for HRT. The modulating signal of the extended IPFM model is projected into a three-dimensional subspace spanned by the Karhunen–Loève basis functions, characterizing HRT shape. The presence or absence of HRT is decided by means of a likelihood ratio test, the Neyman–Pearson detector, resulting in a quadratic detection statistic. Using a labeled dataset built from different interbeat interval series, detection performance is assessed and found to outperform the two widely used indices: turbulence onset (TO) and turbulence slope (TS). The ability of the proposed method to predict the risk of cardiac death is evaluated in a population of patients (*n* = 90) with ischemic cardiomyopathy and mild-to-moderate congestive heart failure. While both TS and the novel HRT index differ significantly in survivors and cardiac death patients, mortality analysis shows that the latter index exhibits much stronger association with risk of cardiac death (hazard ratio = 2.8, CI = 1.32–5.97, *p* = 0.008). It is also shown that the model-based shape indices, but not TO and TS, remain predictive of cardiac death in our population when computed from 4-h instead of 24-h ambulatory ECGs.

## Two-sided tests for change in level for correlated data

### Statistical Papers (1990-12-01) 31: 181-194 , December 01, 1990

The classical problem of change point is considered when the data are assumed to be correlated. The nuisance parameters in the model are the initial level μ and the common variance σ^{2}. The four cases, based on whether none, one, or both of these parameters are known, are considered. Likelihood ratio tests are obtained for testing hypotheses regarding the change in level, δ, in each case. Following Henderson (1986), a Bayesian test is obtained for the two-sided alternative. Under the Bayesian set-up, a locally most powerful unbiased test is derived for the case μ=0 and σ^{2}=1. The exact null distribution function of the Bayesian test statistic is given an integral representation. Methods to obtain exact and approximate critical values are indicated.
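As an illustrative sketch of the likelihood ratio approach, consider the simplest setting above, μ = 0 and σ² = 1 known, but with uncorrelated noise, unlike the paper: the two-sided LR statistic maximizes (n − τ)·mean(x_{τ+1..n})² over candidate change times τ, and critical values can be obtained by Monte Carlo.

```python
import random

def changepoint_lr(x):
    """Two-sided LR statistic for a change in level at an unknown time,
    with independent N(0,1) noise, known pre-change level 0 and known
    unit variance: max over tau of (n - tau) * mean(x[tau:])**2."""
    n = len(x)
    s, stats = 0.0, []
    for tau in range(n - 1, -1, -1):   # suffix sums, right to left
        s += x[tau]
        stats.append(s * s / (n - tau))
    return max(stats)

def mc_critical_value(n, alpha=0.05, reps=5000, seed=11):
    """Monte Carlo critical value of the statistic under the no-change null."""
    rng = random.Random(seed)
    sims = sorted(changepoint_lr([rng.gauss(0.0, 1.0) for _ in range(n)])
                  for _ in range(reps))
    return sims[int((1 - alpha) * reps)]

print(mc_critical_value(n=50))
```

For correlated noise, as in the paper, the null simulation would draw from the assumed dependence structure instead of independent normals.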

## Hypothesis Testing of Hazard Ratio Parameters in Marginal Models for Multivariate Failure Time Data

### Lifetime Data Analysis (1999-03-01) 5: 39-53 , March 01, 1999

Marginal hazard models for multivariate failure time data have been studied extensively in recent literature. However, standard hypothesis test statistics based on the likelihood method are not exactly appropriate for this kind of model. In this paper, extensions of the three commonly used likelihood hypothesis test statistics are discussed. Generalized Wald, generalized score and generalized likelihood ratio tests for hazard ratio parameters in a marginal hazard model for multivariate failure time data are proposed and their asymptotic distributions examined. The finite sample properties of these statistics are studied through simulations. The proposed method is applied to data from Busselton Population Health Surveys.

## Adding a Statistical Wrench to the “Toolbox”

### Research in Higher Education (2008-03-01) 49: 172-179 , March 01, 2008

This paper demonstrates a formal statistical test that can be used to help researchers make decisions about alternative statistical model specifications. This test is commonly used by researchers who would like to test whether adding new variables to a model improves the model fit. However, we demonstrate that this formal test can also be employed when alternative representations of variables or constructs are considered for inclusion in a regression. An empirical example is provided using information from a widely cited US Department of Education report. Substantively, we find evidence that an alternative representation of an important policy variable would have been a better fit to the data than the index that was used in the analysis conducted in the report.

## Empirical likelihood method for linear transformation models

### Annals of the Institute of Statistical Mathematics (2011-04-01) 63: 331-346 , April 01, 2011

An empirical likelihood inferential procedure is proposed for right-censored survival data under linear transformation models, which include the commonly used proportional hazards model as a special case. A log-empirical likelihood ratio test statistic for the regression coefficients is developed. We show that the proposed log-empirical likelihood ratio test statistic converges to a standard chi-squared distribution. The result can be used to make inference about the entire regression coefficient vector as well as any subset of it. The method is illustrated by extensive simulation studies and a real example.
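As an illustrative sketch of the empirical likelihood machinery in its classic uncensored form (Owen's mean problem, not the paper's transformation-model procedure), the −2 log empirical likelihood ratio for a mean reduces to a one-dimensional search for a Lagrange multiplier:

```python
import math

def el_lr_mean(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.
    Solves sum d_i / (1 + lam*d_i) = 0 for the Lagrange multiplier lam
    by bisection, with d_i = x_i - mu; the implied weights are
    w_i = 1 / (n * (1 + lam*d_i)). Requires mu strictly inside the
    range of the data."""
    d = [xi - mu for xi in x]
    dmax, dmin = max(d), min(d)
    if not (dmin < 0 < dmax):
        raise ValueError("mu must lie strictly inside the data range")
    eps = 1e-12
    lo, hi = -1.0 / dmax + eps, -1.0 / dmin - eps   # positivity interval
    g = lambda lam: sum(di / (1.0 + lam * di) for di in d)
    while hi - lo > tol:                            # g is decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

print(el_lr_mean([1, 2, 3, 4, 5], 2.0))
```

Under the null this statistic is compared with χ²(1) quantiles; the censored, multi-parameter version in the paper follows the same profile-and-calibrate pattern with estimating equations in place of the mean constraint.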

## Techniques for temporal detection of neural sensitivity to external stimulation

### Biological Cybernetics (2009-04-01) 100: 289-297 , April 01, 2009

We propose a simple measure of neural sensitivity for characterizing stimulus coding. Sensitivity is defined as the fraction of neurons that show positive responses to *n* stimuli out of a total of *N*. To determine a positive response, we propose two methods: Fisherian statistical testing and a data-driven Bayesian approach to determine the response probability of a neuron. The latter is non-parametric, data-driven, and captures a lower bound for the probability of neural responses to sensory stimulation. Both methods are compared with a standard test that assumes normal probability distributions. We applied the sensitivity estimation based on the proposed method to experimental data recorded from the mushroom body (MB) of locusts. We show that there is a broad range of sensitivity that the MB response sweeps during odor stimulation. The neurons are initially tuned to specific odors, but tend to demonstrate a generalist behavior towards the end of the stimulus period, meaning that the emphasis shifts from discrimination to feature learning.

## Modelling of Cooperative Spectrum Sensing over Rayleigh Fading Without CSI in Cognitive Radio Networks

### Wireless Personal Communications (2016-02-01) 86: 1281-1297 , February 01, 2016

It has been shown that cooperation among secondary users (SUs) in a cognitive radio network (CRN) greatly improves spectrum sensing performance. However, the modelling and analysis of cooperative spectrum sensing in a CRN over fading channels remains an important issue. This paper derives a new fusion rule based on a likelihood ratio test which requires exact channel statistics instead of instantaneous channel state information. This scheme provides optimal sensing under the Neyman–Pearson criterion while considering both sensing and reporting channels as independently Rayleigh faded. We derive closed-form solutions for the local probabilities of detection and false alarm by assuming that all SUs perform energy detection, which makes real-time computations simple. Further, closed-form solutions for system-level performance are derived by considering that all SUs experience independent fading and statistically identical signal-to-noise ratios. The performance of the proposed cooperative sensing scheme has been evaluated both analytically and by simulations.
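As a generic illustration of Neyman–Pearson energy detection under Gaussian noise (assuming `scipy` is available; this does not reproduce the paper's Rayleigh-fading fusion rule), the false-alarm constraint fixes a χ² threshold, and the detection probability for a deterministic signal then follows from the noncentral χ² distribution:

```python
from scipy.stats import chi2, ncx2

def energy_threshold(n, pfa, noise_var=1.0):
    """Neyman-Pearson threshold for an energy detector on n real Gaussian
    noise samples: under H0, T = sum(y**2) / noise_var ~ chi2(n)."""
    return noise_var * chi2.ppf(1.0 - pfa, df=n)

def detection_prob(n, pfa, snr, noise_var=1.0):
    """Detection probability for a deterministic signal of energy
    snr * n * noise_var: under H1, T / noise_var is noncentral chi2
    with noncentrality equal to signal energy / noise variance."""
    thr = energy_threshold(n, pfa, noise_var)
    return ncx2.sf(thr / noise_var, df=n, nc=snr * n)

print(detection_prob(n=20, pfa=0.01, snr=1.0))
```

Averaging such detection probabilities over a fading distribution for the SNR is the usual next step when the channel is random rather than deterministic.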

## Tests for cointegration rank and the initial condition

### Empirical Economics (2012-06-01) 42: 667-691 , June 01, 2012

Many economic events involve initial observations that substantially deviate from long-run steady state. Such initial conditions are known to affect the power of univariate unit root tests diversely, whereas their impact on multivariate tests is largely unknown. This paper investigates the impact of the initial condition on the power of tests for cointegration rank, such as Johansen’s widely used likelihood ratio test, tests with prior adjustment for deterministic terms, and a test based on the eigenvalues of the companion matrix. We find that the power of the likelihood ratio test is increasing in the magnitude of the initial condition, whereas the power of the other tests is generally decreasing. We exploit these findings in an application to price convergence.

## Stochastic time‐concentration activity models for cytotoxicity in 3D brain cell cultures

### Theoretical Biology and Medical Modelling (2013-03-14) 10: 1-18 , March 14, 2013

### Background

*In vitro* aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system‐specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration‐dependent neurotoxicity is determined at several time points.

### Methods

A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or stressed state, with only stressed cells being susceptible to cell death. Cells may have switched between these states or died with concentration‐dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied for estimation of the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time‐concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death.

### Results

The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 μM reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean duration to cellular death decreased far more dramatically, from 2.7 days to 6.5 hours.

### Conclusion

The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can be extrapolated at late measurement time points to investigate whether costs and time‐consuming long‐term experiments could possibly be eliminated.

## On the asymptotic distribution of likelihood ratio test when parameters lie on the boundary

### Sankhya B (2011-07-07) 73: 20-41 , July 07, 2011

The paper discusses statistical inference dealing with the asymptotic theory of likelihood ratio tests when some parameters may lie on the boundary of the parameter space. We derive a closed-form solution for the case when one parameter of interest and one nuisance parameter lie on the boundary. The asymptotic distribution is *not* always a mixture of several chi-square distributions. For the cases when one parameter of interest and two nuisance parameters, or two parameters of interest and one nuisance parameter, are on the boundary, we provide an explicit solution which can be easily computed by simulation. These results can be used in many applications, e.g. testing for random effects in genetics. Contrary to the claim of some authors in the applied literature that using the chi-square distribution with the same degrees of freedom as in the interior-parameter case will be too conservative when some parameters are on the boundary, we show that when nuisance parameters are on the boundary, that approach may often be anti-conservative.