## SEARCH


Showing 1 to 50 of 71 matching Articles
## Cyclostationary Feature Detection Based Spectrum Sensing Technique of Cognitive Radio in Nakagami-m Fading Environment

### Computational Intelligence in Data Mining - Volume 2 (2015-01-01) 32: 209-219 , January 01, 2015


The main function of a cognitive radio network is to sense a spectrum band and determine whether the primary user is present at a given place. One of the most efficient spectrum sensing techniques is cyclostationary feature detection. Although its computational complexity is high, it remains effective when the noise level is unknown. The method determines the spectral correlation function (SCF) of the received signal. In this paper, the SCF is calculated using Hanning, Hamming, and Kaiser windows, and the cyclic periodogram of the input signal is also determined. In each case, Nakagami-*m* channel fading is considered. A likelihood ratio test is performed while varying the parameters of the channel fading distribution. Finally, the effects of the windows are studied in the numerical section.
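The windowing step described in this abstract can be sketched numerically. Below is a minimal stand-in, assuming NumPy's built-in Hanning/Hamming/Kaiser tapers and a plain windowed periodogram rather than the paper's full spectral correlation function:

```python
import numpy as np

def windowed_periodogram(x, window="hann"):
    """Periodogram of x after applying a tapering window (a simplified
    sketch; the paper computes a full SCF, not this one-window estimate)."""
    n = len(x)
    if window == "hann":
        w = np.hanning(n)
    elif window == "hamming":
        w = np.hamming(n)
    elif window == "kaiser":
        w = np.kaiser(n, beta=8.6)  # beta chosen for illustration
    else:
        raise ValueError(window)
    xw = x * w
    # Normalize by the window energy so different tapers are comparable.
    spec = np.abs(np.fft.rfft(xw)) ** 2 / np.sum(w ** 2)
    freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per sample
    return freqs, spec

# A 0.1-cycles/sample tone: the periodogram should peak near 0.1.
n = 1024
t = np.arange(n)
x = np.cos(2 * np.pi * 0.1 * t)
freqs, spec = windowed_periodogram(x, "hamming")
peak = freqs[np.argmax(spec)]
```

Swapping the `window` argument shows the trade-off the abstract alludes to: different tapers change sidelobe leakage around the peak.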


## Techniques for temporal detection of neural sensitivity to external stimulation

### Biological Cybernetics (2009-04-21) 100: 289-297 , April 21, 2009


We propose a simple measure of neural sensitivity for characterizing stimulus coding. Sensitivity is defined as the fraction of neurons that show positive responses to *n* stimuli out of a total of *N*. To determine a positive response, we propose two methods: Fisherian statistical testing and a data-driven Bayesian approach to determine the response probability of a neuron. The latter is non-parametric, data-driven, and captures a lower bound for the probability of neural responses to sensory stimulation. Both methods are compared with a standard test that assumes normal probability distributions. We applied the sensitivity estimation based on the proposed method to experimental data recorded from the mushroom body (MB) of locusts. We show that there is a broad range of sensitivity that the MB response sweeps during odor stimulation. The neurons are initially tuned to specific odors, but tend to demonstrate a generalist behavior towards the end of the stimulus period, meaning that the emphasis shifts from discrimination to feature learning.


## Effective Elasticity Tensors in Context of Random Errors

### Journal of Elasticity (2015-02-21): 1-13 , February 21, 2015


We introduce the effective elasticity tensor of a chosen material-symmetry class to represent a measured generally anisotropic elasticity tensor by minimizing the weighted Frobenius distance from the given tensor to its symmetric counterpart, where the weights are determined by the experimental errors. The resulting effective tensor is the highest-likelihood estimate within the specified symmetry class. Given two material-symmetry classes, with one included in the other, the weighted Frobenius distance from the given tensor to the two effective tensors can be used to decide between the two models—one with higher and one with lower symmetry—by means of the likelihood ratio test.

## Adjusted Confidence Intervals for a Bounded Parameter

### Behavior Genetics (2012-11-01) 42: 886-898 , November 01, 2012


It is well known that the regular likelihood ratio test of a bounded parameter is not valid if the boundary value is being tested, as is the case when testing the null value of a scalar variance component. Although an adjusted test of the variance component has been suggested to account for the effect of its lower bound of zero, no adjustment of its interval estimate has been proposed. If left unadjusted, the confidence interval of the variance may still contain zero even when the adjusted test rejects the null hypothesis of a zero variance, leading to conflicting conclusions. In this research, we propose two ways to adjust the confidence interval of a parameter subject to a lower bound, one based on the Wald test and the other on the likelihood ratio test. Both are compatible with the adjusted test and are parametrization-invariant. A simulation study and two examples are given in the framework of ACDE models in twin studies.
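The boundary effect described above has a well-known concrete form for a single variance component: under the null, the LR statistic follows a 50:50 mixture of a point mass at zero and a chi-square with 1 degree of freedom, so the naive chi-square p-value is simply halved. A minimal sketch of the standard boundary-adjusted test (not the authors' adjusted intervals):

```python
import math

def chi2_1_sf(x):
    """Survival function of chi-square(1): P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

def boundary_adjusted_pvalue(lr_stat):
    """P-value for testing a variance component against its boundary of
    zero: under H0 the LR statistic is a 50:50 mixture of 0 and
    chi-square(1), so the naive p-value is halved (standard result)."""
    if lr_stat <= 0.0:
        return 1.0
    return 0.5 * chi2_1_sf(lr_stat)

def critical_value(alpha, adjusted):
    """Invert the (possibly halved) tail probability by bisection."""
    target = 2.0 * alpha if adjusted else alpha
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_1_sf(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

naive_crit = critical_value(0.05, adjusted=False)    # ~3.84
adjusted_crit = critical_value(0.05, adjusted=True)  # ~2.71
p = boundary_adjusted_pvalue(3.0)
```

The gap between the two critical values (3.84 vs 2.71 at the 5% level) is exactly the conservativeness the abstract says an unadjusted procedure suffers from.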


## The effect of rare variants on inflation of the test statistics in case–control analyses

### BMC Bioinformatics (2015-02-20) 16: 1-5 , February 20, 2015


### Background

The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias is to compare the observed median association test statistic to the expected median test statistic; this ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself, particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests (the likelihood ratio test, the Wald test, and the score test) when testing rare variants for association using simulated data.

### Results

We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency.

### Conclusions

In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
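The disagreement among the three tests can be illustrated in a toy binomial setting (an illustrative assumption; the study itself uses simulated case-control genotype data, not this simple model):

```python
import math

def binom_test_stats(x, n, p0):
    """LRT, Wald, and score statistics for H0: p = p0 with x successes
    in n Bernoulli trials -- a sketch of how the three classic tests
    can give different values when counts are small."""
    phat = x / n

    def ll(p):  # log-likelihood, treating 0*log(0) as 0
        out = 0.0
        if x > 0:
            out += x * math.log(p)
        if x < n:
            out += (n - x) * math.log(1 - p)
        return out

    lrt = 2.0 * (ll(phat) - ll(p0))
    # Wald standardizes by the variance at the estimate, the score test
    # by the variance at the null -- the source of their disagreement.
    wald = ((phat - p0) ** 2 / (phat * (1 - phat) / n)
            if 0 < phat < 1 else float("inf"))
    score = (phat - p0) ** 2 / (p0 * (1 - p0) / n)
    return lrt, wald, score

# A rare variant: 5 carriers among 2000 samples, testing p0 = 0.005.
lrt, wald, score = binom_test_stats(5, 2000, 0.005)
```

With these (hypothetical) counts the three statistics differ substantially, even though all are compared to the same chi-square(1) reference, echoing the abstract's point that test choice matters for rare variants.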

## A Random Effects Model for Binary Mixture Toxicity Experiments

### Journal of Agricultural, Biological, and Environmental Statistics (2010-11-30) 15: 562-577 , November 30, 2010


We suggest a simple isobole analysis for binary mixture toxicity experiments. The analysis is based on estimated logarithmic effect concentrations and their corresponding standard errors. The suggested model allows for synergism/antagonism and incorporates within-mixture variation as well as between-mixture variation in a random effects model. The likelihood ratio test for the hypothesis of concentration addition (CA) is examined, in particular its small sample properties. We study two datasets on the joint effect of acifluorfen versus diquat and glyphosate versus mechlorprop, respectively, on the growth of the aquatic macrophyte *Lemna minor*.

## Adding a Statistical Wrench to the “Toolbox”

### Research in Higher Education (2008-01-17) 49: 172-179 , January 17, 2008


This paper demonstrates a formal statistical test that can be used to help researchers make decisions about alternative statistical model specifications. This test is commonly used by researchers who would like to test whether adding new variables to a model improves the model fit. However, we demonstrate that this formal test can also be employed when alternative representations of variables or constructs are considered for inclusion in a regression. An empirical example is provided using information from a widely cited US Department of Education report. Substantively, we find evidence that an alternative representation of an important policy variable would have been a better fit to the data than the index that was used in the analysis conducted in the report.

## Maximum Likelihood

### Computational Toxicology (2013-01-01) 930: 581-595 , January 01, 2013


The maximum likelihood method is a popular statistical inferential procedure widely used in many areas to obtain estimates of the unknown parameters of a population of interest. This chapter gives a brief description of the important concepts underlying the maximum likelihood method, the definitions of the key components, the basic theory of the method, and the properties of the resulting estimates. Confidence intervals and the likelihood ratio test are also introduced. Finally, a few examples of applications illustrate how to derive maximum likelihood estimates in practice. A list of references to relevant papers and software is provided for a further understanding of the method and its implementation.
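As a concrete instance of the ideas summarized above, the normal distribution admits closed-form maximum likelihood estimates: the sample mean and the divide-by-n sample variance. A minimal sketch with hypothetical data:

```python
import math

def normal_mle(data):
    """MLEs for a normal sample: the sample mean and the (biased,
    divide-by-n) sample variance -- the standard closed-form result."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return mu, var

def log_likelihood(data, mu, var):
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mu) ** 2 for x in data) / (2 * var))

data = [4.9, 5.1, 5.3, 4.7, 5.0]  # hypothetical measurements
mu_hat, var_hat = normal_mle(data)

# The defining property of the MLE: nudging either parameter away from
# the estimate can only lower the log-likelihood.
ll_hat = log_likelihood(data, mu_hat, var_hat)
assert ll_hat >= log_likelihood(data, mu_hat + 0.01, var_hat)
assert ll_hat >= log_likelihood(data, mu_hat, var_hat * 1.05)
```

For models without closed forms, the same maximization is done numerically, which is where the chapter's discussion of software becomes relevant.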

## Inference for comparing a multinomial distribution with a known standard

### Statistical Papers (2012-08-01) 53: 775-788 , August 01, 2012


We propose a new type of stochastic ordering which imposes a monotone tendency on the differences between one multinomial probability and a known standard one. An estimation procedure is proposed for the constrained maximum likelihood estimate, and the asymptotic null distribution is then derived for the likelihood ratio test statistic for testing equality of two multinomial distributions against the new stochastic ordering. An alternative test based on the Neyman modified minimum chi-square estimator is also discussed. These tests are illustrated with a set of heart disease data.

## A proof of independent Bartlett correctability of nested likelihood ratio tests

### Annals of the Institute of Statistical Mathematics (1996-12-01) 48: 603-620 , December 01, 1996


It is well known that the likelihood ratio statistic is Bartlett correctable. We consider the decomposition of a likelihood ratio statistic into 1-degree-of-freedom components based on a sequence of nested hypotheses. We give a proof that the component likelihood ratio statistics are mutually independent up to order *O(1/n)* and that each component is independently Bartlett correctable. This was implicit in Lawley (1956, *Biometrika*, *43*, 295–303) and proved in Bickel and Ghosh (1990, *Ann. Statist.*, *18*, 1070–1090) using a Bayes method. We present a more direct frequentist proof.

## The power of bootstrap tests of cointegration rank

### Computational Statistics (2013-12-01) 28: 2719-2748 , December 01, 2013


Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test therefore estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.


## Fuzzy hypothesis testing with vague data using likelihood ratio test

### Soft Computing (2013-01-08): 1-13 , January 08, 2013


Hypothesis testing is one of the most significant facets of statistical inference and, like many real-world problems, is affected by uncertainty. The aim of this paper is to develop hypothesis testing based on the likelihood ratio test in a fuzzy environment, where both the hypotheses under study and the sample data are fuzzy. The main idea is to employ Zadeh’s extension principle: a pair of non-linear programming problems is solved to obtain the membership function of the likelihood ratio test statistic. This membership function is then compared with the critical value of the test to assess the acceptability of the fuzzy null hypothesis. Two distinct procedures are applied in this step. In the first, a ranking method for fuzzy numbers is used to make an absolute decision about accepting the fuzzy null hypothesis. In the second, membership degrees of acceptance and rejection of the fuzzy null hypothesis are first derived using the resolution identity, and a relative decision is then made based on some arbitrary decision rules. The flexibility of the proposed approach in testing fuzzy hypotheses with vague data is demonstrated through numerical examples.

## A decoding-based fusion rule for cooperative spectrum sensing with nonorthogonal transmission of local decisions

### EURASIP Journal on Wireless Communications and Networking (2013-07-08) 2013: 1-10 , July 08, 2013


Cooperative spectrum sensing in cognitive radio (CR) networks is studied, in which each CR performs energy detection to obtain a binary decision on the absence or presence of the primary user. The problem of interest is how to efficiently report the local decisions to the fusion center, and combine them there, under fading channels. To reduce the required transmission bandwidth in the reporting phase, the paper examines nonorthogonal transmission of local decisions by means of on-off keying. A novel decoding-based fusion rule is proposed and analyzed that operates in three steps: (1) forming minimum mean-square error estimates of the information transmitted by the cognitive radios, (2) making hard decisions on the transmitted bits based on these estimates, and (3) combining the hard decisions linearly. Simulation results support the theoretical analysis and show that the added complexity of the decoding-based fusion rule leads to a considerable performance gain over the simpler energy-based fusion rule when the reporting links are reasonably strong.

## Inference for the bivariate Birnbaum–Saunders lifetime regression model and associated inference

### Metrika (2015-01-30): 1-20 , January 30, 2015


In this paper, we discuss a regression model based on the bivariate Birnbaum–Saunders distribution. We derive the maximum likelihood estimates of the model parameters and then develop associated inference. Next, we briefly describe likelihood-ratio tests for some hypotheses of interest as well as some interval estimation methods. Monte Carlo simulations are then carried out to examine the performance of the estimators as well as the interval estimation methods. Finally, a numerical data analysis is performed for illustrating all the inferential methods developed here.

## Bartlett Corrections and Bootstrap Testing Inference

### An Introduction to Bartlett Correction and Bias Reduction (2014-05-09): 13-43 , May 09, 2014


This chapter introduces the Bartlett correction to the likelihood ratio test statistic. The likelihood ratio test typically employs critical values that are only asymptotically correct and, as a consequence, size distortions arise. The null distribution of the Bartlett-corrected test statistic is typically better approximated by the limiting distribution than that of the corresponding unmodified test statistic. The correction reduces the test's error rate; that is, size discrepancies of the corrected test vanish at a faster rate. We also show how to estimate the exact null distribution of a test statistic using data resampling (the bootstrap). Notably, the bootstrap can also be used to estimate the Bartlett correction factor.
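The chapter's closing point, that resampling can estimate the exact null distribution of a test statistic, can be sketched for the simple case of testing a normal mean (an illustrative setting, not the chapter's own example): simulate the LR statistic under the null many times and take an empirical quantile as the critical value instead of the asymptotic chi-square threshold.

```python
import math
import random

def lrt_normal_mean(data, mu0=0.0):
    """LR statistic for H0: mu = mu0 in a normal model with unknown
    variance: n * log(restricted variance / unrestricted variance)."""
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((x - xbar) ** 2 for x in data) / n   # MLE variance
    s2_0 = sum((x - mu0) ** 2 for x in data) / n  # variance under H0
    return n * math.log(s2_0 / s2)

def simulated_critical_value(n, level=0.05, reps=4000, seed=1):
    """Estimate the finite-sample critical value by simulating the LR
    statistic under H0 (standard-normal data), rather than relying on
    the asymptotic chi-square(1) threshold of 3.84."""
    rng = random.Random(seed)
    stats = sorted(
        lrt_normal_mean([rng.gauss(0, 1) for _ in range(n)])
        for _ in range(reps)
    )
    return stats[int(math.ceil((1 - level) * reps)) - 1]

crit = simulated_critical_value(n=10, level=0.05)
```

For n = 10 the simulated 5% critical value lands near 4.5, well above the asymptotic 3.84: exactly the size distortion the chapter says Bartlett corrections and the bootstrap are designed to remove.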


## Tests of independence in a bivariate exponential distribution

### Metrika (2005-02-01) 61: 47-62 , February 01, 2005



The problem of testing independence in a two-component series system is considered. The joint distribution of component lifetimes is modeled by the Pickands bivariate exponential distribution, which includes the widely used Marshall–Olkin distribution and Gumbel's type II distribution. The case of identical components is addressed first: the uniformly most powerful unbiased (UMPU) test and the likelihood ratio test are obtained, and it is shown that, in spite of a nuisance parameter, the UMPU test is unconditional and coincides with the likelihood ratio test. The case of nonidentical components is also addressed, and both UMPU and likelihood ratio tests are obtained. A UMPU test of whether the components are identical is derived, and extensions to the type II censoring scheme and to multiple-component systems are discussed, along with some modifications to account for the difference in parameters under test and use conditions.

## Numerical comparison between powers of maximum likelihood and analysis of variance methods for QTL detection in progeny test designs: the case of monogenic inheritance

### Theoretical and Applied Genetics (1995-01-01) 90: 65-72 , January 01, 1995


Simulations are used to compare four statistics for the detection of a quantitative trait locus (QTL) in daughter and grand-daughter designs as defined by Soller and Genizi (1978) and Weller et al. (1990): (1) the Fisher test of a linear model including a marker effect within sire or grand-sire effect; (2) the likelihood ratio test of a segregation analysis without the information given by the marker; (3) the likelihood ratio test of a segregation analysis considering the information from the marker; and (4) the lod score which is the likelihood ratio test of absence of linkage between the marker and the QTL. In all cases the two segregation analyses are more powerful for QTL detection than are either the linear method or the lod score. The differences in power are generally limited but may be significant (in a ratio of 1 to 3 or 4) when the QTL has a small effect (0.2 standard deviations) and is not closely linked to the marker (recombination rate of 20% or more).

## A new bivariate Gamma distribution generated from functional scale parameter with application to drought data

### Stochastic Environmental Research and Risk Assessment (2013-07-01) 27: 1039-1054 , July 01, 2013


Univariate and bivariate Gamma distributions are among the most widely used distributions in hydrological statistical modeling and applications. This article presents the construction of a new bivariate Gamma distribution which is generated from the functional scale parameter. The utilization of the proposed bivariate Gamma distribution for drought modeling is described by deriving the exact distribution of the inter-arrival time and the proportion of drought along with their moments, assuming that both the lengths of drought duration (X) and non-drought duration (Y) follow this bivariate Gamma distribution. The model parameters of this distribution are estimated by maximum likelihood method and an objective Bayesian analysis using Jeffreys prior and Markov Chain Monte Carlo method. These methods are applied to a real drought dataset from the State of Colorado, USA.


## Radioactive Target Detection Using Wireless Sensor Network

### Computer, Informatics, Cybernetics and Applications (2012-01-01) 107: 293-301 , January 01, 2012


The detection of radioactive targets has recently become more important in public safety and national security. Using the physical law governing nuclear radiation isotopes, this chapter proposes a statistical method for wireless sensor network data to detect and locate a hidden nuclear target in a large study area. The method assumes that multiple radiation detectors are deployed as sensor nodes in a wireless sensor network. Each sensor observes radiation counts composed of a radiation signal plus a radiation background. By considering the physical properties of the signal and background, the proposed method can simultaneously detect and locate the radioactive target in the area. Our simulation results show that the proposed method is effective and efficient in detecting and locating the nuclear radioactive target. This research has wide applications in nuclear safety and security problems.

## Testing the compatibility of constraints for parameters of a geodetic adjustment model

### Journal of Geodesy (2013-06-01) 87: 555-566 , June 01, 2013


Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line.

## A novel approach to identify optimal metabotypes of elongase and desaturase activities in prevention of acute coronary syndrome

### Metabolomics (2015-02-21): 1-11 , February 21, 2015


Both metabolomic and genomic approaches are valuable for risk analysis; however, typical approaches that evaluate differences in means do not model the changes well. Gene polymorphisms that alter function appear as distinct populations, or metabotypes, alongside the predominant one, in which case risk is revealed as changed mixing proportions between control and case samples. Here we validate a model accounting for mixed populations using biomarkers of fatty acid metabolism derived from a case/control study of acute coronary syndrome subjects in which both metabolomic and genomic approaches have been used previously. We first used simulated data to show improved power and sensitivity of the approach compared to classic approaches. We then used the metabolic biomarkers to test for evidence of distinct metabotypes and different proportions among cases and controls. In simulation, our model outperformed all other approaches, including Mann–Whitney, *t* tests, and *χ*^{2}. Using real data, we found distinct metabotypes for six of the seven activities tested, and different mixing proportions in five of the six activity biomarkers: D9D, ELOVL6, ELOVL5, FADS1, and Sprecher pathway chain shortening (SCS). High-activity metabotypes of non-essential fatty acids and SCS decreased the odds of acute coronary syndrome, whereas high-activity metabotypes of 20-carbon fatty acid synthesis increased the odds. Our study validates an approach that accounts for both metabolomic and genomic theory by demonstrating improved sensitivity and specificity, better performance on real-world data, and more straightforward interpretability.

## The estimation and inference on the equal ratios of means to standard deviations of normal populations

### Statistical Papers (2015-02-01) 56: 157-165 , February 01, 2015


This paper considers estimation and hypothesis testing on the equality of the ratios of the means to the standard deviations of several normal populations with different sample sizes. We propose an iterative algorithm to find the maximum likelihood estimates (MLEs) of the normal population means and standard deviations when the ratios of the means to the standard deviations are equal. A bootstrap method is introduced to determine the critical value in the likelihood ratio test. Simulation studies indicate that the proposed hypothesis testing performs well.

## Accuracy and Power of the Likelihood Ratio Test for Comparing Evolutionary Rates Among Genes

### Journal of Molecular Evolution (2005-04-01) 60: 426-433 , April 01, 2005


Sequences for multiple protein-coding genes are now commonly available from several, often closely related species. These data sets offer intriguing opportunities to test hypotheses regarding whether different types of genes evolve under different selective pressures. Although maximum likelihood (ML) models of codon substitution that are suitable for such analyses have been developed, little is known about the statistical properties of these tests. We use a previously developed fixed-sites model and computer simulations to examine the accuracy and power of the likelihood ratio test (LRT) in comparing the nonsynonymous-to-synonymous substitution rate ratio (ω = dN/dS) between two genes. Our results show that the LRT applied to fixed-sites models may be inaccurate in some cases when setting significance thresholds using a χ^{2} approximation. Instead, we use a parametric bootstrap to describe the distribution of the LRT statistic for fixed-sites models and examine the power of the test as a function of sampling variables and properties of the genes under study. We find that the power of the test is high (>80%) even when sampling few taxa (e.g., six species) if sequences are sufficiently diverged and the test is largely unaffected by the tree topology used to simulate data. Our simulations show fixed-sites models are suitable for comparing substitution parameters among genes evolving under even strong evolutionary constraint (ω ≈ 0.05), although relative rate differences of 25% or less may be difficult to detect.

## Some Improved Tests for Multivariate One-Sided Hypotheses

### Metrika (2006-07-10) 64: 23-39 , July 10, 2006


Multivariate one-sided hypothesis-testing problems are very common in clinical trials with multiple endpoints. The likelihood ratio test (LRT) and union-intersection test (UIT) are widely used for testing such problems. It is argued that, for many important multivariate one-sided testing problems, the LRT and UIT fail to adapt to the presence of subregions of varying dimensionalities on the boundary of the null parameter space and thus give undesirable results. Several improved tests are proposed that do adapt to the varying dimensionalities and hence reflect the evidence provided by the data more accurately than the LRT and UIT. Moreover, the proposed tests are often less biased and more powerful than the LRT and UIT.

## Adaptive Evolution of Scorpion Sodium Channel Toxins

### Journal of Molecular Evolution (2004-02-01) 58: 145-153 , February 01, 2004


Gene duplication followed by positive Darwinian selection is an important evolutionary event at the molecular level, by which a gene can gain new functions. Such an event might have occurred in the evolution of scorpion sodium channel toxin genes (α- and β-groups). To test this hypothesis, a robust statistical method from Yang and co-workers based on the estimation of the nonsynonymous-to-synonymous rate ratio (ω = *d*_{N}/*d*_{S}) was performed. The results provide clear statistical evidence for adaptive molecular evolution of scorpion α- and β-toxin genes. A good match between the positively selected sites (evolutionary epitopes) and the putative bioactive surface (functional epitopes) indicates that these sites are most likely involved in functional recognition of sodium channels. Our results also shed light on the importance of the B-loop in the functional diversification of scorpion α- and β-toxins.

## Coupon collector’s problems with statistical applications to rankings

### Annals of the Institute of Statistical Mathematics (2013-06-01) 65: 571-587 , June 01, 2013


Some new exact distributions for coupon collector’s waiting time problems are given based on generalized Pólya urn sampling. In particular, the usual Pólya urn sampling generates an exchangeable random sequence; in this case, an alternative derivation of the distribution is also obtained from de Finetti’s theorem. In coupon collector’s waiting time problems with $m$ kinds of coupons, the observed order of the $m$ kinds of coupons corresponds uniquely to a permutation of $m$ letters. Using this property, a statistical model on the permutation group of $m$ letters is proposed for analyzing ranked data. Because the model's parameters are the proportions of the $m$ kinds of coupons, the observed ranking can be understood intuitively. Some examples of statistical inference are also given.


## GLMLE: graph-limit enabled fast computation for fitting exponential random graph models to large social networks

### Social Network Analysis and Mining (2015-03-06) 5: 1-19 , March 06, 2015


Large networks, as a form of big data, have received an increasing amount of attention in data science, especially large social networks, which are reaching sizes of hundreds of millions of users with daily interactions on the scale of billions. Analyzing and modeling these data to understand the connectivities and dynamics of large networks is therefore important in a wide range of scientific fields. Among popular models, exponential random graph models (ERGMs) have been developed to study these complex networks by directly modeling network structures and features. ERGMs, however, are hard to scale to large networks because maximum likelihood estimation of their parameters can be very difficult due to the unknown normalizing constant. Alternative strategies based on Markov chain Monte Carlo (MCMC) draw samples to approximate the likelihood, which is then maximized to obtain the maximum likelihood estimators (MLE). These strategies converge poorly due to model degeneracy issues and cannot be used on large networks. Chatterjee et al. (Ann Stat 41:2428–2461, 2013) propose a new theoretical framework for estimating the parameters of ERGMs by approximating the normalizing constant using an emerging tool in graph theory: graph limits. In this paper, we construct a complete computational procedure built upon their results, with practical innovations, that is fast and able to scale to large networks. More specifically, we evaluate the likelihood via simple function approximation of the corresponding ERGM’s graph limit and iteratively maximize the likelihood to obtain the MLE. We also discuss methods of conducting likelihood ratio tests for ERGMs and related issues. Through simulation studies and real data analysis of two large social networks, we show that our new method outperforms the MCMC-based method, especially when the network size is large (more than 100 nodes). One limitation of our approach, inherited from the result of Chatterjee et al., is that it works only for sequences of graphs with a positive limiting density, i.e., dense graphs.

## Likelihood Ratio Test for and Against Nonlinear Inequality Constraints

### Metrika (2006-11-29) 65: 93-108 , November 29, 2006

In applied statistics a finite dimensional parameter involved in the distribution function of the observed random variable is very often constrained by a number of nonlinear inequalities. This paper is devoted to studying the likelihood ratio test for and against the hypothesis that the parameter is restricted by some nonlinear inequalities. The asymptotic null distributions of the likelihood ratio statistics are derived by using the limits of the related optimization problems. The author also shows how to compute critical values for the tests.

## A likelihood ratio test for bimodality in two-component mixtures with application to regional income distribution in the EU

### AStA Advances in Statistical Analysis (2008-02-15) 92: 57-69 , February 15, 2008

We propose a parametric test for bimodality based on the likelihood principle by using two-component mixtures. The test uses explicit characterizations of the modal structure of such mixtures in terms of their parameters. Examples include the univariate and multivariate normal distributions and the von Mises distribution. We present the asymptotic distribution of the proposed test and analyze its finite sample performance in a simulation study. To illustrate our method, we use mixtures to investigate the modal structure of the cross-sectional distribution of per capita log GDP across EU regions from 1977 to 1993. Although these mixtures clearly have two components over the whole time period, the resulting distributions evolve from bimodality toward unimodality at the end of the 1970s.

## Optimum simple step-stress plans using reliability estimate

### METRON (2012-08-01) 70: 109-120 , August 01, 2012

### Summary

This paper presents optimum simple step-stress plans under the log-logistic cumulative exposure model. The likelihood function of the model parameters is derived, from which the Fisher information matrix and the asymptotic variance of the reliability estimate are obtained. Optimum times for changing the stress level are obtained by minimizing the asymptotic variance of the reliability estimate at a specified time under normal operating conditions. Confidence intervals and hypothesis tests about the model parameters are also discussed and illustrated by an example.

## A likelihood ratio test of a homoscedastic normal mixture against a heteroscedastic normal mixture

### Statistics and Computing (2008-06-12) 18: 233-240 , June 12, 2008

It is generally assumed that the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution, versus the alternative that data arise from a heteroscedastic normal mixture distribution, has an asymptotic *χ*^{2} reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models, given some regularity conditions. When the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative, simulations show that the *χ*^{2} reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or one with unequal variances.
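
The parametric bootstrap mechanics can be sketched with a deliberately simpler nested pair than the mixture setting of the article, testing N(μ, 1) against N(μ, σ²) so that both maximum likelihood fits stay in closed form (function names are illustrative):

```python
import math
import random

def loglik(xs, mu, s2):
    """Gaussian log-likelihood with mean mu and variance s2."""
    return sum(-0.5 * math.log(2 * math.pi * s2) - (x - mu) ** 2 / (2 * s2)
               for x in xs)

def lrt_stat(xs):
    """2 * (unrestricted loglik - loglik with the variance fixed at 1)."""
    n = len(xs)
    mu = sum(xs) / n
    s2 = sum((x - mu) ** 2 for x in xs) / n   # MLE of the free variance
    return 2 * (loglik(xs, mu, s2) - loglik(xs, mu, 1.0))

def bootstrap_pvalue(xs, n_boot=199, seed=0):
    """Parametric bootstrap: resample under the fitted null model and
    compare the simulated statistics with the observed one."""
    rng = random.Random(seed)
    obs = lrt_stat(xs)
    mu0 = sum(xs) / len(xs)                   # null fit: variance fixed at 1
    exceed = sum(lrt_stat([rng.gauss(mu0, 1.0) for _ in xs]) >= obs
                 for _ in range(n_boot))
    return (exceed + 1) / (n_boot + 1)
```

The same resampling loop applies to the mixture setting once `lrt_stat` is replaced by an EM-based fit under each hypothesis.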

## Testing the Order of Markov Dependence in DNA Sequences

### Methodology and Computing in Applied Probability (2011-01-11) 13: 59-74 , January 11, 2011

DNA or protein sequences are usually modeled as probabilistic phenomena. The simplest model assumes that the nucleotides at the various sites are independently distributed. Usually, however, the type of nucleotide at one site depends on the type at another site, and the DNA sequence is therefore modeled as a Markov chain of random variables taking the values A, G, C and T, corresponding to the four nucleotides. First-order or higher-order Markov models provide a better fit to a DNA sequence. Motivated by this, the aim of this paper is to present and study a family of test statistics for testing the order of Markov dependence in DNA sequences. This new family includes the classical likelihood ratio test as a particular case. A simulation study is presented in order to find test statistics in this family that behave better than the likelihood ratio test.
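
The classical likelihood ratio test of order 0 against order 1 reduces to a G-test of independence on the table of transition counts; a minimal sketch (the generalized family of statistics studied in the paper is not reproduced here):

```python
import math
from collections import Counter

def lrt_markov_order(seq, alphabet="ACGT"):
    """LRT of independence (order 0) against first-order Markov dependence.

    Returns the statistic and its degrees of freedom, (k - 1)^2 for a
    k-letter alphabet; under H0 the statistic is asymptotically chi-square.
    """
    pairs = Counter(zip(seq, seq[1:]))       # transition counts n_ab
    first = Counter(seq[:-1])                # row marginals
    second = Counter(seq[1:])                # column marginals
    n = len(seq) - 1
    stat = 0.0
    for (a, b), n_ab in pairs.items():
        expected = first[a] * second[b] / n  # count expected under H0
        stat += 2.0 * n_ab * math.log(n_ab / expected)
    k = len(alphabet)
    return stat, (k - 1) ** 2
```

A strongly periodic sequence such as `"ACGT" * 50` yields a large statistic, while an i.i.d. sequence keeps it near its χ² expectation of 9.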

## Change point estimation in phase I monitoring of logistic regression profile

### The International Journal of Advanced Manufacturing Technology (2012-12-09): 1-11 , December 09, 2012

In statistical process control, an important issue in phase I is to identify the time of a change in process parameters. Control charts monitor the process over time, but the time at which a control chart signals an alarm is not necessarily the real time of the change in the process. Finding the real time of change, called the change point, is important because it saves cost and time in detecting the assignable cause. Recently, profile monitoring, in which a response variable and one or more explanatory variables are modeled by a regression function, has attracted many researchers. One type of profile considered in the literature is the logistic profile, where the distribution of the response variable is binary. In this paper, we develop two methods, based on a likelihood ratio test and on clustering, to estimate the real time of a step change in phase I monitoring of logistic profiles. The performance of the proposed methods is evaluated and compared through simulation studies. The results show the efficiency of both estimators. A real case is also studied to show the applicability of the proposed methods in practice.
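
The likelihood ratio change point idea can be sketched in the simplest setting, a step change in the mean of an independent normal sequence with known variance; the logistic-profile version in the paper replaces the per-segment means with fitted regression profiles, but the argmax structure is the same:

```python
def change_point(xs):
    """Return the split index t maximizing the step-change LRT statistic,
    i.e. xs[:t] is assumed to have one mean and xs[t:] another."""
    n = len(xs)
    total = sum(xs)
    left = 0.0
    best_stat, best_t = -1.0, None
    for t in range(1, n):
        left += xs[t - 1]
        m1 = left / t                  # mean of the first segment
        m2 = (total - left) / (n - t)  # mean of the second segment
        stat = t * (n - t) / n * (m1 - m2) ** 2   # LRT up to a constant
        if stat > best_stat:
            best_stat, best_t = stat, t
    return best_t
```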

## Survival analysis applied to proportion data: comparing mammography visits in high and low repeat rate facilities

### Health Services and Outcomes Research Methodology (2013-03-01) 13: 68-83 , March 01, 2013

We examined the relationship between the proportion of time spent in the mammography exam room relative to the total visit time and the rate of repeat visits across mammography facilities. We hypothesized that the means for these proportions would be greater among facilities with high repeat rates than among those with low repeat rates. The beta distribution was used to model the proportions. However, not all of the proportions were fully observed: some were right- or interval-censored because the complete information (e.g., the time of arrival at the facility) needed to calculate them was unavailable. Using a maximum likelihood approach, we estimated the mean proportion of appointment time spent on the actual exams. To analyze these data, which are restricted to the range (0, 1), we utilized survival analysis methods, which are typically applied to data on time scales. We found that facilities with high repeat mammography rates on average had a greater proportion of their visits spent receiving the exams than the low repeat rate facilities (*p* = 0.0003). Women from low repeat rate facilities spent on average 45 % of their appointment time engaging in the exam; thus, these women averaged more time waiting for the exam to begin than receiving it. Conversely, those from high repeat rate facilities averaged more of their appointment time engaged in the mammographic exams. These findings were consistent with our hypothesis that women would have greater satisfaction with their mammography appointments, and thus be more likely to return for routine screening, in facilities where their appointments were more efficient.

## A new meta-ANOVA approach for synthesizing information under signal-heterogeneity setting with application to nuclear magnetic resonance spectroscopic data

### Metabolomics (2008-08-29) 4: 283-291 , August 29, 2008

A new synthesizing statistical methodology is proposed to resolve issues of signal-heterogeneity in data sets collected through high-resolution ^{1}H nuclear magnetic resonance (NMR) spectroscopy. This signal-heterogeneity is typically caused by subjective operations for processing spectral profiles and measuring peak areas, non-homogeneous biological phases of experimental subjects, and variation across systems in multi-center studies. All these causes are likely to simultaneously impact signals of metabolic changes and their precision in a nonlinear fashion. As a combined effect, signal-heterogeneity chiefly manifests through non-homomorphic patterns of standardized treatment mean deviations spanning all experiments, and invalidates most remedial statistical models with a linearity structure. By avoiding a huge and very complex model, we develop a simple meta-ANOVA approach to synthesize many one-way-layout ANOVA analyses from individual experiments. A scale-invariant *F*-ratio statistic is taken as the summarizing sufficient statistic of a non-centrality parameter that supposedly captures the information about metabolic change from each experiment. Then a joint-likelihood function of a common non-centrality is constructed as the basis for maximum likelihood estimation and Chi-square likelihood ratio testing for statistical inference. We apply the meta-ANOVA to detect metabolic changes of three metabolites identified through pattern recognition on NMR spectral profiles obtained from muscle and liver tissues. We also detect effect differences among different treatments via meta-ANOVA multiple comparison.

## Prospects for a Statistical Theory of LC/TOFMS Data

### Journal of The American Society for Mass Spectrometry (2012-05-01) 23: 779-791 , May 01, 2012

The critical importance of employing sound statistical arguments when seeking to draw inferences from inexact measurements is well-established throughout the sciences. Yet fundamental statistical methods such as hypothesis testing can currently be applied to only a small subset of the data analytical problems encountered in LC/MS experiments. The means of inference that are more generally employed are based on a variety of heuristic techniques and a largely qualitative understanding of their behavior. In this article, we attempt to move towards a more formalized approach to the analysis of LC/TOFMS data by establishing some of the core concepts required for a detailed mathematical description of the data. Using arguments that are based on the fundamental workings of the instrument, we derive and validate a probability distribution that approximates that of the empirically obtained data and on the basis of which formal statistical tests can be constructed. Unlike many existing statistical models for MS data, the one presented here aims for rigor rather than generality. Consequently, the model is closely tailored to a particular type of TOF mass spectrometer although the general approach carries over to other instrument designs. Looking ahead, we argue that further improvements in our ability to characterize the data mathematically could enable us to address a wide range of data analytical problems in a statistically rigorous manner.

## Likelihood ratio tests for model selection of stochastic frontier models

### Journal of Productivity Analysis (2010-07-01) 34: 3-13 , July 01, 2010

An important issue when conducting stochastic frontier analysis is how to choose a proper parametric model, including the functional form of the frontier function, the distributions of the composite errors, and the exogenous variables. In this paper, we extend the likelihood ratio test of Vuong (Econometrica 57(2):307–333, 1989) and Takeuchi's model selection criterion (Suri-Kagaku (Math Sci) 153:12–18, 1976) to stochastic frontier models. The most attractive feature of this test is that it can not only be used for testing non-nested models, but also remains applicable even when the general model is misspecified. Finally, we demonstrate how to apply this test to the Indian farm data used by Battese and Coelli (J Prod Anal 3:153–169, 1992; Empir Econ 20(2):325–332, 1995) and Alvarez et al. (J Prod Anal 25:201–212, 2006).
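
The core of Vuong's test is a normalized average of pointwise log-likelihood differences between the two candidate models; a minimal sketch of that statistic (the stochastic frontier extensions developed in the paper are not reproduced):

```python
import math

def vuong_statistic(loglik1, loglik2):
    """Vuong's non-nested LR statistic from per-observation log-likelihoods.

    Under the null that the two models are equally close to the truth, the
    statistic is asymptotically standard normal; large positive values
    favor model 1, large negative values favor model 2.
    """
    d = [a - b for a, b in zip(loglik1, loglik2)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / n
    return math.sqrt(n) * mean / math.sqrt(var)
```

The inputs are the fitted per-observation log-densities of each model, so any pair of estimated models, nested or not, can be compared.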

## Likelihood-Based Inference and Finite-Sample Corrections: A Brief Overview

### An Introduction to Bartlett Correction and Bias Reduction (2014-05-09): 1-11 , May 09, 2014

This chapter introduces the likelihood function and estimation by maximum likelihood. Some important properties of maximum likelihood (ML) estimators are outlined. We also briefly present several important concepts that will be used throughout the book. Three asymptotic testing criteria are also introduced. The chapter also motivates the use of Bartlett and Bartlett-type corrections. The underlying idea is to transform the test statistic in such a way that its null distribution is better approximated by the reference *χ*^{2} distribution. We also investigate the use of bias corrections. They are used to reduce systematic errors in the point estimation process. Finally, we motivate the use of a data resampling method: the bootstrap.
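
The three asymptotic criteria have closed forms in a toy example not taken from the book: testing H0: μ = μ0 for the mean of an exponential sample. With r = x̄/μ0, the likelihood ratio, Wald and score statistics are 2n(r - 1 - log r), n(1 - 1/r)^{2} and n(r - 1)^{2}, all asymptotically *χ*^{2} with one degree of freedom under H0:

```python
import math

def exp_mean_tests(xs, mu0):
    """LR, Wald and score statistics for H0: E[X] = mu0, X ~ Exponential.

    The three statistics agree asymptotically under H0 but differ in
    finite samples, which is what Bartlett-type corrections aim to repair.
    """
    n = len(xs)
    r = (sum(xs) / n) / mu0           # ratio of the MLE to the null mean
    lr = 2 * n * (r - 1 - math.log(r))
    wald = n * (1 - 1 / r) ** 2       # information evaluated at the MLE
    score = n * (r - 1) ** 2          # information evaluated at mu0
    return lr, wald, score
```

All three statistics vanish exactly when x̄ = μ0 and are strictly positive otherwise.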

## Approximations to most powerful invariant tests for multinormality against some irregular alternatives

### TEST (2010-03-31) 19: 113-130 , March 31, 2010

The problem of testing multinormality against some alternatives invariant with respect to a subgroup of affine transformations is studied. Using the Laplace expansion for integrals, some approximations to the most powerful invariant (MPI) tests are derived. The cases of bivariate exponential and bivariate uniform alternatives are studied in detail, whereas higher-dimensional extensions are only outlined. Those alternatives are irregular and need a special treatment because of the dependence of their supports on the unknown parameters. It is shown that likelihood ratio (LR) tests are asymptotically equivalent to the MPI tests. A Monte Carlo simulation study shows that the powers of the LR tests are very close to the powers of the MPI tests, even in small samples.

## Estimating and testing generalized linear models under inequality restrictions

### Statistical Papers (1994-12-01) 35: 211-229 , December 01, 1994

We consider maximum likelihood estimation and likelihood ratio tests under inequality restrictions on the parameters. A special case are order restrictions, which may appear for example in connection with effects of an ordinal qualitative covariate. Our estimation approach is based on the principle of sequential quadratic programming, where the restricted estimate is computed iteratively and a quadratic optimization problem under inequality restrictions is solved in each iteration. Testing for inequality restrictions is based on the likelihood ratio principle. Under certain regularity assumptions the likelihood ratio test statistic is asymptotically distributed like a mixture of χ^{2}, where the weights are a function of the restrictions and the information matrix. A major problem in theory is that in general there is no unique least favourable point. We present some empirical findings on finite-sample behaviour of tests and apply the methods to examples from credit scoring and dentistry.

## Detection Performance and Risk Stratification Using a Model-Based Shape Index Characterizing Heart Rate Turbulence

### Annals of Biomedical Engineering (2010-09-14) 38: 3173-3184 , September 14, 2010

A detection–theoretic approach to quantify heart rate turbulence (HRT) following a ventricular premature beat is proposed and validated using an extended integral pulse frequency modulation (IPFM) model which accounts for HRT. The modulating signal of the extended IPFM model is projected into a three-dimensional subspace spanned by the Karhunen–Loève basis functions, characterizing HRT shape. The presence or absence of HRT is decided by means of a likelihood ratio test, the Neyman–Pearson detector, resulting in a quadratic detection statistic. Using a labeled dataset built from different interbeat interval series, detection performance is assessed and found to outperform the two widely used indices: turbulence onset (TO) and turbulence slope (TS). The ability of the proposed method to predict the risk of cardiac death is evaluated in a population of patients (*n* = 90) with ischemic cardiomyopathy and mild-to-moderate congestive heart failure. While both TS and the novel HRT index differ significantly in survivors and cardiac death patients, mortality analysis shows that the latter index exhibits much stronger association with risk of cardiac death (hazard ratio = 2.8, CI = 1.32–5.97, *p* = 0.008). It is also shown that the model-based shape indices, but not TO and TS, remain predictive of cardiac death in our population when computed from 4-h instead of 24-h ambulatory ECGs.

## Two-sided tests for change in level for correlated data

### Statistical Papers (1990-12-01) 31: 181-194 , December 01, 1990

The classical change point problem is considered when the data are assumed to be correlated. The nuisance parameters in the model are the initial level μ and the common variance σ^{2}. Four cases, corresponding to whether none, one, or both of these parameters are known, are considered. Likelihood ratio tests are obtained for testing hypotheses regarding the change in level, δ, in each case. Following Henderson (1986), a Bayesian test is obtained for the two-sided alternative. Under the Bayesian setup, a locally most powerful unbiased test is derived for the case μ=0 and σ^{2}=1. The exact null distribution function of the Bayesian test statistic is given an integral representation. Methods to obtain exact and approximate critical values are indicated.

## Hypothesis Testing of Hazard Ratio Parameters in Marginal Models for Multivariate Failure Time Data

### Lifetime Data Analysis (1999-03-01) 5: 39-53 , March 01, 1999

Marginal hazard models for multivariate failure time data have been studied extensively in recent literature. However, standard hypothesis test statistics based on the likelihood method are not exactly appropriate for this kind of model. In this paper, extensions of the three commonly used likelihood hypothesis test statistics are discussed. Generalized Wald, generalized score and generalized likelihood ratio tests for hazard ratio parameters in a marginal hazard model for multivariate failure time data are proposed and their asymptotic distributions examined. The finite sample properties of these statistics are studied through simulations. The proposed method is applied to data from Busselton Population Health Surveys.

## Empirical likelihood method for linear transformation models

### Annals of the Institute of Statistical Mathematics (2011-03-08) 63: 331-346 , March 08, 2011

An empirical likelihood inference procedure is proposed for right-censored survival data under linear transformation models, which include the commonly used proportional hazards model as a special case. A log-empirical likelihood ratio test statistic for the regression coefficients is developed. We show that the proposed log-empirical likelihood ratio test statistic converges to a standard chi-squared distribution. The result can be used to make inference about the entire vector of regression coefficients as well as any subset of it. The method is illustrated by extensive simulation studies and a real example.
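
The profiling step behind empirical likelihood can be sketched for the simplest estimating equation, a univariate mean, where the Lagrange multiplier is found by bisection (the transformation-model version in the paper builds the same construction on censored-data estimating equations):

```python
import math

def el_loglik_ratio(xs, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.

    Requires mu strictly inside the range of the data; asymptotically
    chi-square with one degree of freedom under H0.
    """
    d = [x - mu for x in xs]
    if min(d) >= 0 or max(d) <= 0:
        raise ValueError("mu must lie strictly inside the data range")
    # g(t) = sum d_i / (1 + t d_i) is strictly decreasing on the interval
    # where every 1 + t d_i > 0; bisect for its unique root.
    lo = -1.0 / max(d) + 1e-10
    hi = -1.0 / min(d) - 1e-10
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(di / (1 + mid * di) for di in d) > 0:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2
    return 2 * sum(math.log(1 + t * di) for di in d)
```

At the sample mean the multiplier is zero and the statistic vanishes; it grows as the hypothesized mean moves away from the data.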

## An alternative competing risk model to the Weibull distribution for modelling aging in lifetime data analysis

### Lifetime Data Analysis (2006-11-11) 12: 481-504 , November 11, 2006

A simple competing risk distribution is proposed as a possible alternative to the Weibull distribution in lifetime analysis. This distribution corresponds to the minimum of an exponential and a Weibull random variable. Our motivation is to account for both accidental and aging failures in lifetime data analysis. First, the main characteristics of this distribution are presented. Then, the estimation of its parameters is considered through maximum likelihood and Bayesian inference. In particular, the existence of a unique consistent root of the likelihood equations is proved. Decision tests to choose between an exponential, a Weibull and this competing risk distribution are presented. This alternative model is then compared to the Weibull model in numerical experiments on both real and simulated data sets, especially in an industrial context.
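
The proposed model is the minimum of independent exponential and Weibull lifetimes, so its survival function is S(t) = exp(-λt - (t/η)^{β}) and its hazard is the additive λ + (β/η)(t/η)^{β-1}; a minimal sketch with inverse-CDF simulation:

```python
import math
import random

def survival(t, lam, beta, eta):
    """S(t) for the minimum of Exp(lam) and Weibull(beta, eta)."""
    return math.exp(-lam * t - (t / eta) ** beta)

def hazard(t, lam, beta, eta):
    """Additive hazard: constant accidental part plus aging Weibull part."""
    return lam + (beta / eta) * (t / eta) ** (beta - 1)

def sample(lam, beta, eta, rng):
    """Draw one lifetime as the minimum of the two competing risks."""
    t_exp = rng.expovariate(lam)
    t_wei = eta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)
    return min(t_exp, t_wei)
```

With β > 1 the Weibull part captures aging, while λ keeps a floor of accidental failures at all ages.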

## Modified inference about the mean of the exponential distribution using moving extreme ranked set sampling

### Statistical Papers (2009-01-19) 50: 249-259 , January 19, 2009

The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for making inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). The MLE and LRT cannot be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) is considered, and this modified estimator is used to modify the LRT to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) of a simple hypothesis versus a simple hypothesis to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The modified estimator turns out to be a good competitor of the MLE, and the modified tests are good competitors of the LRT under both MERSS and simple random sampling (SRS).

## Tests for cointegration rank and the initial condition

### Empirical Economics (2012-06-01) 42: 667-691 , June 01, 2012

Many economic events involve initial observations that substantially deviate from long-run steady state. Such initial conditions are known to affect the power of univariate unit root tests diversely, whereas their impact on multivariate tests is largely unknown. This paper investigates the impact of the initial condition on the power of tests for cointegration rank, such as Johansen’s widely used likelihood ratio test, tests with prior adjustment for deterministic terms, and a test based on the eigenvalues of the companion matrix. We find that the power of the likelihood ratio test is increasing in the magnitude of the initial condition, whereas the power of the other tests is generally decreasing. We exploit these findings in an application to price convergence.

## Stochastic time-concentration activity models for cytotoxicity in 3D brain cell cultures

### Theoretical Biology and Medical Modelling (2013-03-14) 10: 1-18 , March 14, 2013

### Background

*In vitro* aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system‐specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration‐dependent neurotoxicity is determined at several time points.

### Methods

A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or stressed state, with only stressed cells being susceptible to cell death. Cells may have switched between these states or died with concentration‐dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied for estimation of the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time‐concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death.

### Results

The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 *μ*M reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean time to cellular death fell more dramatically, from 2.7 days to 6.5 hours.

### Conclusion

The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can be extrapolated at late measurement time points to investigate whether costs and time‐consuming long‐term experiments could possibly be eliminated.

## On the asymptotic distribution of likelihood ratio test when parameters lie on the boundary

### Sankhya B (2011-05-01) 73: 20-41 , May 01, 2011

The paper discusses statistical inference dealing with the asymptotic theory of likelihood ratio tests when some parameters may lie on boundary of the parameter space. We derive a closed form solution for the case when one parameter of interest and one nuisance parameter lie on the boundary. The asymptotic distribution is *not* always a mixture of several chi-square distributions. For the cases when one parameter of interest and two nuisance parameters or two parameters of interest and one nuisance parameter are on the boundary, we provide an explicit solution which can be easily computed by simulation. These results can be used in many applications, e.g. testing for random effects in genetics. Contrary to the claim of some authors in the applied literature that use of chi-square distribution with degrees of freedom as in case of interior parameters will be too conservative when some parameters are on the boundary, we show that when nuisance parameters are on the boundary, that approach may often be anti-conservative.
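
The simplest boundary case is easy to simulate: for N(θ, 1) data and H0: θ = 0 against θ ≥ 0, the likelihood ratio statistic is n·max(x̄, 0)^{2}, and its null distribution is an equal mixture of a point mass at zero and a *χ*^{2} distribution with one degree of freedom, not *χ*^{2} with one degree of freedom itself; a sketch:

```python
import random

def boundary_lrt_stat(xs):
    """LRT statistic for H0: theta = 0 vs. theta >= 0 with N(theta, 1) data."""
    n = len(xs)
    xbar = sum(xs) / n
    return n * max(xbar, 0.0) ** 2    # zero whenever xbar falls below 0

def null_distribution(n=30, reps=2000, seed=1):
    """Simulate the statistic under H0; about half the draws are exactly 0."""
    rng = random.Random(seed)
    return [boundary_lrt_stat([rng.gauss(0.0, 1.0) for _ in range(n)])
            for _ in range(reps)]
```

Referring the statistic to the χ² distribution with one degree of freedom ignores the point mass at zero and misstates the test's size, the phenomenon the paper analyzes for several boundary configurations.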

## The local power of the gradient test

### Annals of the Institute of Statistical Mathematics (2012-04-01) 64: 373-381 , April 01, 2012

The asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate *n*^{−1/2}, *n* being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property. The power performance of all four criteria in one-parameter exponential family is examined.