Likelihoodist statistics or likelihoodism is an approach to statistics that exclusively or primarily uses the likelihood function. Likelihoodist statistics is a more minor school than the main approaches of Bayesian statistics and frequentist statistics, but has some adherents and applications. The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function. Beyond this, there are significant differences within likelihood approaches: "orthodox" likelihoodists consider data only as evidence, and do not use it as the basis of statistical inference, while others make inferences based on likelihood, but without using Bayesian inference or frequentist inference. Likelihoodism is thus criticized for either not providing a basis for belief or action (if it fails to make inferences), or not satisfying the requirements of these other schools.
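The idea that the likelihood function measures the strength of evidence can be made concrete with a small sketch. The example below compares two simple hypotheses about a coin's bias using a binomial likelihood; the particular data (7 heads in 10 tosses) and the hypothesized biases 0.5 and 0.7 are illustrative choices, not drawn from the text above.

```python
import math

def binomial_likelihood(p, heads, tosses):
    """Likelihood of bias p given `heads` successes in `tosses` trials."""
    return math.comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

# Observed data: 7 heads in 10 tosses (illustrative values).
heads, tosses = 7, 10

# Two simple hypotheses about the coin's bias.
l_fair = binomial_likelihood(0.5, heads, tosses)
l_biased = binomial_likelihood(0.7, heads, tosses)

# By the law of likelihood, the data favour the hypothesis p = 0.7 over
# p = 0.5 exactly when this ratio exceeds 1, and the size of the ratio
# measures the strength of the evidence.
ratio = l_biased / l_fair
```

Here the ratio is about 2.3, so these data favor the biased-coin hypothesis, though only mildly; likelihoodists such as Royall have proposed conventional benchmarks for what counts as "strong" evidence.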

The likelihood function is also used in Bayesian statistics and frequentist statistics, but these schools differ in how they use it. Some likelihoodists consider their use of likelihood as an alternative to other approaches, while others consider it complementary and compatible with other approaches; see § Relation with other theories.

Relation with other theories

While likelihoodism is a distinct approach to statistical inference, it can be related to or contrasted with other theories and methodologies in statistics. Here are some notable connections:

  1. Bayesian statistics: Bayesian statistics is an alternative approach to statistical inference that incorporates prior information and updates it using observed data to obtain posterior probabilities. Likelihoodism and Bayesian statistics are compatible in the sense that both methods utilize the likelihood function. However, they differ in their treatment of prior information. Bayesian statistics incorporates prior beliefs into the analysis explicitly, whereas likelihoodism focuses solely on the likelihood function without specifying a prior distribution.
  2. Frequentist statistics: Frequentist statistics, also known as classical inference, is another major framework for statistical analysis. Frequentist methods emphasize properties of repeated sampling and focus on concepts such as unbiasedness, consistency, and hypothesis testing. Likelihoodism can be seen as a departure from traditional frequentist methods, as it places the likelihood function at the core of statistical inference. Likelihood-based methods provide a bridge between the likelihoodist perspective and frequentist approaches by using likelihood ratios for hypothesis testing and constructing confidence intervals.
  3. Fisherian statistics: Likelihoodism has deep connections to the statistical philosophy of Ronald Fisher.[1] Fisher introduced the concept of likelihood and its maximization as a criterion for estimating parameters. Fisher's approach emphasized the concept of sufficiency and maximum likelihood estimation (MLE). Likelihoodism can be seen as an extension of Fisherian statistics, refining and expanding the use of likelihood in statistical inference.
  4. Information theory: Information theory, developed by Claude Shannon, provides a mathematical framework for quantifying information content and communication. The concept of entropy in information theory has connections to the likelihood function and to the Akaike information criterion (AIC). AIC, which incorporates a penalty term for model complexity, can be viewed as an information-theoretic approach to model selection that balances model fit against model complexity.
  5. Decision theory: Decision theory combines statistical inference with decision-making under uncertainty. It considers the trade-off between risks and potential losses in decision-making processes. Likelihoodism can be integrated with decision theory to make decisions based on the likelihood function, such as choosing the model with the highest likelihood or evaluating different decision options based on their associated likelihoods.
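The AIC-based model selection mentioned in point 4 can be sketched in a few lines. The example below compares a fixed fair-coin model (no free parameters) against a model whose bias is fitted by maximum likelihood, using AIC = 2k − 2 ln L̂; the data values are illustrative assumptions, not from the text.

```python
import math

def binom_loglik(p, heads, tosses):
    """Log-likelihood of bias p for a binomial sample."""
    return (math.log(math.comb(tosses, heads))
            + heads * math.log(p) + (tosses - heads) * math.log(1 - p))

# Observed data: 7 heads in 10 tosses (illustrative values).
heads, tosses = 7, 10

# Model A: a fair coin, with no free parameters (k = 0).
aic_fair = 2 * 0 - 2 * binom_loglik(0.5, heads, tosses)

# Model B: bias estimated by maximum likelihood, p_hat = 7/10 (k = 1).
p_hat = heads / tosses
aic_mle = 2 * 1 - 2 * binom_loglik(p_hat, heads, tosses)

# The model with the smaller AIC is preferred: the better fit of the
# fitted model is weighed against the 2k penalty for its extra parameter.
```

For these data the fitted model has the higher likelihood, but its complexity penalty outweighs the improvement in fit, so AIC prefers the simpler fair-coin model; with more lopsided data the ordering would reverse.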

Criticism

While likelihood-based statistics have been widely used and have many advantages, they are not without criticism. Here are some common criticisms of likelihoodist statistics:

  1. Model dependence: Likelihood-based inference heavily relies on the choice of a specific statistical model.[2] If the chosen model does not accurately represent the true underlying data-generating process, the resulting estimates and inferences may be biased or misleading. Model misspecification can lead to incorrect conclusions, especially in complex real-world scenarios where the true model may be unknown or difficult to capture.
  2. Limited interpretability: Likelihood-based statistics focus on optimizing the likelihood function to estimate parameters, but they may not provide intuitive or easily interpretable estimates. The estimated parameters may not have a direct and meaningful interpretation in the context of the problem being studied. This can make it challenging for practitioners to communicate the results to non-technical audiences or make practical decisions based on the estimates.
  3. Sensitivity to sample size: Likelihood-based methods can be sensitive to the sample size of the data. In situations with small sample sizes, the likelihood function can be highly variable, leading to unstable estimates. This instability can also affect the model selection process, as the likelihood ratio test or information criteria may not perform well when sample sizes are small.
  4. Assumption of independence: Likelihood-based inference often assumes that the observed data are independent and identically distributed (IID). However, in many real-world scenarios, data points may exhibit dependence or correlation. Ignoring this dependence can lead to biased estimates or inaccurate hypothesis testing.
  5. Lack of robustness: Likelihood-based methods are not always robust to violations of model assumptions or outliers in the data. If the data deviate from the assumed distribution or if extreme observations are present, the estimates can be heavily influenced by these outliers, leading to unreliable results.
  6. Computational complexity: Estimating parameters based on likelihood functions can be computationally intensive, especially for complex models, large datasets, or highly non-linear systems.[3] Optimization algorithms used to maximize the likelihood function may require substantial computational resources or may not converge to the global maximum, leading to suboptimal estimates.
  7. Lack of uncertainty quantification: Likelihood-based inference often provides point estimates of parameters without explicit quantification of uncertainty. While techniques such as confidence intervals or standard errors can be used to approximate uncertainty, they rely on assumptions that may not always hold. Bayesian methods, on the other hand, provide a more formal and coherent framework for uncertainty quantification.
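Two of the points above, numerical optimization (point 6) and uncertainty quantification (point 7), can be illustrated with a minimal sketch. A crude grid search stands in for the iterative optimizers whose convergence the criticism concerns, and a 1/8 likelihood (support) interval, a benchmark associated with Royall, stands in for likelihood-based uncertainty statements. The binomial data are illustrative assumptions.

```python
import math

def binom_loglik(p, heads, tosses):
    """Log-likelihood of bias p for a binomial sample (constant term omitted)."""
    return heads * math.log(p) + (tosses - heads) * math.log(1 - p)

# Observed data: 7 heads in 10 tosses (illustrative values).
heads, tosses = 7, 10

# Numerical maximization by grid search over (0, 1): a simple, robust
# stand-in for gradient-based optimizers that may fail to converge.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: binom_loglik(p, heads, tosses))

# A 1/8 likelihood (support) interval: all p whose likelihood is within
# a factor of 8 of the maximum, i.e. log-likelihood within log(8).
cutoff = binom_loglik(p_hat, heads, tosses) - math.log(8)
supported = [p for p in grid if binom_loglik(p, heads, tosses) >= cutoff]
lo, hi = supported[0], supported[-1]
```

With only ten tosses the interval is wide (roughly 0.39 to 0.92), illustrating the sample-size sensitivity noted in point 3: small samples yield flat likelihoods and unstable, weakly localized estimates.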

History

Likelihoodism as a distinct school dates to Edwards (1972), which gives a systematic treatment of statistics, based on likelihood. This built on significant earlier work; see Dempster (1972) for a contemporary review.

While comparing ratios of probabilities dates to early statistics and probability, notably Bayesian inference as developed by Pierre-Simon Laplace from the late 1700s, likelihood as a distinct concept is due to Ronald Fisher in Fisher (1921). Likelihood played an important role in Fisher's statistics, but he developed and used many non-likelihood frequentist techniques as well. His late writings, notably Fisher (1955), emphasize likelihood more strongly, and can be considered a precursor to a systematic theory of likelihoodism.

The likelihood principle was proposed in 1962 by several authors, notably Barnard, Jenkins & Winsten (1962), Birnbaum (1962), and Savage (1962), and followed by the law of likelihood in Hacking (1965); these laid the foundation for likelihoodism. See Likelihood principle § History for early history.

While Edwards's version of likelihoodism considered likelihood only as evidence, a position later followed by Royall (1997), others proposed inference based only on likelihood, notably as extensions of maximum likelihood estimation. Notable is John Nelder, who declared in Nelder (1999, p. 264):

At least once a year I hear someone at a meeting say that there are two modes of inference: frequentist and Bayesian. That this sort of nonsense should be so regularly propagated shows how much we have to do. To begin with there is a flourishing school of likelihood inference, to which I belong.

Textbooks that take a likelihoodist approach include the following: Kalbfleisch (1985), Azzalini (1996), Pawitan (2001), Rohde (2014), and Held & Sabanés Bové (2014). A collection of relevant papers is given by Taper & Lele (2004).

References

  1. Efron, B. (February 1986). "Why Isn't Everyone a Bayesian?". The American Statistician. 40 (1): 1. doi:10.2307/2683105. ISSN 0003-1305.
  2. Fitelson, Branden (2007-03-24). "Likelihoodism, Bayesianism, and relational confirmation". Synthese. 156 (3): 473–489. doi:10.1007/s11229-006-9134-9. ISSN 0039-7857.
  3. Drignei, Dorin; Forest, Chris E.; Nychka, Doug (2008-12-01). "Parameter estimation for computationally intensive nonlinear regression with an application to climate modeling". The Annals of Applied Statistics. 2 (4). arXiv:0901.3665. doi:10.1214/08-aoas210. ISSN 1932-6157.