Ever since I read the hysterically incorrect interpretation of a confidence interval from a person who purports to be a policy analyst, I’ve been looking for a succinct explanation from a statistician, as a handy reference. Here it is (h/t David Giles via Mark Thoma):
The specific 95% confidence interval presented by a study has a 95% chance of containing the true effect size. No! A reported confidence interval is a range between two numbers. The frequency with which an observed interval (e.g., 0.72–2.88) contains the true effect is either 100% if the true effect is within the interval or 0% if not; the 95% refers only to how often 95% confidence intervals computed from very many studies would contain the true size if all the assumptions used to compute the intervals were correct. It is possible to compute an interval that can be interpreted as having 95% probability of containing the true value; nonetheless, such computations require not only the assumptions used to compute the confidence interval, but also further assumptions about the size of effects in the model. These further assumptions are summarized in what is called a prior distribution, and the resulting intervals are usually called Bayesian posterior (or credible) intervals to distinguish them from confidence intervals.

Source: Greenland et al. (2016).
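The coverage interpretation above is easy to see in a simulation. Here's a minimal sketch (not from the source; the true mean, standard deviation, and sample size are made up for illustration): simulate many "studies," compute a 95% confidence interval from each, and count how often the interval contains the true value. Any single interval either contains it or it doesn't; the 95% only shows up in the long-run frequency.

```python
# Simulate the long-run coverage of 95% confidence intervals.
# Each interval either contains the true mean (100%) or misses it (0%);
# across many repeated studies, roughly 95% of them contain it,
# provided the model assumptions hold. Standard library only.
import random
import statistics

random.seed(42)
TRUE_MEAN = 10.0   # the "true effect size" (known only because we simulate)
SIGMA = 2.0        # known standard deviation, assumed for simplicity
N = 30             # sample size per simulated study
Z = 1.96           # normal critical value for a 95% interval
STUDIES = 10_000

covered = 0
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    half_width = Z * SIGMA / N ** 0.5
    lo, hi = mean - half_width, mean + half_width
    # For this one interval, the statement "it contains the true mean"
    # is simply true or false -- no probability involved.
    if lo <= TRUE_MEAN <= hi:
        covered += 1

# The long-run frequency is what the "95%" actually refers to.
print(f"Coverage over {STUDIES} studies: {covered / STUDIES:.3f}")
```

The printed coverage should come out close to 0.95. Note that the Bayesian credible interval the quote contrasts with would require one more ingredient this sketch lacks: a prior distribution over the effect size.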