Reader rsm responds to my citation of GDP and Human Development Index data thusly:
Why not include the standard errors?
Again, because they would be so wide, you could tell any story and back it up with data?
Besides being an incredibly nihilistic statement, it’s also a generally ignorant one.
The simple answer is, there’s no such thing as a standard error on a GDP estimate, at least not in the sense of classical statistics. On the other hand, that doesn’t mean one shouldn’t try to convey the imprecision in GDP. And indeed, the BEA reports in each and every GDP release the following table:
So the BEA reports the imprecision with respect to subsequent revisions. It's not a standard error; I'm not even sure what a standard error (the standard deviation of an estimator's sampling distribution) would mean for GDP. Rather, the table conveys the degree of imprecision relative to subsequent revisions, on the view that later vintages will more closely approximate the "actual" GDP.
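The kind of summary in the BEA's revision tables can be sketched numerically: compare an early vintage against a later one, quarter by quarter, and report the mean revision (a measure of bias) and the mean absolute revision (a measure of spread). The growth-rate figures below are made up for illustration, not actual BEA data.

```python
# Sketch: summarizing revisions between vintages, in the spirit of the
# BEA's revision tables. All numbers here are hypothetical.
advance = [2.0, 3.1, -0.5, 1.8]   # advance-estimate q/q growth (%, SAAR)
latest  = [2.4, 2.6,  0.1, 1.5]   # later-vintage growth for the same quarters

revisions = [l - a for a, l in zip(advance, latest)]
mean_rev = sum(revisions) / len(revisions)             # mean revision (bias)
mar = sum(abs(r) for r in revisions) / len(revisions)  # mean absolute revision
print(f"mean revision: {mean_rev:.2f}, mean absolute revision: {mar:.2f}")
```

A small mean revision with a large mean absolute revision is exactly the pattern the BEA tables tend to show: early estimates are roughly unbiased but individually imprecise.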
A more extensive table appears in this document ("Comparisons of Revisions to GDP," Sept. 2021), which is dedicated to tabulating the magnitudes of revisions.
Source: BEA (2021).
A Survey of Current Business article (Dennis J. Fixler, Eva de Francisco, and Danit Kanal, "The Revisions to Gross Domestic Product, Gross Domestic Income, and Their Major Components," January 2021) provides comparable statistics for GDP, GDP components, GDI, and GDI components.
But it’s important to remember that these measures of spread are not “standard errors” in the conventional sense (at least not to me!).
In his extensive discussion of how government statistics could be reported conveying more strongly the uncertainty surrounding estimates, Manski (JEL, 2015) cites the Bank of England’s uncertainty measures (“fan charts”) for UK GDP.
Galvao and Mitchell (2019) provide this example:
Source: Galvao and Mitchell (2019).
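The mechanics behind a fan chart can be illustrated simply: take a point estimate and band it with empirical quantiles of past revision (or forecast) errors, so the widening shaded regions reflect historical imprecision. The sketch below uses a made-up error sample, not the Bank of England's actual data or method.

```python
# Sketch of the idea behind a "fan chart": band a point estimate with
# empirical deciles of a (hypothetical) sample of past revision errors.
import statistics

point_estimate = 2.0  # preliminary growth estimate (%), hypothetical
past_errors = [-1.1, -0.6, -0.3, 0.0, 0.2, 0.4, 0.7, 1.0, 1.3]

qs = statistics.quantiles(past_errors, n=10)  # nine decile cut points
# Pair the outermost quantiles first, so bands nest from widest to narrowest.
bands = [(point_estimate + lo, point_estimate + hi)
         for lo, hi in zip(qs, reversed(qs))][: len(qs) // 2]
for i, (lo, hi) in enumerate(bands, 1):
    print(f"band {i}: [{lo:.2f}, {hi:.2f}]")
```

Each successive band is narrower and sits inside the previous one, which is what produces the characteristic "fan" when plotted over a forecast horizon.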
Suppose one really wanted to comprehensively address the issue of imprecision. What would it take? Manski (JEL, 2015) writes:
Considering the sources and implications of error from the perspective of users of economic statistics, rather than the perspective of statisticians, I think it essential to distinguish errors in measurement of well-defined concepts from uncertainty about the concepts that should be measured. I also think it useful to distinguish errors that diminish with time from ones that persist. To highlight these distinctions, I will separately discuss transitory statistical uncertainty, permanent statistical uncertainty, and conceptual uncertainty.
Data revisions fall under the first category, which leaves many other issues unaddressed. For permanent statistical uncertainty, Manski highlights survey nonresponse and imputations. For conceptual uncertainty, he examines the impact of seasonal adjustment. These are thorny problems, and providing a straightforward measure of total (not just transitory) statistical uncertainty, as reader rsm wants, would be very difficult.
Returning to transitory uncertainty (specifically revisions), I reprise these graphs (from this post):
Figure 1: Real GDP normalized to 1999Q1 as of 1/31/2001 (blue), as of 4/27/2001 (red), as of 7/27/2001 (green), as of 6/28/2018 (black). NBER defined recession dates shaded gray. Source: ALFRED, BEA, NBER, and author’s calculations.
Figure 2: Quarter-on-quarter annualized growth rates of real GDP as of 1/31/2001 (blue), as of 4/27/2001 (red), as of 7/27/2001 (green), as of 6/28/2018 (black). NBER defined recession dates shaded gray. Source: ALFRED, BEA, NBER, and author’s calculations.
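For reference, the quarter-on-quarter annualized growth rates plotted in Figure 2 come from the standard compounding transformation of the level series. A minimal sketch, with illustrative GDP levels rather than actual data:

```python
# Quarter-on-quarter growth at an annual rate, via compounding.
# The level figures in the example call are made up for illustration.
def annualized_qoq_growth(x_curr, x_prev):
    """100 * ((x_t / x_{t-1})**4 - 1): q/q growth at an annual rate (%)."""
    return 100 * ((x_curr / x_prev) ** 4 - 1)

print(annualized_qoq_growth(10100.0, 10000.0))  # 1% quarterly -> ~4.06% annualized
```

Because the quarterly rate is compounded rather than simply multiplied by four, revisions to the underlying levels get amplified in the annualized growth figures, which is part of why early growth estimates can move noticeably across vintages.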
Addendum, 11am Pacific:
For information on over-the-month revisions for establishment survey NFP, see here.
On benchmark revisions to the establishment survey numbers, see here.