“Energy regulation efficiency” and economic growth.
This particular piece of research was brought to my attention by Patrick R. Sullivan, who is fond of quoting talking points from the MacIver Institute, the National Center for Policy Analysis, Cato, and the Pacific Research Institute. The study in question purports to show:
The most interesting relationship is between a state’s [energy regulation efficiency] ranking and its economic growth rate. High ranked states on average grow faster than those ranked low. Moreover, the higher rate of economic growth is associated with faster employment growth. Energy regulation can, therefore, be an important factor in determining the eventual prosperity of a state.
The authors painstakingly compile indices for all fifty states; the indices and aggregate index are reproduced in all their technicolor glory in Table 16 from the study.
They then show the statistics for the quintiles for energy regulation efficiency ranking and growth, and note a positive correlation.
[NB: As far as I can tell, the authors have used nominal GDP growth rates rather than real (which is pretty odd); moreover, the reported growth rates are cumulative, not annualized.]
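The distinction between cumulative and annualized rates matters for interpreting the headline "20 percentage points over ten years" claim. A minimal sketch of the conversion, using purely illustrative numbers (not the study's actual figures):

```python
# Convert a cumulative growth rate over n years to the implied annual
# (geometric-mean) rate. Numbers below are illustrative only.

def annualize(cumulative_growth, years):
    """Annual rate implied by a cumulative growth rate over `years` years."""
    return (1.0 + cumulative_growth) ** (1.0 / years) - 1.0

# A seemingly large cumulative gap over ten years translates into a
# much smaller gap in annual rates:
print(round(annualize(0.60, 10), 4))  # → 0.0481
print(round(annualize(0.40, 10), 4))  # → 0.0342
```

A 20-percentage-point cumulative difference here shrinks to roughly 1.4 percentage points per year, which is why reporting cumulative figures without annualizing can overstate the visual impression of a gap.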
The document notes:
Interestingly, the strongest relationship to ranking is a state’s growth rate. High ranked states have faster growth rates than those ranked low. Table 18 below provides 5-year and 10-year growth rates by quintiles. The average growth rates for states within the quintiles follow a consistent trend. Over the 10-year period 2002-2012, states in the top quintile had on average cumulative growth rates that were more than 20 percentage points higher than those in the bottom quintile. The top quintile also had growth rates that exceeded those of middle three quintiles. The bottom quintile’s cumulative growth was lower than most of these other three.
The table and the text are notable for the omission of any discussion of statistical significance. At this point, any researcher worth his/her salt should hear sirens going off. (The hand-waving in footnote 81 is another tip-off, and some cause for hilarity.)
If one estimates the regression analog to Table 18, using ordered probit, one finds that the relationship is not statistically significant at the 10% level for the ten year growth rate.
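For readers unfamiliar with the technique, an ordered probit models an ordinal outcome (here, the quintile ranking) as driven by a latent index crossing estimated thresholds. A minimal sketch via maximum likelihood, on simulated data (the data, sample size, and three-category setup are illustrative, not the study's):

```python
# Minimal ordered-probit sketch: simulated data, maximum likelihood via
# scipy. The ordered outcome y plays the role of the quintile rank;
# x plays the role of the growth rate. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # stand-in regressor
latent = 1.0 * x + rng.normal(size=n)       # latent index, true beta = 1
y = np.digitize(latent, [-0.5, 0.5])        # ordered outcome in {0, 1, 2}

def neg_loglik(params):
    beta, c0, log_gap = params
    c1 = c0 + np.exp(log_gap)               # enforce threshold ordering c0 < c1
    xb = beta * x
    p = np.where(y == 0, norm.cdf(c0 - xb),
        np.where(y == 1, norm.cdf(c1 - xb) - norm.cdf(c0 - xb),
                 1.0 - norm.cdf(c1 - xb)))
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_loglik, x0=[0.0, -0.5, 0.0], method="BFGS")
beta_hat = res.x[0]
print(f"estimated beta: {beta_hat:.2f}")    # should be near the true value of 1
```

Significance is then assessed from the standard error of beta (e.g., via the inverse Hessian); the point of the exercise in the text is that for the ten-year growth rate, that test fails at the 10% level.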
Of course, there is no particular reason to enter the dependent variable as a ranking (which requires the ordered probit estimation). One could just use the average growth rate (over ten or five years) as the dependent variable. Here are graphs of the underlying data.
Figure 1: Average ten year growth rates 2003-13, by state, vs. Pacific Research Institute energy efficiency ranking (higher, such as 1, is “better” than lower, such as 5) (blue circles); nearest neighbor (LOESS) fit (red), window = 0.3. Source: BEA, Pacific Research Institute, and author’s calculations.
Figure 2: Average five year growth rates 2007-12, by state, vs. Pacific Research Institute energy efficiency ranking (higher, such as 1, is “better” than lower, such as 5) (blue circles); nearest neighbor (LOESS) fit (red), window = 0.3. Source: BEA, Pacific Research Institute, and author’s calculations.
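For the curious, the red nearest-neighbor fits in the figures can be approximated along these lines. This is a simplified local-linear smoother with tricube weights on simulated data; it omits LOESS's robustness iterations, and the numbers are illustrative, not the actual state data:

```python
# Simplified LOESS-style smoother: local-linear fit at each point using
# the nearest span*n neighbors with tricube weights. Simulated data only.
import numpy as np

def loess_fit(x, y, x0, span=0.3):
    """Fitted value at x0 from a weighted local-linear regression."""
    n = len(x)
    k = max(2, int(np.ceil(span * n)))       # window = span * sample size
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                  # k nearest neighbors
    h = d[idx].max() or 1.0
    w = (1 - (d[idx] / h) ** 3) ** 3         # tricube kernel weights
    X = np.column_stack([np.ones(k), x[idx] - x0])
    W = np.diag(w)
    coef = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ y[idx], rcond=None)[0]
    return coef[0]                           # intercept = fit at x0

rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(1, 5, 50))          # stand-in for quintile ranks
ys = 0.02 - 0.002 * xs + rng.normal(0, 0.01, 50)
fitted = np.array([loess_fit(xs, ys, x0) for x0 in xs])
```

The virtue of such a fit in this context is that it imposes no linearity, so a genuinely strong rank-growth relationship would show up without being assumed.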
I estimate the regression:
y = α + β×rank + u
where y is an average annual growth rate, and rank is the quintile rank. Estimation using ten year average growth rates leads to:
y = 0.025 – 0.002×rank + u
Adj.-R² = 0.05. Bold face denotes significance at the 10% MSL, using heteroskedasticity-robust standard errors.
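For completeness, here is what estimating such a regression with heteroskedasticity-robust (White/HC1) standard errors looks like, sketched in plain numpy on simulated data shaped like the regression above; the numbers are illustrative, not the study's:

```python
# OLS of growth on quintile rank, with heteroskedasticity-robust (HC1)
# standard errors computed by hand. Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 50                                        # fifty states
rank = rng.integers(1, 6, size=n).astype(float)
growth = 0.025 - 0.002 * rank + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), rank])
beta = np.linalg.solve(X.T @ X, X.T @ growth)  # OLS coefficients
resid = growth - X @ beta

# White/HC1 sandwich: (X'X)^-1 [sum e_i^2 x_i x_i'] (X'X)^-1 * n/(n-k)
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
cov = XtX_inv @ meat @ XtX_inv * n / (n - X.shape[1])
se = np.sqrt(np.diag(cov))
t_stats = beta / se
print(beta, se, t_stats)
```

The robust covariance matters here because there is no reason to expect the variance of growth shocks to be the same across resource-rich and resource-poor states.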
Using five year average growth rates:
y = 0.018 – 0.003×rank + u
Adj.-R² = 0.06. Bold face denotes significance at the 10% MSL, using heteroskedasticity-robust standard errors.
Notice that dropping North Dakota (ND) further reduces statistical significance. Moreover, any borderline statistical significance is obliterated by the inclusion of a dummy for states with large oil reserves (top ten). Including the oil dummy in the ten year growth rate regression, I obtain:
y = 0.018 – 0.001×rank + 0.012×oil + u
Adj.-R² = 0.20. Bold face denotes significance at the 10% MSL, using heteroskedasticity-robust standard errors.
Notice that the adjusted R² increases substantially with the inclusion of the oil dummy, underscoring how little explanatory power the Pacific Research Institute index has on its own.
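The adjusted-R² comparison that drives this point can be sketched as follows, again on simulated data (the dummy assignment and coefficients are illustrative, not the study's):

```python
# Compare adjusted R^2 of the rank-only regression against the regression
# that adds an oil-reserve dummy. Simulated data; illustrative only.
import numpy as np

def adj_r2(y, X):
    """Adjusted R^2 from an OLS fit of y on X (X includes a constant)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    n, k = X.shape
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (ss_res / (n - k)) / (ss_tot / (n - 1))

rng = np.random.default_rng(3)
n = 50
rank = rng.integers(1, 6, size=n).astype(float)
oil = (rng.random(n) < 0.2).astype(float)     # stand-in for top-ten oil states
growth = 0.018 - 0.001 * rank + 0.012 * oil + rng.normal(0, 0.01, n)

X_rank = np.column_stack([np.ones(n), rank])
X_both = np.column_stack([np.ones(n), rank, oil])
print(adj_r2(growth, X_rank), adj_r2(growth, X_both))
```

Because adjusted R² penalizes added regressors, a substantial jump when the oil dummy enters is informative: the dummy is doing real explanatory work that the ranking was (spuriously) credited with.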
It is astounding to me that an organization can spend all the resources to compile these indices, and yet not do the most basic statistical analysis taught in an econometrics course. It is even more astounding that some people take these results at face value. Apparently the aphorism that there is “one born every minute” holds true.