And some lessons from the 1930s for the 2000s
John Taylor returns to the topic of how much impact the stimulus package has had on output. The heart of the argument is summarized by his extension of a graph presented in the NYT (and reproduced in this post).
Figure from Taylor (2009)
As I noted earlier in my post about counterfactuals, this is the right way to assess the impact of the stimulus: compare the outcome against a counterfactual. Here Professor Taylor has done exactly that, bringing the Frank Smets and Raf Wouters model into the mix as well as Barro's. (In other words, comparing forecasts made with the stimulus incorporated against outcomes that embody unforecasted shocks realized after the fact is like comparing apples and oranges.)
Why is this the right way to assess the stimulus? Here’s a medical analogy for why we need to look at things from the perspective of counterfactuals and model predictions.
I give a patient with a fever aspirin. However, the fever continues to rise. I could conclude that aspirin caused the fever to rise relative to what would have occurred without aspirin. Or I could use information regarding the effect of aspirin on fevers, obtained from previous experiments and experiences, and use that information to infer what the fever would have been in the absence of dosing with aspirin. Now, we know that the impact of aspirin on fever varies across individuals, and across types of infection. Does that mean the information from the past is useless? I would say not, and that we should make inferences, allowing for those variations, with some special reference to the mid-point of the range of estimates.
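To make the counterfactual logic concrete, here is a minimal numerical sketch (all figures hypothetical, not drawn from any of the studies discussed): given an outcome observed under treatment and a range of treatment-effect estimates from past studies, one can back out the implied range of counterfactual outcomes, with the mid-point of the range serving as a reference.

```python
import numpy as np

# All numbers are hypothetical, for illustration only.
observed_with_treatment = 103.5  # outcome observed after treatment (e.g., fever)

# Treatment-effect estimates from past studies; effects vary across
# individuals (or economies), so we carry the whole range, not one number.
effect_estimates = np.array([0.8, 1.0, 1.5, 2.0, 2.7])

# Implied counterfactuals: what would have happened absent treatment
# (aspirin lowers fever, so the no-treatment outcome is higher).
counterfactuals = observed_with_treatment + effect_estimates

midpoint_effect = (effect_estimates.min() + effect_estimates.max()) / 2
print("Counterfactual range:", counterfactuals.min(), "to", counterfactuals.max())
print("Counterfactual at mid-point of estimates:",
      observed_with_treatment + midpoint_effect)
```

The point of carrying the whole range rather than a single number is precisely that the effect varies across cases; the inference is a band, not a point.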
The Models and the Scenario Assumptions
Once we have determined the right way to proceed, it makes sense to think a little about the models. First, consider which models have not been included. There are the Eichenbaum and Christiano and Laxton et al. models (see this post for discussion), as well as the Hall model (see this paper presented at the last Brookings Panel on Economic Activity). Those papers present substantially larger multipliers, and several have New Keynesian elements in a DSGE framework; in other words, they are DSGEs like the model used by Professor Taylor (see the ECB working paper for the full explanation of their approach). Why the difference? I think it has to do with, among many other things, the assumptions regarding the conduct of monetary policy (a point that Brad DeLong makes with respect to the Cogan et al. paper).
Another issue: how are these models fitted? The macroeconometric models cited in the NYT article are estimated with judgmental factors incorporated (a very nontechnical description of the Macroeconomic Advisers model is here). How are DSGEs fitted? The parameters are tweaked until the impulse-response functions conform to priors and data (see Camilo Tovar's overview of the use of DSGEs in policy institutions here). (And the Barro model is estimated over a very specific sample, none of which comes after 1950; see comments to this post.)
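As a stylized illustration of the impulse-response matching idea (a toy exercise, not any of the cited models): choose the model parameter that brings a simple model's impulse response as close as possible to a target response taken from priors or data.

```python
import numpy as np

# Target impulse response the model should reproduce
# (hypothetical -- in practice it comes from data or priors).
target_irf = np.array([1.0, 0.7, 0.49, 0.34, 0.24])

def model_irf(rho, horizons=5):
    """IRF of a toy AR(1) model: the response at horizon h is rho**h."""
    return rho ** np.arange(horizons)

# Grid search: pick the persistence parameter minimizing the squared
# distance between the model's IRF and the target IRF.
grid = np.linspace(0.0, 0.99, 100)
losses = [np.sum((model_irf(r) - target_irf) ** 2) for r in grid]
rho_hat = grid[int(np.argmin(losses))]
print(f"Calibrated persistence: {rho_hat:.2f}")
```

Real DSGE estimation is of course far richer (Bayesian methods, many parameters, many responses), but the logic of disciplining parameters by how well implied responses match targets is the same.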
Once the methodological approach is resolved, one still needs an understanding of where the specific models, and the model conclusions, came from, and what assumptions drive the results. That's why I often refer to surveys of multipliers (somebody should do a meta-study) and to ranges of multipliers (as the CBO does) in order to draw conclusions regarding the impact of any policy.
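A back-of-the-envelope sketch of why ranges matter (the spending figure and multipliers below are illustrative placeholders, not the CBO's published numbers): the implied output effect of a given outlay varies widely across a survey-style range of multipliers.

```python
# Hypothetical outlay and a survey-style range of multipliers.
spending = 300.0  # billions disbursed; illustrative, not an actual figure

multipliers = {"low": 0.5, "central": 1.0, "high": 1.6}  # illustrative range

for label, m in multipliers.items():
    print(f"{label:>7}: implied output effect = {spending * m:.0f} billion")
```

With that range, the same package implies anything from a modest to a sizable output effect, which is why reporting a single multiplier can mislead.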
At the end of the post, Professor Taylor concludes:
…Moreover, in my view, the models have had their say. It is now time to look at the direct impacts using hard data and real life experiences.
I think one should always look to the data. But given that the stimulus only started in 2009Q2, and we have not yet seen the advance 2009Q4 release, I am less than optimistic that we can yet tease out the effect of the stimulus using, for instance, SVAR approaches.
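For concreteness, here is a minimal sketch of the kind of SVAR exercise meant here, run on simulated data precisely because we do not yet have enough actual post-stimulus observations. It uses a standard recursive (Cholesky) identification with spending ordered first, a common though not unique identifying assumption.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulate a toy two-variable system: government spending growth (g)
# and output growth (y), where g shocks feed into y. Entirely artificial.
T = 200
g = np.zeros(T)
y = np.zeros(T)
eg = rng.normal(size=T)
ey = rng.normal(size=T)
for t in range(1, T):
    g[t] = 0.5 * g[t - 1] + eg[t]
    y[t] = 0.4 * y[t - 1] + 0.3 * g[t] + ey[t]

data = pd.DataFrame({"g": g, "y": y})

# Fit a VAR and compute orthogonalized impulse responses, identifying the
# spending shock by ordering g first (recursive/Cholesky identification).
results = VAR(data).fit(maxlags=2)
irf = results.irf(10)
print("Response of y to a one-s.d. g shock:")
print(irf.orth_irfs[:, 1, 0])  # dimensions: horizon x response-var x shock-var
```

With only two or three quarters of actual post-stimulus data, an exercise like this has essentially no power, which is the point of the caveat above.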
Multipliers from the Great Depression
I end on a slightly different note for the New Year (and the new decade), namely the Miguel Almunia, Agustin S. Benetrix, Barry Eichengreen, Kevin H. O'Rourke, and Gisela Rua estimates of the government spending multiplier during the 1929-39 period, for a panel of 27 countries. From their paper (h/t Paul Krugman):
Figure 14 presents the responses to a shock to defence spending. It shows that innovations in this variable are expansionary. This shock explains, on average, 6 per cent of the forecast error variance of the GDP equation in a five-year horizon. Defence-spending multipliers are 2.5 on impact and 1.2 after the initial year. These are at the upper end of the range of multipliers estimated using modern U.S. public spending data. The absence of a fiscal policy effect on output during the 1930s does not reflect the absence of a positive fiscal policy multiplier, it would appear. Note that this is also the conclusion of Romer (1992) in her calibration exercise for the United States in the 1930s.
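To put the quoted numbers to work, a small arithmetic sketch (the size of the spending shock is hypothetical; the multipliers are the ones quoted above):

```python
# Illustrative application of the Almunia et al. estimates.
delta_g = 10.0            # increase in defence spending; hypothetical figure
impact_multiplier = 2.5   # effect on output in the impact year (quoted)
year2_multiplier = 1.2    # effect after the initial year (quoted)

print(f"Output effect on impact:      {impact_multiplier * delta_g:.1f}")
print(f"Output effect after year one: {year2_multiplier * delta_g:.1f}")
```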
Food for thought: empirical evidence, based on data from a period when nominal interest rates were low… and no wartime rationing was in place.
Happy New Year to all Econbrowser readers!