Empirical evidence on Inter-war multipliers
There is a growing literature that seeks to evaluate multipliers during periods of slack or at the zero lower bound.[1] In the former case, threshold regression has been used. For the latter, it is more difficult since our post-war experience is restricted to the last five years. So it is natural to appeal to cross-country data from the last episode of low interest rates, the 1930s. From Miguel Almunia, Agustín Bénétrix, Barry Eichengreen, Kevin H. O’Rourke and Gisela Rua, “Lessons from the Great Depression,” Economic Policy (2010):
…They suggest that fiscal policy made little difference during the 1930s because it was not deployed on the requisite scale, not because it was ineffective. They suggest a positive impact of government expenditure on GDP during the interwar period, with substantial fiscal multipliers: for example, the first set of VAR exercises suggested that these were 2.5 on impact and 1.2 after one year. Where significant fiscal stimulus was provided, output and employment responded accordingly. Where monetary policy was loosened, recovery occurred sooner. In the VARs in differences, we found that central bank discount policy was effective in boosting GDP. These results are less robust than those for fiscal policy, but again we think that the implications are clear. The most successful economies during the 1930s were those whose governments pursued the least ‘orthodox’ policies.
I find these results of particular interest, given the strong parallels in critiques between the 1930s and now. The baseline impulse response functions are shown below:
Moreover, these results pertain to a period not characterized by rationing (e.g. WWII).
It has been argued that fiscal policy is unlikely to boost output today because it did not work in the 1930s. … But, as we show, fiscal policy where applied worked in the 1930s, whether because spending from other sources was limited by uncertainty and liquidity constraints or because with interest rates close to the zero bound there was little crowding out of private spending. Previous studies have found no effect of fiscal policy, not because it was ineffectual, but because it was hardly tried (that is, the magnitude of the fiscal impulse was small).
While there are other studies that have encompassed the interwar period by using threshold regressions, this is to my knowledge one of the few papers that deals with the special characteristics of a liquidity trap by expanding the sample across countries.
Ungated version of the paper here; additional discussion of the ZLB issue here.
Addendum: Reader Jeff points out that discount rates were not at zero during the 1930s. However, short-term rates for surplus funds for financial institutions were substantially lower than discount rates. Here are US and UK series.
Figure 1: Short term rates for surplus funds of financial institutions. Source: Measuring Worth.
Menzie: Fact-free analysis check: Why do you think the 1930s was the “last episode of low interest rates”? A simple check of Figure 8 in the Almunia paper would show many non-zero rates.
But again this is all beside the point. Your Keynesian tenancies are causing you to fixate on interest rates as the key variable in monetary policy. The real question is whether or not the Fed has the ability to engage in open-market operations. Friedman and Schwartz answered that question for the 1930s long ago, and the existence of open-ended quantitative easing answers the question for the current situation.
Jeff: Thank you for your comment. I should have clarified for you the fact that the discount rate was above the excess funds rate (something related to Bagehot and the idea of lending freely at a penalty rate — but I don’t expect you to know history). So, I have now added in the post time series plots of interest rates; I don’t know if you count 0.4% (US, 1934) as low or not, but data are data – despite my alleged Keynesian “tenancies” (as you write).
The Professor delivers a couple of nice counter punches to his critic.
Menzie,
With interest rates at such a low level, how much do you plan to borrow this year? Since artificially low interest rates are so good for the economy, I would expect you to plan to borrow a lot!
Let us try simplicity
« Financial reviews are sharing their considerations with regard to the basis and founding principles that should be retained for driving the next prosperity cycle. They all agree on a balance between producers’ output prices and consumers’ real disposable incomes.
They all recognise that cash settlement, as opposed to credit, should be the driving price of transactions for goods and services.
The consequences of a new equilibrium of real incomes and producers’ prices would be detrimental to the illusion of money but would ensure a better distribution of incomes and employment stability among the workforce. The actual loss in purchasing power in the domestic economy of the USA is estimated at 6 billion USD. »
The analyst, 1930
A little more complex, an experiment in vitro.
In the absence of international reserve currency status, the Russian rouble in 1996 was heavily depreciated by the international financial markets; domestic interest rates were lifted in 1993 up to 200% and hovered around 50% for years afterwards. An asymmetry: asset prices were cheap and interest rates were high.
As for the effectiveness of the fiscal multiplier, in 1930 the USA was a net exporter of capital and public debts were gradually loading the accounts.
So you don’t have to be “Keynesian” to understand that multipliers should usually be 1 or more; you just need to understand arithmetic and addition. Government spending will add one-for-one to GDP. It’s part of the definition: Y = C + I + G + X, note the “G,” duhhh.
So start with a factor of one. Then you can add second- and third-round effects RAISING the multiplier. All anti-Keynesian losers can tune out here, but the theory and the very name suggest that more spending produces more income that gets spent. Hence multiplier, not divider.
If you really are at full employment, or are delusional about “Ricardian Equivalence,” you can argue that knock-on second- and third-round effects offset part of it, meaning crowding out and so on. So your point about slack is good: the multipliers will tend to be higher, or not offset in any way, with “slack.” It’s a rare day lately when there is no slack.
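The respending rounds the comment above describes are just a geometric series: a $1 increase in G, a fraction MPC of which is spent again each round, sums to 1 / (1 − MPC). A minimal sketch; the MPC value is made up for illustration:

```python
# Hypothetical illustration of the textbook multiplier arithmetic:
# each round, a fraction MPC of the new income is re-spent.
def spending_multiplier(mpc: float, rounds: int = 1000) -> float:
    """Sum the successive spending rounds of a $1 increase in G."""
    total = 0.0
    injection = 1.0
    for _ in range(rounds):
        total += injection
        injection *= mpc  # fraction re-spent in the next round
    return total

# With an illustrative MPC of 0.6, the rounds converge to 1 / (1 - 0.6).
print(round(spending_multiplier(0.6), 4))  # → 2.5
```

The "second and third round effects" are the terms of this series; crowding out, discussed below in the thread, amounts to subtracting offsetting terms from it.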
ppcm – There is no requirement that the multiplier be greater than 1, or, in fact, even positive. Thus, there can easily be spending that crowds out more economic activity than it creates. Taken to an extreme, we can imagine an unemployment insurance payment that pays more than the prevailing average wage; what would you expect the impact of such a program to be on the aggregate productivity of the community where it is in force? In what situations might such a system be desirable (or undesirable)? How would this compare to programs like non-production farm subsidies?
Menzie: Not exactly sure where you get 0.4%. The source you cite lists the US 1934 rate as 1%. Low–yes. Zero–no. The data also show that the rate stayed at 1.00 from 1937-1945. You trust this?
And thank you for the spell check. To avoid confusion, I meant to say “Keynesian tendencies.” It’s telling that you chose to address typographical points rather than substantive ones. It must make you feel good to be right about something. Unfortunately you’re an economist and not an editor.
Jeff: I checked, and I did make a mistake: it’s 0.63% in 1934, and 0.35% in 1935 (US Short-Term Rate: Surplus Funds, Consistent Series). You cited 1%, which is from the contemporary series; if you read the user guide, you’d understand that the consistent series is the more appropriate one.
Now, if you have questions about the data’s reliability, then I’d welcome your alternative. My guess is that Lawrence Officer knows a heck of a lot more than you and I combined about the historical data.
You guys are focused on how to stimulate the economy. The prior question is when to stimulate the economy. Keynes said governments should run a fiscal surplus when the economy is doing OK. So as to maintain credit worthiness for the government – to be used in the next downturn.
Apparently those in charge tend to think the economy needs stimulating always. But don’t call that being a Keynesian.
Once again, the authors of this paper assume that government spending is essentially exogenous in the way they order the VAR shocks. That’s problematic, as I (and many others) have noted before. It’s better in this case since they use defense spending as their measure of government spending. Defense spending is more credible as an exogenous variable.
Theory without evidence is meaningless. But econometric studies without theory can be just as meaningless. If we look at the new Keynesian models that predict spending multipliers, we see that the logic is fundamentally different from the textbook Keynesianism deceptively taught to undergraduates. In the modern new Keynesian models, the problem with the zero interest rate bound is that nominal interest rates need to be negative to be consistent with consumer preferences for current versus future consumption. However, since the zero bound prevents nominal interest rates from being negative, for a given expected inflation rate, the real rate of interest is too high. That means that consumers are shifting current consumption to future consumption. The theoretical reason government spending can have a multiplier effect is that it raises expected inflation, thus lowering the real interest rate and shifting consumption from the future to the present.
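The mechanism described in this paragraph reduces to Fisher-equation arithmetic: with the nominal rate pinned at zero, only higher expected inflation can push the real rate down. A toy illustration (the inflation numbers are made up):

```python
# At the zero lower bound the nominal rate i is stuck at 0, so the
# real rate r = i - expected_inflation falls only if expected inflation rises.
def real_rate(nominal: float, expected_inflation: float) -> float:
    return nominal - expected_inflation

i = 0.0  # nominal rate pinned at the ZLB
print(real_rate(i, 1.5))  # expected inflation 1.5% → real rate -1.5%
print(real_rate(i, 2.5))  # expected inflation 2.5% → real rate -2.5%
```

On this logic, a stimulus that "works" at the ZLB should leave a visible footprint in expected-inflation data, which is the check performed next.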
This econometric study would be more persuasive if it showed that defense spending raised expected inflation when rates are near zero. That’s very hard to do in a cross section of countries during the 1930s, but, since this paper is designed to be relevant for policy today, we should fact check the most recent stimulus on this point.
The table below shows forecasts of expected inflation by professional forecasters surveyed by the Philadelphia Fed at a 1-year and 10-year horizons.
Year  Quarter  1-year       10-year
2006  1        2.42946723   2.5
2006  2        2.386486191  2.5
2006  3        2.628308515  2.5
2006  4        2.617979666  2.5
2007  1        2.456268914  2.35
2007  2        2.434120182  2.4
2007  3        2.228502238  2.4
2007  4        2.442422089  2.4
2008  1        2.381896099  2.5
2008  2        2.665143958  2.5
2008  3        2.523045958  2.5
2008  4        1.755961563  2.5
2009  1        1.555062803  2.4
2009  2        1.706867229  2.5
2009  3        1.802163125  2.5
2009  4        1.63291815   2.26272376
2010  1        1.788267775  2.39
2010  2        1.877465561  2.4
2010  3        1.720942578  2.3
2010  4        1.609018302  2.2
2011  1        1.724670849  2.3
2011  2        2.130371698  2.4
2011  3        2.012497703  2.4
2011  4        1.969219404  2.5
2012  1        2.074990813  2.3
2012  2        2.183089713  2.48
2012  3        2.109819493  2.35
2012  4        2.193065469  2.3
2013  1        2.1          2.3
I don’t really see any evidence for an increase in expected inflation in 2009 and 2010.
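The eyeball claim can be made precise by averaging the 1-year-ahead forecasts before and after the 2009 stimulus. The values below are rounded to three decimals from the table above; this is just arithmetic on the numbers shown:

```python
# 1-year-ahead expected inflation, rounded from the SPF table above.
pre_stimulus  = [2.429, 2.386, 2.628, 2.618,   # 2006
                 2.456, 2.434, 2.229, 2.442,   # 2007
                 2.382, 2.665, 2.523, 1.756]   # 2008
post_stimulus = [1.555, 1.707, 1.802, 1.633,   # 2009
                 1.788, 1.877, 1.721, 1.609]   # 2010

avg = lambda xs: sum(xs) / len(xs)
print(f"2006-08 mean: {avg(pre_stimulus):.2f}%")   # → 2.41%
print(f"2009-10 mean: {avg(post_stimulus):.2f}%")  # → 1.71%
```

The average fell rather than rose after the stimulus, which is the basis for the commenter's conclusion.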
One more point: to get a multiplier, it’s important in these models for the spending to come on line quickly. As I’ve mentioned before in other comments, that didn’t happen with the stimulus and it’s a problem in general for stimulus programs. You can’t increase spending fast enough when the magnitudes are large. Increases in defense spending are probably your best bet. But that also means that if you want to reduce the effect of a contractionary government spending shock most efficiently, you should remove the defense component.
Thus, if you really believe this paper, the policy conclusions are clear. It’s very important to cancel the defense cuts in the coming sequestration. But we can go ahead with the other cuts. That’s a compromise that Republicans can live with. Somebody call the White House.
Question: So France should be spending past the target?
http://telegraph.co.uk/finance/financialcrisis/9886776/France-freezes-spending-to-hit-EU-targets-as-slump-deepens.html
Rick Stryker Once again, the authors of this paper assume that government spending is essentially exogenous in the way they order the VAR shocks.
I think you misunderstood their paper. In their first VAR model they ordered the endogenous variables such that a change in defense spending is not contemporaneously affected by a feedback effect from higher output; however, in a Cholesky decomposition there is a lagged feedback effect. So I guess I don’t understand your objection to the way they did things. If you want to identify the model, then you need to make the number of parameters equal to the number of equations. A Cholesky decomposition makes minimal structural assumptions. Would you rather they used a structural VAR model? Given some of your comments I would have thought that a more structural VAR would have been the last thing you would have wanted.
If we look at the new Keynesian models that predict spending multipliers, we see that the logic is fundamentally different from the textbook Keynesianism deceptively taught to undergraduates.
Is that a criticism of undergraduate Keynesian models or are you recognizing the possible inapplicability of NK models at the ZLB?
However, since the zero bound prevents nominal interest rates from being negative, for a given expected inflation rate, the real rate of interest is too high. That means that consumers are shifting current consumption to future consumption.
Huh? Fiscal stimulus is an alternative to monetary policy. Consumer expectations of higher inflation would help stimulate current consumption and would lower the real interest rate, but this does not mean the ZLB dampens the effectiveness of fiscal policy. In a standard NK DSGE model the demand shock is a function of the nominal interest rate less inflation and expectations about future real economic activity. Fiscal stimulus is part of those expectations about future real economic activity.
As I’ve mentioned before in other comments, that didn’t happen with the stimulus and it’s a problem in general for stimulus programs. You can’t increase spending fast enough when the magnitudes are large.
This is wrong on three counts. First, the ARRA stimulus did happen relatively quickly. It’s not expenditures that stimulate economic activity, it’s contract awards…and those happened pretty fast. Second, the quickness of the stimulus is only important if the recession is of normal duration. Finally, going back to the NK DSGE formulation, a long drawn out fiscal stimulus program should have effects on the expectations of future output; therefore, a stimulus program that was perceived ex ante as being too much for too long a period should increase inflationary expectations. As you yourself pointed out, inflationary expectations are pretty low. Conclusion: the ARRA did not have enough spending built into the tails.
It’s very important to cancel the defense cuts in the coming sequestration. But we can go ahead with the other cuts.
If you only want to keep spending that can be executed quickly, then you would want to restore funding to teachers, highway projects, and O&M defense spending; but you would not want to increase defense spending in other appropriation categories because those other categories have long budget execution lags built into them. This was one of the reasons that I refused to participate in an OSD brainstorming session during the Obama transition period…everyone was influenced too much by Martin Feldstein’s poorly reasoned op-ed piece in the WSJ. Feldstein did not seem to understand the DoD budget and execution process.
2slugbaits,
No, I don’t misunderstand the paper. They are using the standard identification strategy of Blanchard and Perotti that I have critiqued before. I’m certainly not alone in raising questions about it. Indeed, the authors acknowledge the controversy, saying:
“We start by estimating government expenditure multipliers in VAR models, using recursive ordering to identify shocks. Since assumptions regarding ordering are central to the identification strategy, it is important to acknowledge that there is less than complete consensus on the appropriate ordering when the impact of total government spending on output is being considered. The common assumption is that government spending does not respond to output in the current period – in other words, that contemporaneous government spending is exogenous to output. When, however, those responsible for government spending decisions take them with future output movements in mind – since they worry about the depth of the impending recession – this ordering will be problematic. It can be argued that during the Great Depression, before the triumph of Keynesianism and when there was little recognition of how spending decisions might be used to offset changes, both contemporaneous and future, in output and employment, this assumption is defensible. But, regardless of period, the assumption is strong.”
On your second question, I’m merely performing a public service by pointing out to readers of this blog who are not familiar with academic economics that the debate over the effectiveness of government spending when interest rates are at the zero lower bound has nothing to do with the Keynesian model found in undergraduate textbooks. I’ve noticed from the comments that many readers believe that. But modern new Keynesian models work differently, and I think it’s important to know what the real arguments are before we can assess the evidence.
To see what a real new Keynesian model looks like, and to answer your third point, I recommend Christiano, Eichenbaum, and Rebelo’s 2011 JPE article “When Is the Government Spending Multiplier Large?” available here:
http://faculty.wcas.northwestern.edu/~lchrist/course/Korea_2012/JPE_2011.pdf
It’s quite a clear paper and model, which shows how a large multiplier is possible in a fully consistent and rigorous macro model. Unlike the textbook Keynesian model, in the modern model government spending in the presence of the zero interest rate lower bound works by increasing expected inflation.
Given that fact, a simple gut check on whether the stimulus worked consistently with these models is just to check expected inflation in 2009 and 2010. By that measure, the stimulus didn’t work.
Interestingly enough, the authors agree with me about the ineffectiveness of the stimulus despite their point that large government spending multipliers are possible. They took a DSGE and examined whether it could explain the crisis and its aftermath. Here is what they have to say about the stimulus:
“Despite the fiscal stimulus plan enacted in February 2009 (the American Recovery and Reinvestment Act), total government consumption rose by only 2 percent. Total government purchases, which include both consumption and investment, rose by even less. This result reflects two facts. First, a substantial part of the stimulus plan involved an increase in transfers to households. Second, there was a large fall in state and local purchases that offset a substantial part of the increase in federal government purchases.”
I’ve made exactly those points in comments. The authors go on to conclude:
“We conclude by noting that, consistent with the data, in our simulations, government purchases rise by only 2 percent for 11 periods. Recall from figure 5 that the peak value of the multiplier in Altig et al.’s model is 2.3. So the rise in government purchases accounts for, at most, a 0.7 percent rise in annual GDP. The modest contribution of government purchases to the recovery reflects the very modest increase in government spending rather than a small multiplier.”
Yes, the conclusion I’ve come to a few times in my comments on this blog. And since their paper is about government spending, they did not make the other point I’ve made a few times in comments: temporary tax cuts, which were a substantial portion of the stimulus, have low multipliers in new Keynesian models.
Menzie, 2slugbaits, and…well, you know who you are out there. You guys have to face the facts: the stimulus is one of many failed policies of a failed administration. You voted for it; you continue to defend it despite the evidence; and you want to do it again. At this point, you own the failure.
Every time I read a Rick Stryker comment I just know it’s going to be some combination of misunderstanding econometrics and moving the goal posts. It’s like repeating the same Christmas morning every day, but it’s the Christmas where your parents were too busy fighting over their divorce to get you presents.
AWH,
The multiplier is never one-for-one. There are always additional inefficiencies with one methodology over another. In truth, the multiplier for government spending is almost always less than one because of government inefficiencies: by definition, government intervention always distorts (adds cost to) the normal demand of the market.
Rodrigo,
You can disagree with what I say, but I always make real arguments which I support with evidence that I put in my comment or link to. I’ve gotten pretty used to expecting very little in return from the other side, though. You’ve come back with the usual ad hominem attack without explaining in any way how I fail to understand econometrics or have moved the goal posts.
I’m always happy when it turns to this. You are hurling the insults because you are frustrated and don’t know how to stop me. You know I’m making a well supported and persuasive argument that the stimulus was bad policy and you are afraid other people might think so too. You worry that readers of this blog might actually see that “Lessons From the Great Depression” is just another desperate life line being thrown to a failed stimulus policy that is drowning in a sea of theory and empirical data. When I see the ad hominems coming my way, I know I’m winning.
Rick Stryker No, I don’t misunderstand the paper.
If you say so. But as the authors pointed out, reordering the variables did not change the basic results. The ordering in a Cholesky decomposition can be important, but it isn’t always important…particularly if the composite error terms from the reduced form are only weakly correlated. Do you think they should have ordered the variables differently?
the debate over the effectiveness of government spending when interest rates are at the zero lower bound has nothing to do with the Keynesian model found in undergraduate textbooks.
This is wrong. The paper that Menzie referenced wasn’t even an old-fashioned macro structural model; it was a stripped-down VAR. As to modern NK models, you don’t have to look all that hard inside a DSGE model to find the equivalent of an IS curve. Add a monetary policy (MP) curve à la Romer and you’re not that far off. What an NK DSGE model brings to the discussion are expectations and intertemporal optimization. Increasing the complexity of the model does not necessarily improve the economic intuitions that come out of the model. Modern NK models were bred to explain the stagflation of the 1970s. Just because a model explains 1979 well does not mean it explains 2009 or 1929 very well.
As to the JPE article, once again I’m afraid you may have misunderstood the authors’ point. To start with, much of their project is to try to find an NK DSGE model that could be calibrated to yield the same intuitions as an Old School Keynesian model.
Unlike the textbook Keynesian model, in the modern model government spending in the presence of the zero interest rate lower bound works by increasing expected inflation.
That is not what the JPE model does. In the JPE model an increase in government spending increases output, which increases marginal costs, which increases inflation. There are two additive terms in the DSGE pseudo-IS curve. One of those terms is government spending. A higher inflation rate that reduces the real interest rate (notice that because we’re talking about the real interest rate we’re still on the DSGE IS curve) facilitates a multiplier greater than 1.0. It’s also possible that monetary policy could explicitly target a higher inflation rate by adopting a zero nominal rate and this could increase aggregate demand. And some modern models do work this way, but notice that now you’ve shifted to the MP curve. In any event, that is not what the JPE model does. The JPE model focuses on the fiscal side.
the authors agree with me about the ineffectiveness of the stimulus despite their point that large government spending multipliers are possible. They took a DSGE and examined whether it could explain the crisis and its aftermath. Here is what they have to say about the stimulus:
So am I to understand that you think ARRA should have been much bigger with a lot more spending? Because that’s exactly what the authors in the JPE paper are saying. If that’s what you mean, then welcome comrade! I’ll let Menzie speak for himself, but I for one never argued that the spending in ARRA was up to the job. It could have been worse if we had followed what the Tea Party crazies wanted. But even though the scale of the stimulus was way too small, that does not mean the underlying modeling behind ARRA was wrong.
One last thing. You appear to have misunderstood what the JPE authors mean by “timely” fiscal spending. Your comments suggest that you interpret “timely” to mean “quick.” This is not what the authors mean. They are using “timely” in the sense of not being “mistimed” such that the fiscal stimulus kicks in at just the wrong time when the Fed is likely to move away from the ZLB. Given the depth of the Great Recession any concerns about fiscal stimulus being mistimed are beside the point.
2slugbaits,
You’ve missed all my points yet again. I’ll try once more.
1) My critique of the Blanchard and Perotti identification method is not about the ordering per se. I’m raising a more fundamental issue, which is that we need exogenous observations of government spending increases to measure the multiplier. The VAR identification procedure tries to do that, but I and others are skeptical that it really works. As I pointed out, the authors themselves acknowledge the controversy. See for example Valerie Ramey’s paper
http://qje.oxfordjournals.org/content/early/2011/03/21/qje.qjq008.full.pdf+html
and Leeper et al.
http://www.imf.org/external/pubs/ft/wp/2012/wp12153.pdf
for recent analysis of some of these issues.
2) Sorry, but you need to go back and study basic economics if you think the textbook IS-LM model and the modern NK models are saying the same thing about the multiplier given that interest rates are at the zero lower bound.
In the old textbook Keynesian IS-LM model, in a simple closed economy the multiplier is determined by the marginal propensity to consume out of disposable income. When government spending goes up, the multiplier effect is reduced because real interest rates rise, reducing investment (the so-called “crowding out” effect). However, in a liquidity trap, the LM curve is flat when rates are very low and interest rates are not responsive to an increase in national income. So there is no crowding out effect and the multiplier is bigger.
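The contrast drawn in the IS-LM paragraph above can be checked with a few lines of algebra. Below is a minimal linear IS-LM sketch; all coefficients are made up for illustration, and the liquidity-trap case is approximated by making money demand extremely interest-elastic (a near-flat LM curve):

```python
# Minimal linear IS-LM sketch (all coefficients are illustrative).
# IS:  Y = A + c*Y - b*r + G     (c = MPC, b = interest sensitivity)
# LM:  r = (k*Y - M) / h         (large h => nearly flat LM, liquidity trap)
def equilibrium_output(G, A=100, c=0.6, b=20, k=0.2, M=50, h=10):
    # Substitute LM into IS and solve for Y analytically.
    return (A + G + b * M / h) / (1 - c + b * k / h)

normal = equilibrium_output(G=11) - equilibrium_output(G=10)
trap   = (equilibrium_output(G=11, h=1e9)    # near-flat LM curve
          - equilibrium_output(G=10, h=1e9))
print(f"multiplier with crowding out: {normal:.2f}")  # below 1/(1-c)
print(f"multiplier in liquidity trap: {trap:.2f}")    # ≈ 1/(1-0.6) = 2.5
```

With an upward-sloping LM, the rise in r trims the multiplier (here to 1.25); flattening the LM curve restores the full 1/(1 − MPC) value, which is exactly the "no crowding out" case described above.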
In the modern NK model, you have explicit treatment of tastes and preferences, dynamic choice, rational expectations, etc. You get different more nuanced results. In these models, as I have already explained, the problem with the zero lower nominal interest rate bound is that nominal interest rates can’t fall enough to reduce real rates enough. The multiplier in these models works by raising expected inflation, which reduces real interest rates.
Contrary to your assertion, increasing the complexity of the model does indeed change the economic intuition coming out, which has practical implications for designing a stimulus as well as for testing the model empirically. In the old model, the main thing you have to worry about is the size of the stimulus. The stimulus needs to be big enough to plug the hole in aggregate demand.
In the new Keynesian model, not only does the stimulus have to be big enough, but you also have to worry about the details of how you do the stimulus. You shouldn’t do temporary tax cuts or transfers, because they have low multipliers. You need to increase government purchases and you need to worry about the timing, with government purchases hitting when nominal rates are at the zero lower bound. In the new Keynesian model, twice the stimulus won’t matter at all if it’s twice the wrong kind of stimulus.
There is no debate going on right now between the old and new model. The old model has been resoundingly rejected by modern macro. Anybody who is designing a stimulus should understand modern macro.
3) Your third point is bizarre. You deny my claim that fiscal policy with zero interest rates works by increasing expected inflation in new Keynesian models and then you point out that it works by increasing inflation!!
4) In your fourth point, you missed it again. It’s only about the size if you have the old textbook Keynesian model in mind. The new Keynesian models emphasize that you need the right composition of stimulus spending.
I’ve said it before and I’ll say it again. The Administration designed the stimulus incompetently. They ignored the insights of decades of macro research and loaded up on temporary tax cuts and transfers. The policy failed. Doing even more of the same wouldn’t make any difference. Moreover, there is no real proof that had the Administration done the stimulus competently it would have worked. I don’t believe that a study such as the one Menzie posted establishes the case for the multiplier being greater than one empirically.
We shouldn’t be so worried about stimulus. That failed policy can’t be undone. But there is still time to correct the other damaging policies of the Administration, such as Obamacare and Dodd-Frank. That’s what we should be focused on.
“You’ve come back with the usual ad hominem attack without explaining in any way how I fail to understand econometrics or have moved the goal posts.”
What would be the point of that exercise? 2slugs has been trying to set you straight for months and it hasn’t worked yet.
You just finished arguing that the administration should “own the failure” of the stimulus because it wasn’t large enough and had a sub-optimal spending mix. Basically, that it wasn’t close enough to the progressive proposal. You’ve finally argued yourself around to something none of the progressives you’re debating will disagree with. Congratulations!
Rodrigo,
The point of the exercise would be to give an assist to 2slugbaits.
I’m afraid that like 2slugbaits you’ve missed the point if you think I’m defending the progressive view. I’ve been saying the progressive view is wrong.
I’ve been distinguishing the textbook IS-LM model from modern new Keynesian models for a reason. The progressive view, which seems to be led by Krugman, is that IS-LM is a good enough model with insights that apply to today’s problems. You just need to do a back of the envelope like Krugman does to see how much aggregate demand you need to replace and you’ll see that would have needed a much larger stimulus.
But that’s blogosphere economics. Unfortunately, Krugman is seriously misleading his readers on the nature of modern macro research. People with a background in economics know better but most of Krugman’s readers can’t tell the difference.
The modern new Keynesian models have more specific policy advice for stimulus, advice that was ignored by the Administration. I don’t think that happened because their economists didn’t know better. I think the administration economists weren’t listened to. All of these policies, stimulus, Dodd-Frank, Obamacare, etc. were decided with breakneck speed, without proper consideration and due diligence, and over the objections of a steamrolled opposition. The decisions were made according to political considerations in Congress, with the Administration rubber stamping them.
Once you come to terms with the requirements for stimulus to work according to the modern models, you have to realize that actually executing an effective stimulus is very hard, given the political and technical constraints of getting large amounts of government purchases going quickly. But we never had that debate.
Rick Stryker
I’m raising a more fundamental issue, which is that we need exogenous observations of government spending increases to measure the multiplier.
Referring to the paper Menzie highlighted, the authors did in fact run a version of their VAR model in which defense spending was purely exogenous…and they got the same result.
The VaR identification procedure tries to do that, but I and others are skeptical that it really works.
No, that is not the point of a VAR. A VAR treats all variables as endogenous and estimates them in reduced form. VAR models were born out of Sims’s critique of old-fashioned structural models with subjectively determined exogenous variables. The identification procedure in a Cholesky-decomposition VAR selects one of the endogenous variables as contemporaneously exogenous, but that variable is endogenously determined at all lags. This is what the authors were doing in the first, baseline VAR model. In a third model they set defense spending as a purely exogenous variable and got the same results.
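To make the ordering point concrete, here is a minimal sketch of what a Cholesky decomposition does in a two-variable VAR ordered (G, Y). All the numbers are made up, and the 2x2 case is worked by hand rather than with an econometrics package:

```python
import math

# Hypothetical reduced-form residual covariance matrix for a
# two-variable VAR ordered (G, Y): spending first, output second.
s_gg, s_gy, s_yy = 1.0, 0.5, 1.0

# Lower-triangular Cholesky factor L of Sigma (Sigma = L L'),
# computed by hand for the 2x2 case.
l11 = math.sqrt(s_gg)            # impact of a G shock on G
l21 = s_gy / l11                 # impact of a G shock on Y
l22 = math.sqrt(s_yy - l21**2)   # impact of a Y shock on Y

# Under this ordering, a structural Y shock moves G by exactly zero
# on impact (G is "contemporaneously exogenous"), while a G shock
# moves Y by l21 on impact. In all lagged periods both variables
# respond to both shocks. Reversing the ordering reverses the roles.
print(l11, l21, l22)  # 1.0 0.5 0.866...
```

The ordering is the identifying assumption: it decides which variable gets to move the other within the period, which is exactly the choice the authors made in their baseline model.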
When government spending goes up, the multiplier effect is reduced because real interest rates rise, reducing investment, the so-called “crowding out” effect.
Nope. Wrong again. In the old-school IS-LM model the first-order effect of government spending is to increase output by whatever the multiplier is. The rise in interest rates is a response to the positive shock in the real goods market. Crowding out is a second-order effect, and it bites only when there is no more slack in the economy.
In a liquidity trap, the LM curve is flat when rates are very low, and interest rates are not responsive to an increase in national income. So there is no crowding-out effect and the multiplier is bigger.
No, in a liquidity trap the LM curve is flat, which means that additional monetary stimulus has no effect…people just stuff more cash in the mattress. The only effective way to increase aggregate demand is via the IS curve: either through tax cuts (which flatten the IS curve) or government spending (which shifts the IS curve). That’s how the old-school IS-LM model works. You seem to have the cart before the horse.
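The crowding-out arithmetic behind this exchange is easy to show with a textbook linear IS-LM system. All parameter values below are illustrative, not estimates of anything:

```python
# Back-of-the-envelope IS-LM multipliers (illustrative parameters).
# IS: Y = c*Y + I0 - b*r + G;  LM: r = (k*Y - M)/h.
c = 0.6   # marginal propensity to consume
b = 20.0  # interest sensitivity of investment
k = 0.25  # income sensitivity of money demand
h = 50.0  # interest sensitivity of money demand

# Normal times: r rises with Y along the LM curve, so some investment
# is crowded out and the multiplier is damped by the b*k/h term.
multiplier_normal = 1.0 / (1.0 - c + b * k / h)

# Liquidity trap: the LM curve is flat (r pinned), the interest-rate
# feedback drops out, and the full Keynesian multiplier applies.
multiplier_trap = 1.0 / (1.0 - c)

print(multiplier_normal)  # 2.0
print(multiplier_trap)    # 2.5
```

The only difference between the two cases is whether the interest rate responds to income, which is the whole substance of the "no crowding out at the zero bound" claim.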
In these [NK] models, as I have already explained, the problem with the zero lower bound on nominal interest rates is that nominal rates can’t fall enough to reduce real rates enough.
The NK models are not unique in that regard. In fact, Krugman used the IS-LM construct to make exactly that same point because it is much clearer in the IS-LM framework.
In the new Keynesian model, not only does the stimulus have to be big enough, but you also have to worry about the details of how you do the stimulus.
Again, this is just wrong. Older Keynesian models distinguished between transfer multipliers and spending multipliers. “Spending” in the context of a Keynesian model means the purchase of goods and services.
You deny my claim that fiscal policy with zero interest rates works by increasing expected inflation in new Keynesian models and then you point out that it works by increasing inflation!!
Then I wasn’t clear enough. What I said was that fiscal policy does not work exclusively through inflation expectations when you’re talking about the real goods side of things. On the monetary policy side, inflation expectations are dominant.
They ignored the insights of decades of macro research and loaded up on temporary tax cuts and transfers.
I believe the tax cuts were inserted because that’s what was needed to get the 60th vote in the Senate. If Sen. Franken had been seated and if the Democrats had 60 liberal votes, then I’m pretty sure the stimulus would have been larger and had more spending and less in the way of tax cuts.
Finally, as you said, the JPE paper that you cited did not use a VAR approach. But what you failed to point out is why they didn’t use a VAR approach. According to the authors, the impact multipliers of a VAR are biased towards zero for the same reason Milton Friedman explained regarding your home thermostat. It didn’t have anything to do with any of the issues you mentioned; it was simply because they felt the multipliers were biased towards zero. But in the study Menzie cited the multipliers were greater than 1.0 and significant. So even in the face of downward bias, they still got a result greater than 1.0. So much for your argument against VARs.
“So you dont have to be “keynesian” to understand that multipliers should usually be 1 or more. you just need to understand arithmetic and addition. govt spending will add one to one for GDP. Its part of its addition, its definition and nature c+i+G+x, note the “G” duhhh.”
#1 This formula is just that, a formula. It’s only an estimation of output.
#2 The rest of your post is pure drivel. If you really believe multipliers are 1+, then why not have the government spend a quadrillion dollars in a stimulus package, and we’ll all be RICH!!!
Rick Stryker: The progressive view, which seems to be led by Krugman, is that IS-LM is a good enough model with insights that apply to today’s problems.
Fifteen years ago Krugman’s analysis of Japan’s liquidity trap was dismissed by most NK economists. Today there is widespread agreement among those same economists that Krugman was right about Japan. Krugman was very clear that he only came to those insights by referring back to the old Hicks IS-LM model. And it’s significant that no one working in the NK DSGE framework was able to stumble across those same insights. Now once Krugman fleshed out those insights, all kinds of NK DSGE models were created that dotted all the i’s and crossed all the t’s. But you have to recognize that all those modeling efforts were exercises in reverse engineering. The insights came from an older tradition that was born and bred to describe a liquidity-trap world. In the NK DSGE model it wasn’t even clear how there could be a liquidity trap, because all markets cleared after each period.
The modern new Keynesian models have more specific policy advice for stimulus, advice that was ignored by the Administration. I don’t think that happened because their economists didn’t know better. I think the administration economists weren’t listened to.
Sorry, but this doesn’t pass the laugh test. If you want to argue that Christina Romer and Larry Summers expunged the ODEs and “dumbed down” the Powerpoint slides, then fine. And if you want to argue that the Administration reduced the size of the stimulus from what Romer’s analysis actually recommended, then that’s fine too. But to try and argue that Romer actually wanted a smaller and better-targeted stimulus is just over-the-top nonsense.
you have to realize that actually executing an effective stimulus is very hard…
So therefore we shouldn’t have even tried for any stimulus?
…given the political and technical constraints
Translation: given the intransigence of Sen. Mitch McConnell
…of getting large amounts of government purchases going quickly.
I dunno. A lot of the stimulus was out the door by June 2009. Sounds pretty quick to me. We certainly saw a bump in CY2009Q3 GDP, so they must have been doing something right. If there was a problem it was that the stimulus did not go far enough out and pretty much faded out after 18 months. There should have been more stimulus loaded onto the back end.
2slugbaits,
Oh boy. You’ve thrown out a long series of irrelevant and/or wrong points. I know if I answer them you’ll just throw out new ones and it will never end. I think the best way to clear all the smoke you are blowing is to pick just one point, put it under a microscope, and make you justify it. That way, it will be much harder for you to wriggle off the hook with your distracting irrelevancies and misconceptions. This will be instructive for Rodrigo as well.
In my last comment to you I said:
“I’m raising a more fundamental issue, which is that we need exogenous observations of government spending increases to measure the multiplier.”
You said:
“Finally, as you said, the JPE paper that you cited did not use a VAR approach. But what you failed to point is why they didn’t use a VAR approach. According to the authors the impact multipliers of a VAR are biased towards zero for the same reason Milton Friedman explained regarding your home thermostat. It didn’t have anything to do with any of the issues you mentioned; it was simply because they felt the multipliers were biased towards zero. But in the study Menzie cited the multipliers were greater than 1.0 and significant. So even in the face of downward bias, they still got a result greater than 1.0. So much for your argument against VARs.”
So according to you, the JPE paper avoided VaRs not for the reasons I brought up, but rather because using VaRs would bias the estimated multipliers towards zero. Now let me quote from the paper, page 81:
“The obvious next step would be to use reduced-form methods, such as identified VARs, to estimate the government-spending multiplier when the zero bound binds. Unfortunately, this task is fraught with difficulties. First, we cannot mix evidence from states in which the zero bound binds with evidence from other states because the multipliers are very different in the two states. SECOND, WE HAVE TO IDENTIFY EXOGENOUS MOVEMENTS IN GOVERNMENT SPENDING WHEN THE ZERO BOUND BINDS.(2) THIS TASK SEEMS DAUNTING AT BEST. Almost surely government spending would rise in response to large output losses in the zero-bound state. To know the government-spending multiplier we need to know what output would have been had government spending not risen. For example, the simple observation that output did not grow quickly in Japan in the zero-bound state, even though there were large increases in government spending, tells us nothing about the question of interest.”
Now, compare my statement
“I’m raising a more fundamental issue, which is that we need exogenous observations of government spending increases to measure the multiplier.”
to what I typed in all caps from page 81 of the paper:
SECOND, WE HAVE TO IDENTIFY EXOGENOUS MOVEMENTS IN GOVERNMENT SPENDING WHEN THE ZERO BOUND BINDS. (2) THIS TASK SEEMS DAUNTING AT BEST.
You claimed that the reason the paper didn’t use VaRs “didn’t have anything to do with any of the issues you mentioned,” to quote you. But it was for exactly the reason I mentioned: that you need exogenous observations of government spending, a tough problem.
You go on to claim that “it was simply because they felt the multipliers were biased towards zero” to quote you again. You got that misconception by misunderstanding footnote 2, which is (2) in the capitalized quote and reads:
“To see how critical this step is, suppose that the government chooses spending to keep output exactly constant in the face of shocks that make the zero bound bind. A naive econometrician who simply regressed output on government spending would falsely conclude that the government-spending multiplier is zero. This example is, of course, just an application of Tobin’s (1970) post hoc, ergo propter hoc argument.”
The footnote is not saying that estimates of the multiplier are biased towards zero. It is giving a particular example of how you could draw a false econometric inference that the multiplier is zero if government spending was not exogenous in a particular way. Exactly my point again.
Slugbaits, not only did you get this completely wrong, you somehow got Milton Friedman into it! I’m interested to see if you can defend yourself on this particular point without bringing up monetary policy on the planet Saturn.
Rick Stryker: Ugh! Do you understand the difference between a transfer model and a VAR? A transfer model assumes that the explanatory variable is completely exogenous and immune from any and all feedback effects. In most econometric applications this assumption is implausible, so a transfer model is clearly inappropriate.

Milton Friedman’s thermostat example is the textbook (literally) example of how to distinguish between your adjusting the thermostat and the temperature in your room. If you adjusted the thermostat in response to changes in the room temperature, then you would not be able to identify the true causal relationship between the thermostat setting and the room temperature. Friedman further argued that if you controlled the room temperature perfectly, the correlation between your adjusting the thermostat and changes in the room temperature would be zero. In fact, the only way you would be able to determine the true interaction would be if you adjusted the thermostat randomly.

It was precisely this problem of feedback and reverse causality that led Sims to develop the nonstructural VAR. The problem is that nonstructural VARs can be good at prediction, but they don’t have a lot of economic content, because the VAR errors are composites of all of the shocks from all of the endogenous variables in the VAR. A Cholesky decomposition is one (sort of) solution to this problem. The catch is that you have to set one of the variables as contemporaneously exogenous, although that variable is endogenous at all lags.

But again, if policymakers perfectly controlled spending to maintain constant output, the impact multiplier coming out of the VAR would be zero. It’s exactly the same problem Friedman was talking about in his thermostat example: if you perfectly controlled the temperature in your room, the estimated effect of your thermostat adjustments would be biased towards zero. Get it?
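The thermostat point is easy to verify with a toy simulation. The sketch below (all numbers made up, OLS done by hand) compares a regression of output on spending when spending is set randomly versus when it is set to offset every shock, as in Friedman’s perfectly controlled room:

```python
import random

random.seed(0)
TRUE_MULTIPLIER = 1.5  # assumed true effect of spending on output
n = 10_000

def ols_slope(x, y):
    """Slope of an OLS regression of y on x (with intercept)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

shocks = [random.gauss(0.0, 1.0) for _ in range(n)]

# Case 1: spending set randomly (exogenous). The regression recovers
# something close to the true multiplier.
g_random = [random.gauss(0.0, 1.0) for _ in range(n)]
y_random = [TRUE_MULTIPLIER * g + e for g, e in zip(g_random, shocks)]

# Case 2: the "thermostat". Spending is chosen to offset each shock
# exactly, so output never moves and the estimated slope collapses
# to (essentially) zero, even though the true multiplier is 1.5.
g_offset = [-e / TRUE_MULTIPLIER for e in shocks]
y_offset = [TRUE_MULTIPLIER * g + e for g, e in zip(g_offset, shocks)]

print(ols_slope(g_random, y_random))  # close to 1.5
print(ols_slope(g_offset, y_offset))  # ~0
```

That is the post hoc, ergo propter hoc trap in the JPE footnote: the better the policy feedback, the more the naive regression understates the multiplier.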
Rodrigo,
I hope that example helped to clarify what’s been going on. Rather than let 2slugbaits continue to engage in his usual obfuscation strategy of making tons of false and/or irrelevant points, I isolated just one point. I then focused the microscope on his two claims: 1) the JPE paper did not reject the use of VaR estimation for the reasons I was concerned about; and 2) they rejected VaR estimation because the authors of the JPE paper thought the multiplier estimates were biased towards zero.
I then showed just by quoting the text that both claims are clearly false. I also suggested that he’d respond with new irrelevant points in my joke about monetary policy on Saturn.
What happened? Did he provide counter evidence that yes indeed the article did not reject the use of VaRs for any reason I was concerned about? Did he demonstrate that yes indeed the authors of the JPE article rejected the use of VaRs because they thought the estimates were biased to zero?
No, he didn’t.
Instead he went into an extended riff on transfer models vs. VaR models, Friedman’s thermostat analogy, Sims’s reasons for developing VaRs, etc., etc. The point of all that smoke is to obscure his misunderstanding of the JPE article, to reassure readers about the depth of his knowledge, and to make it appear that he’s “setting me straight,” to use your words.
But clear all that smoke away and ask yourself this: can you really trust 2slugbaits’ arguments if he can come away from an article with serious misconceptions about what it says, especially when it’s so easy to check?
Rick Stryker:
Point of information: I thought VAR was short for “Vector Auto Regression” and VaR for “Value at Risk”.