I had the privilege of attending a conference in St. Louis this week on Monetary Policy under Uncertainty at which I presented a paper on the response of interest rates to changes in the fed funds target. One of the interesting themes that came up in some of the other papers concerned whether the public’s interests are best served when monetary policy follows mechanical rules as opposed to responding to events in a discretionary way. Here I report on some of the discussion of this issue from the conference.
Federal Reserve Bank of Philadelphia President Charles Plosser suggested that simple rules, such as Stanford Professor John Taylor's proposal that the Fed should mechanically raise its interest rate target when either inflation or output rises above the long-run objective, have much to recommend them: as mechanisms that constrain the actions of the Federal Reserve, they could help frame the FOMC's discussion of the various policy options and provide a common ground for discussion among people with different models and understandings of how the economy works.
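For reference, Taylor's original 1993 rule fits in a few lines of code; the 0.5 response coefficients and the 2 percent values for the equilibrium real rate and the inflation target are Taylor's own illustrative numbers.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor's (1993) rule: funds-rate target = equilibrium real rate
    + inflation + 0.5*(inflation gap) + 0.5*(output gap), all in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Inflation at 3% with output 1% above potential calls for a 6% funds rate:
# 2 + 3 + 0.5*(3 - 2) + 0.5*1 = 6
print(taylor_rule(3.0, 1.0))  # 6.0
```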
Federal Reserve Chair Ben Bernanke, who participated from Washington via video conferencing, offered a more qualified endorsement of that thesis:
A promising alternative approach … focuses on simple policy rules, such as the one proposed by John Taylor, and compares the performance of alternative rules across a range of possible models and sets of parameter values (Levin, Wieland, and Williams, 1999 and 2003). That approach is motivated by the notion that the perfect should not be the enemy of the good; rather than trying to find policies that are optimal in the context of specific models, the central bank may be better served by adopting simple and predictable policies that produce reasonably good results in a variety of circumstances.
Perhaps not surprisingly, John Taylor also argued that there are strong benefits when the actions of the central bank are easily understood and predictable by the private sector.
There was also a very interesting paper on this topic by Central Bank of Cyprus Governor Athanasios Orphanides and Goethe University Professor Volker Wieland (both former staff economists at the U.S. Federal Reserve Board). Their paper explored what happens when we try to explain the fed funds rate not with the actual values of inflation and GDP, as in Taylor's original formulation, but instead with the forecasts of inflation and GDP that the Fed provides through its semiannual Humphrey-Hawkins report. Brandeis Professor Stephen Cecchetti, former Director of Research at the Federal Reserve Bank of New York and Associate Economist of the Federal Open Market Committee, protested that these forecasts were sometimes produced in a somewhat ad hoc fashion. But Orphanides and Wieland noted that the fit of a Taylor Rule to the data improved substantially when the forecasts were used in place of actual outcomes as explanatory variables in the regression, with the adjusted R-squared increasing from 0.74 to 0.91. In particular, the forecasts explain why the Fed chose to cut interest rates a little sooner in the early phases of the recessions of 1990 and 2001, as the Fed (correctly) anticipated the downturn. On the other hand, an error in predicting the resurgence of inflation in 2003-2004 may explain some of the slowness of the Fed to raise interest rates, on which we've commented previously.
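As a rough illustration of the kind of comparison Orphanides and Wieland ran, here is a minimal sketch that fits an outcome-based and a forecast-based Taylor rule by OLS and compares the adjusted R-squared of the two; the data file and column names are hypothetical, not theirs.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data set, one row per semiannual report date: the funds rate
# target, realized inflation and output gap, and the Fed's Humphrey-Hawkins
# forecasts of the same variables (file and column names are assumptions).
df = pd.read_csv("taylor_rule_data.csv")

def fit_rule(y, X):
    """OLS fit of a Taylor-type rule with an intercept."""
    return sm.OLS(y, sm.add_constant(X)).fit()

outcomes = fit_rule(df["funds_rate"], df[["inflation", "output_gap"]])
forecasts = fit_rule(df["funds_rate"], df[["inflation_fcst", "gap_fcst"]])

# The Orphanides-Wieland style comparison: adjusted R-squared of the two fits.
print(f"outcome-based rule:  Rbar^2 = {outcomes.rsquared_adj:.2f}")
print(f"forecast-based rule: Rbar^2 = {forecasts.rsquared_adj:.2f}")
```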
One curious aspect of the success of Orphanides and Wieland's calculations is that the basis on which the Fed reported its forecasts changed significantly over this period. The Humphrey-Hawkins inflation forecast started out as a prediction for the CPI, was converted to a prediction of the personal consumption expenditures deflator in 2000, and then changed to a prediction of core PCE inflation in 2004. The surprising thing the researchers found is that the Fed seemed to treat these three forecasts as essentially the same number, even though the three series in fact behaved very differently. This seems like the kind of thing that could provide more ammunition to skeptics of the core PCE concept.
This observation also raises a practical objection to the claims for the "discipline" of a policy rule, which Chairman Bernanke articulated in some of the floor discussion. Although a mechanical rule might appear to constrain Fed actions, in practice there are very relevant issues as to how one measures "potential GDP" or even inflation itself. These leave even a policymaker notionally constrained by a rule a fair bit of practical wiggle room, putting us back into dependence on the wisdom of the discretionary actions of the Fed, for better or worse. On this point, I thought that some of the closing remarks from retiring Federal Reserve Bank of St. Louis President William Poole were rather sage. Poole opined that the best way for the central bank to achieve credibility is to (1) say what you're going to do, and then (2) do it. He noted the tremendous pressures in a place like Washington, D.C. to bend the rules, the natural cynicism with which the public greets pronouncements from the government, and the way that a lack of credibility greatly complicates the ability of the central bank to do its job. He concluded that, above all, a central bank needs to be run by people of unquestionable integrity who can steer a clear course through such pressures.
Amen to that, Bill. And would that it could be the same for Congress and the White House.
So here’s my question– why did the “most sophisticated” economists apparently become less and less sophisticated as time went on?
The only driving force of inflation (and unemployment), and the source of the uncertainty related to it, is the changing labor force level.
For the period after 1960, the change rate of the labor force level in the USA (and other developed countries) explains more than 95% of inflation variability (in terms of adjusted R-squared), including the period between 1965 and 1985.
Cointegration tests (both Engle-Granger and Johansen) confirm the existence of a long-term equilibrium relation between inflation p(t) and the labor force change rate with a 2-year lag, dLF(t-2)/LF(t-2):
p(t) = 4.0*dLF(t-2)/LF(t-2) - 0.03075.
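For concreteness, here is a minimal sketch of how the two tests can be run with statsmodels; the input file and column names are placeholders, not the actual data.

```python
import pandas as pd
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Assumed annual data file with columns: year, p (inflation), lf (labor force).
df = pd.read_csv("inflation_lf.csv")
data = pd.DataFrame({
    "p": df["p"],
    "dlf_lag2": df["lf"].pct_change().shift(2),  # dLF(t-2)/LF(t-2)
}).dropna()

# Engle-Granger: unit-root test on the residuals of p regressed on lagged LF growth.
t_stat, p_value, _ = coint(data["p"], data["dlf_lag2"])
print(f"Engle-Granger: t = {t_stat:.2f}, p-value = {p_value:.3f}")

# Johansen: trace test for the rank of the cointegrating space.
jres = coint_johansen(data[["p", "dlf_lag2"]], det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("5% critical values:", jres.cvt[:, 1])
```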
So, the uncertainty of inflation prediction at any time horizon completely depends on labor force projections.
For example, the CBO projection (http://www.cbo.gov/ftpdoc.cfm?index=5803&type=0) implies, according to the above relationship, that a deflationary period will very likely (90%) start in 2012.
Since I am a fan of visual representations of quantitative results, I would like to recommend Figures 7 and 8 in http://inflationusa.blogspot.com/search/label/linear%20regression for an eyeball estimate of inflation prediction at a 2-year horizon in the USA.
Professor,
Thanks for the post. There is a lot to digest.
Question – did anyone have the courage to present a gold-anchored system? I realize that would be much like wearing a sirloin steak suit into a lion's den, but I just wanted to ask.
Concerning Poole’s comments:
August 17, 2007, 11:44 am
Poole Aimed to Reduce Suspicion With Absence
St. Louis Fed President William Poole on Wednesday told an interviewer that he saw no need for Fed action amid the current market turmoil, and last night when the central bank concluded it was time to act, Mr. Poole wasn't part of the decision.
Although he is a member of the rate-setting Federal Open Market Committee, Mr. Poole didn't vote in last night's decision to issue a policy announcement about growth risks. Dallas Fed President Richard Fisher voted in his place.
A spokesman for Mr. Poole said the call conflicted with a long-scheduled dinner meeting at the University of Arkansas. (Mr. Poole delivered a speech in Little Rock this morning.) "President Poole was concerned that failure to appear at the dinner meeting might have been noted, which could have led to speculation about the possibility that the FOMC was holding an unscheduled meeting," the spokesman said.
Mr. Poole said Wednesday in the Bloomberg Television interview: "There's no need for the Federal Reserve, unless there's some sort of calamity taking place, to make a decision before the next meeting."
Yesterday when told about the remarks, Fed spokeswoman Michelle Smith said, "President Poole is speaking for himself and not for the committee."
If Mr. Poole had been on the call, a spokesman said, he would have supported the FOMC policy announcement, which did not change the federal funds rate. (The decision to change the discount rate is made by Federal Reserve Board members.) - Sudeep Reddy
Hummmm… Say it, then do it. A little dissension on the FOMC? Cynicism is alive and well.
Good summary of some key elements of the conference. For me, the VAR estimate of housing market impacts on GDP (elasticity 0.07 at a 3-year lag) was interesting, especially since it supported a similar result by a completely different method. (Of course, Cecchetti, citing Leamer, doesn't agree with these figures.) Also, although he's a bit salty, Patrick Minford's reminding everyone that the Taylor Rule suffers from serious identification issues is very important: the rule may be fine as a simple rule for practitioners, but reading too much into its interpretation is not wise. Also, Jim Hamilton presented a very interesting paper on extracting Fed signals from noisy Fed Funds futures, one that provides lots of possible offshoots.
I don’t think the change from CPI to PCE to core PCE says anything to support “core” critics, merely that the Fed believes these measures will converge in the long run. I think that, particularly in the face of persistent moves in energy prices, the Fed wanted to move away from numbers that were distorted in the short term.
1. The labor force is not the worst (actually a good) substitute for the output gap or unemployment.
2. Watson and Stock used hundreds of variables (aggregated and disaggregated) to model inflation dynamics. The labor force is absent, to my knowledge.
3. VAR and cointegration tests allow inflation to lag the other variables by several years.
4. The change rate of the labor force level is a natural variable of the same dimension (1/year) as the inflation rate.
5. The labor force level is defined for the middle of the corresponding year because it is an average value. One needs to average July through June of the next year instead of using the calendar-year BLS readings.
6. There are steps in the labor force time series due to revisions after censuses. One needs to redistribute these steps evenly back into the past (see the sketch after this list).
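A minimal sketch of the two adjustments in items 5 and 6, with an assumed linear form for the step redistribution:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly BLS labor force series (thousands), indexed by month.
lf = pd.read_csv("bls_labor_force.csv", index_col="date", parse_dates=True)["lf"]

# Item 5: replace calendar-year averages with July(t)-to-June(t+1) averages.
jul_jun_year = (lf.index - pd.DateOffset(months=6)).year
annual = lf.groupby(jul_jun_year).mean()

# Item 6: spread a census-revision step of size `jump` at index label `t0`
# linearly over the preceding `n` observations (one assumed way to do it).
def redistribute_step(series, t0, jump, n):
    out = series.copy()
    i = out.index.get_loc(t0)
    out.iloc[i - n:i] += jump * np.arange(1, n + 1) / n  # ramp up to the new basis
    return out
```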
A simple exercise: take two data columns, the GDP deflator and the labor force change rate, plot them, and regress measured inflation against the change rate, with the former lagging the latter by two years.
For the period between 1965 and 2004, R-squared = 0.83.
year   GDP deflator   LF (mid-year, thousands)   dLF/LF
2004 0.0219 146894 0.0075
2003 0.0189 145802 0.0082
2002 0.0168 144617 0.0092
2001 0.0242 143293 0.0104
2000 0.0226 141825 0.0126
1999 0.0151 140057 0.0131
1998 0.0116 138248 0.0140
1997 0.0174 136344 0.0149
1996 0.0197 134343 0.0126
1995 0.0210 132671 0.0136
1994 0.0221 130886 0.0131
1993 0.0237 129192 0.0132
1992 0.0238 127506 0.0130
1991 0.0349 125865 0.0105
1990 0.0393 124554 0.0146
1989 0.0392 122761 0.0166
1988 0.0355 120762 0.0161
1987 0.0282 118849 0.0175
1986 0.0228 116802 0.0177
1985 0.0317 114770 0.0179
1984 0.0402 112754 0.0152
1983 0.0414 111070 0.0133
1982 0.0598 109610 0.0155
1981 0.0963 107932 0.0179
1980 0.0905 106030 0.0233
1979 0.0855 103618 0.0290
1978 0.0742 100694 0.0307
1977 0.0665 97695 0.0283
1976 0.0609 95003 0.0227
1975 0.0941 92898 0.0239
1974 0.0899 90731 0.0284
1973 0.0590 88223 0.0279
1972 0.0457 85829 0.0237
1971 0.0517 83841 0.0228
1970 0.0530 81976 0.0259
1969 0.0511 79903 0.0219
1968 0.0447 78194 0.0197
1967 0.0317 76681 0.0198
1966 0.0304 75189 0.0185
1965 0.0194 73826 0.0184
1964 0.0162 72490 0.0161
1963 0.0111 71341 0.0099
1962 0.0145 70645 0.0084
1961 0.0115 70054 0.0151
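Here is a minimal sketch of that exercise, assuming the numeric rows of the table above have been saved to a whitespace-delimited text file:

```python
import pandas as pd
import statsmodels.api as sm

# Numeric rows of the table above, saved to a whitespace-delimited file.
df = pd.read_csv("lf_inflation.txt", sep=r"\s+",
                 names=["year", "deflator", "lf", "dlf_lf"]).sort_values("year")

# Regress inflation p(t) on the labor force change rate two years earlier.
df["dlf_lag2"] = df["dlf_lf"].shift(2)
sample = df[(df["year"] >= 1965) & (df["year"] <= 2004)].dropna()

fit = sm.OLS(sample["deflator"], sm.add_constant(sample["dlf_lag2"])).fit()
print(fit.params)                    # slope should land near 4.0, intercept near -0.031
print(f"R^2 = {fit.rsquared:.2f}")   # the comment above reports 0.83 for 1965-2004
```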
By the way, the RMSFE at a two-year horizon (the natural horizon for this setting) for the period between 1965 and 2004 is 0.01, or 1%. I would guess that such an uncertainty could have been useful for the Fed between 1965 and 1985.
When the Fed Funds rate and the 3 month T bill rate are plotted together, the eyeball says that the Fed Funds rate lags the T bill rate by 2 to 4 months. A bit of regression could make that more precise, but would it matter? If someone paid me half the salaries of all the Fed analysts working on the Fed Funds data, I’d be happy to provide the Fed Funds number to Bernanke. Taxpayers would get a cost reduction and we’d all get about the same Fed Funds rate as we have been getting. But there would be a lot less drama. Guess I won’t get the job.
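For what it's worth, that eyeball lead/lag can be checked with a quick cross-correlation; this minimal sketch pulls the monthly FEDFUNDS and TB3MS series from FRED via pandas_datareader:

```python
from pandas_datareader import data as pdr

# Monthly averages from FRED: effective fed funds rate and 3-month T-bill yield.
rates = pdr.DataReader(["FEDFUNDS", "TB3MS"], "fred", start="1990-01-01")

# Cross-correlate month-over-month changes: if the funds rate follows the
# bill rate by k months, the correlation should peak at a positive k.
d = rates.diff().dropna()
for k in range(0, 7):
    c = d["FEDFUNDS"].corr(d["TB3MS"].shift(k))
    print(f"T-bill leads by {k} months: corr = {c:.2f}")
```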