In honor of Mark Thoma’s retirement, let me discuss the relevance of one of his papers (coauthored with Tim Duy) regarding the usefulness of imposing cointegrating restrictions.
To motivate this examination, consider this critique, from reader Rick Stryker, of my modeling the Palmer Drought Severity Index (PDSI) as an I(1) series cointegrated with Kansas GDP:
Cointegration methods are bedeviled with specification error problems. In Johansen, how you set up the form of the VECM matters. How you specify the constant terms matters. Often the multivariate trace and maximum eigenvalue tests disagree. The procedure is a multi-step process in which an error in one stage can pollute the results in the next. Small sample bias is a huge problem to deal with. It’s very easy to misspecify one of these regressions. Even if you do all this specification analysis and are satisfied, there is the rather huge leap in treating government spending as exogenous and causing Kansas GDP. This is a much bigger issue.
Because of the potential specification error problems, cointegration is not the first choice of many time series experts. It’s not necessarily the “correct” way. That’s not to say that no one ever should use it. If you do decide to use it, you just have to be pretty methodical and cautious, recognizing all the pitfalls. …
Mebbe. Tim Duy and Mark Thoma, in JEBS (1998) write:
Although the issue of identifying cointegrating relationships between time-series variables has become increasingly important in recent years, economists have yet to reach an agreement on the appropriate manner of modeling such relationships. In this paper, we attempt to distinguish between modeling techniques through a comparison of forecast statistics, while focusing on the issue of whether or not imposing cointegrating restrictions via an error-correction model improves long-run forecasts. We find that imposing cointegrating restrictions often improves forecasting power, and that these improvements are most likely to occur in models which exhibit strong evidence of cointegration between variables.
The findings of Chinn and Meese (1995) regarding exchange rate prediction conform to Duy-Thoma; to a lesser extent, the sequel (Cheung, Chinn, Garcia Pascual, Zhang, 2019) also shows that a PPP error correction term “works” well. So, I think issues of integration/cointegration are usefully considered in many time series contexts.
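The Duy-Thoma finding can be illustrated with a deliberately stylized simulation (my own construction, not theirs, with known rather than estimated parameters): when y and x are cointegrated, a long-horizon forecast that imposes the error-correction relation beats one that simply treats y as a random walk, because the latter carries the current deviation from equilibrium forward forever.

```python
# Stylized Monte Carlo: x is a random walk and y_t = x_t + u_t with iid noise
# u, so y and x are cointegrated with vector (1, -1). We compare two
# H-step-ahead forecasts of y: one imposing the error-correction relation
# (forecast y -> x, since the deviation u dies out) and a pure random-walk
# forecast (y stays at its last value, carrying the deviation forward).
# All parameters are known, not estimated; this isolates the mechanism only.
import random

random.seed(7)
H, REPS, SIG_U = 20, 2000, 3.0
mse_ecm = mse_rw = 0.0
for _ in range(REPS):
    x = 0.0                              # current level of the random walk
    u = random.gauss(0.0, SIG_U)         # current deviation of y from x
    y = x + u
    xh = x
    for _ in range(H):                   # true path of x over the horizon
        xh += random.gauss(0.0, 1.0)
    yh = xh + random.gauss(0.0, SIG_U)   # realized y at T+H
    mse_ecm += (yh - x) ** 2             # error-correction forecast: y -> x
    mse_rw += (yh - y) ** 2              # random-walk forecast: y stays at y
mse_ecm /= REPS
mse_rw /= REPS
# Expected MSEs are roughly H + SIG_U**2 for the error-correction forecast
# versus H + 2*SIG_U**2 for the random walk, so imposing the (true)
# cointegrating restriction lowers long-horizon forecast error.
```

In Duy-Thoma the restriction must of course be estimated and tested, not assumed; the point of the sketch is only why a valid restriction helps most at long horizons.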
I never quite understood Rick Stryker’s complaint. My feeble understanding was that the main point of using a VECM was precisely to avoid reflexively and mindlessly differencing the data when doing so risked throwing away valuable information. He’s right to point out that many of the tests can be ambiguous and sometimes contradictory, although the same is true of unit root tests; but I don’t recall him ever saying we should just assume every time series is stationary because unit root tests aren’t always as powerful as we’d like. And is it really all that difficult to figure out the best way to specify the constant term? In most cases it’s pretty obvious. Most of his complaints apply equally well to almost any time series approach, so I never understood why he felt the need to single out cointegration.
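On the point that unit root tests are not as powerful as we’d like, a quick Monte Carlo (my own toy illustration, pure standard library) makes it concrete: a Dickey-Fuller regression in a sample of 100 rarely rejects the unit-root null for a stationary but persistent AR(1), while it almost always rejects for a clearly mean-reverting one. The -2.89 cutoff is the usual 5% Dickey-Fuller critical value for a regression with a constant at roughly this sample size.

```python
# Power of a Dickey-Fuller-style test in a sample of T = 100: regress
# Delta y_t on a constant and y_{t-1}, and reject the unit-root null when
# the t-statistic on y_{t-1} falls below the 5% critical value (about -2.89).
import random, math

def df_tstat(y):
    """t-statistic on y_{t-1} in the regression dy_t = a + b*y_{t-1} + e_t."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    x = y[:-1]
    n = len(dy)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (di - my) for xi, di in zip(x, dy)) / sxx
    a = my - b * mx
    s2 = sum((di - a - b * xi) ** 2 for xi, di in zip(x, dy)) / (n - 2)
    return b / math.sqrt(s2 / sxx)

def rejection_rate(rho, T=100, reps=1000, crit=-2.89):
    """Fraction of simulated stationary AR(1) paths rejecting the unit root."""
    random.seed(0)
    hits = 0
    for _ in range(reps):
        level, y = 0.0, [0.0]
        for _ in range(T):
            level = rho * level + random.gauss(0.0, 1.0)
            y.append(level)
        if df_tstat(y) < crit:
            hits += 1
    return hits / reps

# A persistent-but-stationary series (rho = 0.95) is rejected far less often
# than a clearly mean-reverting one (rho = 0.5), even though neither has a
# unit root. This is the low-power problem in a nutshell.
low_power = rejection_rate(0.95)
high_power = rejection_rate(0.5)
```

The exact rejection rates depend on the seed and the number of replications; the ordering does not.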
I wasn’t singling out cointegration. I was merely disputing the claim that cointegration is the “correct” method to deal with non-stationary time series. It’s one method, with some drawbacks that I listed, which are not especially controversial.
I often play Econometrics Jeopardy with my free market econometrics class at Wossamotta U. I think you’ve earned the right to have a go.
And the Category is
“Time to Get Serious About Time Series” for $400.
And the answer is…
After discussing the strategies of ignoring non-stationarity by estimating in levels and of always differencing apparently non-stationary variables, this author turned to the third strategy of using cointegration, but then discussed the drawbacks, writing:
“The disadvantage of the third approach is that, despite the care one exercises, the restrictions imposed may still be invalid–the investigator may have accepted a null hypothesis even though it is false or rejected a null hypothesis that is actually true. Moreover, alternative tests for unit roots and cointegration can produce conflicting results, and the investigator may be unsure as to which should be followed. Experts differ in the advice offered for applied work…”
Who is ….?
Rick Stryker: You write as if I said cointegration was the only way of dealing with the issue of regressions with integrated series. You also write as if Johansen were the only approach to estimating cointegrating relations. Neither was stated in the original post. Rather, if you don’t believe directly addressing cointegration is appropriate, then Ironman should’ve at least first differenced the series (or otherwise rendered the series stationary) — or do you disagree? If one is worried about misspecification (gee, isn’t that also a worry in just about every econometric procedure we undertake?), then one could cross-check using other approaches like Stock-Watson DOLS (Econometrica) or Park CCR.
Or do you think Ironman’s approach of estimating in levels w/o testing at all was “the right way to go”?
I made all those comments on that post because 1) I felt you were bullying Ironman; and 2) you were giving your untutored progressive audience the impression that all you need to do is select a few boxes in eviews and voila, out pops a progressive conclusion.
So I downloaded the data myself and did my own tests, to show how much well-informed judgment is necessary to perform even standard tests and also to show by example what good applied econometrics actually looks like. Not that it mattered, of course, since your core progressive audience seems incapable of learning anything.
Rick Stryker: In a ten-year period, PDSI is best modeled as stationary? That’s your “good applied econometrics”? Remember, in an *infinite* sample we would always reject a false unit root null; but we usually don’t have an infinite sample.
‘my free market econometrics class at Wossamotta U.’
Is this your class?
Are you Bullwinkle? That would explain a lot!
Actually I did have a moose in one of my classes who thwarted an attack on the world monetary system, as documented here.
Time’s up, 2slugs. The correct answer, of course, is James D. Hamilton.
This looks reasonable and is consistent with what Menzie notes regarding analysis of forex times series. Indeed, all econometric methods have their limits and can be misapplied, but it does seem that careful use of cointegration can be useful in time series analysis. It is certainly not obviously more flawed than other methods.
This post is a nice, well-deserved gesture to Mark Thoma, a man (like many of the better bloggers) who gave a lot of his personal time to dispense knowledge “out into the void”. I think most educators, although they no doubt have moments of despair and sometimes wonder “why do I bother??”, deep down strongly and inherently believe in people’s desire and yearning to better themselves. What better way to facilitate that than providing “free” access to knowledge?? This is what the better bloggers do, and Thoma was/IS among the cream of the crop.
Let’s hope it is only Thoma’s classroom students who feel “this disturbance” and not his internet fans.
Time-series cointegration and its research treatment is a complex topic, and I don’t have anything to contribute to the conversation at the moment.
I actually do look out for times we are in agreement. I completely agree about Mark Thoma and hope the best for him.
I have learned from private communication with him that Mark Thoma is in good health. He has a new relationship and is moving to near San Diego. He is “burned out.” Unfortunately we are probably not going to be seeing him do a lot of blogging, but maybe some.
Oh, apparently he will be living on the beach. His becoming a widower in 2011 really dragged him down.
Interesting. Assuming he’s ok with these details being made known (I don’t know why he’d tell you if he wasn’t), the update is appreciated. Can’t say it entirely surprises me. Maybe when he gets some of that clean coastal air and some personal breathing space, he may find he wants to return to the internet discussion. Or even find a platform that would pay him for his commentary. Hope he feels reinvigorated as the time passes.
What I was trying to do was to show people that you can’t just go to eviews and run some tests. You need to think about the problem carefully, think about what your priors might be and why, consider small sample properties of the tests, etc. and adjust the parameters of tests accordingly. That’s what I mean by good applied econometric practice.
Rick Stryker: Well, that’s what I’m trying to do as well, but in a blogging context, and here I go farther than anybody I know (except for the specialized econometrics blogs). For more formal assessments, I refer you to my JBES paper on unit root testing, and using finite sample critical values in applications of Johansen maximum likelihood testing in several papers; also cross-testing Johansen vs. Stock-Watson DOLS.
Dear Mr. Stryker,
I’m not taking positions in this argument, just trying to get a clarification. It is quite possible that cointegration should not be seen as a default. But in view of what Menzie is suggesting about differencing series, etc., do you have the same discomfort with Dickey-Fuller tests or other tests to measure the order of integration of a time series?
I don’t have discomfort. I just think you have to use stationarity tests carefully. You might look at my comment to see how I thought about applying stationarity tests in the PDSI case.
Another option is to sidestep the stationarity issues by not differencing the series but including lagged values of both the dependent and independent variables. In general, estimates will be consistent and sampling distributions asymptotically normal, so you can do the usual hypothesis tests. But you can’t run just any test, as some hypotheses will have non-standard distributions.
There are no easy answers unfortunately.
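The lagged-levels option described above can be sketched with a toy example (my own construction, with simulated data): an ARDL(1,1) regression estimated by OLS in levels on a cointegrated pair. The implied long-run multiplier (b0 + b1)/(1 - phi) should recover the true long-run relationship, here y = 2x.

```python
# Toy ARDL(1,1) in levels: y_t = a + phi*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t,
# estimated by OLS in pure Python, where x is a random walk and
# y_t = 2*x_t + stationary noise (so y and x are cointegrated).
import random

def ols(X, y):
    """OLS coefficients via the normal equations (Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                        # forward elimination w/ pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

random.seed(1)
T = 500
x, level = [], 0.0
for _ in range(T):
    level += random.gauss(0.0, 1.0)           # x is a random walk (I(1))
    x.append(level)
y = [2.0 * xi + random.gauss(0.0, 0.5) for xi in x]  # cointegrated with x

X = [[1.0, y[t - 1], x[t], x[t - 1]] for t in range(1, T)]
a, phi, b0, b1 = ols(X, y[1:])
long_run = (b0 + b1) / (1.0 - phi)            # should be close to the true 2
```

The individual lag coefficients matter less than the long-run combination; testing hypotheses on the levels coefficients themselves is where the non-standard distributions mentioned above come into play.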
Dear Mr. Stryker and others,
I have no particular desire to enter unnecessary controversy. But I did work with the late Professor Dhrymes on his “Time Series, Unit Roots and Cointegration” book, published by Academic Press of New York in 1998. It seems to me that Chapter 3, “Unit Roots: I(1) Regressors,” is very relevant to the statement Mr. Stryker is making, and this is not some political propaganda. On p. 120, we see that the reason Beta estimators using I(0) regressors converge to the true Beta is “intimately related to the convergence of X’X/T. If X’X does not converge when normalized by T, it does not necessarily follow that the distributional conclusions above will continue to hold.” Here, of course, X is the matrix (really the vector in the single-equation case) of independent variables, and T is the sample size. I use “^” to indicate an exponent. Dhrymes writes on p. 121 that the complication arises “when the regressors are I(1), in which case X’X/T does not converge, but, as we will see, X’X/T^2 converges.”
Whether the divergence is significant enough to be disastrous depends on the case.
Neither Menzie nor I would disagree with the statements made in the Dhrymes book. If you have a standard regression model, y = XB +u, then the OLS estimate b will satisfy
b = B + [(1/T)sum(x(t)u(t))] / [(1/T)sum(x(t)x(t)’)]
where x(t) are the column vectors of X and the sum runs from 1 to T. If the denominator converges in probability to a constant positive definite matrix (which will happen in general when x is I(0)) and if X is uncorrelated with the error term, so that the numerator converges in probability to 0, then b converges in probability to the true value B and the OLS estimator is consistent.
On the other hand, if both y and x are I(1) and are NOT cointegrated, then (1/T^2)sum(x(t)x(t)’) converges to a functional of Brownian motion, as does the suitably normalized numerator. So, in this case b converges not to a constant but to a nondegenerate random variable, and it is an inconsistent estimate of the true B.
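The divergence just described is easy to see in a small simulation (mine, standard library only): regress one random walk on another, independent one, and the conventional t-statistic on the slope is typically “significant” and grows with the sample size, whereas with independent white-noise series it behaves as the textbook says.

```python
# Spurious regression demo: median absolute OLS slope t-statistic when
# regressing independent random walks on each other, versus independent
# white-noise series. With random walks the t-statistic diverges with T;
# with stationary noise it stays near the standard-normal median (~0.67).
import random, math

def slope_tstat(y, x):
    """OLS slope t-statistic in y_t = a + b*x_t + e_t."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b / math.sqrt(s2 / sxx)

def median_abs_t(maker, T, reps=300):
    """Median |t| over reps pairs of independent simulated series."""
    ts = sorted(abs(slope_tstat(maker(T), maker(T))) for _ in range(reps))
    return ts[len(ts) // 2]

def walk(T):
    y, level = [], 0.0
    for _ in range(T):
        level += random.gauss(0.0, 1.0)
        y.append(level)
    return y

def noise(T):
    return [random.gauss(0.0, 1.0) for _ in range(T)]

random.seed(42)
t_walk_100 = median_abs_t(walk, 100)    # typically well above 1.96
t_walk_400 = median_abs_t(walk, 400)    # grows roughly like sqrt(T)
t_noise_400 = median_abs_t(noise, 400)  # stays small, as the textbook says
```

This is the Granger-Newbold/Phillips point in miniature: with uncointegrated I(1) series, the usual t-statistic does not settle down, so “significance” in a levels regression means nothing.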
Menzie and I agree on all that. I think we agree on much of the econometrics too. It was just the details that were at issue–I was raising questions about some of the tests he did as well as the specification.
Rick Stryker: So…please answer my question. Was Ironman right to estimate in levels given the large amount of evidence of nonstationarity in the relevant sample period (2005-)? Or at a minimum should he have first differenced, if he wasn’t going to rely upon superconsistency of cointegrated variables…?
I thought I already answered that question in a previous comment, when I said: “From my analysis, the jury seems to be out on whether the AG Kansas GDP is non-stationary. Ironman’s regression might be just fine from a spurious regression point of view, since he could be regressing stationary series on each other. Your point, it seems to me, that Ironman has estimated a spurious regression is wrong. In the worst case, he has regressed a stationary (PDSI) against a non-stationary variable (AG GDP), which is a fundamentally misspecified regression, since you can’t really do that. But that’s a very different problem.”
Rick Stryker: Well, I’ve given an argument for why PDSI in about ten years sample is better treated as I(1). In that context, the regression Ironman estimated is ill advised.
I’m just trying to recover from true neurogenic shock that Stryker showed half-intelligence in one of these threads. I think now I know how those people at Whakaari felt right when they heard the first rumbles. All that being said, Menzie, I wouldn’t expect a straight answer to your query related to nonstationarity anytime within the coming decade.
I gave a very clear and straight answer to this question three years ago, an answer you obviously don’t understand.
The statistical evidence that you offered on the non-stationarity of the PDSI was very weak.