For some reason, my use of commodity futures as predictors of future spot prices for commodities (e.g., soybeans) incites fire and fury from some Econbrowser readers. Hence, I want to cite another example of the use of futures.
The primary policy tool of the U.S. Federal Reserve is manipulation of the federal funds rate, an overnight interest rate on interbank loans that is quite sensitive to the total quantity of reserve deposits that are created by the Fed. The Chicago Board of Trade offers a futures contract whose payoff is based on the average value for the effective fed funds rate over all of the calendar days of a specified month.
A separate question from whether changes in futures prices can be predicted is how far in advance the futures give a useful estimate. One standard of comparison is the mean squared error, the average squared difference between the implied futures forecast at a given date and what the actual fed funds rate turns out to be. A benchmark for comparison is the assumption that the fed funds rate itself follows a martingale, so that one's forecast for the future value of the fed funds rate is always its current value. Such "no-change" forecasts have often proven very difficult to beat out-of-sample with financial data. The table below shows that, if you simply predicted that the fed funds rate isn't going to change, you'd have a mean squared error of 389 squared basis points (that is, a standard deviation of about 20 basis points, or 0.2%) predicting one month ahead and 2,522 squared basis points (a 50-basis-point standard deviation) predicting three months ahead. For comparison, the MSEs of the futures-derived forecasts are only a third as large.
The moral is, if you think the fed funds rate is going to do something over the next few months that differs from what is predicted by the futures prices, then think again.
So, here is another dimension in which futures are pretty good forecasts, even better than a random walk (a martingale, to be precise). How are the forecasts evaluated? Two standard metrics are "mean squared error" and "mean absolute error":
MSE penalizes the square of the error, and is a consistent estimator of the error variance. Large errors are penalized more than proportionately relative to small ones.
MAE penalizes the absolute value of the error, and may be more appropriate if the population variance does not exist.
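As a minimal sketch, the two loss functions can be written out directly. The forecast and outcome numbers below are made up for illustration; they are not data from the post:

```python
def mse(forecasts, outcomes):
    # Average of squared errors; squaring penalizes large misses
    # more than proportionately relative to small ones.
    errors = [f - o for f, o in zip(forecasts, outcomes)]
    return sum(e * e for e in errors) / len(errors)

def mae(forecasts, outcomes):
    # Average of absolute errors; each miss is penalized in
    # proportion to its size.
    errors = [f - o for f, o in zip(forecasts, outcomes)]
    return sum(abs(e) for e in errors) / len(errors)

# Hypothetical fed funds outcomes and forecasts, in percent.
outcomes  = [5.25, 5.25, 5.00, 4.75]
forecasts = [5.20, 5.30, 5.10, 4.70]

print(mse(forecasts, outcomes))  # average squared miss
print(mae(forecasts, outcomes))  # average absolute miss
```

Because the errors are squared before averaging, a single large miss moves the MSE much more than it moves the MAE.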
Note that in no case does a miss of a point estimate result in infinite penalty. In other words, if the futures price implies a forecast of 801 and a competing prediction is "not 801", then "not 801" does not win if the outcome is 800. Seems obvious, but that logic seems to elude some people.
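To make that point concrete, here is a toy comparison under squared loss (the numbers are illustrative): a near miss of 801 against an outcome of 800 costs almost nothing, while a "not 801" forecast that lands far away costs orders of magnitude more.

```python
def squared_loss(forecast, outcome):
    # Penalty grows with the square of the miss; no miss earns
    # an infinite penalty.
    return (forecast - outcome) ** 2

outcome = 800
print(squared_loss(801, outcome))  # 1: the near miss
print(squared_loss(750, outcome))  # 2500: a distant "not 801" forecast
```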
In Chinn and Coibion (2014) [Google Scholar cites = 136], we use ratios of (root) mean squared errors of the futures-based forecasts to those of a random walk benchmark. This type of ratio is called a Theil U statistic. Chinn (1991) [Google Scholar cites = 96] deploys both (root) MSE and MAE statistics, following Meese and Rogoff (1983) [Google Scholar citations = 4803].
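The Theil U ratio described above can be sketched as follows; the rate series are hypothetical, not from either paper. A value of U below one means the forecast beats the no-change benchmark:

```python
import math

def rmse(forecasts, outcomes):
    # Root mean squared error of a forecast series.
    n = len(outcomes)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n)

def theil_u(model_forecasts, current_values, outcomes):
    # Ratio of the model's RMSE to the RMSE of a no-change
    # (random walk) forecast, which simply repeats the current value.
    return rmse(model_forecasts, outcomes) / rmse(current_values, outcomes)

# Hypothetical data, in percent: the current fed funds rate, the
# futures-implied forecast, and the realized rate one month later.
current  = [5.00, 5.00, 5.25, 5.25]
futures  = [5.10, 5.20, 5.20, 5.00]
realized = [5.25, 5.25, 5.00, 4.75]

u = theil_u(futures, current, realized)
print(u)  # below 1.0 when the futures beat the random walk
```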
The author of the post on Fed funds futures forecasting, using the MSE metric, is … Jim Hamilton. (Figured some people won't trust the message coming from me, for whatever reason, so I am citing somebody universally trusted on the statistical theory.)
By the way, futures — like forwards — don’t work so great for currencies.