Some terms used on this blog provoke confusion. Here are some quick definitions that might help readers.
Nominal vs. real, and price deflators
- Real quantity: a nominal quantity divided by a price deflator. Real GDP is nominal GDP divided by a price deflator. If the deflator takes a value of 1.00 in 2012, then real GDP is in 2012$.
- Real price: A nominal price divided by a general price index for all items. The price of oil divided by the CPI would be a real price. Also called an “inflation-adjusted” price.
- Relative price: A nominal price divided by a price index for all other items. The price of oil divided by the CPI excluding oil would be a relative price.
- Real interest rate: the nominal interest rate minus the expected inflation rate over the corresponding period (a short code sketch follows this list).
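To make the definitions above concrete, here is a minimal Python sketch using made-up numbers (none of the figures are actual data; they are purely illustrative):

```python
# Made-up numbers for illustration only (not actual data).
nominal_gdp = 21_000.0                      # nominal GDP, billions of dollars
deflator = 1.08                             # price deflator, 2012 = 1.00
real_gdp_2012usd = nominal_gdp / deflator   # real GDP, in 2012$

oil_price = 80.0                            # nominal oil price, $/bbl
cpi = 1.25                                  # CPI for all items, base = 1.00
cpi_ex_oil = 1.22                           # hypothetical CPI excluding oil
real_oil_price = oil_price / cpi            # real ("inflation-adjusted") price
relative_oil_price = oil_price / cpi_ex_oil # relative price

nominal_rate = 0.04                         # nominal interest rate
expected_inflation = 0.025                  # expected inflation, same horizon
real_rate = nominal_rate - expected_inflation

print(real_gdp_2012usd, real_oil_price, relative_oil_price, real_rate)
```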
Indices
- Laspeyres price index: Price index with weights fixed in the base period (usually a year). CPI used to be a simple Laspeyres. [post]
- Paasche price index: Price index with weights taken from the current period. GDP deflators used to be Paasche.
- Chain-weighted quantity index: Index in which growth rates are aggregated using weights that are updated each period. NIPA components are chain weighted. Note: Adding up chain-weighted components of an aggregate will not necessarily equal the chain-weighted aggregate (see the sketch after this list). [post]
- NIPA: National Income and Product Accounts.
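As a rough illustration of the index definitions above, here is a minimal Python sketch with hypothetical prices and quantities for two goods over three periods (the data and the two-good setup are invented, not from NIPA or the CPI):

```python
import numpy as np

# Hypothetical prices and quantities: rows = periods, columns = goods.
p = np.array([[1.0, 2.0],
              [1.1, 2.4],
              [1.2, 3.0]])
q = np.array([[10.0, 5.0],
              [11.0, 4.0],
              [12.0, 3.5]])

def laspeyres_price(p, q, base=0):
    # Fixed base-period quantity weights.
    return p @ q[base] / (p[base] @ q[base])

def paasche_price(p, q):
    # Current-period quantity weights, period by period.
    return np.array([p[t] @ q[t] / (p[0] @ q[t]) for t in range(len(p))])

def chained_fisher_quantity(p, q):
    # Chain-weighted quantity index: link each pair of adjacent periods with a
    # Fisher ideal index of quantity growth, then cumulate the links.
    idx = [1.0]
    for t in range(1, len(p)):
        lasp = p[t - 1] @ q[t] / (p[t - 1] @ q[t - 1])  # Laspeyres quantity relative
        paas = p[t] @ q[t] / (p[t] @ q[t - 1])          # Paasche quantity relative
        idx.append(idx[-1] * np.sqrt(lasp * paas))      # Fisher link, chained
    return np.array(idx)

print(laspeyres_price(p, q))
print(paasche_price(p, q))
print(chained_fisher_quantity(p, q))
```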
Growth rate conventions
- Month-on-Month growth rate: For X recorded at a monthly frequency, (Xt/Xt-1)-1
- Month-on-Month growth rate, annualized: For X recorded at a monthly frequency, (Xt/Xt-1)^12 - 1
- Quarter-on-Quarter growth rate: For X recorded at a quarterly frequency, (Xt/Xt-1)-1
- Quarter-on-Quarter growth rate, annualized: For X recorded at a quarterly frequency, (Xt/Xt-1)^4 - 1 [post]
- SAAR: Seasonally Adjusted at Annual Rates; usually refers to quantities, or to m/m or q/q growth rates, annualized. In US statistics, GDP is usually reported in official releases as SAAR (sometimes just AR), and GDP q/q growth rates as SAAR (a short code sketch follows this list).
- Year-on-Year growth rate: For X recorded at a monthly frequency, (Xt/Xt-12)-1
- Year-on-Year growth rate: For X recorded at a quarterly frequency, (Xt/Xt-4)-1
- Annual growth rate: For X recorded at annual frequency, (Xt/Xt-1)-1
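The growth-rate conventions above reduce to a few one-liners. A minimal sketch, again with made-up monthly and quarterly series:

```python
import numpy as np

# Made-up monthly and quarterly series (illustrative numbers only).
x_m = np.array([100.0, 100.4, 100.9, 101.1])   # monthly frequency
x_q = np.array([100.0, 101.2, 101.8, 102.5])   # quarterly frequency

mom = x_m[1:] / x_m[:-1] - 1                 # month-on-month
mom_saar = (x_m[1:] / x_m[:-1]) ** 12 - 1    # m/m, annualized
qoq = x_q[1:] / x_q[:-1] - 1                 # quarter-on-quarter
qoq_saar = (x_q[1:] / x_q[:-1]) ** 4 - 1     # q/q, annualized (SAAR convention)

# Year-on-year needs 12 monthly (or 4 quarterly) lags; with a long enough
# series it would be x[12:]/x[:-12] - 1 (or x[4:]/x[:-4] - 1).
print(mom, mom_saar, qoq, qoq_saar)
```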
Price deflator vs. inflation
- Inflation: the growth rate of a price deflator. [post] Also the first derivative of the log price level with respect to time.
- Deflation: negative inflation
- Disinflation: declining inflation
Debt, money, assets
- Gross debt: For government debt, total amount of debt issued, including debt owned by other parts of the same government. E.g., FRED series GFDEBTN
- Net debt: For government debt, total amount of debt issued, excluding – or netting out – debt owned by other parts of the same government. E.g., FRED series FYGFDPUN
- Flow: A variable measured per unit of time. GDP is a flow.
- Stock: A variable that is measured at an instant in time. Debt is a stock.
- Inside asset: An asset held by the private sector that corresponds to a liability of the private sector.
- Outside asset: An asset held by the private sector that has no corresponding liability of the private sector.
- Monetary base or high-powered money: currency plus bank reserves (both outside assets). E.g., FRED series BOGMBASE
- M1: narrow money, currency plus checking deposits. E.g., FRED series M1SL
- M2: broad money; currency plus checking deposits, savings deposits, and other components, e.g., retail money market funds. E.g., FRED series M2SL
- Velocity (of money): nominal GDP divided by the money stock (a FRED-based sketch follows this list). [post]
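As a rough sketch of the velocity definition, here is one way to pull the cited FRED series with pandas_datareader, assuming that package is installed and FRED is reachable (the sample period is arbitrary):

```python
import pandas_datareader.data as web

start, end = "2010-01-01", "2019-12-31"       # arbitrary sample period
gdp = web.DataReader("GDP", "fred", start, end)["GDP"]     # nominal GDP, quarterly
m2 = web.DataReader("M2SL", "fred", start, end)["M2SL"]    # M2, monthly

# Average monthly M2 within each quarter so the two series line up, then
# velocity = nominal GDP / money stock.
m2_q = m2.resample("QS").mean()
velocity = gdp / m2_q
print(velocity.tail())
```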
Macroeconomic policy
- Fiscal multiplier: the change in GDP for a change in government spending, ΔY/ΔG (or for a change in government transfers, or for a change in taxes) [post]
- Crowding out: the offsetting decrease in interest-sensitive components of aggregate demand in response to an increase in budget deficits and/or GDP. [post]
Statistics
- Standard error: an estimate of the standard deviation of the sampling population.
- XX% Confidence interval: the range over which the interval will encompass the true parameter XX% of the times, if the test is repeated 100 times. [post]
- Mean error: the average of the errors.
- Bias: present when the mean error is non-zero.
- Root mean squared error: the square root of the mean of squared errors. [post]
- Mean absolute error: mean of the absolute value of the errors.
- Time series model: a model of a variable involving only lags of the variable itself, and error terms. Time series models often refer to ARIMA models, of which a random walk is a simple version.
- ARIMA: AutoRegressive Integrated Moving Average. Example of an ARIMA(1,1,1) model: ΔXt = α + φΔXt-1 + εt + θεt-1
- Deterministic trend: a trend that is an exact function of time, e.g., Xt = α + βt.
- Stochastic trend: a trend that moves with time and whose random errors are permanently incorporated into the level, e.g., a random walk with drift, Xt = δ + Xt-1 + εt (a simulation sketch follows this list) [post]
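A minimal simulation sketch of the error statistics and trend definitions above, assuming numpy and statsmodels are installed (all parameter values are arbitrary):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
T = 200
eps = rng.normal(0.0, 1.0, T)

# Deterministic trend: an exact function of time plus a transitory error.
det_trend = 0.5 + 0.1 * np.arange(T) + eps

# Stochastic trend (random walk with drift): X_t = delta + X_{t-1} + eps_t,
# so each shock is permanently incorporated into the level of the series.
stoch_trend = np.cumsum(0.1 + eps)

# Forecast-error statistics from the list above, applied to the one-step
# "previous value plus drift" forecast errors of the random walk:
errors = np.diff(stoch_trend) - 0.1
print("mean error (bias if non-zero):", errors.mean())
print("RMSE:", np.sqrt((errors ** 2).mean()))
print("MAE:", np.abs(errors).mean())

# An ARIMA(1,1,1) fit to the stochastic-trend series (illustrative only).
fit = ARIMA(stoch_trend, order=(1, 1, 1)).fit()
print(fit.params)
```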
Seasonal adjustment
- Seasonally adjusted: adjusted to remove influences of predictable seasonal variations, often denoted “s.a.”
- Not seasonally adjusted: not adjusted to remove influences of predictable seasonal variations, often denoted “n.s.a.”
- X-13ARIMA-SEATS: Standard statistical technique to extract and remove seasonal components, used by Census and other agencies (a simplified decomposition sketch follows this list).
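X-13ARIMA-SEATS itself requires the Census Bureau's X-13 binary, so as a stand-in here is a much simpler moving-average decomposition using statsmodels' seasonal_decompose on a simulated monthly series (not real data, and not the official procedure):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Simulated monthly series with a known seasonal pattern (illustrative only).
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
seasonal = 2.0 * np.sin(2 * np.pi * idx.month.to_numpy() / 12)
y = pd.Series(100 + 0.2 * np.arange(96) + seasonal + rng.normal(0, 0.5, 96),
              index=idx)

# Moving-average decomposition; the seasonally adjusted ("s.a.") series is the
# original ("n.s.a.") series less the estimated seasonal factor.
res = seasonal_decompose(y, model="additive", period=12)
y_sa = y - res.seasonal
print(y_sa.head())
```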
Business cycles
- Recession, Contraction: According to NBER, a broad based decline in economic activity, defined using various indicators (including, but not exclusively, real GDP), after a “peak”. Other definitions exist. [post]
- Expansion: According to NBER, a broad based increase in economic activity, defined using various indicators (including, but not exclusively, real GDP), after a “trough”. Other definitions exist. [post]
- Output gap: usually the gap between reported GDP and potential GDP, often expressed as a percent of potential (a FRED-based sketch follows this list). [post]
- Potential GDP: the level of output consistent with normal utilization of the factors of production with the given level of technology. CBO reports a commonly used estimate, e.g., FRED series GDPPOT
- Unemployment gap: the gap between the reported level of unemployment and the natural rate of unemployment (the latter sometimes equated with the non-accelerating inflation rate of unemployment, NAIRU).
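A rough sketch of the output gap in percent of potential, assuming pandas_datareader is installed and FRED is reachable; GDPC1 (real GDP) and GDPPOT (the CBO potential estimate cited above) are the FRED codes used:

```python
import pandas_datareader.data as web

start, end = "2000-01-01", "2019-12-31"   # arbitrary sample period
gdp = web.DataReader("GDPC1", "fred", start, end)["GDPC1"]          # real GDP
potential = web.DataReader("GDPPOT", "fred", start, end)["GDPPOT"]  # potential GDP

# Output gap, in percent of potential GDP.
output_gap = 100 * (gdp / potential - 1)
print(output_gap.tail())
```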
Welfare economics
- Dead weight loss: the cumulation of excess of marginal social benefit over marginal social cost for all units of consumption foregone, or cumulation of excess of marginal social cost over marginal social benefit for all units of excess production undertaken (a worked numeric sketch follows this list) [post]
- Externality: a cost or benefit associated with an activity not entirely borne or received by the agents undertaking an activity, e.g. air pollution. [post]
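A stylized numeric example of the deadweight-loss definition, using invented linear marginal social benefit and cost curves:

```python
# Hypothetical linear curves: MSB(q) = 10 - q, MSC(q) = 2 + q.
def msb(q):
    return 10 - q

def msc(q):
    return 2 + q

q_star = 4.0      # efficient quantity, where MSB(q) = MSC(q)
q_actual = 2.5    # consumption restricted below the efficient level

# Deadweight loss: cumulated excess of marginal social benefit over marginal
# social cost on the foregone units. With linear curves this is a triangle.
dwl = 0.5 * (msb(q_actual) - msc(q_actual)) * (q_star - q_actual)
print(dwl)   # 0.5 * 3.0 * 1.5 = 2.25
```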
Data sources: A list of data sources, including FRED referenced above.
Note: Many of the series I link to at FRED are generated by various agencies (BLS, BEA, Census, OECD), but these series are the series as reported, unless otherwise indicated in the documentation that FRED provides. (So if you see a series sourced from FRED on Econbrowser, do not accuse me of not having provided the “raw data”.)
Menzie,
The only one of these I would question is your claim that a time-series model involves only one variable. Certainly Box-Jenkins ARIMA models are just one variable, and my copy of Jim H.’s Time-Series Analysis is in my editorial office while I am at home, but does he not count the various types of VARs as being “time-series models”? They involve multiple variables interacting. And then there are all the variations of cointegration models from Johansen/Juselius through ARDL and more that also involve multiple variables interacting over time. Not time-series models?
Barkley Rosser: Well, I use the word “often”. One could throw in ARMAX models, and VARs. I think the terminology I used I got from Harvey’s “Time Series Models” book. I would say VARs, ARDLs and the like fall under the rubric of “time series econometrics”, as opposed to cross section or panel.
Well, a “time-series” would seem to refer to one variable, although clearly one can have a bunch of time-series, with the plural spelled the same as the singular. But when one gets to models, well. Taking the example of basic VAR, I think I would distinguish an estimated VAR model from tests one runs on it such as variance decomposition. It is true that one uses time-series econometrics to estimate a VAR, but once one has done so, it is in effect a model of the variables in it, even if that has no theoretical foundation. But when one starts bombarding a VAR with the usual battery of tests people do to them, then those tests look to me to be strictly “time-series econometrics” and not “time-series models,” although this is definitely getting pretty picky.
Thanks Menzie
Your audience contains many quarrelsome people but that probably is the nature of economists
Some more than others. See towards top of the comment thread. Note that exacting is different than quarrelsome, and in this case you used the pertinent word.
Awesome list. Only minor thing I would suggest is some quick categorization. Hard to process long lists. Even if not perfect, the effort to organize has benefit to the reader (and even the writer). Sort of like how paragraph breaks help.
https://www.amazon.com/Pyramid-Principle-Logic-Writing-Thinking/dp/0273710516/
If the above are “terms used on this blog that provoke some confusion”, then your definitions will cause further confusion for the already confused.
-Inflation… It’d be clearer to distinguish terms for pos and neg 1st and 2nd derivatives.
-Year-on-Year growth… should be clarified further. Growth (not specified as a growth rate, percent, etc) is considered by many to be in levels.
-Annual growth… same as above
Just my thoughts ¯\_(ツ)_/¯
EConned: I added in “rate” after growth in the two instances where I had omitted that term.
But maybe we should use his formal calculus terms – just to confuse the already economically illiterate.
Now if we think about some of the comments on labor economics, maybe we should define wage floors v. wage ceilings, monopsony power, and of course demand and supply. But the list could get incredibly long.
I was not suggesting that derivative should be used here but used in reply to Menzie. The point is to define inflation as a positive increase, yadayadayada.
Upon a 2nd glance you have the exact same formula for “Year-on-Year growth rate: For X recorded at a quarterly frequency”
as you do for
“Annual growth rate: For X recorded at annual frequency.”
Best of luck, Wisconsin–Madison Econ students!!!
EConned: Thanks, typo fixed.
I am in the unusual position of defending Menzie. The issue is not the most precise definition (with curly d’s and the like) but improving understanding. I think he is setting the appropriate balance. This is a complicated multi-variable function to optimize (pedagogy) and most rigorous is NOT (not, not, nottity-not-not) always most helpful. Perfect is the enemy of better. I think he set right balance, for this situation.
Menzie,
The definition of the confidence level is wrong as written. It is not the case that if you repeat a test on some data set 100 times ( or any number of times), the x% confidence interval will encompass the true parameter x% of the time. This definition confuses prior and posterior information. Before we see any data, when we just have the null hypothesis, then indeed the x% confidence interval is defined to be the interval that includes the true parameter with x% probability. But after we see the data, the probability is either 0 or 1 that the true parameter is in the estimated confidence interval. The probability will be zero or one whether we repeat the test 100 times or not. The definition of the standard error has the same problem.
The definition of the stochastic trend also looks wrong as it could include a deterministic time trend. The definition is “a trend that moves with time but includes a random error.” For example, y(t) = at + b + epsilon(t) would seem to be covered by the definition, but that’s a deterministic time trend. The key point about a stochastic trend is not that error terms are included but rather that the error terms accumulate so that they have a permanent effect. So, for example, integrating your example y(t) = a + y(t-1) + epsilon(t), we can alternatively write it as y(t) = at + sum(epsilon(i)) + y(0). That looks like the deterministic time trend, except that the sum of the epsilons induces the stochastic trend: their effect does not die out over time. So, the difference between a deterministic and stochastic trend depends upon whether the innovations have a permanent effect. I’d suggest defining it that way.
Rick Stryker: Pulled the definition from Wade Dave Giles, so take that Bayesian issue up with him. You are right on stochastic trend (although the example of a random walk with drift makes the point that the error is permanently incorporated); have amended text. Thanks.
Menzie,
Do you mean Dave Giles? I found it hard to believe he wouldn’t agree with me on this so I looked at his blog to check. Here is what he says in a relevant blog post.
“For some reason, students often have trouble interpreting confidence intervals correctly. Suppose they’re presented with an OLS estimate of 1.1 for a regression coefficient, and an associated 95% confidence interval of [0.9,1.3]. Unfortunately, you sometimes see interpretations along the following lines:
There’s a 95% probability that the true value of the regression coefficient lies in the interval [0.9,1.3].
This interval includes the true value of the regression coefficient 95% of the time.
So, what’s wrong with these statements?
Well, something pretty fundamental, actually…”
And then Giles concludes:
“So, the first interpretation I gave for the confidence interval in the opening paragraph above is clearly wrong. The correct probability there is not 95% – it’s either zero or 100%! The second interpretation is also wrong. “This interval” doesn’t include the true value 95% of the time. Instead, 95% of such intervals will cover the true value.”
Note in his first point, Giles is explicitly agreeing with me. And in his second point, Giles is explicitly denying your definition above.
On Bayesianism and confidence intervals, Giles also said: “I’m not saying that we should be using CI’s. Specifically, when I’m wearing my Bayesian hat, CI’s make no sense at all, and the very term is banished from my vocabulary. But I digress………” Here Giles seems a little extreme to me. When I put on my Bayesian hat, I think it is often justified to treat frequentist confidence intervals as reasonable approximations of posterior statements given the appropriate priors.
Of course, if you are not talking about Dave Giles, then I don’t know who Wade Giles is.
Menzie,
I realized that by using the terms posterior and prior it sounded like I was making a Bayesian critique of the definition. Sorry–wasn’t trying to do that.
Let me make my point a little more carefully: an x% confidence interval is a particular realization of a random interval. If you re-estimated the interval a large number of times by re-simulating the data under the null hypothesis, x% of those confidence intervals, each a different realization, would contain the true parameter. However, if you take any particular confidence interval, the probability that it contains the true parameter is either 0 or 1. This is true regardless of the number of times you simulate the model. If you look at a particular interval estimated from a particular realization of the data, you can only say the probability is zero or one that it contains the true parameter.
The important point about confidence intervals for both students and researchers is, I think, that you can’t use them to make any post-sample probability statements of interest about the true parameter.
Rick Stryker: Perhaps my very abbreviated summary was inadequate for your tastes, but the post that is hyperlinked has the exact passage from Dave (not Wade) Giles. I understand the true parameter is either in or not in the interval in any given instance.
Menzie,
I clicked into that post you linked to. You have a passage from Greenland et al (2016), not from Giles. In any event, I can’t see that you pulled your definition from Giles or from Greenland. It’s not a matter of taste. As written, the definition is wrong for the reasons I’ve already mentioned. Up to you whether you want to modify it of course.
i don’t think any of these definitions really change how one discusses the topics. they all seem to be good working definitions.
i find it interesting the similarity between what is being discussed here (0 and 100% probability after the measurement) and the measurement problem that has been argued about in quantum mechanics for the past 100 years. to no avail there, either. it has to do with the collapse of the wave function from a probability distribution to a value of 1 upon measurement. some folks argue the wave function does not collapse, and others argue that it does. at the end of the day, both approaches seem to produce the same theory. at this point, the differences seem to be philosophical rather than technical. i always find it interesting to see similar problems pop up in disparate applications.
Definitions side-by-side: Menzie [Giles]
“the range over which the interval will encompass the true parameter [=such intervals will cover the true value] XX% of the times, if the test is repeated 100 times [=95% of such intervals].”
I don’t see an issue but i might be missing something…
While precision is important, and several comments have served to improve precision, terms of debate are also important. The very fact that Menzie has provided definitions is a challenge to some of the local trolls, who seem to use the language of economics like Lewis Carroll’s Humpty Dumpty – “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean…”
C’mon, you sad, sorry trolls. English is among the richest languages in existence. You don’t need to misuse it to make a point. Abusing it almost certainly means you don’t have an honest argument. But then, maybe honesty isn’t your goal?
Your last paragraph is spot on. Can I add I think I have finally figured out what drives the insanity we routinely endure from JohnH. It seems if one insists on presenting reliable data and sensible economic analysis – one must be a centrist who does not care about the poor. So in JohnH’s world the only caring progressives are those who routinely get the economics wrong.
Of course there are a lot of smart progressives who should be very insulted by JohnH’s absurd tirades.
Fun nonsense poem. See Carroll’s “The Walrus and The Carpenter” poem recited/sang to Alice by Tweedle Dee and Tweedle Dum in Chapter Four of Through The Looking Glass . . .
One of the Tweedles asks Alice in effect, “Which was the bad guy?” They discuss.
Some Useful Terms in Economics
[ A remarkably helpful glossary.
I am entirely grateful for the painstaking effort. Thank you so much. ]
building a taxonomy, so we are all on the same “page”……
https://www.nytimes.com/2021/11/14/business/economy/farm-exports-supply-chain-ports.html
November 14, 2021
Crunch at Ports May Mean Crisis for American Farms
Backlogs and cancellations are hitting growers as costs rise, profits slump and overseas customers shop elsewhere.
By Ana Swanson
It’s just 60 miles from El Dorado Dairy in Ontario, Calif., to the nation’s largest container port in Los Angeles. But the farm is having little luck getting its products onto a ship headed for the foreign markets that are crucial to its business.
The farm is part of one of the nation’s largest cooperatives, California Dairies Inc., which manufactures milk powder for factories in Southeast Asia and Mexico that use it to make candy, baby formula and other foods. The company typically ships 50 million pounds of its milk powder and butter out of ports each month. But roughly 60 percent of the company’s bookings on outbound vessels have been canceled or deferred in recent months, resulting in about $45 million in missed revenue per month.
“This is not just a problem, it’s not just an inconvenience, it’s catastrophic,” said Brad Anderson, the chief executive of California Dairies.
A supply chain crisis for imports has grabbed national headlines and attracted the attention of the Biden administration, as shoppers fret about securing gifts in time for the holidays and as strong consumer demand for couches, electronics, toys and clothing pushes inflation to its highest level in three decades.
Yet another crisis is also unfolding for American farm exports.
The same congestion at U.S. ports and shortage of truck drivers that has brought the flow of some goods to a halt has also left farmers struggling to get their cargo abroad and fulfill contracts before food supplies go bad. Ships now take weeks, rather than days, to unload at the ports, and backed-up shippers are so desperate to return to Asia to pick up more goods that they often leave the United States with empty containers rather than wait for American farmers to fill them up.
The National Milk Producers Federation estimates that shipping disruptions have cost the U.S. dairy industry nearly $1 billion in the first half of the year in terms of higher shipping and inventory costs, lost export volume and price deterioration.
“Exports are a huge issue for the U.S. right now,” said Jason Parker, the head of global trucking and intermodal at Flexport, a logistics company. “Getting exports out of the country is actually harder than getting imports into the country.” …
An old truism from international economics – barriers to trade include not only tariffs but also transportation costs. Farmers should have been mad at Trump for his stupid trade war. I suspect they will be mad at Biden if these logistic issues are not fixed quickly.
All sorts of people are. WaPo reports today a poll showing people voting 46 to 43% for a generic GOP Congressional candidate over a Dem one, even though Dems have now passed an infrastructure bill, unemployment is falling sharply with people quitting jobs in record numbers to get better ones, the stock market is at all-time highs (oh, down slightly this past week), and I just saw gasoline prices down 5 cents a gallon where I live after three straight weeks of crude oil prices falling. So, of course, all these port and driver and warehouse problems are Biden’s fault, and he is the Grinch who stole Christmas with the coming shortages of toys!!!!
https://krugman.blogs.nytimes.com/2016/02/06/in-defense-of-funny-diagrams-wonkish/
February 6, 2016
In Defense of Funny Diagrams (Wonkish)
By Paul Krugman
There was, clearly, a time when economics had too many pictures. But now, I suspect, it doesn’t have enough.
OK, this is partly a personal bias. My own mathematical intuition, and a lot of my economic intuition in general, is visual: I tend to start with a picture, then work out both the math and the verbal argument to make sense of that picture. (Sometimes I have to learn the math, as I did on target zones; ** the picture points me to the math I need.) I know that’s not true for everyone, but it’s true for a fair number of students, who should be given the chance to learn things that way.
Beyond that, pictures are often the best way to convey global insights about the economy — global in the sense of thinking about all possibilities as opposed to small changes, not as in the-world-is-flat….
Paul Krugman on Target Zones:
** http://www.tau.ac.il/~yashiv/krugman_qje.pdf
The point being that visualizing concepts in economics is valuable and this glossary by Menzie Chinn allows for just that.
Re: Confidence Interval. In the government we were obliged to use the definition given by the National Institute for Standards and Technology:
Confidence intervals are constructed at a confidence level, such as 95 %, selected by the user. What does this mean? It means that if the same population is sampled on numerous occasions and interval estimates are made on each occasion, the resulting intervals would bracket the true population parameter in approximately 95 % of the cases. A confidence stated at a 1−α level can be thought of as the inverse of a significance level, α.
https://www.itl.nist.gov/div898/handbook/prc/section1/prc14.htm
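The NIST wording is the repeated-sampling (coverage) property discussed in this thread. A minimal Monte Carlo sketch of that property, with arbitrary parameter values:

```python
import numpy as np

# Across repeated samples, roughly 95% of the constructed intervals bracket
# the true mean; any single realized interval either contains it or not.
rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 10.0, 2.0, 50, 10_000
z = 1.96  # normal critical value for a 95% interval

hits = 0
for _ in range(reps):
    x = rng.normal(true_mu, sigma, n)
    se = x.std(ddof=1) / np.sqrt(n)        # standard error of the sample mean
    lo, hi = x.mean() - z * se, x.mean() + z * se
    hits += (lo <= true_mu <= hi)

print("empirical coverage:", hits / reps)  # approximately 0.95
```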
“Dead weight loss: the cumulation of excess of marginal social benefit over marginal social cost for all units of consumption foregone”
Took me a couple of reads to review in my head and fully comprehend. Note to self: A Dead weight loss is always social. Good reminder.
P.S. I initially thought this blog post was for lay people. Seems to me that at the minimum, an honours degree in economics or better is required to have a good understanding of all the terms.
In passing, I am shocked at the number of people who seem to have no understanding of the argument that the current sharp (and unexpected) increase in the price level is notionally a one-time price increase. One can disagree and one can certainly disagree with the Fed’s dual mandate as well as the Fed’s current policy stance but it would be nice if the critics at least UNDERSTOOD the argument.
https://krugman.blogs.nytimes.com/2009/09/27/the-textbook-economics-of-cap-and-trade/
September 27, 2009
The Textbook Economics of Cap-and-Trade
By Paul Krugman
Think of the benefits to the private sector from pollution. Yes, benefits — in the sense that it’s cheaper to pollute than not to, or that it’s easier to produce goods if you don’t worry about whatever emissions result as a byproduct. So we can think of drawing a curve representing the private marginal benefit of emissions, as in this figure:
https://www.princeton.edu/~pkrugman/capandtrade.png
[Figure: private marginal benefit of emissions. Vertical axis: permit price; horizontal axis: quantity emitted. Annotations: Permit price × Emissions cap = Rents; ½ (Permit price × (Quantity emitted − Emissions cap)) = Deadweight loss.]
In the absence of government action, the private sector will increase emissions up to the point where there is no further marginal benefit. That is, emissions will rise to whatever level is implied by profit-maximization, paying no attention to the effects on the environment.
A cap-and-trade system puts a limit on overall emissions, so that emitters have to pay a price for emitting. This price will, as shown in the figure above, equal the marginal benefit of the last unit of emissions allowed….
The Covid19 issue has caused a Santa Shortage. Supply of Santas is down as demand for Santas soars:
https://www.masslive.com/entertainment/2021/11/theres-a-santa-shortage-across-the-us-but-massachusetts-might-be-safe.html
Wait, wait. What do we tell the children? Isn’t Santa Claus one person who has isolated himself at the North Pole? Please tell me he is not taking Christmas off.
covid had me lose a fair amount of weight and the mask is less effective with the long beard so I trimmed down to almost white stubble……
hung up the suit after 2019 season……
pgl,
The Santa shortage is also obviously Biden’s fault, all because he blocked some pipelines and did not fix the truck driver shortage, leaving all those Santas abandoned wherever it is that somebody makes them.
Maybe malls should hire Mrs. Claus. A new economic opportunity for women!
I am interested to see how all the Mrs. Clauses react to a strange yellow liquid left behind on their legs after a full day working in front of Macy’s etc. Will “motherly instinct” kick in?? Tune in to this same channel in 3 weeks.
《Standard error: an estimate of the standard deviation of the sampling population.》
When they do surveys of employment, are they sampling? When they use wage survey data to construct GDP, do they dishonestly throw out the standard error? Are FRED graphs thus unreliable, since standard error is not reported on survey-derived data (such as GDP)?
Aren’t you telling stories using noise?
For the third time – I provided you with one of the BLS papers which noted everything you insist should be there. But of course a troll like you just ignores all of that. Your lack of integrity on this issue is staggering.
Two consecutive quarters of negative GDP growth is not the definition of a recession. But, it is a common bit of journalism.
Question: how is that growth measured, Q-Q annualized, or YoY? Bear in mind that it is entirely possible for only one of those measures to be negative, and the other positive.
David O’Rear: That rule-of-thumb usually relies on Q/Q (either annualized or not).
Taking the time to spell out terminology is very rudimentary and boring to people of Menzie’s level and intellect. But it is useful to students at the beginning of their learning. It is also one sign (of many) Professor Chinn is a great teacher for taking the time do do so.
As far as people picking apart your definitions Menzie, I give you some deep wisdom I wish my parents had taught me when I was younger. “Remember, no good deed ever goes unpunished”
*to do.
Honest to God, no one will believe this, but this keyboard is on its last legs. Though yes, some of it still is my bad tying.
Maybe it would be useful to define “diffusion index” as exemplified by the NY Fed Business Leaders Survey.
https://www.newyorkfed.org/survey/business_leaders/bls_overview