Blanchflower Baloney

A Mark Sadowski post

In a recent post James Alexander caught Danny Blanchflower tweeting that he thought “NGDP totally impractical due to data revisions”.

This is a familiar complaint, voiced for example by Goodhart, Baker and Ashworth in January 2013.

There are numerous problems with this line of thinking.

First of all, central banks shouldn’t be targeting past values of economic variables any more than one should attempt to drive a vehicle on a superhighway by looking in the rearview mirror. Arguably the world’s major central banks tried doing exactly that in 2008, and we are still living with the results. Since central banks should only be targeting the expected values of economic variables, bringing up the issue of data revisions reveals a level of obtuseness that borders on the ridiculous.

Second, as irrelevant as the issue of data revisions is to the proper conduct of monetary policy, the fact that NGDP levels tend to be revised certainly does not imply that inflation rates are not. In fact the personal consumption expenditures price index (PCEPI), whose inflation rate is the official target of the Federal Reserve, often undergoes significant revisions.

There are two main ways of measuring the size of revisions to the components of the national income and product accounts: 1) Mean Revision (MR) and 2) Mean Absolute Revision (MAR). For rate targeting, MAR is the more appropriate measure, and in fact the MAR of inflation is usually smaller than the MAR of NGDP. For level targeting, however, MR is the more appropriate measure.
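The distinction between the two measures can be sketched as follows. This is an illustrative example with made-up revision figures, not official BEA data: signed revisions offset one another in the MR, while their magnitudes accumulate in the MAR.

```python
# Illustrative sketch (hypothetical numbers, not official BEA data):
# comparing Mean Revision (MR) and Mean Absolute Revision (MAR) for a
# series of final-minus-advance revisions, in percentage points.

def mean_revision(revisions):
    """MR: signed revisions are averaged, so offsetting errors cancel.
    Relevant for level targeting, where misses in opposite directions
    net out over the level path."""
    return sum(revisions) / len(revisions)

def mean_absolute_revision(revisions):
    """MAR: magnitudes are averaged, so errors do not cancel.
    Relevant for rate targeting, where every period's miss matters."""
    return sum(abs(r) for r in revisions) / len(revisions)

# Hypothetical quarterly revisions to a growth rate
revisions = [0.5, -0.3, 0.4, -0.2, 0.3]

print(mean_revision(revisions))           # small: signed errors offset
print(mean_absolute_revision(revisions))  # larger: magnitudes accumulate
```

Note how a series whose revisions mostly cancel can have a small MR even when its MAR is large, which is why the two measures can rank NGDP and inflation revisions differently.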

Interestingly, at least in the US (Page 27):

“The MRs for the price indexes for GDP and its major components are generally not smaller than those for real GDP and current-dollar GDP and its major components.”

In fact, over 1983-2009 the MR for the final revision to quarterly NGDP is 0.14 percentage points, whereas over 1997-2009 the MRs for the final revisions to the GDP deflator and the PCEPI are 0.20 and 0.12 percentage points respectively. And over time the revisions have trended downward.

So I suspect that, measured over the common 1997-2009 period, the MR for NGDP is smaller than the MR for the PCEPI.

Which means the claim you frequently hear that NGDP revisions are larger than inflation revisions is pure grade A horse manure. You will never see any evidence supporting this mindlessly repeated spurious claim, because no such evidence exists.

And, finally, inflation is a totally artificial construct requiring that we come up with an estimate of the extraordinary abstraction known as the “aggregate price level.” To see how preposterous this is, imagine trying to equate the aggregate price level today with what it was in, say, 14th-century England. The goods and services are so different it requires the complete suspension of one’s disbelief.

In particular, PCEPI inflation is the difference between the growth rates of nominal PCE and real PCE, meaning PCEPI inflation is nothing more than the estimated residual between a truly nominal variable, which is relatively straightforward to measure, and a real variable, whose measurement is fundamentally an exercise in crude approximation.
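The residual point can be made concrete with a small sketch. The figures here are illustrative, not actual PCE data: the implied price index is simply nominal PCE divided by real PCE, so its inflation rate falls out as the gap between the two growth rates.

```python
# Sketch of the "residual" point (illustrative numbers, not actual PCE data):
# the implicit price deflator is nominal PCE / real PCE, so its inflation
# rate is whatever is left over between the two growth rates.

nominal_pce_growth = 0.045  # nominal PCE growth: directly measurable
real_pce_growth = 0.025     # real PCE growth: depends on deflation methods

# Exact residual: (1 + n) / (1 + r) - 1, approximately n - r for small rates
implied_inflation = (1 + nominal_pce_growth) / (1 + real_pce_growth) - 1

print(round(implied_inflation, 4))  # close to the 0.045 - 0.025 = 0.02 gap
```

Any revision to the real series therefore shows up, point for point, as a revision to measured inflation.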

It’s high time that central banks moved beyond the near medieval practice of targeting real variables and/or their residuals, and started targeting truly nominal variables, which according to the accepted tenets of monetary theory is their proper domain.

8 thoughts on “Blanchflower Baloney”

  1. Thanks, Mark, I thought there was more on inflation revisions, even though “revising” such a slippery concept is like trying to sculpt with water.

    Blanchflower is a fast mover, even though it’s good he debated. I asked him what he’d target: “stable prices and maximum employment”. When challenged on whether maximum employment meant anything given that the Phillips Curve was discredited, he got technical: the “unstable phillips curve is just a misspecified (stable) wage curve”. Apart from the deep complexity of unstable curves, his specialist area is “the wage curve”. But this “curve” looks like nothing more than a minor supply-side constraint, with nothing to do with AD:
    https://t.co/XkrG22hrFB

    He said his books and website had more detail and then signed off.

  2. Excellent blogging. I also think the Fed could do a better job of collecting data in real time. For example, there are gobs of data now on Zillow and other real estate sites, and from what I understand retailers now know down to the pack of gum what they have sold in a day. It seems to me that the Fed should have a good handle on nominal results day by day. Has anyone looked into this?

  3. Ben, market professionals most certainly do; I know from experience. Asset managers have private access to dozens of up-to-the-minute retail surveys conducted by private research firms. Banks and card companies (Visa, M, Amex) receive the same data on their customers’ behaviour (bricks and clicks), using it themselves and/or selling it. The Fed is in the Stone Age, comparatively.

  4. Pingback: TheMoneyIllusion » An important new paper on NGDP targeting

  5. Same comment to Sadowski that I ask Sumner: if a central bank targets NGDP, prints money, and member banks either don’t lend the money and/or the public takes the money and hoards it, so NGDP does not rise, then what?

    BTW, “First of all, central banks shouldn’t be targeting past values of economic variables anymore than one should attempt to drive a vehicle on a superhighway by looking in the rearview mirror” – is a bad analogy. Most of the time, on any US-style superhighway, which is built to exacting specs (unlike in other parts of the world), the radius of curvature of a highway is pretty constant and predictable, meaning that what you just drove past is a good predictor of what you will encounter. So if you are in a right-hand curve, such as a cloverleaf, you’ll get more of the same, at the same rate of turn, and if you’re on a straight road, the road will continue to be straight. That’s why successful mutual fund managers say that ‘past performance *is* a good predictor of future success’. But anyway, I understand the desire by Sadowski to somehow look into the future, and control it. Problem is, it just can’t be done as money is neutral, but that’s a topic for another thread.

  6. I’m a supporter of NGDPLT. But there may be more legitimacy to the data revision critique than I think is being acknowledged. Example. If the Fed were targeting 4% annual growth (1% quarterly), then it would always be looking forward and bygones would be bygones. (I’d prefer that to our 2% inflation target). But one reason for NGDPLT would be to not let bygones be bygones – to force the Fed to compensate for past misses. So if on Day 1 NGDP = 100, then at the end of year 5 it should be 120 (ignoring compounding for simplicity). So 3 years in, the data is saying we are at 112 and we are all pleased with that result and we’re all expecting 8% NGDP growth over the next 2 years (and 20% over the next 5, etc). But say the data is revised the next day. That NGDP is really at 114, not 112. Should the Fed still use the NGDPLT and target 120 at the end of Year 5 – only 3% annually for the next 2 years? Or start on a new 4% NGDPLT from here (letting bygones be bygones and essentially converting LT into rate targeting)? Is the answer symmetric, in that if the revision is to 110 we now need two years at 5% each? I’d like to hear your opinions on those two questions. My inclination is that LT is still the better option, but lately I’ve become concerned that my opinion is being biased by the fact that the Fed has chronically undershot its inflation target.

    • “Should the Fed still use the NGDPLT and target 120 at the end of Year 5 – only 3% annually for the next 2 years? Or start on a new 4% NGDPLT from here (letting bygones be bygones and essentially converting LT into rate targeting)? Is the answer symmetric, in that if the revision is to 110 we now need two years at 5% each?”

      Hypothetically, yes and yes. But a 2% revision would be enormous given the Mean Revision over 1983-2009 was only 0.14 percentage points. To my knowledge the only revision that was ever that big was the methodological revision involving the addition of R&D spending. But as Scott Sumner has noted,

      “…there’s pretty general agreement that monetary policymakers would allow “base drift” in those cases; they’d raise the target by the amount of the upward bump from the new definition of NGDP.”

      http://www.themoneyillusion.com/?p=30762

      But even if hypothetically there were a non-methodological revision as large as that, it would pale in comparison to the enormous shifts in level trend experienced during the Great Inflation or, more recently, during the Great Recession. The macroeconomic consequences of such a revision would be relatively minor compared to the extremes that have been experienced historically.
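The level-targeting arithmetic in the question above can be sketched directly. The function below is a hypothetical illustration using the commenter’s own numbers (a base of 100, simple growth, compounding ignored as in the example), not actual NGDP data:

```python
# Hypothetical sketch of the question's arithmetic under strict NGDPLT,
# using the commenter's numbers: simple growth on a base of 100, no
# compounding, target path of 100 on Day 1 rising to 120 by end of Year 5.

def required_growth_pts(current, target, years_left, base=100):
    """Points of the original base needed per year to return to the
    target level path (simple growth, ignoring compounding)."""
    return (target - current) / years_left / base * 100

print(required_growth_pts(112, 120, 2))  # data as first reported: 4.0/year
print(required_growth_pts(114, 120, 2))  # after upward revision: 3.0/year
print(required_growth_pts(110, 120, 2))  # after downward revision: 5.0/year
```

Under strict level targeting, the revision mechanically changes the required growth rate for the remaining years, which is exactly the tension the question raises.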
