For some time the economy has been ‘playing tricks’ on policymakers, in particular those at the Fed concerned with monetary policy.
For more than a year, they have been signaling that the start of ‘policy normalization’ (aka the first rise in rates) was imminent, but at each turn something has happened to thwart their ‘desire’.
It's interesting to note, in that context, the ‘bias’ in some of the research coming out of the research departments of the Federal Reserve System. Some of it indicates that the Fed should start the normalization process now; some that it could still wait a while.
In the first category, the San Francisco Fed has just released a study downplaying the fall in inflation expectations provided by market-based measures:
A substantial decline in market-based measures of inflation expectations has raised concerns about low future inflation. An important question to address is whether the forecasts based on market information are as accurate as alternative forecasting methods. Compared against surveys of professional forecasters and other simple constant measurement tools, market-based inflation expectations are poor predictors of future inflation. This suggests that these measures contain little forward-looking information about future inflation.
Another, from the Richmond Fed, indicates that we're in ‘overtime’, given that the natural rate (aka the Wicksellian rate) has for “a long time” been above the market rate:
The natural rate of interest is a key concept in monetary economics because its level relative to the real rate of interest allows economists to assess the stance of monetary policy. However, the natural rate of interest cannot be observed; it must be calculated using identifying assumptions. This Economic Brief compares the popular Laubach-Williams approach to calculating the natural rate with an alternative method that imposes fewer theoretical restrictions. Both approaches indicate that the natural rate has been above the real rate for a long time.
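The Wicksellian logic behind the brief can be sketched in a few lines. This is a toy illustration only; the rates below are made up, not estimates from Laubach-Williams or the Richmond Fed.

```python
# Toy illustration of reading the monetary policy stance from the gap
# between the (unobserved, estimated) natural rate and the actual real
# rate. All numbers are hypothetical.

def policy_stance(natural_rate: float, real_rate: float) -> str:
    """Wicksellian reading: a real rate below the natural rate implies
    an accommodative stance; above it, a tight stance."""
    gap = natural_rate - real_rate
    if gap > 0:
        return "accommodative"
    if gap < 0:
        return "tight"
    return "neutral"

# Hypothetical values: natural rate estimated at 1.0%, real rate at -1.0%
print(policy_stance(1.0, -1.0))  # accommodative
```

On this reading, a natural rate persistently above the real rate says policy has been easy "for a long time", which is the brief's case for starting normalization.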
Going in the other direction, Mark Thoma discusses the ZPOP measure of the labor market developed by the Atlanta Fed:
The Fed has a difficult job. It must assess how close the U.S. is to full labor force utilization, and how that translates into inflation risk. Both steps of that process involve considerable uncertainty. The Atlanta Fed’s new ZPOP measure attempts to provide additional clarity, but as the researchers acknowledge, this measure isn’t perfect. In the end, the Fed will always have to make its monetary policy decisions based on incomplete information about the economy.
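The idea behind ZPOP can be sketched as a simple ratio. As I understand the Atlanta Fed's description, it counts someone as ‘utilized’ if they are employed full-time or are part-time by choice, and divides by the 16-and-over population; the exact definitions and the numbers below are illustrative assumptions, not BLS data.

```python
# Hedged sketch of a ZPOP-style "utilization-to-population" ratio:
# the share of the 16+ population that is fully utilized, i.e.
# employed full-time or voluntarily part-time. Inputs are in millions
# and are made up for illustration.

def zpop(full_time: float, voluntary_part_time: float,
         population_16_plus: float) -> float:
    """Share of the 16+ population counted as 'fully utilized'."""
    return (full_time + voluntary_part_time) / population_16_plus

# Made-up numbers (millions), not actual BLS figures:
print(round(zpop(120.0, 20.0, 250.0), 3))  # 0.56
```

Unlike the unemployment rate, a measure built this way is not distorted by people dropping out of the labor force or being pushed into involuntary part-time work, which is what makes it attractive in a ‘depressed’ economy.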
The panel below extends the ZPOP chart of Thoma and the Atlanta Fed to show how the story could have been different (in which case we wouldn't be unduly worried about which measure of inflation expectations is best, whether interest rates are below the ‘natural rate’, or which labor market indicator is better).
First off, the best thing would have been for the Fed NOT to have allowed nominal spending (NGDP) to tank! Since it did, the next best thing would have been to crank spending up at a higher rate, to get as close as possible to the original trend level path.
Don't argue that this was impossible: if the Fed can ‘choose’ one level of spending, it can ‘choose’ any. And look how it allowed NGDP to grow at a higher rate after the mistake of 2001/02. This time around, it stopped the rise in NGDP growth too soon!
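The ‘trend level path’ argument is easy to make concrete: fit a log-linear trend to pre-crisis NGDP, extrapolate it, and measure how far the actual level sits below the path. The data below are synthetic, chosen only to mimic roughly 5% pre-2008 growth followed by a one-off collapse and slower growth; they are not actual BEA figures.

```python
# Sketch of the 'trend level path' gap. Synthetic NGDP series:
# ~5% nominal growth through 2008, a ~7% level drop in 2009,
# then ~4% growth. Illustrative only.

import numpy as np

years = np.arange(1998, 2016)
ngdp = np.empty(len(years))
ngdp[0] = 100.0
for i in range(1, len(years)):
    growth = 0.05 if years[i] <= 2008 else 0.04
    ngdp[i] = ngdp[i - 1] * (1 + growth)
ngdp[years >= 2009] *= 0.93  # one-off 2008-09 collapse in the level

# Fit a log-linear trend on the pre-crisis years and extrapolate it.
pre = years <= 2007
slope, intercept = np.polyfit(years[pre], np.log(ngdp[pre]), 1)
trend = np.exp(intercept + slope * years)

gap_2015 = ngdp[-1] / trend[-1] - 1
print(f"NGDP gap vs pre-crisis trend in 2015: {gap_2015:.1%}")
```

The point of the exercise: because the level drop is never made up and the growth rate afterwards is lower, the shortfall from the old path keeps widening year after year rather than closing, which is exactly the "depressed economy" the text describes.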
The Fed has allowed the economy to remain ‘depressed’. And in a ‘depressed’ economy, the ‘gauges’ of performance most likely behave differently, and that's causing a lot of anxiety.
Note: But there are the diehard RBCers, like Ellen McGrattan, who writes in Monetary Policy and Employment:
Neither conventional nor unconventional monetary policy has much of an impact on employment. What does? Factors that drive the labor-leisure decision.