A “Taylor Rule” For Chopper Drops?

A Benjamin Cole post

The British monetary thinker Lord Adair Turner contends that helicopter drops, at least those designed as money-financed fiscal programs (MFFPs), will work to counter recessions and slow growth, and macroeconomists know they will work.

Turner says the usual objection to chopper drops is political—a fear the money-printers will gain the upper hand, rout common sense, and charge to hyperinflation. Turner may be too kind; some people oppose chopper-drops due to sheer dogma.

So, why not a “Taylor Rule” to guide or restrict MFFPs?

The Taylor Rule

Here is one version of the Taylor Rule:

i = r* + pi + 0.5(pi - pi*) + 0.5(y - y*)

Where i is the nominal federal funds rate, r* is the equilibrium real federal funds rate, pi is the rate of inflation, pi* is the target inflation rate, y is the logarithm of real output, and y* is the logarithm of potential output.
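In plain code, with Taylor's classic 1993 calibration of 2% for both r* and pi* (illustrative values, not official Fed parameters), the rule reads:

```python
def taylor_rate(pi, y_gap, r_star=2.0, pi_star=2.0):
    """Nominal federal funds rate implied by the Taylor Rule.

    pi      -- current inflation rate (%)
    y_gap   -- output gap, 100 * (log y - log y*) (%)
    r_star  -- equilibrium real rate (%), assumed here to be 2
    pi_star -- inflation target (%), assumed here to be 2
    """
    return r_star + pi + 0.5 * (pi - pi_star) + 0.5 * y_gap

# Inflation on target and a closed output gap give a 4% funds rate.
print(taylor_rate(pi=2.0, y_gap=0.0))  # 4.0
```

With inflation at 1% and a -2% output gap, the same rule prescribes 1.5%, which shows how quickly it runs out of room when deflation looms.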

True, the Taylor Rule makes less sense when deflation becomes the norm, and there seems to be no provision for quantitative easing (QE), although Taylor has gushed about the use of QE in Japan.

And, as even with Market Monetarist NGDPLT, there can be agendas hidden in the little numbers of the Taylor Rule. For example, a tight-money fanatic could praise NGDPLT, as long as the target was a 2% increase for every year.

Marcus Nunes would add that monetary rules should target results, not process. Still, when it comes to helicopter drops, some rules might provide comfort.

Chopper Drop By Code?

So let’s listen to Marcus and target 6% NGDPLT.

How should a chopper drop code or formula read?

How about this: for every 1% that annual NGDP growth falls below a 6% NGDPLT target, $100 billion of money-financed fiscal policy is triggered, preferably through cuts in payroll taxes.

In this plan, payroll taxes would be cut by $100 billion for every 1% shortfall from target, and the Federal Reserve would print up $100 billion and turn it over to the Social Security and Medicare trust funds.
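The proposed rule is simple enough to fit in a few lines; the 6% target and $100 billion per percentage point are just the numbers floated here, not settled parameters:

```python
def chopper_drop(ngdp_growth, target=6.0, bn_per_point=100.0):
    """Money-financed payroll-tax cut ($bn) implied by the proposed rule.

    For every percentage point that annual NGDP growth falls short of
    the target, bn_per_point billion dollars is printed and credited
    to the trust funds.  Above target, no drop occurs.
    """
    shortfall = max(0.0, target - ngdp_growth)
    return bn_per_point * shortfall

print(chopper_drop(4.0))  # NGDP growing 4%: a $200bn drop -> 200.0
print(chopper_drop(6.5))  # above target: no drop -> 0.0
```

The point of coding the rule at all is that the formula, not discretion, sizes the drop, which is exactly the comfort a "Taylor Rule for chopper drops" is meant to provide.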

No doubt some readers will have a great deal of uneasiness with this proposal.

But do the current Rube Goldberg arrangements of the Federal Reserve, working through the 22 primary dealers, conducting the buying and selling of Treasuries on the open market, and passing through interest but not principal from the Fed to the U.S. Treasury, really make sense? Would anyone design such a system from scratch?

Furthermore, the present-day claptrap system relies to a major extent on private-sector, but extraordinarily regulated, commercial-bank lending to expand economic output. But banks are loath to make unprofitable loans, and the bulk of bank loans are on property. In other words, the Fed is trying to stimulate the economy through property markets, or (more usually) to apply the monetary noose. The noose we saw in 2008, btw.


As many have noted, the targeting of interest rates and inflation is off-center. The target should be the expansion of nominal GDP, and preferably NGDPLT.

Surely, any rules that apply to helicopter drops could be tweaked, although the simpler, the better.

And in the end, it does not matter whether inflation is 1.4% or 2.1%, though the current FOMC, and the cult of central banking, seems to regard such trivia as being of Titanic importance.

What matters is sustained growth of NGDPLT, and a nation that consistently pursues pro-business policies.

PS I wonder if federal agencies, including the Federal Reserve, are collecting data as they could. With the advent of bar codes, many national retailers know daily sales. National hotel chains and airlines at any moment know their room and seat counts. Many traffic-monitoring systems exist. Is it not possible to generate a fairly accurate, timely picture of NGDP, and adjust helicopter drops accordingly?
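As a toy illustration of such a nowcast, high-frequency growth proxies could be weighted into a single NGDP signal that feeds the drop rule; every series name, weight, and figure below is invented for illustration only:

```python
# Hypothetical daily NGDP nowcast built from high-frequency proxies
# (scanner sales, hotel/airline occupancy, traffic counts).
# All series names, weights, and growth figures are made up.
indicators = {          # year-over-year % growth of each proxy
    "retail_scanner_sales": 3.8,
    "hotel_airline_occupancy": 2.1,
    "traffic_volume": 1.5,
}
weights = {             # assumed shares in the composite signal
    "retail_scanner_sales": 0.5,
    "hotel_airline_occupancy": 0.3,
    "traffic_volume": 0.2,
}

ngdp_nowcast = sum(weights[k] * v for k, v in indicators.items())
print(f"nowcast NGDP growth: {ngdp_nowcast:.2f}%")  # 2.83%
```

A nowcast like this, however rough, would let the drop formula respond within weeks rather than waiting for quarterly GDP releases and their revisions.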

“Looking for Wally when there are many Wallies”

That well describes the challenge faced by monetary policymakers, according to this piece from Bloomberg Business, “Are we tight yet? The Fed´s problem in finding the neutral rate”:

Federal Reserve officials just aren’t sure how much stimulus their zero-interest-rate policy is providing.

At issue is the level of the so-called natural, neutral or equilibrium rate of interest, which is the borrowing cost — adjusted for inflation — that keeps the economy at full employment with stable prices.

Economists from the academic world and even within the central bank are vigorously airing differing views on where the rate lies in the aftermath of the worst recession since the Great Depression. The uncertainty is yet another reason for Fed officials to go slowly as they begin raising interest rates for the first time since 2006.

According to this older piece from Bruegel:

What’s at stake: The natural rate of interest is a key ingredient in the recent discussion of secular stagnation, and more generally in New-Keynesian models of the Great Recession. But the concept is often poorly understood, in part because the term refers to different things for different people.

A couple of examples:

Richard Anderson writes that the Swedish economist Knut Wicksell based his theory on a comparison of the marginal product of capital with the cost of borrowing money. If the money rate of interest was below the natural rate of return on capital, entrepreneurs would borrow at the money rate to purchase capital (equipment and buildings), thereby increasing demand for all types of resources and their prices; the converse would be true if the money rate was greater than the natural rate of return on capital.

Axel Leijonhufvud writes that Erik Lindahl (1939) and Gunnar Myrdal (1939) refined the conceptual apparatus, in particular by introducing the distinction between ex ante plans and ex post realizations and thereby clarifying the relationship between Wicksellian theory and national income analysis.

And there are several others.

In short, the Fed is faced with an “estimation” problem. To make that clear, think of a Taylor Rule for setting the Fed Funds (FF) rate:

Looking for Wally_1

The circles around the level of “potential output” (y*) and the level of the natural rate (NR) represent the “uncertainty” about their estimated values.

For example, San Francisco Fed senior economist Vasco Cúrdia argued in a paper published earlier this month that the equilibrium rate may have dropped so much that “monetary conditions remain relatively tight despite the near-zero federal funds rate.” He provides a chart which indicates that at present the “natural rate” could be anywhere from -3% to 6%!

Looking for Wally_2

Similar uncertainty surrounds the value of “potential” output.
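To see how hard the “estimation” problem bites, here is a back-of-envelope simulation pushing uncertain inputs through a Taylor Rule. The natural-rate band echoes the -3% to 6% spread from Cúrdia's chart; the -4% to 1% output-gap band and the 1.5% inflation figure are simply assumed for illustration:

```python
import random

def taylor_rate(pi, y_gap, r_star, pi_star=2.0):
    """Taylor Rule with an explicit natural-rate input."""
    return r_star + pi + 0.5 * (pi - pi_star) + 0.5 * y_gap

random.seed(0)
# Draw the two uncertain inputs uniformly from their assumed bands
# and see what range of "prescribed" funds rates comes out.
draws = [
    taylor_rate(
        pi=1.5,
        y_gap=random.uniform(-4.0, 1.0),   # assumed output-gap band
        r_star=random.uniform(-3.0, 6.0),  # Curdia-style natural-rate band
    )
    for _ in range(10_000)
]
print(f"implied FF rate: {min(draws):.1f}% to {max(draws):.1f}%")
```

The prescribed rate spans roughly ten percentage points, which is to say the rule prescribes almost nothing until Wally is found.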

In essence, facing the “estimation” problem, the situation of monetary policy makers is well captured by this picture!

Looking for Wally_3

An alternative, to try to overcome the “estimation” problem, would be for the Fed to try some “experimentation”.

That has happened before. In March 1933, in the depths of the Great Depression, President Roosevelt decided to “innovate” and free the economy from its “gold standard shackles” by delinking the dollar from gold. The effect was immediate, as illustrated below.

More recently, in the heights of the Great Inflation, Paul Volcker also decided to innovate:

On Oct. 6, 1979, the Federal Open Market Committee—under the leadership of Paul Volcker—made a decision that would come to be known as a key moment in U.S. economic policymaking, a turning point in the history of the Federal Reserve that would forever alter central banking. And those are the understatements.

A defining moment may shape the direction of an institution for decades to come. In the modern history of the Federal Reserve, the action it took on October 6, 1979, stands out as such a milestone and arguably as a turning point in our nation’s economic history. (A. Greenspan)

So, what did the FOMC do? It made a short-term change in the method used to conduct monetary policy, from making adjustments in the federal funds rate to containing growth in the monetary aggregates. (Yes, the Fed now targets the funds rate again—the 1979 change was reversed in 1982—but more on that in a minute.) This meant the Fed would focus on controlling the amount of reserves provided to the banking system, which would ultimately limit the supply of money.

Many saw that “experiment” as a failure. Nevertheless, judging by the results, it worked, in that inflation was permanently brought down.

In what follows I´ll give a “liberal” interpretation of the experimentation, based on NGDP. The interpretation is not so farfetched because the NGDP targeting concept was extensively discussed both by the Volcker Fed in 1982 and by the Greenspan Fed in 1992.

The first charts show how rising core inflation was the outcome of rising NGDP growth. The follow-up shows that by “downsizing” NGDP growth, inflation was brought down.

Looking for Wally_4

This was followed by Greenspan´s “consolidation” in 1987-92 and almost “smooth sailing” from then to the end of his mandate in January 2006. These last two periods came to be known as the “Great Moderation”.

Looking for Wally_5

I interpret the “experiment” as trying to find first the level and then the stable growth path for NGDP. As the next chart shows, by 1987 the Fed had “hit” on the NGDP level, and from then onwards the NGDP growth rate was stabilized, i.e. kept close to the trend path.

Looking for Wally_6
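The difference between targeting a level path and targeting a growth rate can be made concrete with a small sketch; the 5.5% trend growth rate below is only an illustrative, Great Moderation-style number, not an estimate from the charts:

```python
def trend_gap(ngdp, base_ngdp, years, trend_growth=0.055):
    """Percent deviation of NGDP from a constant-growth trend path.

    trend_growth is an illustrative Great Moderation-style rate,
    not an official estimate.
    """
    trend_level = base_ngdp * (1.0 + trend_growth) ** years
    return 100.0 * (ngdp / trend_level - 1.0)

# Under level targeting, a year of 3% growth opens a gap that must be
# made up later; growth-rate targeting would "let bygones be bygones".
print(round(trend_gap(ngdp=103.0, base_ngdp=100.0, years=1), 2))  # -2.37
```

Keeping NGDP "close to the trend path" means driving this gap toward zero, which is exactly what the 1987-2005 charts show the Fed doing, whether it knew it or not.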

There were “mistakes” along the way, notably in 1998-2003, when NGDP first rose above trend and then fell below, but by the end of 2005, NGDP was back on trend.

Looking for Wally_7

Soon after taking the Fed´s helm, Bernanke allowed NGDP to begin once more to fall below trend. This was magnified in 2008, probably because of the Fed´s exclusive focus on headline inflation, which was being propelled by an oil and commodity price shock. In an environment where the financial system was “wounded”, allowing NGDP to crumble is mortal!

Looking for Wally_8

At present we have the opposite situation of the 1970s. Instead of high/rising inflation due to rising NGDP growth, we have low/falling inflation due to low/falling NGDP growth. So this time around it may be fruitful to devise an NGDP-based experiment in reverse: try to establish a higher level of NGDP that, when attained, is “consolidated” through a stable NGDP growth rate.

This “experimentation” would be much more helpful than spending time on “estimation” of the “natural rate of interest” or the “potential level of output”.

PS In the comments, bill writes:

“I need to go see the correlation between corporate spreads and NGDP growth. I think those spreads have been widening which I take as a good sign that the market expects less than optimal choices by the Fed in the near future.”

The chart shows how the recent fall in NGDP growth has been accompanied by a rise in the spreads of less-than-stellar bonds over 10yr Treasuries:

Looking for Wally_9

By his own assessment, Dudley is saying the Fed has been a failure

On how monetary policy should be conducted, William Dudley concludes:

What is important for attaining the Federal Reserve’s mandated objectives is not that monetary policy is described in terms of a formal prescriptive rule, but rather that the FOMC’s intentions and strategy are well understood by the public. This argues for clear communication through the FOMC meeting statements and minutes, the FOMC’s statement concerning its longer-term goals and monetary policy strategy, the Chair’s FOMC press conferences and testimonies before Congress, and speeches by the Chair and other FOMC participants.

But it also is important that the strategy be the “right” reaction function.  This means a policy approach that responds appropriately to important factors beyond the two parameters of the Taylor Rule—the output gap estimate and the rate of inflation.

Interesting that each month a new “important factor” butts in!

It´s good when something works both in practice and in theory!

For the past several years, market monetarists have promoted the change from inflation targeting to NGDP level targeting. The analysis was mostly empirical, a fact that made some “wrinkle their noses”. A new model-based paper arrives at the same conclusion:

The design of monetary policy has been the subject of a voluminous and influential literature. In spite of widespread discussion in the press and policy circles, the normative properties of nominal GDP targeting have not been subject to scrutiny within the context of the quantitative frameworks commonly used at central banks and among academic macroeconomists.

The objective of this paper has been to analyze the welfare properties of nominal GDP targeting in comparison to other popular policy rules in an empirically realistic New Keynesian model with both price and wage rigidity. We find that nominal GDP targeting performs well in this model. It typically produces small welfare losses and comes close to fully implementing the flexible price and wage allocation. It produces smaller welfare losses than an estimated Taylor rule and significantly outperforms inflation targeting.

It tends to perform best relative to these alternative rules when wages are sticky relative to prices and conditional on supply shocks. While output gap targeting always at least weakly outperforms nominal GDP targeting, the differences in welfare losses associated with the two rules are small.

Nominal GDP targeting may produce lower welfare losses than gap targeting if the central bank has difficulty measuring the output gap in real time. Nominal GDP targeting always supports a determinate equilibrium, whereas output gap targeting may result in indeterminacy if trend inflation is positive.

Overall, our analysis suggests that nominal GDP targeting is a policy alternative that central banks ought to take seriously.

There are a number of possible extensions of our analysis. Two which immediately come to mind are financial frictions and the zero lower bound. Though our medium scale model includes investment shocks, which have been interpreted as a reduced form for financial shocks in Justiniano et al. (2011), it would be interesting to formally model financial frictions and examine how nominal GDP targeting interacts with those.

Second, our analysis abstracts from the zero lower bound on nominal interest rates. It would be interesting to study how a commitment to a nominal GDP target might affect the frequency, duration, and severity of zero lower bound episodes.

On the last sentence, MM´s have little doubt that, when undertaken, the study will also corroborate their view that the frequency, duration and severity of ZLB episodes would essentially disappear!

Bernanke takes on John Taylor and his (namesake) rule

I think Bernanke is still “taking it easy” in his blogging. I hope he´s “warming up” to what really matters, i.e. explaining why the Fed bungled in 2008!

Bashing the Taylor-rule is easy, even if, like me, you´ve never been a central banker. I did that in a number of posts (two examples, here and here).

In the following paragraph, BB disappoints, and indicates that the bad things that happened after 2008 were not the fault of the Fed. In fact, according to him, the Fed came out ahead of the pack!

As John points out, the US recovery has been disappointing. But attributing that to Fed policy is a stretch. The financial crisis of 2007-2009 was the worst at least since the Depression, and it left deep scars on the economy. The recovery faced other headwinds, such as tight fiscal policy from 2010 on and the resurgence of financial problems in Europe. Compared to other industrial countries, the US has enjoyed a relatively strong recovery from the Great Recession.