The crisis next time
What we should have learned from 2008
At the turn of this century, most economists in the developed world believed that major economic disasters were a thing of the past, or at least relegated to volatile emerging markets. Financial systems in rich countries, the thinking went, were too sophisticated to simply collapse. Markets were capable of regulating themselves. Policymakers had tamed the business cycle. Recessions would remain short, shallow, and rare.
Seven years later, house prices across the United States fell sharply, undercutting the value of complicated financial instruments that used real estate as collateral—and setting off a chain of consequences that brought on the most catastrophic global economic collapse since the Great Depression. Over the course of 2008, banks, mortgage lenders, and insurers failed. Lending dried up. The contagion spread farther and faster than almost anyone expected. By 2009, economies making up three-quarters of global GDP were shrinking. A decade on, most of these economies have recovered, but the process has been slow and painful, and much of the damage has proved lasting.
“Why did nobody notice it?” Queen Elizabeth II asked of the crisis in November 2008, putting a question that economists were just starting to grapple with. Ten years later, the world has learned a lot, but that remains a good question. The crash was a reminder of how much more damage financial crises do than ordinary recessions and how much longer it takes to recover from them. But the world has also learned that how quickly and decisively governments react can make a crucial difference. After 2008, as they scrambled to stop the collapse and limit the damage, politicians and policymakers slowly relearned this and other lessons of past crises that they never should have forgotten. That historical myopia meant they hesitated to accept the scale of the problem and use the tools they had to fight it. That remains the central warning of 2008: countries should never grow complacent about the risk of financial disaster. The next crisis will come, and the more the world forgets the lessons of the last one, the greater the damage will be.
CRASH LANDING
Before 2007, most economists had been lulled into a false sense of security by the unusual economic calm of the preceding two and a half decades. The most recent U.S. recession, in 2001, had been shallow and brief. Between 1990 and 2007, rates of economic growth in the United States varied far less than they had over the previous 30 years. The largest annual decline in GDP was 0.07 percent (in 1991), and the largest increase was 4.9 percent (in 1999). Inflation was low and steady.
The period came to be known as “the Great Moderation.” The economists Olivier Blanchard and John Simon reflected the views of their profession when they wrote in 2001, “The decrease in output volatility appears sufficiently steady and broad based that a major reversal appears unlikely. This implies a much smaller likelihood of recessions.” That view ignored much of what was happening outside the West, and even those whose perspectives stretched back more than just two decades tended to look back only to the end of World War II. In that narrow slice of history, the U.S. economy had always grown unless the Federal Reserve raised interest rates too high. When it did, the Fed reversed course and the economy recovered quickly. That assurance was further bolstered by another belief about downturns. Economists compared them to plucking a guitar string: the more forcefully it is pulled, the faster it snaps back. More painful recessions, the wisdom went, produce more vigorous recoveries.
Yet had economists looked farther afield, they would have realized that financial crises were by no means a thing of the past—and that they have always led to particularly large and persistent losses in economic output. As research we have published with the economist Kenneth Rogoff since the crisis demonstrates, systemic financial crises almost invariably cause severe economic downturns, and the string does not snap back. This fact screams out from every aspect of the historical record. Data on the 14 worst financial crises between the end of World War II and 2007 show that, on average, those crises led to economic downturns that cost the affected countries seven percent of GDP, a much larger fall than in most recessions not preceded by a financial crisis.
In the worst case, when a financial crisis in 2001 forced Argentina to default on over $100 billion in foreign debt, the country’s GDP per capita fell more than 21 percent below its prior peak. On average across these episodes, per capita GDP took four years to regain its pre-recession level. That is far longer than after normal recessions, when growth does snap back. In the worst case, after Indonesia was hit by the 1997 Asian financial crisis, it took the country seven years to recover.
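The two measures used in this comparison (the peak-to-trough fall in per capita GDP and the number of years needed to regain the pre-crisis level) are straightforward to compute from an annual series. The sketch below is a minimal illustration in Python, assuming a series indexed to 100 in the pre-crisis year; the function names and sample figures are hypothetical and are not drawn from any actual country’s data.

```python
# Minimal sketch: the two crisis-severity measures discussed above,
# computed from an annual real GDP per capita series.
# The sample path below is hypothetical, not any actual country's data.

def peak_to_trough_decline(series):
    """Percentage fall from the pre-crisis level (first value) to the trough."""
    peak = series[0]
    trough = min(series)
    return (peak - trough) / peak * 100

def years_to_recover(series):
    """Years until per capita GDP regains its pre-crisis level, or None if it has not."""
    peak = series[0]
    for year, value in enumerate(series[1:], start=1):
        if value >= peak:
            return year
    return None  # not yet recovered within the sample

# Hypothetical path, indexed to 100 in the pre-crisis year.
gdp_per_capita = [100, 95, 91, 94, 97, 101]

print(peak_to_trough_decline(gdp_per_capita))  # 9.0 (a nine percent fall)
print(years_to_recover(gdp_per_capita))        # 5 (five years to recover)
```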
Also noteworthy was that all 14 of the largest postwar crises took place after the mid-1970s. The three decades beginning in 1946 were unusually free of financial catastrophe. (That likely resulted from the tight controls on international flows of capital that formed part of the Bretton Woods system, a reminder that although open global capital markets bring major benefits, they also produce volatility.) But the crises after the mid-1970s show how much economic output can be lost when capital flows come to a sudden stop—and how hard it can be to recover from the downturn.
Given this background, the economic pain that followed the financial crisis of 2007–8 should have come as no surprise. As it turned out, the effects were even worse than history would have predicted. We compiled data from the 11 economies that suffered the deepest crises in 2007–8, as assessed by the loss of wealth in the stock market and the housing market, the market capitalization of the financial institutions that failed, and the amount governments spent on bailouts. On average, these countries saw a nine percent drop in real GDP per capita, compared with the seven percent average fall among the countries hit by the 14 worst previous postwar financial crises.
Not only did output fall further; it also recovered more slowly. On average, it took over twice as long to regain the ground lost in the recession—nine years rather than four. Some countries have still not fully recovered. The economies of Greece and Italy experienced falls in per capita GDP—26 percent and 12 percent, respectively—of a scale seldom seen in the modern world outside of wartime. Today, both countries have yet to climb back to the levels of per capita GDP they reached in 2007. They are not likely to anytime soon. According to the latest forecast from the International Monetary Fund, per capita real GDP in Greece and Italy will come in below 2007 levels through 2023, as far out as the IMF forecast runs.
THE LONG ROAD BACK
Financial crises do so much economic damage for a simple reason: they destroy a lot of wealth very fast. Typically, crises start when the value of one kind of asset begins to fall and pulls others down with it. The original asset can be almost anything, as long as it plays a large role in the wider economy: tulips in seventeenth-century Holland, stocks in New York in 1929, land in Tokyo in 1989, houses in the United States in 2007.
From their peak at the end of 2006 to their nadir at the beginning of 2009, U.S. house prices fell so far that the average American homeowner lost the equivalent of more than a year’s worth of disposable income. The destruction of wealth was about 50 percent larger than that resulting from the previous major financial shock, the technology stock crash in 2000. The bursting of the tech bubble led to a brief and shallow downturn. The collapse of house prices triggered something far more serious. The crucial difference lay in the form of wealth destroyed.
The 2000 crash had little effect on the wider economy since most people and institutions owned few technology stocks. In 2007, by contrast, houses formed a major part of most Americans’ wealth, and banks around the world had used them as collateral in vast numbers of complicated and opaque financial instruments. Once house prices fell and Americans began defaulting on their mortgages, the elaborate system of financial obligations built on top of U.S. household debt came crashing down. The bursting of the tech bubble was just a stock market crash; the bursting of the housing bubble became a systemic financial crisis.
Recovering from a severe financial crisis typically involves three stages. First, the affected country must acknowledge the extent of the wealth that has been destroyed. This can take some time if the assets that have collapsed in value are held by institutions that have opaque balance sheets and are protected by risk-averse government agencies. The housing market, for example, often takes some time to register falling prices: owners hang on in the hope of better days ahead, and even when prices keep falling, legal protections in bankruptcy law delay forced sales. The many complicated instruments that use those underlying mortgages as collateral can also be slow to adjust, as they are difficult to price and banks often cannot sell them once trading dries up in a crisis. All too often, the day of reckoning is put off as banks fail to acknowledge how much the value of their assets has fallen.
Financial supervisors sometimes look the other way because once they have admitted the depth of the problem, their governments will be forced to take the second step: allocating the losses among their citizens. Even doing nothing marks an implicit decision, as the people and financial institutions that own the assets then have to bear all the pain. Officials usually worry that major banks are too important to the economy to go under, so the government steps in with bailouts.
Understanding these first two steps goes a long way toward explaining why countries have followed such different paths since the financial crisis. In the United States, the Troubled Asset Relief Program, passed by Congress in 2008, and the bank stress tests that followed, which measure whether banks have enough capital to survive another catastrophe, created a formal mechanism to recognize and allocate losses. Europe, by contrast, lagged behind the United States because some governments were unwilling to admit how much wealth had been destroyed. And even once European governments did face the facts, some were reluctant to allocate the losses within their own economies because of the international nature of the institutions affected and, in a few cases, the sheer magnitude of the sums involved.
The five European countries hit hardest by the crisis were Greece, Iceland, Ireland, Italy, and Spain. The governments of Iceland, Ireland, and Spain ultimately forced their banks to acknowledge their losses and relieved them of some of the junk weighing down their balance sheets. But Greece and Italy lumbered along with impaired banking systems, which acted as a drag on economic activity. Partly out of concern about the consequences for their own balance sheets if they bailed out the banks—they had the continent’s highest levels of debt even before the crisis—the Greek and Italian governments looked the other way for as long as possible.
THE CENTRALITY OF CENTRAL BANKS
The final step in dealing with a financial crisis is for governments and central banks to do what they can to blunt the effects. How far the economy falls and how fast it recovers depend on what tools officials have available—and on how willing they are to use them.
For most advanced economies, the events of 2008–9 will go down in history as “the Great Recession,” not “the Second Great Depression.” That should stand as a credit to the governments that prevented a new depression by actively managing their economies. This was a far cry from the 1920s and early 1930s, when politicians believed that it was best to let the economy correct on its own. From 2008 to 2011, across the 11 countries hit hardest by the crisis, governments spent an average of 25 percent of GDP on stimulating their economies.
But they all could have done better. The United States spent heavily but too slowly and inefficiently. In Europe, too much of the stimulus went toward picking up the pieces of failed financial institutions, which provided neither the immediate fillip that would have come from policies such as increasing unemployment compensation nor the permanent benefits that would have come from building infrastructure. Moreover, the governments that had the most room to borrow—most notably Germany, the eurozone’s largest economy—did the least to boost spending.
Europe also wasted valuable time when it came to monetary policy. In 2012, Mario Draghi, the president of the European Central Bank, famously promised to “do whatever it takes to preserve the euro.” He kept his word, lowering interest rates below zero and buying vast quantities of financial assets. But his U.S. counterpart, Ben Bernanke, the chair of the Federal Reserve, was more than three years ahead of him. In December 2008, the Fed cut the nominal interest rate to zero and then, over the next four years, broke new ground in several other ways. It launched new lending programs that accepted collateral the Fed had never before touched, from institutions it had never before lent to. And it bought huge tranches of financial assets, inflating its balance sheet to an unprecedented size, around one-quarter of U.S. nominal GDP.
Yet as effective as central banks ultimately were at fighting the crisis, they may come to regret their unconventional policies, some of which had the effect of reducing democratic accountability. In particular, the Fed supported financial institutions well beyond the commercial banks it usually deals with. It lent to investment banks and an insurance company and directly supported money market mutual funds (which buy and sell mostly short-term government and commercial debt) by starting special lending programs that accepted a wider range of collateral than was normal. By lending to some institutions and industries but not others, the Fed affected their stock prices and the relative interest rates they had to pay. That effectively favored specific companies and industries and, by extension, the people who owned the companies and lent to them. Such policies are normally carried out by Congress. But now that the Fed has opened the door to them, future politicians might see an opportunity to achieve some public policy aim by supporting a particular company or industry without having to pass a law; if politicians believe that they only need to make a phone call to the Fed, the Fed could come under far more political pressure than in the past.
Early on in the crisis, the Fed also extended huge loans to foreign commercial banks and central banks that needed dollar funding, obviating the need for the U.S. Treasury to supply them with dollars. That saved the Treasury from having to report the actions to Congress, which would have looked askance at U.S. government support for foreign banks. Another legacy of the central banks’ assertiveness is that the majority of government debt that is sold on the open market in the United States, Japan, and the eurozone is now owned by central banks. That threatens to erode central bank independence by tempting politicians to fix their budgetary math by forcing central banks to write off the government debt they own. Concentrating government debt in official coffers has also made the private market for it less liquid, which might make it harder for governments in advanced economies to borrow more in the future.
The regulated banking sector is healthier today than it was before the crisis, both because a generation of bankers was chastened by the crash and because governments have erected a firmer scaffold of regulation around it. In the United States, the Dodd-Frank Act of 2010 gave regulators the power to force the largest banks to significantly increase the amounts of capital they hold, making them better able to absorb losses in the future. Banks were also induced to reduce their reliance on short-term funding, giving them more breathing room should their access to short-term markets be shut off, as it was in 2008.
To make sure banks are following the rules, regulators now supervise them more closely. Large banks undergo regular stress tests, with real money riding on the results. If a bank fails the test, regulators may block it from paying dividends to shareholders, buying back its own shares, or expanding its balance sheet. Large banks also have to show that their affairs are in order by submitting “living wills” to the authorities that detail how their assets and liabilities will be allocated should they go bust. This has made the traditionally opaque business of banking a little more transparent.
Yet the fatal flaw in the Dodd-Frank Act is its focus on the mainstream banking system. The act makes it more expensive for banks to operate by forcing them to hold more capital, pay more for longer-term funding, and comply with increased reporting requirements. But American attitudes toward risk taking remain the same: aspiring homeowners still want to borrow, and investors still want to lend to them. By making it more expensive to take out a mortgage with a mainstream bank, regulators have shifted borrowing and lending from the monitored sector to the unmonitored one, with homebuyers increasingly turning to, say, Quicken Loans rather than Wells Fargo. A majority of U.S. mortgages are now created by such nonbanking institutions, also known as “shadow banks.” The result is that the financial vulnerability remains but is harder to spot.
After a crisis, the regulatory pendulum typically swings too far, moving from overly lax to overly restrictive. Dodd-Frank was no exception; it swept more institutions into a burdensome compliance scheme than was necessary to limit systemic risk. The Trump administration has undone some of this overreach by directing agencies to regulate less aggressively and by signing legislation that amends Dodd-Frank. So far, the changes have mostly trimmed back excesses. But at some point, if efforts to cut the fat continue, they will reach the meat of the supervisory process.
DOOMED TO REPEAT
Ten years after the financial crisis, what have we learned? The most disquieting lesson is how complacent politicians, policymakers, and bankers had grown before the crisis and how much they had forgotten about the past. It shouldn’t have taken them as long as it did to relearn what they should have already known.
Several other specific lessons stand out. First, authorities must follow the three-step process of dealing with a crisis—admit the losses, decide who should bear them, and fight the ensuing downturn—as quickly as possible. Delay allows problems to fester on bank balance sheets, increasing the ultimate cost of bailing out the financial system. This was the mistake Europe made and the United States avoided.
The second lesson of the crash is that a system of fixed exchange rates can turn into an economic straitjacket. When aggregate demand falls sharply, central banks usually respond by cutting interest rates and using every other tool at their disposal to get the economy going again. The effect of such policies is to lower the value of the country’s currency, stoke domestic inflation, and reduce the interest rates at which domestic banks lend to customers and to one another. This boosts exports, stimulates demand at home, and encourages lending.
As members of the eurozone, some of the hardest-hit countries—Greece, Ireland, Italy, and Spain—had neither independent central banks nor their own currencies, so this course of action was unavailable to them. Monetary policy for the entire region was instead set in Frankfurt by the European Central Bank, which took too long to react aggressively to the crisis, since it was initially focused on the performance of the countries at the eurozone’s core—France and Germany—not that of those at the periphery. The only way to make Greek, Irish, Italian, and Spanish goods and services more attractive to foreign customers was to cut wages and accept lower profit margins, a slow and painful process compared with devaluing one’s currency.
The crisis also showed that it matters how much room governments have to borrow, even when interest rates are extremely low. In times of trouble, policymakers cannot be sure that investors will keep buying new government debt. In the eurozone, they are limited in another way: governments cannot borrow as much as they want, since they worry their fellow member states will trigger the EU’s enforcement mechanisms meant to prevent excessive borrowing. During the 2007–8 crisis, government debt exploded. From 2007 to 2011, the Greek government borrowed an amount equivalent to about 70 percent of the country’s GDP, and Italy borrowed about 20 percent of its GDP, bringing government debt in both countries to over 100 percent of GDP. Iceland, Ireland, and Spain borrowed equally spectacular sums. Yet most of this new debt went toward propping up these countries’ financial systems, leaving the governments with little left over to boost growth.
The final lesson of the crisis is that it is possible for inflation to be too low. Before 2007, inflation in most advanced economies (with the notable exception of Japan) was stable and close to the goal set by central banks, typically two percent. This was a measure of the progress made by the world’s central bankers over the previous three decades. Markets had come to expect that central banks would keep prices steady. After the crash, that progress became a poisoned chalice. Central banks cut nominal interest rates close to zero, but with inflation expectations anchored at around two percent, that left the real interest rate (that is, the nominal interest rate less the expected rate of inflation) no lower than negative two percent, which was not low enough to stimulate economies suffering from massive losses in wealth and confidence.
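The arithmetic behind that floor is simple; the following is just a restatement of the relation defined above, using the two percent figure cited in the text:

$$ r \approx i - \pi^{e}, \qquad r \approx 0\% - 2\% = -2\% $$

where $i$ is the nominal interest rate, $\pi^{e}$ is expected inflation, and $r$ is the resulting real interest rate. With the nominal rate floored near zero and inflation expectations anchored at two percent, the real rate cannot fall much below negative two percent; a higher rate of expected inflation would have allowed a more deeply negative real rate.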
The tragedy is that none of these lessons is new. The importance of moving quickly to stimulate the economy after a financial crisis was shown by Japan’s “lost decade” in the 1990s, when a period of low GDP growth followed an economic bubble. The value of a system of flexible exchange rates was demonstrated in 1953, when the economist Milton Friedman showed how devaluing one’s currency would avoid the need to cut wages and prices. For decades, economists in the United States and Europe have urged governments to control spending during times of growth. The EU even imposes rules limiting the size of the deficits member states can run, although countries routinely break them. There are many good economic reasons to limit the buildup of debt, including avoiding crowding out private spending, but the most practical warning may have come from the American diplomat John Foster Dulles in the first issue of this magazine: large debt loads take away oxygen from the discussion of other political issues. Finally, the fact that a higher background inflation rate can be a good thing, because it allows for a lower real interest rate and thus faster recoveries, has been known since 1947, when the economist William Vickrey argued that higher inflation made economies more resilient.
That the world had to relearn so many important lessons during the last crisis suggests that it will forget them again. Economic vulnerabilities vary from place to place. Some countries default on their debt, in some cases frequently; others never do. Some have repeated bouts of virulent inflation; others avoid the problem entirely. Some have exchange-rate crises; others do not. What determines which countries are prone to these problems is a combination of institutional design, the strength of the rule of law, and national attitudes, such as an aversion to debt. Despite this diversity, financial crises turn up in every country, whatever its history of sovereign defaults, periods of high inflation, or exchange-rate volatility. This suggests that there are important human elements behind financial meltdowns: greed, fear, and the tendency to forget history. The most recent crisis, dramatic as it was, will not be the last.