Banks still too big to fail?
Given that the largest banks are now even bigger than they were before the last financial crisis, it’s a pressing question. Unfortunately, a careful look at the data suggests the answer is less encouraging than many policy makers think.
Expectations of government bailouts create dangerous distortions. When, for example, creditors assume they’ll get rescued in an emergency, they don’t demand higher interest rates from banks that take on bigger risks. This lack of market discipline gives bankers a strong incentive - consciously or not - to engage in behavior that makes disasters more likely. Taxpayers effectively end up subsidizing activity that threatens their own well-being.
Ever since the giant bailouts of 2008 and 2009, regulators have been trying to solve this too-big-to-fail problem. By requiring banks to fund themselves with more loss-absorbing capital, they aim to make failures less likely. By setting up mechanisms to wind down failed banks, they hope to convince markets that governments have options other than taxpayer-backed rescues.
To assess the effectiveness of such measures, Congress has asked the Government Accountability Office to figure out whether - and to what extent - big banks still enjoy lower borrowing costs due to expectations of government bailouts. The GAO has said it intends to issue a report in the spring on this implicit taxpayer subsidy, which some researchers have already estimated to be worth billions of dollars a year.
Estimating the subsidy is a rather tricky task. To know how much banks are benefiting from government support, one must first know the counterfactual: What would their borrowing costs be in the absence of support? This cannot be observed, so economists must estimate it using credit-pricing models. If, for example, a big bank is paying 5 percent to borrow money and the model says it should be paying 6 percent, then the subsidy would be 1 percentage point.
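The arithmetic in that example can be sketched in a few lines of code. The figures below are purely illustrative (a hypothetical bank with $500 billion of debt), not estimates for any real institution:

```python
# Hypothetical sketch of the subsidy arithmetic described above.
# All numbers are illustrative, not estimates for any real bank.

def implicit_subsidy(observed_rate, model_rate, debt_outstanding):
    """Annual dollar value of the funding advantage.

    observed_rate: rate the bank actually pays (e.g. 0.05 for 5 percent)
    model_rate: counterfactual rate absent government support (e.g. 0.06)
    debt_outstanding: face value of the bank's debt
    """
    spread = model_rate - observed_rate  # funding advantage in rate terms
    return spread * debt_outstanding

# A bank paying 5 percent that "should" pay 6 percent,
# on an assumed $500 billion of debt:
print(implicit_subsidy(0.05, 0.06, 500e9))  # roughly $5 billion a year
```

The entire estimate hinges on `model_rate`, the unobservable counterfactual, which is exactly where the difficulties discussed next come in.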
There are various reasons to believe that the models underestimate what the big banks would be paying in the absence of implicit guarantees. As a result, the models probably also underestimate the taxpayer subsidy and can give regulators an “all clear” signal when in fact the too-big-to-fail problem is far from solved.
Consider, for example, one of the measures economists use to tease out a bank’s proper borrowing cost: distance to default. The idea is that the stock price can tell us how likely it is that the value of the company’s assets will fall below that of its liabilities, rendering the equity worthless and forcing the company to renege on some of its debts. The further the market thinks the company is from such a disaster, the lower the interest rates creditors should require the company to pay.
Problem is, too-big-to-fail status is an asset in itself. If equity investors expect to benefit from government bailouts in the future, they will place a higher value on the bank’s stock today. As a result, the measured distance to default will also be greater, and the bank’s counterfactual cost of borrowing - along with the estimate of its taxpayer subsidy - will be erroneously low.
A second problem arises from the unique nature of banks’ investments. Standard models assume that the risk of a company’s assets is symmetric: Their value is just as likely to double as it is to be cut in half. Big banks, however, have a peculiar set of positions - loans, mortgage-backed securities, derivative contracts - that have limited potential to rise in value but can entail large losses or even turn into big liabilities in a severe crisis. As a result, the banks’ distance to default can be much smaller than the models would suggest.
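A small simulation illustrates the point. The payoff numbers below are invented for illustration: a bank-like position that earns a small gain in most years but takes a rare, large crisis loss is compared against a symmetric benchmark with the same mean and volatility:

```python
# Sketch of the asymmetry argument above, with invented numbers:
# a payoff with capped upside and rare crisis losses has a far
# fatter left tail than a symmetric model with matching volatility.
import random
import statistics

random.seed(1)
N = 200_000

# Bank-like payoff: +2% in 97% of years, -40% in a 3% crisis.
bank = [0.02 if random.random() > 0.03 else -0.40 for _ in range(N)]

# Symmetric benchmark with the same mean and standard deviation.
mu, sd = statistics.mean(bank), statistics.stdev(bank)
normal = [random.gauss(mu, sd) for _ in range(N)]

def tail(xs, cutoff=-0.25):
    """Fraction of outcomes with losses beyond the cutoff."""
    return sum(x < cutoff for x in xs) / len(xs)

# The symmetric model sees almost no chance of a 25%+ loss;
# the skewed payoff hits one about 3% of the time.
print(tail(bank) > 10 * tail(normal))  # True
```

A model calibrated to the symmetric benchmark would call a crisis-sized loss a near impossibility, even though the skewed position suffers one routinely.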
Again, this means that the models would underestimate both the banks’ proper borrowing costs and the implicit subsidy they receive from taxpayers. The error is compounded if the models focus only on data from quiet periods, during which the tiny fluctuations in the value of a bank’s assets would make a large loss seem extremely improbable.
Congress has entrusted the GAO with a difficult challenge. It is not, however, impossible. Simply recognizing that current methods provide, at best, a lower bound in estimating taxpayer subsidies would be an important step in the right direction. Beyond that, researchers can improve their estimates by making adjustments to account for banks’ unique vulnerability to crises, as well as for the effect of bailout expectations on the value of banks’ equity.
It’s crucial that we get our math as right as we can when measuring the taxpayer subsidy. As long as it exists, we’re giving banks the wrong incentives.
*The author is the Donald C. Cook Professor of Business Administration at the Ross School of Business and a professor of economics at the University of Michigan.
By Stefan Nagel