When something goes wrong, legal rules help determine who pays for it. When, on a hot summer day, a Coke bottle does a plausible imitation of a hand grenade, product liability law determines whether you or Coca-Cola pays the cost of removing fragments of glass from your skin. When a company that ordered goods from you finds that it no longer needs them and refuses to take delivery, contract law determines whether you or they bear the resulting costs. Legal rules allocate risk.
Insurance companies also allocate risk—to themselves, at a price. The economic analysis of that activity was worked out well before the economic analysis of law was invented. The economics of insurance—why people buy it, what costs are associated with it, and how they can best be minimized—provides a useful shortcut to understanding a wide variety of legal issues. Hence this chapter.
There is one chance in a hundred that my house will burn down this year, costing me $100,000. I go to an insurance company to buy insurance on the house. The company agrees with my estimate of the odds and concludes that, on average, it will end up paying out $1,000 on the policy. In addition to paying out on claims, the insurance company also has to pay salaries, heating, rent, and the like, so it offers to insure my house for one year at a price of $1,100.
On average, I am paying out a hundred dollars more than I am getting back, so why should I buy the insurance? The answer is that a dollar is not a dollar is not a dollar. If my house burns down, I am going to be much poorer than if it doesn't, hence dollars will be worth much more to me. I am trading cheap dollars in a future in which my house doesn't burn down, dollars not worth very much to me because I will have lots of them, for valuable dollars in a future in which the house does burn down and I will need them badly. The difference in value is enough so that I am willing to trade eleven cheap dollars for ten valuable ones, leaving the insurance company enough over to pay its rent.
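The arithmetic of that trade can be put in expected-utility terms. The wealth figure and the logarithmic utility function below are illustrative assumptions of mine, not anything from the text; log utility is simply a convenient stand-in for "additional dollars are worth less the more of them you have":

```python
import math

# Illustrative assumptions: $200,000 of total wealth (house included),
# logarithmic utility to represent diminishing marginal value of money.
WEALTH = 200_000
HOUSE = 100_000
P_FIRE = 0.01
PREMIUM = 1_100

def utility(w):
    return math.log(w)

# Uninsured: a 99% chance of full wealth, a 1% chance of losing the house.
eu_uninsured = (1 - P_FIRE) * utility(WEALTH) + P_FIRE * utility(WEALTH - HOUSE)

# Insured: certain wealth, minus the premium.
eu_insured = utility(WEALTH - PREMIUM)

print(eu_insured > eu_uninsured)  # True: insurance wins despite the $100 loading
```

Note that expected wealth is $100 lower with the insurance; expected utility is nonetheless higher, which is the whole point of the eleven-cheap-dollars-for-ten-valuable-ones trade.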
This explains why I am willing to buy insurance. Why is the insurance company willing to sell it? If the stockholders of the insurance company have the same pattern of tastes as I do, less value for additional dollars the more of them they have, why are they willing to accept my risk?
The answer is that transferring risk does not eliminate it, but pooling risk does. With a large number of policies, most of the uncertainty averages out. The insurance company that insures a hundred thousand houses can predict with considerable confidence that it will have to pay out on about a thousand fires a year.
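The confidence comes from the law of large numbers: the standard deviation of a binomial count of fires grows only as the square root of the number of policies, so the spread relative to the expected payout shrinks as the pool grows. A minimal sketch:

```python
import math

def relative_spread(n_policies, p=0.01):
    """Std. deviation of the number of claims, as a fraction of the expected number."""
    mean = n_policies * p
    std = math.sqrt(n_policies * p * (1 - p))
    return std / mean

# For a single house the outcome is almost pure chance; for a hundred
# thousand houses, payouts are predictable to within a few percent.
print(relative_spread(100_000))  # about 0.03
```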
We have encountered dollars of varying value before. In chapter 2, when I was explaining economic efficiency, I pointed out that one of the things wrong with it as a measure of how well off people are is that it compares gains and losses to different people on the assumption that a dollar is worth the same amount to everyone—and the assumption isn't true. My example there was the difference between the value of a dollar to a rich person and its value to a poor person. My example now is the difference in its value to the same person under different circumstances.
The example this time is actually a better one, for two reasons. To begin with, the poor person and the rich person may differ in many ways other than wealth. Perhaps the rich person is rich because he values material goods very highly, continues to do so even when he has a lot of them, and therefore works very hard to get more money to buy things with, while the poor person is poor because he has the opposite attitude. If so, the rich person, even rich, may value a dollar more than the poor. In the context of insurance, we are comparing the same person in different circumstances. All that has changed is how much money he has.
A second reason this example is a better one is that we do not have to limit ourselves to theory; we have evidence. People buy insurance even though most of them probably know that, on average, they can expect to collect less in claims than they pay out in premiums. That suggests that purchasers of insurance are measuring its value in something other than dollars and, by that measure, expect a dollar to be worth more to them if their house burns down than if it doesn't. Diminishing marginal utility is a fact about most people's preferences, revealed by their choices.
Economists refer to this pattern of tastes as risk aversion. By insuring your house against fire, you convert an uncertain future, a 99 percent chance you will have your current wealth and a 1 percent chance you will have your current wealth minus $100,000, into a certain future in which you have a 100 percent chance of your current wealth minus $1,100. You are paying $100 to make the conversion, since your expected wealth falls by $100 when you buy the insurance. Your willingness to pay to reduce risk shows that you are risk averse.
This terminology is widely used and almost as widely misunderstood. To begin with, it makes it sound as though risk preference is a statement about your taste for the excitement of risk, when it is actually a statement about how the value of money to you varies with the amount of it you have. There is nothing logically inconsistent about someone who both buys fire insurance and jumps out of airplanes for fun. The cautious skydiver has both declining marginal utility of income and a taste for thrills.
A further problem is that what we call risk aversion is really aversion to monetary risks. The fact that one more dollar is worth less to you the more you have does not mean that the same pattern holds for things other than dollars.
Shortly after getting married, you discover that you are suffering from a rare and very serious medical problem. If you do nothing about it, you can expect to die in about fifteen years. The alternative is a medical procedure that gives you a 50 percent chance of living for another thirty years—and a 50 percent chance of never waking up from the operation.
Measured in life expectancy, it is a fair gamble—on average you will live for another fifteen years either way. Whether you take the gamble depends on how the value of additional years varies with how many you have.
Suppose you very much want to have children—but only if you are going to live long enough to bring them up. Your choice is between a certainty of fifteen years without children and a 50 percent chance of thirty years with them. You grit your teeth, take several deep breaths, and arrange for the operation.
I have just described someone who is a risk preferrer measured in years of life, as demonstrated by his preference for an uncertain outcome over a certain outcome when the expected value is the same for both. He might simultaneously be a risk averter when it was a matter of insuring his house. "Risk averse" is not a statement about tastes for risk but about tastes for outcomes.
Risk aversion explains why we sometimes buy insurance, even at a price that covers not only our future claims but the insurance company's rent and salaries as well. To explain why we often don't buy insurance, it is worth introducing two other concepts: moral hazard and adverse selection.
You own a factory worth a million dollars. You estimate that each year it has a 4 percent chance of burning down. By spending ten thousand dollars a year on buying and maintaining a sprinkler system and having occasional inspections for fire hazards, you can cut the risk in half. Should you do it?
On average you are saving twenty thousand dollars a year in expected fire damage at a cost of ten thousand a year in precautions. If you are risk neutral, neither risk preferring nor risk averse, that is a good deal. If you are risk averse, it is an even better deal. Your precautions are the equivalent of an insurance policy that costs only half what it pays out.
Suppose, however, that you have already insured the factory for its full value. Now if it burns down you lose nothing. Your precautions are still worth more than they cost—but not to you, since it is the insurance company that benefits by the reduced risk. So you don't bother putting in a sprinkler system.
In the insurance literature, this problem is referred to as "moral hazard"—the failure of the insured to take cost-justified precautions once he has shifted the risk the precautions protect against to the insurance company. It implies that insured buildings are, on average, more likely to burn down than uninsured buildings. Rational insurance companies take that fact into account in setting their rates. Ignoring your insurer's operating expenses and assuming, for simplicity, that it sells policies at their expected value, your policy will cost you forty thousand dollars a year. That is the expected loss from fire if you do not take precautions and, if you are fully insured, you will not take precautions.
The cost of moral hazard is not merely a transfer from insured to insurance company but a net loss—in our example a loss of ten thousand dollars a year. That is the difference between what the precautions you are no longer taking now that you are insured are worth and what they cost.
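The accounting for the factory example can be laid out directly. All the figures are from the text; only the bookkeeping is added:

```python
FACTORY_VALUE = 1_000_000
PRECAUTION_COST = 10_000

def expected_fire_loss(risk_percent):
    """Expected annual fire damage at a given risk level (integer arithmetic)."""
    return FACTORY_VALUE * risk_percent // 100

# 4% annual risk without precautions, 2% with them.
cost_without = expected_fire_loss(4)                       # $40,000
cost_with = expected_fire_loss(2) + PRECAUTION_COST        # $30,000

# An uninsured, risk-neutral owner takes the precautions: $30,000 beats $40,000.
# A fully insured owner skips them, and the $10,000 difference is a net social loss.
net_loss_from_moral_hazard = cost_without - cost_with      # $10,000
```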
Moral hazard is one example of a problem we discussed earlier: inefficiency due to externalities. By buying insurance you transfer the benefit of precautions against fire and the cost of risky behavior, such as smoking near piles of waste paper, from you to the insurance company. Precautions now have a large positive externality, so you take inefficiently few; risks have a large negative externality, so you take inefficiently many.
Does it follow that an efficient legal system should ban insurance? No. The inefficiency due to moral hazard is a real cost of insurance, but the gain due to pooling risks is a real gain. Insurance is a voluntary transaction between the insurer and the insured and so will take place only if both parties believe that the gain at least balances the loss.
The problem of moral hazard does not imply that insurance should not exist, but it does imply that insurance companies should and will try to design their policies in ways that reduce the problem. One way of doing so is to specify precautions the insured must take, such as installing an adequate sprinkler system. Another is for the insurance company itself to pay for some precautions, such as inspections.
A less direct approach is coinsurance. The insurance company insures the factory for only part of its value. The lower the fraction insured, the more precautions it is in the interest of the owner to take. If the factory is insured for half its value, precautions whose payoff is at least twice their cost, such as those described earlier, are worth taking. Thus coinsurance eliminates the most inefficient consequences of moral hazard: the failure to take precautions that have a payoff much larger than their cost. At the other extreme, if an insurer is so careless as to insure a factory for 150 percent of its value, the probability of fire may become very large indeed.
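The owner's decision rule under coinsurance is simple: he keeps only the uninsured fraction of each dollar of avoided loss, so a precaution is in his interest only when its expected payoff, scaled by that fraction, still covers its cost. A sketch, assuming the insurer simply pays a fixed fraction of any loss:

```python
def owner_takes_precaution(expected_payoff, cost, fraction_insured):
    """The owner bears only (1 - fraction_insured) of each expected loss avoided."""
    return (1 - fraction_insured) * expected_payoff >= cost

# Half-insured: the high-value precaution from the text still gets taken...
print(owner_takes_precaution(20_000, 10_000, 0.5))   # True
# ...but marginal precautions do not—some inefficiency, though a modest one.
print(owner_takes_precaution(12_000, 10_000, 0.5))   # False
# Fully insured: even very valuable precautions are skipped.
print(owner_takes_precaution(50_000, 10_000, 1.0))   # False
```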
"That's not a bug. That's an undocumented feature."
Old software joke.
Consider our factory again, with one change. This time the factory belongs to a large corporation with thousands of such facilities, and I am the employee in charge of managing this one. My employer, like many corporations, judges its employees by results, which in my case means output (the higher the better) and operating costs (the lower the better).
It occurs to me that by doing without fire precautions I can save my employer ten thousand dollars a year, substantially improving my chances for promotion. With only one chance in twenty-five that the factory will burn down each year, the odds that it will happen while I am still in charge are pretty low. And if it does, the million dollar loss comes out of my employer's pocket, not mine; the most they can do to me is to fire me. It looks as though skimping on precautions, while a poor decision from the standpoint of my employer's long-run interest, may be a good one from the standpoint of my long-run interest.
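A back-of-the-envelope version of the manager's bet (the five-year tenure is my hypothetical, not the text's):

```python
P_FIRE = 0.04      # annual fire risk with no precautions, from the text
TENURE_YEARS = 5   # hypothetical stint before moving on

# Chance of at least one fire on my watch:
p_caught = 1 - (1 - P_FIRE) ** TENURE_YEARS   # about 18%

# Meanwhile I book $10,000 a year in apparent savings toward my promotion.
apparent_savings = 10_000 * TENURE_YEARS      # $50,000
```

Better than four chances in five of a spotless record and a $50,000 cost reduction on the books, with the million-dollar downside landing on my employer: a poor bet for them, a tempting one for me.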
My head office, in charge of thousands of facilities, does not know enough about each one to judge which precautions are or are not worth taking; that is why they have to judge me by results. They don't know it and they know they don't know it. They can run through the calculation of my interests as well as I can and come to the same conclusion—that there is a conflict between what they want me to do and what it is in my interest to do. Their problem is what to do about it.
One solution is to hire another employee to look over my shoulder and second-guess all my decisions, but that would be expensive, and they would then have the problem of making it in his interest to do the right things. An alternative solution is to transfer both the costs and the risks to someone in the business of keeping factories from burning down.
My employer goes to a company that specializes in fire insurance. In exchange for a fee of somewhat over thirty thousand dollars a year, they agree to insure the factory against fires, pay for sprinkler systems, and arrange for regular inspections. I get a message from the head office telling me that precautions against fire are no longer my business. I am to do what the insurance company tells me and send them the bill.
This is a situation where moral hazard is a feature, not a bug. In my earlier discussion I took it for granted that the owner of the factory was in the best position to prevent the fire. He was, to use a term from our earlier discussion of externalities, the lowest-cost avoider. Often that is the case; sometimes, as in the example of the large corporation, it is not. If I am the lowest-cost avoider, the incentive effect of transferring the loss from me to the insurance company is a disadvantage of insurance, since it is no longer in my interest to take all precautions that are worth taking. But it is an advantage of insurance if it is the insurance company that is the lowest-cost avoider.
"Moral hazard" may be a misleading way of thinking about the issue, since it implicitly assumes that the person buying the insurance is the only one whose incentives matter. The risk of fires or other insurable accidents is affected by the decisions of more than one actor. Putting all the cost on one party gives him the right incentive but other parties the wrong one. If the insurance company is in a better position than the insured to prevent the loss, transferring it to them via insurance raises efficiency more by increasing the incentives of the insurance company than it lowers it by reducing the incentive of the owner. The right rule is to put the incentive wherever it will do the most good.
Seen from this standpoint, coinsurance, insuring the property for only a fraction of the loss, looks like a sensible, although imperfect, solution to the problem. Each party has some incentive, although neither has the efficient incentive, since precautions whose payoff is only a little larger than their cost will not get taken. That leads to some inefficiency. But it may not be a lot of inefficiency since precautions that cost almost as much as they are worth produce only a modest gain. The precautions that you want to make sure are taken are the ones that produce a large reduction in risk for a small cost. Those are the ones that coinsurance makes it in the interest of either party to take.
Recognizing that moral hazard is sometimes a feature rather than a bug helps explain why large corporations sometimes insure their factories. My imaginary employer, after all, has no need to hire an insurance company to pool its risks; it is big enough to pool them itself. Like an insurance company that insures a thousand houses, a company with a thousand factories can rely on the law of large numbers to produce a predictable outcome. Nonetheless such companies sometimes buy insurance. One possible explanation is that they do so in order to give the insurance company an incentive to keep their factories from burning down.
A different version of the same argument explains another puzzle: why people sometimes buy insurance against small losses, even though a small loss is unlikely to change your wealth by enough to significantly change the value to you of another dollar.
Consider a customer who buys a new washing machine from Sears and decides to pay for an extended service contract. From the standpoint of risk aversion, buying the contract makes little sense. But from the standpoint of finding a competent person to fix the machine if something goes wrong, and getting the job done well and at a reasonable cost, it may be a sensible decision. Sears knows a great deal more than I do about how to repair its washing machines and about the competence and honesty of the people who do it. By purchasing a service contract, I turn over the job of finding someone to fix the machine to them and give them an incentive to see that the job is done well, since if it is not they will be hearing from me again. Here again, although in a somewhat different way, we are shifting the incentive from the owner of the property at risk to somewhere else where it will be more useful.
We will return to this issue in chapter 14, where we discuss product liability. A legal rule that makes Coca-Cola responsible if a Coke bottle blows up is, in effect, mandatory insurance; Coca-Cola is insuring its customer against that particular risk. One disadvantage is that doing so reduces your incentive to be careful not to shake warm bottles of Coke. One advantage is that it increases Coca-Cola's incentive to improve the quality control on their bottles.
What He Knows That You Don't
You are in the life insurance business. One morning a man comes running into your office and tells you that he wants to buy a large policy—right now. At what price should you sell it to him? Actuarial tables showing risk of death for men of his age and health may not be very relevant, since his behavior suggests that he knows something you don't that is relevant to his chance of living out the day.
The same argument applies, with somewhat less force, to everyone who comes into your office. The fact that someone wants to buy insurance is evidence that you should not sell it to him—more precisely, that he is a worse risk than the actuarial tables imply. Most people, after all, have some private information about their own risks, whether those risks involve jumping out of airplanes, paying too much attention to arguing with radio talk show hosts while driving in heavy traffic, or being shot at by jealous husbands. That private information affects how likely their family is to collect on their life insurance, hence how much they are willing to pay for it. People who buy insurance represent, not a random sample of possible customers, but a sample weighted toward those most likely to collect. A prudent insurance company takes that into account in setting prices.
If the insurance company knew the risk for each customer, there would be no problem; high- and low-risk customers would have their policies priced accordingly and buy or not buy according to whether or not the protection was worth the price. But the insurance company cannot charge different prices to high-risk and low-risk customers if it does not know which is which, so it ends up charging both the same price. That makes insurance a better deal for customers whose private information implies that they are particularly likely to collect than for those whose private information goes in the opposite direction. The result is that members of the first group are more likely to buy insurance than are members of the second.
The insurance company prices its policies to allow for that fact, which makes insurance an even worse deal for the low-risk customers, since they are being charged a price that assumes they are probably high-risk customers. As low-risk customers respond by dropping out, buying insurance becomes even stronger evidence that you are a bad risk; the price rises accordingly. The result is that some of the good risks fail to buy insurance even though, at an actuarially fair price, it would be worth buying. That is the problem known in the insurance literature as adverse selection—an inefficient outcome due to asymmetric information.
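That unraveling can be watched step by step in a toy market. Everything here is an illustrative assumption of mine: ten customer types with risks from 1 to 10 percent, a $100,000 loss, and risk aversion that makes each customer willing to pay up to 25 percent over his own expected loss. The insurer, unable to tell types apart, charges the average expected loss of whoever is currently buying:

```python
LOSS = 100_000
MARKUP = 1.25                    # risk aversion: pay up to 25% over fair value
risk_pcts = list(range(1, 11))   # customer risks of 1%, 2%, ..., 10%

def willingness_to_pay(pct):
    return MARKUP * pct * LOSS / 100

def expected_loss(pct):
    return pct * LOSS // 100

# Price for the whole population, then re-price for whoever actually buys.
buyers = risk_pcts
price = sum(map(expected_loss, buyers)) / len(buyers)
while True:
    buyers = [pct for pct in risk_pcts if willingness_to_pay(pct) >= price]
    new_price = sum(map(expected_loss, buyers)) / len(buyers)
    if new_price == price:
        break
    price = new_price

print(buyers, price)  # only the worst risks remain, at a much higher price
```

At an individually fair price every type would buy—the 25 percent markup guarantees it—yet in equilibrium the lowest-risk types are priced out of the market entirely, which is exactly the inefficiency the text describes.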
To see the same pattern in another context, consider the market for used cars. Sellers know more about their cars than buyers do, and the worse the car, the more willing the owner is to sell. This time it is the seller who has private information—with the result that his willingness to sell is at least weak evidence that the car is a lemon. Buyers reduce what they are willing to offer to take account of that evidence, making sale even less attractive to owners of cars in good condition. One can model extreme cases where only the worst car sells or more realistic cases in which many cars go unsold, even though they are worth more to a potential buyer than to their current owner.
Sellers with cars in good condition could solve the problem by providing guarantees—any repairs in the first year to be paid by the seller. Their willingness to offer such guarantees would demonstrate that they believe their own claims about the car's condition. Unfortunately, while a guarantee eliminates inefficiency due to adverse selection, it creates inefficiency due to moral hazard. The buyer, knowing that someone else will pay for repairs, has an inefficiently low incentive to take good care of the car.
Such conflicts appear with disturbing frequency when trying to design efficient legal rules: Fixing one problem often creates another. The fully efficient rule, one that gives everyone the correct incentives on every margin, may turn out to be an impossible ideal. If so we are left, as Coase suggested, with the problem of choosing the least bad among a set of imperfect alternatives.
A friend of mine who was looking for a used car devised an ingenious way of taking advantage of adverse selection to induce sellers to reveal their private information. Having located a car he liked, he asked the seller if he was willing, for an additional payment, to provide a one-year guarantee. The seller declined. My friend continued looking. Eventually he found a car he liked whose seller was prepared to sell him a guarantee as well as a car. He bought the car—without the guarantee.
This method works only if the seller does not know about it; otherwise he can offer to guarantee a lemon, knowing that the buyer will buy the lemon but not the guarantee. Perhaps it would be prudent for me to continue to omit my friend's name from this story, on the chance that he might some day try to buy a car from one of my readers.
How to Hurt People by Helping Them
It is now becoming possible to identify genetic tendencies toward diseases and test for them. This leads to some interesting problems.
Some people have bad hearts, some do not. As long as nobody knows which is which, it is possible to insure against the risk of a heart attack. Suppose a cheap and reliable genetic test is discovered by which we can tell who is in which group. Consider some possible legal rules:
1. The test is banned; nobody is allowed to use it.
2. Individuals are permitted to get tested. Insurance companies are permitted to make testing a condition of insurance and take account of the result in setting rates.
3. Individuals are permitted to get tested; the results are confidential. Insurance companies are forbidden to make testing a condition of insurance and take account of the result.
4. Individuals are permitted to get tested, but the fact of the test (not the outcome) is recorded. Insurance companies are not permitted to require testing as a condition of insurance but are permitted to know whether or not a potential customer has been tested and to take account of that fact in setting the rate they charge him.
What are the consequences of each rule? Is it possible that, under some or all rules, the invention of the test makes us worse off?
To see why the answer is "yes," compare rules 1 and 2. Under rule 1, which corresponds to the situation before the test is invented, neither the insurance company nor the customer knows the condition of the customer's heart, so the risk of having a bad heart is insurable. Under rule 2 if you try to buy insurance and refuse to be tested, the company will take that as evidence that you know you have a bad heart and set the price accordingly. You can still get tested, show the results to the insurance company, and insure against whatever uncertainty is left after knowing the condition of your heart, but the risk of having a bad heart is now uninsurable.
The result of rule 3 is worse still; we are back in the market for lemons. Anyone who tries to buy insurance against a heart attack signals by doing so that he has a bad heart; the insurance company no longer has the option of testing applicants and pricing insurance accordingly. People with good hearts cannot get insurance unless they are willing to pay far more than the actuarial value of their risk, which few are. Only people with bad hearts are insurable—against the residual uncertainty of just when their hearts will fail.
Rule 4, if it is an option, provides the best outcome. People who want to insure against the risk of a bad heart can buy insurance before being tested; since they can prove to the insurer that they have not been tested, the price will correspond to what it costs to provide insurance to a random customer. After they have bought insurance they can then decide whether the advantage of better information about what health precautions they should take and how long they can expect to live outweighs the risk of learning something they may not want to know.
Unfortunately rule 4 may not be an option in a world of many countries and high mobility. Even if the United States insists that all tests be recorded and successfully suppresses any black market in secret tests, American citizens can still get their genes tested somewhere with less restrictive rules. The same problems apply to rule 1. So it is possible that the invention of the test, by moving us from the world of rule 1 to the world of rule 2 or 3, may make the risk of being born with a bad heart uninsurable—just as the risk of being born poor is now. If that effect is large enough to outweigh the benefits that individuals get from knowing more about their own health risks, the invention of the test will have made us, on net, worse off.
I was introduced to this problem by a commencement speech proposing rule 3 as a way of protecting people from misuse of genetic information by their insurance companies. I concluded that the speaker had never heard of adverse selection.
I rent land with buildings on it. Six months into my one-year lease, some of the buildings burn down. Do I still owe full rent for the remainder of the year? The common law answer was that, unless there was a contrary provision in the lease, I did. The risk associated with a fire was placed on the tenant, not the landlord, at least for the duration of the lease.
I sell a piece of land, accepting as payment a sack of gold coins. A year or two later, after discovering that the coins are actually lead plated with gold, I attempt to cancel the sale, only to learn that the buyer has resold the land and skipped town. I try to reclaim the land from its current owner, arguing that since it was procured by fraud the buyer never really owned it and so could not sell it. I lose. In the view of most courts I am entitled to void the purchaser's deed but not the deed of an innocent third party.
Suppose that instead of defrauding me with fake coins, the villain adopts a more direct approach: he forges a deed to my land and then sells the land to an innocent purchaser. This time, when I try to reclaim the land, I win the case.
All three of these cases make sense in terms of one simple rule for allocating risk: Put the incentive where it does the most good. Can you see why?