This article is part two of a series on information asymmetry; you can find part one, on adverse selection, here.
Moral hazard is an economics term for a situation in which an actor has an incentive to take on more risk because they do not bear its full cost. It typically arises from information asymmetry: the party taking on the risk understands it better than the party that bears some of its costs.
The most common example of moral hazard is the bank bailouts of the 2007 financial crisis. Banks believed they would be “saved” by the US government in the event of major trouble, so they took on much more risk (particularly in mortgage-backed securities). Additionally, mortgage originators had an incentive to take on risky mortgages, because they would sell them as part of mortgage pools that took over any risk of default. Much has been written about this, and I won’t go into more detail. It might seem like moral hazard only applies to complex financial transactions and wouldn’t be present in more prosaic situations. That is far from true, so I want to give a few other examples of moral hazard, as well as some ways in which they are mitigated.
Recruiting
A common compensation model for recruiters pays them a percentage of a hire’s first-year salary for every placement they make (typically around 20%). This seems reasonable; after all, a recruiter who doesn’t make any placements isn’t providing any value. However, this creates a clear moral hazard. Consider a “risky” job candidate: perhaps they work hard some days, but on other days they shirk and play video games instead (or you could imagine other ways in which they are risky). A recruiter can still try to place this candidate, hoping that the interview happens on a good day and they take home the commission. For the employer this is bad, because they would rather see only less risky candidates, since the burden of a bad hire falls mainly on them.
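To make the misalignment concrete, here is a minimal sketch in Python. All the figures (salary, commission rate, bad-hire cost) are hypothetical assumptions, not data from any real engagement; the point is just that the recruiter’s expected payoff is identical no matter how risky the candidate is, while the employer’s expected payoff falls with the candidate’s risk.

```python
# A minimal sketch with hypothetical numbers (salary, commission rate,
# and bad-hire cost are all assumptions, not real data).
SALARY = 100_000          # candidate's first-year salary
COMMISSION_RATE = 0.20    # recruiter's cut of first-year salary
BAD_HIRE_COST = 150_000   # employer's assumed cost of a bad hire

def expected_payoffs(p_good: float) -> tuple[float, float]:
    """Expected payoff for (recruiter, employer) once the candidate is hired.

    p_good is the probability the candidate turns out to be a good hire;
    employer payoff is measured relative to the value of a good hire.
    """
    recruiter = COMMISSION_RATE * SALARY  # paid on placement, good hire or not
    employer = -(1 - p_good) * BAD_HIRE_COST - COMMISSION_RATE * SALARY
    return recruiter, employer

print(expected_payoffs(0.9))  # safe candidate:  recruiter 20000, employer ≈ -35000
print(expected_payoffs(0.5))  # risky candidate: recruiter 20000, employer ≈ -95000
```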
There are some common ways to mitigate this. One is simply that employers tend to have strict interview processes to filter out ‘bad’ candidates. A recruiter who continually sends candidates that fail the interview will lose reputation and won’t be engaged by the employer as often. However, if it happens only infrequently, it might be hard to detect. Another option is, naturally, to pay a flat salary, but this has the reverse problem of the recruiter doing too little work.
Critical Projects
Consider a business-critical project with an important deadline, say, migrating a backend system to handle a big customer’s data needs. Any such project carries a large amount of risk, but this risk does not fall uniformly on the people working on it. If the project fails, an individual employee on the project might suffer a temporary hit on their performance review (which they obviously don’t want), but the business as a whole could be affected far more catastrophically. As a result, employees do not have the same incentive to work extra hard to make sure the project succeeds. They may be more likely to “cut corners” or make other risky decisions.
This is a difficult situation to mitigate. Many companies give employees equity to align their incentives more closely with the business, but this only works to a small extent, particularly when the business is large. Even with 20 employees, each one’s stake is only a small percentage of the overall business, so they still don’t bear the same risk as the business as a whole, and this effect is magnified the more employees there are. I suspect this is a big part of the constant delays and missed deadlines found in projects in larger companies.
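As a rough illustration (every figure here is an assumption chosen for easy arithmetic), an equal equity split dilutes each employee’s personal stake in a project failure far below the business’s full exposure:

```python
# A rough sketch (every figure is hypothetical) of why equity alone
# transfers little project risk onto any one employee.
FAILURE_COST = 2_000_000  # assumed cost to the business if the project fails
EQUITY_POOL = 0.10        # assumed fraction of the company held by employees

def personal_downside(num_employees: int) -> float:
    """One employee's share of the failure cost under an equal equity split."""
    return FAILURE_COST * EQUITY_POOL / num_employees

for n in (20, 200, 2000):
    print(f"{n:>4} employees: ~${personal_downside(n):,.0f} at stake each")
# prints $10,000 at 20 employees, $1,000 at 200, and $100 at 2,000,
# versus the business's full $2,000,000 exposure
```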
Software Handover
In some cases, one set of engineers writes software and a different set eventually maintains it. Or there might be a separate operations team that actually runs it in production. Both cases produce moral hazard. The developers writing the software have little incentive to make it reliable, since they won’t bear any of the added maintenance costs. So they will tend to take shortcuts and make poor decisions that won’t become apparent until later. They might pick some “hot new framework” just to try it out, even though it hasn’t yet been proven at scale. This might not even be intentional: they might simply spend less time on planning and design and go with a quick solution without thinking it through.
The only way to solve this is to avoid it in the first place: whoever writes code should also be responsible for maintaining it. Developers on a project should be part of any oncall rotation and support load for that project. Similarly, bug reports should reach developers directly in some form (of course, some consolidation for volume is helpful); if they are filtered too heavily, developers won’t feel that the bugs are their responsibility.
Of course, one issue is that tenures are often short. It’s not uncommon to leave after 1-2 years, which greatly diminishes any responsibility you feel for the code you wrote. Obviously we can’t force people to stay, but I think employers should consider that they might be better off putting more effort into retaining employees rather than hiring new ones. In particular, when everyone knows they will stay around for 5+ years, the incentives mean they will try much harder to produce quality, maintainable software.
Conclusion
Moral hazard can be found in many aspects of work, and it can produce perverse incentives that cause long-term harm to the business. The best ways to get rid of it are either to realign incentives, so that risk is shouldered by those who choose to take it, or to add oversight from someone whose incentives are properly aligned. This is not always possible, so like other problems caused by information asymmetry, it can never be solved perfectly.
Comments
“I suspect this is a big part of the constant delays and missed deadlines found in projects in larger companies.” I think a more likely explanation is that executives don’t understand the planning fallacy. A responsible project manager will overestimate the amount of time everything will take. The executive sponsor will then challenge it as sandbagging. The project manager will reluctantly improve the forecast after extracting promises of fast decisions and immediate resource availability. None of those things then happen, so the project manager’s initial forecast turns out to be true or pessimistic. I see it all the time. As an executive, I therefore assume that every project will take two to three times as long as I expect.
In-house recruiters’ variable compensation is often paid only for positions that stay filled for at least X. In the last HR department I ran, X was a year for IT positions and 4 months for operations. If someone made it past those timelines, they were usually there for a while.