Information asymmetry refers to any situation in which one party to a transaction has more information than the other. You might initially assume this always benefits the better-informed party, and that is often the case, but sometimes it is a detriment. In those cases the better-informed party may want to communicate its information to the other party as fully as possible to avoid these problems. There are several specific types of information asymmetry; in this article I’ll discuss one of them, adverse selection, and two examples of where it turns up in tech.
What is Adverse Selection?
Adverse selection refers to a situation in which actors selectively participate in a transaction. The most famous research in this field is George Akerlof’s 1970 paper “The Market for Lemons”. Akerlof describes a hypothetical world in which some cars are high-quality (“peaches”) and others are low-quality (“lemons”). Both buyers and sellers value peaches more highly than lemons. Say, for example, peaches are worth $30,000 and lemons are worth $10,000. The catch is that buyers have no way of knowing whether a particular car is a peach or a lemon. So if we start out with a 50/50 distribution of cars, buyers will only be willing to pay the expected value of $20,000 for a car.
However, sellers do know what type their car is, so no seller of a peach will be willing to sell it for $20,000. The only people selling in this situation will be sellers of lemons. Buyers will realize that the chances of getting a lemon are now much higher and revise their price down, creating a feedback loop, until buyers are only willing to pay $10,000 and the only cars on the market are lemons. This is a very bad outcome: many sellers want to sell their peaches, and many buyers want to buy them, yet no such sales can occur (an example of “market failure”).
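This feedback loop can be made concrete with a minimal sketch in Python, using the dollar figures above (the function and constant names are my own; buyers are assumed to offer the expected value of whatever mix of cars is for sale):

```python
# Sketch of the lemons feedback loop using the numbers from the text.
# Assumption: buyers offer the expected value of the cars on sale, and
# peach sellers exit whenever the offer drops below a peach's worth.

PEACH_VALUE = 30_000
LEMON_VALUE = 10_000

def market_price(peach_fraction: float) -> float:
    """Buyers' offer given the fraction of peaches currently for sale."""
    return peach_fraction * PEACH_VALUE + (1 - peach_fraction) * LEMON_VALUE

price = market_price(0.5)   # 50/50 mix: buyers offer the average
print(price)                # 20000.0

# At $20,000 no peach seller will sell, so only lemons remain on the market:
price = market_price(0.0)
print(price)                # 10000.0 -- the market has unraveled
```

With only two quality levels the unraveling takes a single step; with a continuum of qualities the same logic plays out as a gradual downward spiral.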
Mitigating Adverse Selection
To prevent this problem we need to take actions that reduce the information asymmetry. Here are three common mitigating approaches.
Reputation
Sellers may cultivate a reputation for providing reliable products (this is the value of a strong brand). Reputation can also manifest in public reviews (for example, seller reviews on eBay or Amazon, or car dealer reviews online), or simply come from word of mouth and similar recommendations.
Unbiased Evaluation
For some products it may be possible to have unbiased evaluations. For example, for used cars we have vehicle history reports like CARFAX, that give objective information about the vehicle. Another option is to hire an independent mechanic to examine the car (likewise, for houses an independent property inspection is standard). For consumer goods there are many independent organizations like Consumer Reports that provide evaluations.
Insurance
Another way to avoid adverse selection is some form of insurance, so that buyers are protected in case of bad outcomes. For cars, sellers typically provide warranties, and these can even be purchased from independent providers (“extended warranties”). Some jurisdictions have so-called “lemon laws” that allow the return of defective goods within a certain period. Credit cards often offer their own warranties on goods purchased with them (since card issuers benefit from transaction volume, it makes sense for them to take on this role in the market).
Example 1: Buying SaaS
The lemons model may seem a bit contrived, since the real world is not as simple as Akerlof’s setup. However, even if it doesn’t perfectly capture what occurs, we can still look for elements of adverse selection in real-world scenarios.
One such situation that fits this model is buying software (especially SaaS). A common situation, especially at a startup, is that you find your business in need of some piece of software. Perhaps it’s something common like an issue tracker, a project management tool, or an e-mail/calendar solution. Or perhaps it’s a bit more specialized, like error tracking, performance monitoring, a cloud database, or user behavior analytics. Inevitably there are multiple providers vying for your money, and you must choose which one to go with.
There is a large amount of information asymmetry in this scenario. You only get to interact with the information the provider wants to show you. The fancy UI, contrived examples, and demo videos are all part of a slick sales approach designed to make you see their product as the best possible solution to your problem. All this isn’t to say it’s not a great product; it might very well be (some cars in Akerlof’s model were peaches, after all). But you can’t really be sure: behind the scenes there could be many defects that you can’t immediately detect. Perhaps their infrastructure is unreliable, the product won’t scale to meet your future growth, it misses some important edge case, or its security practices are poor. Perhaps the company itself is unstable and will be acquired or go out of business. Switching solutions is costly and comes with its own problems, so you want to choose the best solution up front, but you may not be able to tell which one that is.
Since you can’t observe all these factors, we would expect companies whose services are deficient in them to be able to charge lower prices and therefore seem more attractive. If a provider spends less on reliable infrastructure, it can charge you less, and you probably have no way of knowing at the time of purchase. Indeed, a company that refuses to cut these corners may not be able to price competitively, and so may not enter the market at all. This essentially explains the “mystery” Dan Luu wrote about of why it’s hard to buy things that work well.
One aspect of this effect is that it doesn’t apply equally to all types of software. Some types are easier to evaluate correctly (there is less missing information). For example, anything run locally has fewer dependencies than SaaS (and indeed, as long as you have a binary you can keep using it almost indefinitely). Among SaaS products, something like a calendar or project management tool is less subject to adverse selection, because it’s much easier for the purchaser to evaluate and generally has simpler infrastructure. Contrast this with something like a cloud database, which is very hard to evaluate, particularly given changing workloads and growing data size and scale (hence the many issues MongoDB users ran into).
Mitigation
If we consider the three approaches to mitigating adverse selection, the latter two are basically useless here. Accurate, unbiased evaluations don’t really exist (very few people have used all the alternatives in enough depth), especially since each customer may be looking for different things. Insurance doesn’t exist either, given the high cost of switching. There are free trials, certainly, but you may not surface any issues until much later in the process.
The surprising thing is that there are any SaaS peaches at all. I attribute this entirely to the reputation mechanism. If you are looking into buying some software, hopefully you have someone at your company (or at least someone whose judgment you trust) who has used the software you are considering and can make a strong recommendation.
Example 2: Hiring
For many years there have been articles about how hard it is to hire in tech, such as this recent one in the New York Times. Yet at the same time many software engineers complain that it’s hard to get a job, that they applied to 100+ companies and didn’t get a single offer. This seems like a puzzle: how can we reconcile these situations? It’s another example of market failure caused by adverse selection.
When hiring, both parties have information the other doesn’t. The candidate knows their own skills, motivation, and available time much better than the employer. On the other hand, the employer knows the expected job conditions, the nature of the work, and the possible compensation better than the candidate. As a result, we would expect the employers and candidates who participate most actively in the market to skew toward lower quality. Imagine two companies:
Company A has “poor working conditions” that are not apparent before joining. Perhaps they expect everyone to routinely work long hours, their infrastructure often breaks, they have unfair promotion practices, etc. They pay $200,000.
Company B has “good working conditions”. The opposite of the above. Since this costs them more, they pay only $170,000. However, let’s say the value of all the non-monetary aspects makes the job worth “$250,000”.
So in this case we expect candidates to prefer Company A, even though B would actually be a better choice. Similarly we can imagine two candidates:
Candidate A is not very skillful, often is a drain on their team, never puts in extra work hours, etc. They produce “100 points” of value for their employer (imagine this however you will). Let’s say candidate A expects $150,000 for their work.
Candidate B works very hard, learns new skills quickly, helps others, etc. They produce “200 points” of value. Due to the vagaries of the promotion process, however, B has roughly the same “years of experience” and level as A. Since B works much harder, they expect $200,000 for their work.
As a result, companies will find it easier to hire A than B (they can pay $50,000 less, after all!). So B will also have to take a job at $150,000, which creates an incentive for B to work less hard, since they are now paid the same as A.
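The arithmetic behind both halves of this hypothetical can be sketched in a few lines of Python (the figures are the ones above; the variable names and structure are my own):

```python
# Illustrative arithmetic for the hypothetical companies and candidates.

# What a candidate sees when only salary is visible:
salary = {"Company A": 200_000, "Company B": 170_000}
# What each job is actually worth once non-monetary conditions are counted:
true_value = {"Company A": 200_000, "Company B": 250_000}

best_by_salary = max(salary, key=salary.get)        # what candidates pick
best_overall = max(true_value, key=true_value.get)  # what they should pick
print(best_by_salary, "vs", best_overall)  # Company A vs Company B

# The employer's side: true output ("points") is invisible, so on paper
# both candidates look identical and only their salary expectations differ.
asks = {"Candidate A": 150_000, "Candidate B": 200_000}
cheaper = min(asks, key=asks.get)
print(cheaper)  # Candidate A gets hired first
```

In both directions the observable number points at the worse choice, which is exactly the selection pressure the lemons model predicts.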
Let’s see how the mitigating approaches fare here; at best, they work only imperfectly. “Insurance” is perhaps the simplest: in most cases employment is at-will, so both employers and employees can exit the arrangement if it isn’t working out. However, this is often a complex process. Many companies have lengthy HR procedures for firing someone (which can take a year or more), and employees have to worry about going through the job search again, the possible financial impact, and the negative effect on their team and coworkers.
Reputation is quite important, especially for well-known employers. If you know many people who work at, say, Google, and they say it’s a great place, then you will strongly consider it in your own job search. And the inverse is even stronger: if your friend rage-quit a job for being terrible, you are likely to strike that company from consideration. However, you probably have no information about most potential jobs, especially at smaller companies. For candidates there is a weaker version of this in the form of a resume and references (although in practice this often reduces to a list of “points” for having worked at a certain tier of employer or attended a certain tier of school). So although reputation can be effective, it too is a weak mitigation.
Finally we come to “unbiased evaluation”, in this case the interview process. This provides a certain amount of signal, but as we all know, interviewing is at best a weak predictor of job performance. Much of interview performance reflects time spent preparing, and “soft” questions mainly test how good the candidate is at framing (a nice way of saying “it’s easy to lie”). The interview process also provides very little signal to the candidate. They don’t really know what it’s like to work there; candidates often get only a few minutes to ask questions at the end of an interview, and there’s no reason to expect the answers to be fully accurate. A “probation” period for new jobs could work better, but it seems unpopular with prospective employees. So, like the other two, this mitigation works only weakly.
Of course, a large amount of hiring does occur. Partially this is inevitable: people need jobs to make money, and companies need to hire someone, so both parties may eventually settle on lower standards. Partially it’s because of the mitigations discussed above.
But another key reason is that the job search is not seen as purely mercenary (compared to, say, buying a car), which blunts some aspects of adverse selection. Let’s go back to the hypothetical Company A/Company B and Candidate A/Candidate B from above. In reality, compensation is usually not advertised directly, so no one knows that Company A pays more than Company B. Instead of everyone preferring Company A, many candidates might choose Company B if they have even a weak reason to. It’s interesting that less transparency creates better outcomes here. It may actually be counterproductive to focus on pay transparency (which is seen as good for candidates) while ignoring transparency in the other aspects of a job.
Mitigation
For companies, it’s important to extract the highest possible signal from interviews. This isn’t too hard for the technical parts of the work, where anything from whiteboard coding to take-home assignments is reasonable. It gets much trickier for other skills, such as planning, architecture, and mentoring. I don’t have a good solution here; it may be right to simply take more risks and accept more false positives.
For candidates, focus on strong reputation signals. If you are considering a job where you don’t know anyone and can’t get any accurate information, be very careful. Make sure to spend a lot of time learning about all aspects of the work environment, both during the interview and afterwards. It’s perfectly reasonable to set up follow-up time for you to ask more questions, if you didn’t have time during the primary interview. Additionally, it’s helpful to ask the same questions to multiple people. This can give you a better sense of what the truth actually is.
Conclusion
Although I only discussed two specific examples here, adverse selection is a type of information asymmetry that is present in many other areas. After you’re familiar with the concept you’ll see it crop up in other contexts (for example, consider something like getting accurate project time estimates). It can be helpful to view problems in this context since you can then focus on how best to apply the mitigation strategies to minimize the impact of adverse selection. As we saw in the examples, in tech it’s common that insurance is not an effective mitigation strategy, leading us to focus more on reputation and unbiased evaluation.