I’m writing a piece on organizational risk management, but it’s not ready yet; expect it in a couple of weeks.
When I was in college, “algorithm” was a term mostly confined to computer science classes. Behind the scenes, though, algorithms had already crept into everyday life via systems like FICO credit scores in 1989.[1] The good news is that early algorithms were fairly formulaic, relatively understandable, and easily disputable, so errors in the record could be corrected with reasonable effort.
Fresh starts were also a thing across country and sometimes state boundaries. For example, overstaying a visa in Spain didn’t necessarily mean you’d have trouble visiting Finland two years later. Mishaps were less permanent and less broadly damaging.
And in that kind of environment, when mistakes were easier to fix and less universally visible, the potential downside of taking risks was lower.[2] Now, with more advanced algorithms running more and more of the show, making assessments and decisions that are increasingly difficult to override, the potential downside of taking risks is generally higher.
Risk-taking and mitigation
When you take a risk, you do it in expectation of a payoff, which may be:
financial (investing in a stock may pay off in profit),
emotional (getting married may pay off in intimacy and companionship),
social (running for college class president may pay off in popularity and connections),
physical (undergoing surgery may pay off in improved health), or
spiritual (taking ayahuasca may pay off in self-discovery).
Most people try to mitigate potential downsides when they take risks, which makes some types of risk better bets than others. But I believe algorithms have increased the severity of the downside—of perceived failure—for many types of risk. Failure is more broadly visible and harder to leave in the past, and that’s not good for innovation, progress, or happiness.
How did we get here?
Like I said, computer algorithms have been with us for many decades. But they really ramped up in the early and mid-2000s. I believe the suspicion and fear stoked after 9/11 played a role, as did the good old profit motive. Companies conducted A/B tests to identify which algorithmic tweaks could hold our attention longest, increase our engagement the most, and keep us returning over and over like rats pressing levers in a Skinner box. And there were, as always, second-order effects.
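To make the mechanics concrete, here is a minimal sketch (in Python) of the comparison an engagement A/B test boils down to; the variant names, metrics, and numbers are all hypothetical, not drawn from any particular company.

```python
import statistics

# Hypothetical per-user session minutes under two feed-ranking variants.
control = [4.1, 3.8, 5.0, 2.9, 4.4, 3.6, 5.2, 4.0]    # variant A: chronological feed
treatment = [5.9, 4.7, 6.3, 5.1, 4.9, 6.0, 5.5, 5.8]  # variant B: engagement-ranked feed

def summarize(name, samples):
    """Print the average and spread of engagement for one variant."""
    print(f"{name}: mean={statistics.mean(samples):.2f} min, "
          f"stdev={statistics.stdev(samples):.2f}")

summarize("control", control)
summarize("treatment", treatment)

# A real test would use a proper significance test and far more users,
# but the decision rule is the same: ship whichever variant holds attention longer.
lift = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated lift: {lift:.2f} minutes per session")
```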
As a simple example, when I was in college, if I drank at a frat party, there was near-zero chance that anyone would take a photo of me attempting to enjoy terrible beer, post that photo publicly to social media, and refuse to take it down unless I went through a complex complaint process involving a giant corporation, only to find that five other accounts had already reposted or copied the photo.
If that type of dynamic had been in play back then, the stress of knowing that any unflattering or regrettable moment could be captured essentially forever and used against me would have had a profoundly oppressive, intimidating effect on my sense of security, self-esteem, and confidence in the process of growing up.
I believe the constant vigilance required to Be Perfect, or at least to Appear Perfect, has had a dampening effect on not just our risk appetites, but also our joy in the imperfect process of learning and growing. We have to take risks to break new ground and make progress. But now, although threats about “your permanent record” tend to evaporate when we leave high school, there really is a permanent record that follows us around. And it’s facilitated by algorithms.
Tiering and its discontents
Another problem with algorithms is that they have enabled companies to filter profitable customers from unprofitable ones—so a company that once might have aimed to “Delight the customer” can now aim with great precision to “Delight the best customers.” And if you’re not in that top tier and you feel like chopped liver? Perhaps it’s because you really are treated as if you don’t matter much. Witness the proliferation of VIP customer service lines that make reaching an empowered human relatively easy, contrasted with labyrinthine phone trees or the simple absence of any customer service channels at all.
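For a rough illustration of how precise that filtering can be, here is a minimal sketch of a profitability-tiering rule; the field names, thresholds, and routing outcomes are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    annual_revenue: float   # what the customer brings in
    support_cost: float     # what serving them costs

def tier(customer: Customer) -> str:
    """Route customers to a support channel by estimated profitability."""
    margin = customer.annual_revenue - customer.support_cost
    if margin > 1_000:
        return "VIP line: empowered human, no wait"
    if margin > 0:
        return "Standard queue: phone tree, long hold times"
    return "Self-service only: no human channel offered"

for c in [Customer("Avery", 5_000, 300),
          Customer("Blake", 400, 350),
          Customer("Casey", 100, 500)]:
    print(c.name, "->", tier(c))
```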
A second-order effect here, of course, is dissatisfaction that eventually, after a thousand paper cuts inflicted over the course of years, can become cynicism or even rage.
Rage against the dying of the light
You can try to adjust and give more business to friendlier algorithms: for example, my various credit cards seem (from my perspective) to have different levels of algorithmic profiling applied to purchases, so my reaction has generally been to use the least jumpy or judgmental card the most. If I’m shopping at a new retailer or traveling, I’ll often use that card rather than the one whose fraud department auto-calls me at the drop of a hat.
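A minimal sketch of why one card can feel jumpier than another: roughly the same kind of fraud score, run against different alert thresholds. The scoring weights and thresholds below are hypothetical, not how any particular issuer actually works.

```python
def fraud_score(amount: float, new_merchant: bool, far_from_home: bool) -> float:
    """Toy risk score for a single purchase (higher = more suspicious)."""
    score = 0.0
    score += min(amount / 1000, 1.0) * 0.4   # large purchases look riskier
    score += 0.3 if new_merchant else 0.0    # unfamiliar retailer
    score += 0.3 if far_from_home else 0.0   # travel
    return score

purchase = fraud_score(amount=250, new_merchant=True, far_from_home=True)

# Two issuers, same purchase, different thresholds.
for name, threshold in [("jumpy card", 0.5), ("relaxed card", 0.9)]:
    action = "blocked pending verification" if purchase >= threshold else "approved"
    print(f"{name}: score={purchase:.2f}, {action}")
```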
Another example: many years ago, a dating service flagged my account. It turned out that I had used a VPN (virtual private network)[3] while traveling, and it triggered an algorithm that associated the VPN-assigned IP address with the misdeeds of some other user in some other location. Customer support initially refused to disclose any reason for the lockout; after several back-and-forth emails, I deduced on my own that the VPN might have been the culprit, explained the situation, and was reinstated—after promising never to use the VPN again when connecting to the service. Fixing the problem was difficult, and I believe I succeeded only because I understand how VPNs work and managed to reach someone with the authority to override the banning algorithm.
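Here is a minimal sketch of the guilt-by-association pattern I ran into: a reputation check that locks out every account connecting from a shared address once one user of that address misbehaves. The addresses and rules are hypothetical.

```python
# IP addresses previously associated with abuse (hypothetical).
blocked_ips = {"203.0.113.7"}   # a VPN exit node flagged after one bad actor used it

def check_login(account: str, ip: str) -> str:
    """Flag any account that connects from a blocked address."""
    if ip in blocked_ips:
        return f"{account}: LOCKED (IP {ip} previously linked to abuse)"
    return f"{account}: ok"

# The original abuser and an unrelated traveler share the same VPN exit IP.
print(check_login("bad_actor", "203.0.113.7"))
print(check_login("innocent_traveler_on_vpn", "203.0.113.7"))
print(check_login("innocent_traveler_at_home", "198.51.100.20"))
```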
It’s getting harder and harder to do that. And with AI taking over functions like customer service, as Tomas Pueyo writes about in his Q3 update (see the section titled “AI and the Automation of Work”), it’s going to get even harder unless conscious choices are made to elevate and empower human second- and third-line support. Sure, let AI handle vanilla password resets and simple return requests—but ensure there are plenty of humans to quickly handle cases where algorithms’ assessments may not have been accurate or fair. The White House’s Blueprint for an AI Bill of Rights, released in October 2022, takes a stand here and states: “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”[4]
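Here is a minimal sketch of what “let AI handle the vanilla cases, keep humans for the contested ones” could look like as routing logic; the intent labels and queue names are hypothetical.

```python
# Requests an automated agent can safely resolve on its own (hypothetical labels).
AUTOMATABLE = {"password_reset", "simple_return", "order_status"}

# Requests where an algorithm's judgment is being contested and a human must decide.
NEEDS_HUMAN = {"account_locked_dispute", "fraud_flag_appeal", "tier_downgrade_complaint"}

def route(intent: str) -> str:
    """Send routine requests to the bot, contested judgments to an empowered human."""
    if intent in AUTOMATABLE:
        return "automated agent"
    if intent in NEEDS_HUMAN:
        return "second-line human with override authority"
    return "first-line human for triage"   # default to a person when unsure

for intent in ["password_reset", "fraud_flag_appeal", "something_unusual"]:
    print(intent, "->", route(intent))
```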
Can versus should
Keeping humans in the loop at key junctures is vital because algorithms have no emotions, not enough nuance, and often no space between checkboxes for leeway in their decisions. If you get on the wrong side of an algorithm, it’s all over but the shouting unless you can get human help. And algorithms are everywhere now.
We have not really reckoned enough with the can versus should question, as a former work mentor and leader whom I greatly admire would put it. Yes, we can use algorithms to reduce short-term costs and maximize profitability, and to gain insights about people across companies and services; and yes, we can try to blame “the system” when it goes wrong.
And yet. We are the system. We are in it, and we are part of it, and we are letting this happen without a full discussion of boundaries and counterbalancing controls and second-order effects and what is right and what is not right. Our future will probably include ever more algorithms and fewer human interactions, so we would do well to build in escape hatches:
Dispute pathways guaranteed to reach humans who can easily and quickly override algorithmic decisions.
Frequent and proactive assessment of user complaints to identify and correct patterns of algorithmic misjudgment (a rough sketch of this follows the list).
Interim workarounds for people unfairly and incorrectly assessed by algorithms.
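As one concrete way the second escape hatch could work, here is a minimal sketch that audits disputes by the rule that triggered them, so rules that keep getting overturned stand out; the rule names and numbers are hypothetical.

```python
from collections import Counter

# Hypothetical dispute log: (rule that triggered the decision, was it overturned by a human?)
disputes = [
    ("shared_ip_block", True), ("shared_ip_block", True), ("shared_ip_block", False),
    ("velocity_fraud_check", False), ("velocity_fraud_check", True),
    ("chargeback_risk_score", False),
]

triggered = Counter(rule for rule, _ in disputes)
overturned = Counter(rule for rule, was_overturned in disputes if was_overturned)

# Flag any rule whose decisions are overturned more often than not:
# a sign the algorithm, not the user, is the problem.
for rule, total in triggered.items():
    rate = overturned[rule] / total
    flag = "REVIEW THIS RULE" if rate > 0.5 else "ok"
    print(f"{rule}: {overturned[rule]}/{total} overturned ({rate:.0%}) -> {flag}")
```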
Escape hatches are the difference between technology that serves us and technology that subordinates and separates us. The purely profit-driven may attempt to convince us that escape hatches are too costly, too redundant, too human, but they are essential. A magnificent ship without escape hatches can all too easily become like the Titanic. Risk management demands a little redundancy, to give us a little breathing space against the tide.
[1] Isn’t it weird that FICO scores didn’t exist before 1989? They seem like a fact of life to me, but they’re quite recent!
[2] Increasing the downside of risk-taking has good and bad effects, with tradeoffs: if intensive monitoring reduces crime rates by a little but increases anxiety by a lot across a broad population, is it worth it?
[3] VPNs encrypt your internet traffic between your device and the VPN provider’s servers, which was especially important on public WiFi before HTTPS became the secure default for most websites.
[4] “Blueprint for an AI Bill of Rights.” White House Office of Science and Technology Policy, October 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
I admit, I'm pretty concerned about our increasing reliance on algorithms. Most of what I see is algorithms that don't work as well as a competent human, or a team of competent humans working together. But the algorithms are fooling people without much knowledge into believing that genuine expertise isn't needed. They are amplifying Dunning-Kruger effects.
The apps that claim to identify plants and fungi are a great example. The accuracy of the plant apps, with a decent specimen, isn't bad, perhaps around 80%. For fungi, it's a lot lower, much lower than a decent expert. But the apps convince people with no expertise that all they need to do is hold their phone over something and they'll get a name. A human who has acquired genuine expertise will know more than just a name; they will know the risks. They’ll understand that a mistake with Amanita or Apiaceae is not the same as a mistake with Psathyrellaceae or Arecaceae, for example.
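A tiny worked sketch of that asymmetry: the same headline error rate carries very different expected harm depending on which group the mistake lands in. The error rate and severity weights below are hypothetical.

```python
# Hypothetical: an app is wrong 20% of the time either way,
# but the cost of a wrong answer depends on what you're misidentifying.
error_rate = 0.20

severity = {
    "Amanita (deadly toxins possible)": 100.0,
    "Apiaceae (hemlock lookalikes)": 100.0,
    "Psathyrellaceae (mostly harmless)": 1.0,
    "Arecaceae (palms, harmless)": 1.0,
}

# Expected harm = chance of a mistake x how bad that mistake is.
for group, cost_of_mistake in severity.items():
    print(f"{group}: expected harm {error_rate * cost_of_mistake:.1f}")
```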
"Isn’t it weird that FICO scores didn’t exist before 1989? They seem like a fact of life to me, but they’re quite recent!"
My same thought. And exactly a generation later, a subprime housing crisis.