Risk management is often viewed as an unsexy cost center. High flyers with their eyes set on amassing riches, power, or fame don’t tend to study it. With its checklists and metrics, the field tends to attract systematic thinkers instead.
And it’s true that using checklists can increase consistency, which is great for processes like spaceflight countdowns and surgical prep. But other parts of risk management require looking ahead to identify emerging risks, which are harder to capture in checklists, though it’s possible to survey the landscape for general early-warning signs.
So, what are the right moves when the business-as-usual realm of checklists gives way to the realm of emerging risks and tail risks?
Preparation contributes to—but doesn’t guarantee—resilience
To a degree, emergency playbooks can play a role, even though emergencies have a well-known tendency to upend “the best-laid plans.” If the COVID pandemic showed us anything, it’s that being prepared for extreme events is underrated. The world was unprepared for a pandemic for various reasons that sounded good in good times:
It was expensive to prepare, especially to build up healthcare resources so that capacity could withstand catastrophe.
We might never need to use our preparations, so they could represent a waste of time and money.
It was inconvenient and difficult to work out controversial questions like who should receive limited resources in an extreme situation.
Because preparation was expensive and uncertainty was high, cost-cutters eliminated the White House’s pandemic response team just in time for a new pandemic to finally arrive in our interconnected world.
Going forward, we would do well to take a longer-term and less profit-focused view of risks facing humans as a species. Climate change is one example, as are pandemics, asteroids hitting the planet, large solar storms, and artificial intelligence.
What’s the role of regulation in all this?
With a risk already in motion, regulation’s main role is to respond and mandate mitigating controls. But that doesn’t mean regulation can never be preventive. Proactive and smart regulation—which does not necessarily mean more regulation—enacted before the virus emerged could have reduced the chances of any pandemic taking hold and spreading.
For example, proactive pandemic regulation could have meant:
Strengthening information sharing among local, state, and federal public health departments and helping them proactively build trust with communities.
Cutting red tape for testing, reporting, and treatment.
Establishing rapid-response channels to take anomalous reports from doctors and act fast to investigate.
Banning gain-of-function research. (Whether or not it contributed to the COVID pandemic, gain-of-function research poses significant risk that may not be worth the potential benefits.)
Prioritize, target, and break through prisoner’s dilemmas
As mentioned above, smart regulation doesn’t mix well with bureaucratic red tape. Instead, it applies pressure at critical chokepoints in processes and systems, focusing on the highest-risk potential outcomes and skipping the low-risk, check-the-box items that gum up the works and choke off profit without really reducing risk. For example, did you know that at the outset of the 2022 monkeypox (mpox) outbreak, doctors had to fill out at least 27 pages of paperwork to prescribe TPOXX for a single patient? That is insane, especially in the context of an emergency.
Smart regulation, in contrast, has a few notable characteristics:
It is targeted.
It engages the governed, listening to their concerns and earning their trust to ensure buy-in at a strategic level rather than tactical compliance in a check-the-box exercise.
It is not make-work and is reasonably time-efficient, allowing sufficient flexibility in implementation for different types of businesses and organizations.
Ideally, it requires something industry participants mostly wanted to do anyway but couldn’t, since acting alone would have put them at a perceived or actual market disadvantage. In essence, regulation in this situation is a key for overcoming a prisoner’s dilemma (see the sketch after this list).
It helps ensure safety for stakeholders that have insufficient leverage to negotiate with more powerful stakeholders on their own (consumer protection regulation is an example).
It is forward-looking, with consideration of how processes and systems are evolving and how the regulation might apply to their anticipated future states.
It costs businesses and societies less than the risk it protects against.
Its second-order effects are manageable and do not undermine the intent of the regulation (e.g., by driving activity outside of the regulatory jurisdiction en masse without actually reducing risk in the system as a whole).
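To make the prisoner’s dilemma point concrete, here’s a minimal sketch under illustrative assumptions: the two-firm setup, the “invest”/“cut” choices, and the payoff numbers are all hypothetical, not data from any real market. Left to compete, each firm’s best response is to cut corners on safety, so both end up worse off; a mandate that takes corner-cutting off the table lands both firms at the outcome they preferred all along.

```python
# A minimal sketch of the prisoner's dilemma that smart regulation can break.
# The two-firm setup and payoff numbers are illustrative assumptions, not data.
# Payoffs are (row_firm, col_firm) profits indexed by (row_choice, col_choice).
PAYOFFS = {
    ("invest", "invest"): (3, 3),  # both pay for safety on a level playing field
    ("invest", "cut"):    (0, 4),  # the investor loses share to the corner-cutter
    ("cut",    "invest"): (4, 0),
    ("cut",    "cut"):    (1, 1),  # race to the bottom; everyone is worse off
}

def best_response(opponent_choice, allowed):
    """Return the row firm's payoff-maximizing choice against the opponent."""
    return max(allowed, key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# Compare equilibria without and with a mandate that removes the "cut" option.
for allowed in (["invest", "cut"], ["invest"]):
    # The game is symmetric, so one best-response function serves both firms.
    equilibria = [
        (a, b) for a in allowed for b in allowed
        if a == best_response(b, allowed) and b == best_response(a, allowed)
    ]
    label = "no mandate" if "cut" in allowed else "with mandate"
    print(label, equilibria, [PAYOFFS[e] for e in equilibria])
```

Under these toy payoffs, the unregulated equilibrium is (cut, cut) with payoffs (1, 1), while the mandate moves it to (invest, invest) with payoffs (3, 3)—regulation as the key that unlocks the cooperation both firms wanted but couldn’t reach unilaterally.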
In systems thinking terms, smart regulation seeks high-leverage points and applies pressure there. Identifying those high-leverage points for established or nascent risks may require in-depth interviews with stakeholders; objective assessment and analysis of responses; and coordination among parties with multiple and sometimes competing interests.
… And what not to do
In contrast, here are a few hallmarks of cumbersome regulation (though a regulation doesn’t have to hit all these points to be cumbersome!):
It is backward-looking—enacted to protect against the last crisis—and/or lags behind technology, business, and society.
It does not sufficiently assess all stakeholders’ likely reactions to the proposed regulation, so second-order effects undermine the intent of the regulation.
It is overly rigid and poses a much greater burden for smaller businesses than for larger businesses.
It attempts to correct course after the fact or not at all.
It costs more to implement than the risk it protects against.
It leaves the state of play worse than it was before, as companies spend money trying to comply (sometimes tactically rather than strategically), yet risk is reduced less than planned.
Overall, if the weight of regulation becomes ever heavier without corresponding benefit, the problem likely lies in how the regulation was researched, scoped, and implemented. Regulations can be changed and improved, but it’s much easier to get them right the first time.
An approach for emerging risks
Getting regulation right is especially tough for emerging risks like AI risk, and tougher still when looking ahead to technologies and capabilities that don’t yet exist. Proactive, smart regulation that aims to prevent emerging risks from manifesting would:
Engage all stakeholders to ensure complete understanding of the technology as-is and as-might-someday-be.
Explore potential worst-case scenarios.
Ask how systems could be broken or manipulated to achieve those outcomes.
Take steps to prevent those systems from breaking or being manipulated in those ways.
Also, proactive regulation would assess each risk on its own merits and tailor its approach, rather than shoehorning an existing framework wholesale onto a novel situation. It wouldn’t attempt to control every possible thing that might go wrong. Instead, it would focus on preventing the highest-risk outcomes.
AI regulation dilemmas and the importance of preventive controls
AI risk poses particular challenges for regulation. Like nuclear weapons, AI that surpasses human intelligence could wipe out humanity. But nuclear weapons tend not to fire themselves, whereas self-aware AI might take action without its creators’ awareness or agreement.
Moreover, nuclear weapons are easier to track centrally, and materials bottlenecks significantly limit proliferation. The main bottlenecks to creating AI are compute, training resources, and human brainpower applied creatively and correctly. As a result, AI risk is somewhat nonlinear: whether the most powerful country or company creates artificial general intelligence (AGI) or a small startup does, some of the risks may be similar in magnitude. How can regulators possibly police all countries, all rogue groups, all companies of all sizes, and all tinkerers sufficiently? They cannot (although identifying key gateways to AI training clusters and placing controls there could help!).
Even AI’s creators may not be aware when their AI has achieved AGI, if it’s not in the AI’s interest to let them know right away.
In the face of this dilemma, preventive controls are especially important with AI because, in a true AI disaster, there might not be sufficient time for detective and corrective controls to take effect. In some other fields, insufficient regulation and lax controls can lead to bad outcomes, but the damage may take years to unfold. With AI, insufficient regulation and lax controls could lead to bad outcomes extremely rapidly.
To sum up, it benefits humanity to create smart AI regulation that balances competing interests and targets key leverage points, despite the difficulties and long odds. In next week’s article, I’ll dig into possible paths and approaches for AI regulation.
-<>-<>-<>-
Extra, Extra!
Tangential extras for curious readers:
1. Artificial Intelligence: A Guide for Thinking Humans - by Melanie Mitchell - I’m reading this book about AI.
2. Machine Learning Shaking Up Hard Sciences, Too - by Dan Garisto in IEEE Spectrum - on how particle physicists are finding new uses for machine learning, and in turn learning from machines.