When things move fast, it can be hard to keep perspective on how much risk to take. In the world of AI, it feels like everything is moving fast all at once, and our world will be reshaped in unpredictable ways. As I wrote in my article on Types of Risk, AI has unbounded upside potential and unbounded downside risk—and it’s uncertain which outcome will manifest.
In the current AI boom, it’s important to understand that there will be two phases of AI development and AI risk management[1]:
Phase 1: The phase we are in now, when we have the opportunity to learn from our mistakes and improve controls. In this phase, our concerns might center on misinformation floods, developer ethics, AI ethics, and equality of outcomes, and we may be able to create and use cool new apps and services, some of which may be world-changing[2]. AI safety in this phase may resemble social media safety.
Phase 2: The phase we will enter at some undetermined future time, possibly without our immediate awareness, when AI will have advanced to a point at which mistakes become catastrophic. AI safety in this phase may resemble nuclear weapons safety or bio-lab safety, with near-zero tolerance for errors.
The bottom line is that risks of AI development and failures of AI risk management will become more consequential from a safety perspective as AI technology evolves. Organizations will need to remain fully aware of where we are within Phase 1 in order to understand when we might be at risk of crossing into Phase 2, so they can shift gears before we collectively hit the wall.
On anxiety about AI
This brings up a larger question that seems especially relevant given the general uncertainty about our future amid climate change, AI evolution, and some pretty-much-crumbling societal infrastructure: how can you keep your risk ship steady during dangerous times?
There’s a wonderful quote in The Fellowship of the Ring where Frodo and Sam are leaving the Shire on their quest, and Frodo reminds Sam what Bilbo used to say:

“It’s a dangerous business, Frodo, going out your door. You step onto the road, and if you don’t keep your feet, there’s no knowing where you might be swept off to.”
This quote beautifully describes the role risk plays in making life fulfilling despite danger. While it can be tempting to close the curtains and lock ourselves indoors, letting the world spin on outside while we stay safe, safety is only valuable in the context of a fulfilling life.
If we never ask anyone out or agree to a date, we save ourselves heartbreak but wall off passion and intimacy. If we never leave one job for another or start a new venture, we prevent uncertainty but wall off career advancement and potential profit. If we never invest our money, we never face total loss, but we lose the opportunity for returns that outpace inflation. And if we spend all our time worrying about the future, we don’t live our present.
Are you saying I should strive to feel uncomfortable?
Not necessarily. Every person and organization has a different risk appetite. The two extremes of risk appetite go something like this:
Extreme A: Risk is to be minimized.
Extreme B: Risk is the logically coherent path to maximal outcomes.
Either way, extremes rarely work out well.
Someone who lives by Extreme A may spend their life wishing for things but never taking steps to realize those wishes, because it feels too dangerous; in the end, they live a circumscribed life.
Someone who lives by Extreme B may find that their risky bets ultimately harm themselves and others, leaving them in unsustainable positions.
Extreme A tends to implode quietly over time, and Extreme B tends to explode with fanfare seemingly all at once, but both often lead to failure.
In contrast, moderation in risk management often leads to sustainability. Just as a large, dynamic, and prosperous middle class is a recipe for a stable society[3], calculated but not reckless risk-taking can shape a successful career, a successful portfolio, and successful personal relationships. AI peace of mind gets tricky, since the technology is so potentially world-changing and the outcome so uncertain, but moderation is still possible.
What does moderation in AI look like?
It seems clear that companies are not going to stop or slow development, at least not soon. I do think there’s a strong argument for slowing down whenever capabilities get ahead of controls, or if it looks like we’re nearing a transition from Phase 1 to Phase 2, but that would require strong regulation, and AI regulation is still very much evolving. (In the US, there is no formal regulatory oversight, though NIST has developed an initial, voluntary risk management framework for companies and organizations.)
Right now, strengthening controls seems both feasible and advisable, given the rocky rollouts of some recent large language models (LLMs). So does giving “responsible AI” teams at companies a seat on risk committees and a vote in go/no-go decisions, so senior management and board members can find out and respond quickly if problems are cropping up in their AI systems. And it’s a great idea to push for smart and effective regulation of the AI sector.
Despite my worries about Phase 2 of AI evolution, my personal approach to moderation and peace of mind about AI looks like this:
Enjoy Phase 1 as much as possible.
Don’t work on advancing AI any further or faster than it’s already advancing.
Do what I can to push for improved controls, leveraging my experience and background in operational risk.
What does yours look like?
Making peace with risk
We all live with big risks all the time[4], and most of the time, we muddle through. We’ve had nuclear weapons for eighty years, and we’re still here. I believe that’s due to a lot of unsung effort by many different people across many countries; after all, risk management victories are often silent.
I hope AI will turn out similarly or better. But however it works out, I won’t let worries about the future throw me off-kilter in the present. A full human life is one of balance, and AI won’t change that, even if it makes some things easier and other things harder. We are here, living in interesting times, taking small steps that will coalesce in complex ways. I’m focusing on aiming for positive change with my own small footprints.
[1] I raised this two-phases point as part of my feedback on NIST’s draft AI Risk Management Framework Playbook, so I am walking the talk in pushing for better controls, and I hope you will, too.
[2] As just one example, imagine a service that lets your doctor create a bespoke medicine for a health condition.
[3] OECD (2019), Under Pressure: The Squeezed Middle Class, OECD Publishing, Paris. https://doi.org/10.1787/689afed1-en
[4] Well, most of the time: I’m looking at you, jumpy account-holders actively gutting your own banks. Argh! Another article for another day.
Thank you for sharing your insights.
To me, the qualifier “artificial” in AI is inappropriate. AI is intelligence, just like any other intelligence.
Therefore, the more relevant question in my mind is: what do we want to use this intelligence for?
If we deploy AI without thinking about our purpose, it is no different than letting a drunk person drive a car.
Sorry this is a little long, but it’s one of my favorite topics.
I enjoy your writing about risk as it relates to AI. I especially enjoy the analogies to real life. I continue to believe most of the fanfare around AI at this point is sorely misplaced, though I believe there are some A-grade challenges ahead. The focus remains on consumer-facing tasks like "write a school essay" or "write a thank-you letter." These happen to be the edge of the shallow water where the inquisitive are dipping their toes and reporting on Twitter what happened.
While far from sexy, the closest thing to ML in the consumer space is the now well-recognized capacity of Gmail to block spam. I think when people migrate to Gmail they are struck by its innate capabilities to manage an inbox far beyond what a person can do on their own. It is a weird thing we all take for granted. The amount of compute, insight, and ML required to manage about 40% of the world's email traffic is a feat much more impressive than writing a middle school essay. This, in a nutshell, is why Apple finds a large subset of the users on its "superior platform" unable to get along without ML-charged products like Gmail and Gmaps. The converse is not true: no one is beating a path to Apple's door begging to use Apple Mail.