I signed the letter calling for a six-month pause in training AI models more powerful than GPT-4. Not because I think six months is enough time to solve all problems (it isn’t), or because a magic recipe for AI alignment is near (nope), but because the letter pushes for “robust AI governance systems” that I support, and a six-month pause might open up space for cooperation, attention, and regulation.
Here are parts of the letter that I feel especially positive about:
Standing up new, capable regulatory authorities;
Allowing time for AI controls to (attempt to) catch up with AI innovation;
Implementing independent review, auditing, and certification;
Increasing funding for AI safety research (another way of helping controls catch up);
Working with policymakers instead of viewing government as the enemy, since government involvement in a technology as powerful as AI is inevitable.
These would be steps toward shaping the industry’s future proactively and constructively. Rushing headlong past innovation milestones, like we did with the internet, won’t work—because moving fast and breaking things is insanely risky at the scale and speed of AI.
To offer a metaphor that may be useful:
With the internet, we were running on fresh snow, and sometimes we slipped and fell.
With AI, we’re running on fresh snow that may hide a series of crevasses: thresholds that may appear subtle or almost unnoticeable but are vitally important and should be approached with care, caution, and appreciation for their magnitude. We can’t afford to rush across the snowfield without understanding the terrain. We don’t need to run the Iditarod on a glacial field studded with ravines.
A six-month pause would allow industry participants to better understand the current terrain and make sure they’re standing on stable ground before moving forward with more thoughtful intention.
Cooperation, attention, regulation
A six-month pause also might open the door for more cooperation, draw attention to the possibility of slowing down AI development at key moments, and set the stage for regulation. More on these:
Cooperation. If people who hold widely varying views on AI can agree on a six-month pause, it opens the door to future discussions and future agreements. We don’t know what forms those might take, because conditions always change, but setting a precedent for cooperation among different parties with different viewpoints is incredibly valuable.
Attention. The more people hear about this debate and realize that AI progress is not the foregone conclusion of a breakneck race in which they are powerless, the more likely it is that AI innovation will slow down at appropriate moments in the future. Simply having the debate opens up the possibility space (aka the Overton window) for a time when slowing down might be even more important than it is now. I’ll come back to attention (essentially public relations) in a future post.
Regulation. By mandating standards, regulation can curtail or even banish the worst actors from an industry. It doesn’t and can’t prevent all problems, but it can set baselines and boundaries for behavior. Wildcat banks aren’t a thing anymore because of financial regulation. I can trust that an aspirin won’t poison me because of pharmaceutical regulation. Are there still bad behaviors and bad incentives? Yes. But are regulated systems more trustworthy overall? Also yes.
System dynamics of regulation
Here’s a general system dynamics diagram for regulation:
For a step-by-step introduction to causal loop diagrams, read my primer on drawing causal loops.
To read this particular regulation diagram, first assess each pair of components in isolation (by convention, start these statements with the first component in each pair increasing). There are many pairs in the diagram, so I’ll list a few and then challenge you to complete the list yourself (if you want to see my full list, email me at riskmusings at substack and I’ll be happy to share it):
If controls increase, the pace of innovation will likely decrease (an inverse relationship, denoted with a - sign).
If the pace of innovation increases, residual risk will likely increase (a direct relationship, denoted with a + sign).
If residual risk increases, incidents will likely increase (a direct relationship).
If incidents increase, the perception of risk (fear) will likely increase (a direct relationship).
If the perception of risk (fear) increases, support for regulation will likely increase (a direct relationship).
That’s pretty much where we are now: the pace of AI innovation increased, which increased actual residual risk, which increased incidents (such as the rough initial Bing Chat rollout), which increased the perception of risk (fear), which has now increased support for regulation. If we continue to travel around the loop, increased support for regulation will eventually translate into actual regulation.
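If it helps to see the loop written out, here is a minimal Python sketch that encodes the relationships above as signed edges and prints the corresponding pairwise statements. The edge list covers only the pairs named in this post (plus regulation feeding back into controls, discussed in the next section); the diagram contains more pairs, and the component names here are my own shorthand, not labels from the diagram.

```python
# A toy encoding of the regulation causal loop described above.
# Component names are my own shorthand for the diagram's elements.

# Signed edges: (source, target, polarity). "+" is a direct
# relationship, "-" is an inverse one.
EDGES = [
    ("controls", "the pace of innovation", "-"),
    ("the pace of innovation", "residual risk", "+"),
    ("residual risk", "incidents", "+"),
    ("incidents", "the perception of risk (fear)", "+"),
    ("the perception of risk (fear)", "support for regulation", "+"),
    ("support for regulation", "regulation", "+"),
    ("regulation", "controls", "+"),  # regulation increases controls
]

def pairwise_statements(edges):
    """Render each signed edge as a sentence in the post's convention:
    start with the first component in the pair increasing."""
    for src, dst, sign in edges:
        direction = "increase" if sign == "+" else "decrease"
        rel = "direct" if sign == "+" else "inverse"
        yield (f"If {src} increases, {dst} will likely {direction} "
               f"(a {rel} relationship).")

for statement in pairwise_statements(EDGES):
    print(statement)
```

Extending EDGES with the remaining pairs from the diagram is essentially the exercise suggested above.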
It’s vital that regulation be well-crafted
Here’s where it gets tricky: regulation can take different forms. There’s well-crafted regulation, and there’s poorly crafted regulation. Both types of regulation increase controls, but poorly crafted regulation also has side effects that can undermine support for regulation and ultimately decrease regulators’ power.
One side effect is surface-level compliance, which occurs when regulated companies do the bare minimum to comply with poorly designed regulations. The outcome is wasted time, money, and effort spent on unnecessary staff, unnecessary policies and procedures, unnecessary meetings, and unnecessary paperwork. This waste reduces support for regulation, and if perception sours enough, regulators can lose influence or funding.
In the worst cases, poorly crafted regulations aren’t just inefficient; they don’t actually reduce the targeted risk at all. When that failure becomes obvious as the risk manifests, support for regulation understandably declines, and regulators may lose influence or funding (deservedly so).
The key is to minimize poorly crafted regulation and address the root causes of risks through well-crafted regulation.
What makes regulation well-crafted?
In an essay in October 2022, I listed the following qualities of what I called “smart regulation”:
It is targeted.
It engages the governed, listening to their concerns and gaining their trust as a governed party to ensure buy-in at a strategic level instead of tactical compliance in a check-the-box exercise.
It is not make-work and is reasonably time-efficient, allowing sufficient flexibility in implementation for different types of businesses and organizations.
Ideally, it requires something industry participants mostly wanted to do anyway but couldn’t do unilaterally, since doing so would have put them at a perceived or actual market disadvantage. In essence, regulation in this situation is a key for overcoming a prisoner’s dilemma (see the sketch after this list).
It helps ensure safety for stakeholders that have insufficient leverage to negotiate with more powerful stakeholders on their own (consumer protection regulation is an example).
It is forward-looking, with consideration of how processes and systems are evolving and how the regulation might apply to their anticipated future states.
It costs businesses and societies less than the risk it protects against.
Its second-order effects are manageable and do not undermine the intent of the regulation (e.g., by driving activity outside of the regulatory jurisdiction en masse without actually reducing risk in the system as a whole).
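To make the prisoner’s-dilemma framing concrete, here is a toy payoff sketch in Python. The “safe”/“fast” action labels and all payoff numbers are illustrative assumptions on my part, not anything from the essay:

```python
# Two labs each choose "safe" (invest in controls) or "fast"
# (cut corners). Payoffs are (row player, column player) and
# are illustrative assumptions.
PAYOFFS = {
    ("safe", "safe"): (3, 3),   # both invest; stable, trusted market
    ("safe", "fast"): (0, 5),   # the careful lab loses ground
    ("fast", "safe"): (5, 0),
    ("fast", "fast"): (1, 1),   # race dynamics; everyone worse off
}

def best_response(opponent_action, allowed_actions):
    """Row player's payoff-maximizing action against a fixed opponent."""
    return max(allowed_actions,
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Without regulation, "fast" dominates regardless of the other lab:
for opp in ("safe", "fast"):
    print(f"unregulated best response to {opp!r}:",
          best_response(opp, ("safe", "fast")))

# A regulatory floor removes "fast" from the action set, making the
# mutually preferred (safe, safe) outcome the equilibrium.
print("regulated best response:", best_response("safe", ("safe",)))
```

The point of the sketch: without a floor, cutting corners is each lab’s best response no matter what the other does, even though both prefer the mutually safe outcome; a regulatory floor removes the corner-cutting option and dissolves the dilemma.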
I stand by this list and would add that, for AI regulation, it’s important to source regulators not just from the narrow field of machine learning or from a single philosophical tradition, but from a variety of backgrounds, such as:
Machine learning and software engineering practitioners and researchers
Ethicists and lawyers
Operational risk experts who understand information security and change management, among other areas
Forecasting experts
Former regulators from other industries like energy, aerospace, environment, medicine, and finance, who understand regulatory processes and how to set them up in a streamlined but effective way
Former policymakers
It’s also important to select regulators who won’t be lured away by the first AI company that offers to hire them. That means filtering for mission orientation and a mindset of service to all the humans who will be affected by AI developments, and compensating for the lack of equity shares with generous salaries, benefits, work-life balance, and appealing career paths.
Summing it up
In essence, to bring it back to the letter:
Don’t let the perfect be the enemy of the good.
The letter isn’t perfect, but AI controls need to keep pace with AI innovation, so I signed.
It’s a decent start toward cooperation, brings attention to the problem, and puts regulation on the table.
Although regulation brings its own challenges, it’s likely a leverage point to help controls catch up with innovation. That makes it worth doing—especially if it’s done right.
Six months is not long enough, but it’s a start, and a start is better than nothing. As AI becomes more advanced, controls will need to keep pace with or even outpace innovation. Establishing now both the necessity of controls keeping pace and the mindset that supports it is valuable.
Well-crafted regulation can become outdated and ineffective over time if it isn’t updated to keep up with technological and societal changes.
It’s possible for a well-run company to identify strategic ways to comply with a poorly designed regulation and actually reduce its risk, rather than just wasting time on tactical surface-level compliance, but it would be much better to have a well-designed regulation in the first place.