I’m sated with Thanksgiving leftovers, catching up on sleep, and I’ll just say: the OpenAI drama this past week gives me massive empathy for people who enjoy keeping up with the Kardashians. If only the stakes weren’t so high! Onward….
OpenAI CEO Sam Altman woke up Friday, November 17th, expecting a different day than the one he got. Also, most of us probably live in a world that is fundamentally different from what we think it is. We won’t know how it’s different for months or perhaps even years, but OpenAI will probably play a role.
A brief rundown of the main events that engulfed OpenAI last weekend (if you know all this stuff already, skip to the next section):
Friday, November 17th: Something triggered OpenAI’s board of directors, including chief scientist Ilya Sutskever, to fire Altman and remove company president Greg Brockman from his board chair role. CTO Mira Murati was named interim CEO. Brockman resigned his role as president in response to Altman’s firing and his own demotion.
While the board may have been considering this move for a while, it didn’t look that way from the outside. The announcement was more pointed than typical corporate-speak but unaccompanied by concrete context (aka reasons), and major investors, like Microsoft, which has invested billions in OpenAI’s for-profit operating company, had virtually no advance notice. As Microsoft CEO Satya Nadella put it, “Surprises are bad.” (Satya is totally correct! In my work as an IT risk regulator, I learned that the sooner you give bank management a heads-up about what you think is okay and not-okay, the fewer surprises there are in the formal delivery of examination findings, the less pushback you get, and the more strategic the remediation is in the end, which is the most important part. Everything is better if there are no surprises.)

Over the weekend: Altman and Brockman teamed up and met with Satya Nadella while also entertaining overtures from OpenAI management (notably not from the board) to return to their former roles. By the end of the weekend, the board had instead installed a new interim CEO named Emmett Shear, while Altman and Brockman had agreed in principle to move to Microsoft and work on AI there, possibly along with senior employees who had also resigned in solidarity.
Overnight Sunday into Monday: The rest of the OpenAI employees displayed staggering solidarity, too, with more than 500 (and ultimately more than 700) signing a letter saying they would resign if the board did not itself resign and reinstate Altman and Brockman. Nadella said Microsoft would support whatever Sam and Greg decided, and chief scientist Ilya Sutskever publicly changed his mind and signed the letter. The board hung on for another day before conceding to a reorganization late Tuesday night.
As of Wednesday, November 22nd: Altman and Brockman are back. The board now consists of Bret Taylor, Larry Summers, and Adam D’Angelo (the sole holdover from the old board). Former board members Helen Toner and Tasha McCauley are out, and Ilya Sutskever is still chief scientist but is not on the board (and neither are Altman and Brockman). Mira Murati is still CTO.
So: meet the new boss, same as the old boss?
I very much doubt it. This shuffling has interesting implications.
First and probably foremost, the board’s power was mainly based on the threat that it could do what it did: fire Sam Altman. But it shot its shot, and unlike with Travis Kelce and Taylor Swift, it didn’t work out, not even a little. We now find ourselves in much the same situation as before, but somewhat worse from a risk management perspective.
Will the board ever be able to pull that maneuver again? My personal view is probably not, barring major malfeasance, which a widely reported internal OpenAI memo said did not take place this time … so why did they pull the trigger? Whatever the board’s plan was, as I wrote last weekend, no plan survives contact with reality intact. This one detonated on impact.
One big takeaway is that the board’s silence after issuing its initial statement was not golden. Honest explanations would have helped a lot. This is often true with corporate and political machinations: in the absence of a clear message with provable rationales, rumors fly, which can get employees and investors up in arms and set the stage for chaos to flourish. (As a side note, this was also true for actions taken to stabilize markets in the 2008 financial crisis! I understand the impulse not to spark panic, but honest explanations along the lines of “We are doing X to shore up Y, not to give handouts to Z for their own sake” could have gone a long way. I’m not a fan of secrecy at the expense of stability. Secrecy exists to maintain stability sometimes, but taken too far, it can have the opposite effect. People can cope with a lot more information than power-brokers think they can.)
Lingering questions
Given the still somewhat cloudy view from the outside of what happened, I do wonder why Ilya Sutskever initially agreed to the ouster, then changed his mind. I think he did not expect the magnitude of the backlash, decided he wanted to keep working on transformative AI, and concluded that it could be more dangerous to have Altman, Brockman, and several hundred all-in employees working in perhaps less-constrained ways on transformative AI at Microsoft. But those are just my speculations.
To be fair, this is not to say Altman is necessarily full-speed-ahead. OpenAI has held back on releasing advances in the past when it felt controls weren’t ready1, Altman has been publicly open about the various potential risks of AI advancement, and he seems open to reasonable regulation. These past behaviors are heartening.
Most importantly at this point, where do we go from here? In the future, humans cannot make decisions involving artificial general intelligence (AGI) this way, by careening from chaos to chaos, and expect it to go at all well. Humans need to do better.
Otherwise, we really are just shuffling deck chairs on the Titanic.
Speaking of (weak) artificial general intelligence….
Yes. As a meme circulating on Twitter goes:
Reuters broke a story on the eve of Thanksgiving about a possible capabilities breakthrough earlier this year at OpenAI, called Q* (Q star). Nothing is confirmed about Q*, but the Reuters article stated that it may be able to reason about math problems better than current LLMs can. If this capability (a) exists at all and (b) scales, it could pave the way for AI that’s less of a remixer and more of a discoverer.
Think of a turbo-powered blender (worth clicking for humor), which chops up raw ingredients and serves them up in a new form, as an analogy for current LLMs. Think of Alexander Fleming’s discovery of penicillin, and its subsequent industrial production, as an analogy for possible future AI breakthroughs. Neither analogy is perfect, but they hopefully convey the main difference.
I find the Q* theory appealing because it could potentially explain several things, particularly the board’s sudden and seemingly radical action followed by silence, and Ilya Sutskever’s involvement and subsequent turnaround when it seemed the company might evaporate. It is also unsettling. If even weak AGI has entered our world, how will things change?
Evidence is circumstantial. Yes, back in early October Sam Altman posted on Reddit for the first time in more than five years, writing, “agi has been achieved internally”. He then walked it back as a joke, but I can absolutely imagine Altman and a Reddit co-founder in the Y Combinator orbit making a deal ages ago that if OpenAI ever achieved AGI, the news would be posted first on Reddit. If that did happen, it would be very Silicon Valley.
Somewhat more concretely, there are also these quotes from not even two weeks ago. At the Asia-Pacific Economic Cooperation event on November 16th (yes, one day before the board’s maneuver), Altman said (at around 13:22 in the video below): “Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I’ve gotten to be in the room when we … push the veil of ignorance back and the frontier of discovery forward.”
Later, during the same event, Altman said in response to a question about AI in 2024 (at around 50:33 in the video): “The model capability will have taken such a leap forward that no one expected … no one expected that much progress. It’s different to expectation. I think people have, like, in their mind how much better the model will be next year, and it’ll be remarkable how much different it is.”
Takeaways
What does all this mean? I don’t know. Maybe nothing; maybe the maneuvering really was just a case of overblown politics and insufficient planning. Or maybe something; maybe we’ll look back on this moment in a year or two with greater understanding and context and realize everything has changed.
One thing’s for sure: risk management is still important. I refuse to polarize into a camp and declare war on another camp. “AI acceleration versus AI deceleration” shouting matches will probably produce bad outcomes if they devolve into a culture war, because the base case for culture wars is that they produce bad outcomes.2 There’s room for AI regulation and AI progress: understanding the differences among various use cases; considering all arguments, data, and implications, including knock-on (nth-order) effects; simulating various outcomes; and making informed decisions about capabilities and controls. This is what people in critical industries do.
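As an aside, to make “simulating various outcomes” slightly less abstract: below is a toy Python sketch of the kind of back-of-the-envelope scenario analysis a risk team might run. Every posture, probability, and payoff in it is invented purely for illustration (this is not anyone’s actual model); the point is the habit of comparing expected outcomes under different control postures before committing, not the specific numbers.

```python
import random

# Toy scenario simulation: compare expected outcomes of deploying a new
# capability under different control postures. All probabilities and
# payoff figures below are invented purely for illustration.

SCENARIOS = {
    # posture: (probability of a serious incident, payoff if things go well,
    #           payoff if an incident occurs)
    "ship fast, minimal controls": (0.30, 100, -400),
    "staged rollout with monitoring": (0.10, 80, -150),
    "delay until controls mature": (0.02, 40, -50),
}

def simulate(posture: str, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the average payoff for one control posture."""
    p_incident, benefit, loss = SCENARIOS[posture]
    total = 0.0
    for _ in range(trials):
        total += loss if random.random() < p_incident else benefit
    return total / trials

if __name__ == "__main__":
    for posture in SCENARIOS:
        print(f"{posture:35s} expected payoff ≈ {simulate(posture):7.1f}")
```

In real risk work the inputs would come from data, expert judgment, and debate, and a serious model would account for knock-on effects; the toy above just illustrates why “fastest” and “best expected outcome” are not automatically the same thing.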
AI could produce massively beneficial breakthroughs in medicine, in education, in energy. It could also send our civilization hurtling off the rails. Both of those sentences are true. Open discussion and lively debate are vital going forward. We need to be the humans we want to be when we envision a transformative future. Last week was not it.
So, what’s next? We’ll have to live it to find out. I just hope we do it with wisdom, grace, responsibility, and enough conscientiousness that the pro-wrestling vibes of pre-Thanksgiving fade into a brighter history.
2. To clarify: when I say “culture war,” I mean purposeful, manufactured escalation of differences into polarized extremes (aka “othering” of each other).
Many years ago (late ’90s, early ’00s) I fell under the spell of Ray Kurzweil's books and enjoyed the speculative predictions he made. At the time, especially in the ’90s, the crossover point where machine intelligence would exceed human intelligence seemed a bit much. The greater theme of the writing was not the moment itself but what the following decade would bring: an utterly unrecognizable world may be ushered in. I believe products and initiatives like Neuralink already have almost unlimited volunteers, and the implants have been approved. I would expect that within 3-5 years each of us will know someone with a Neuralink-type device implanted. This seemed so unlikely to me in the ’90s but seems all but inevitable now. In Kurzweil's telling, 2029 or earlier was the landmark year when a machine intelligence would exceed a human intelligence, and the subsequent decade would begin to approach the cumulative intelligence of all humans in machines. I figure the last week was mostly about Microsoft getting value for its $14B investment thus far. It seems inevitable that Project Gemini will choose to push the envelope and ignore the barriers, like OpenAI.
I've been following the train wreck at OpenAI with interest, largely because the BBC World Service was interested and kept reporting on it. It sounds like there are some real governance issues at the company; in fact, the whole company is an analogy for AI in general: it has become much bigger and more complicated than its governance/regulation structure was designed to handle.
I'm actually speaking with someone who has been looking at AI ethics and regulation this week, because the whole thing terrifies me and the nature of the internet means that national borders are largely meaningless.