9 Comments

Excellent essay! I love the system dynamics chart for regulation.

I feel that this situation reflects our natural instinct to fear what we don’t understand. As a first step, I think we should consider a massive effort to improve public understanding of what AI is and what it is not.

Part of me does not like the idea of pausing innovative research. Why not then pause genetic engineering or other fields of research that are advancing rapidly and pose significant ethical issues?

Another part of me does not like the idea of regulatory intervention. I appreciate your points about effective regulation. My experience from the medical device industry is that the FDA has done all the right things in developing the Quality System Regulation. Yet its implementation and enforcement have not been effective. Even the FDA has realized that it has done little to promote a culture of Quality in the industry.

Part of me feels that those who are highlighting the risks are the ones looking to gain from such a pause. If we are afraid of AI-driven misinformation overwhelming us, then this should have been a concern a long time ago.

So what is the answer?

One idea to consider is to have something like an IRB (institutional review board) review AI-driven applications before they are made available for public or commercial use.

I am more fearful of humans abusing AI than AI by itself.

author

I agree a massive effort is needed to improve public understanding of how deep learning works. I would estimate about half of people who interact with GPT-4 might view it, on some level, as a person inside a computer. My estimate may be too high (I hope it is!), but misunderstanding in itself could raise the risk of protests, upheaval, insider threats at companies to "help" AI, etc.

Pausing training runs for future, more advanced models doesn't mean pausing all innovation. It would be great to better understand how to visualize or triangulate how GPT-4 and GPT-3.5 work internally, even if we can't get perfect traceability; to test the interactions among various plugins' capabilities with GPT-4 and work through emergent problems. And there should be a lot more innovation and testing around controls.

As I understand it, we've paused certain aspects of genetic engineering worldwide (when a scientist edited twin babies' genes before birth, he was censured: https://www.theguardian.com/science/2023/feb/04/scientist-edited-babies-genes-acted-too-quickly-he-jiankui) because we view them as too risky and don't understand the long-term effects. Other aspects of genetic engineering, like curing diseases with CRISPR via clinical trials, are proceeding. This seems like a good approach because time was taken to understand the risks, and regulatory mechanisms were and are in place.

I do hear you on failures of regulation. Having worked in financial regulation, I've seen both good and bad regulations in action. What do you think could make the FDA implementation and enforcement for Quality more effective?

I like the IRB idea you suggested because it could be implemented relatively quickly and in a reasonably lightweight fashion. The trick is whether its recommendations could be binding; that's a strength of formal regulation.


Excellent discussion on this topic, Stephanie. As far as the FDA's medical device regulations are concerned, we have to find a way to create change both within the FDA, on its enforcement practices, and within manufacturers, on their current Quality Management practices. The system of (surface) compliance has been built up over the last 25 years. Change is coming, though: the FDA is working on harmonizing the Quality System Regulation with ISO 13485. It will be a significant move, which hopefully will drive the right behaviors in the industry. Having said that, I fully appreciate the power of the status quo.

I think market forces will have a better chance.


Fascinating topic, Stephanie! I am quite sure your estimate of understanding of GPT-4 by the broad populace is kind :) By any measure, internal applications of LLMs many times more extensive than GPT-4 have been deeply integrated and running in the wild in Gmail and Google Search for MANY YEARS now, and ALMOST NO ONE is even aware of it! How good are they? The fact that they've not created controversy and are not rife with stupid and false answers is part of the answer. The best AI/ML is invisible; the worst (like ChatGPT and Tesla self-driving) uses humans as guinea pigs. The worst actors should be the target of regulation.

As for regulations, they can have negative consequences in both open and closed societies. The PRC's desire to control everything has largely destroyed the AI dreams of Baidu and many others. This was a great gift to America for a generation. Despite an inherent advantage of larger training data sets, it seems to have been squandered due to the PRC's heavy-handed regulation.

Apr 1, 2023 (edited)

Naveen -- you articulated part of what I was struggling to say!

Stephanie -- you draw great causal diagrams!


I look forward to your posts -- thoughtful for sure! My only experience, and hence my naysaying, is that the last thirty years of the Information Age have taught us the first-mover advantage is durable. While organizations with more mature AI initiatives DID NOT throw their horses into the ring irresponsibly, GPT-4 has been lauded and rewarded with a first-mover advantage. The horse is out of the barn. I am confident we are in a bad position right now b/c the first mover was greedy, irresponsible, lacked care and consideration, and acted in the absence of regulation. The highest priority might be to remove their first-mover advantage, or we will encourage more bad actors to model GPT-4 and Microsoft's behavior in the future. I believe, if regulation makes sense, it will be NECESSARY to not allow the immature first mover to solidify its position and accumulate an advantage. If this is not done, all competitors will use force to prevent compromise, as they would be sacrificing their own future. Beyond that, you have wonderfully explained the need to step back and assess this new era.

author

I agree that our current semi-shambolic position is due to a gambit for first-mover advantage, and more cautious companies that moved with greater care suffered for their caution. It's one reason why I think allowing controls and regulation to catch up is necessary; it could go a long way toward preventing breakneck races like what just happened. By mandating controls, you reduce residual risk both via controls and via slowing the pace of innovation (because controls are mandated to keep pace with innovation).
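
For the curious, here's a toy sketch of that feedback loop in Python. This is purely my own illustration, not a model from the essay, and every rate and lag in it is invented: capability grows each step, mandated controls chase it with a lag, and the permissible pace of innovation slows when the control gap widens. Residual risk is just the uncontrolled gap.

```python
# Toy model of the controls-vs-innovation feedback loop described above.
# All rates and lags are invented for illustration; this is not a real
# risk model, just the shape of the dynamic.

def simulate(steps=50, control_lag=5.0, mandated=True):
    capability, controls = 0.0, 0.0
    innovation_rate = 1.0
    for _ in range(steps):
        capability += innovation_rate  # innovation keeps advancing
        if mandated:
            # Regulation forces controls to track capability, with a lag.
            controls += (capability - controls) / control_lag
            # A wide control gap slows the permissible pace of innovation.
            innovation_rate = max(0.2, 1.0 - 0.05 * (capability - controls))
    return capability - controls  # residual risk = uncontrolled gap

print(f"residual risk with mandated controls:    {simulate(mandated=True):.1f}")
print(f"residual risk without mandated controls: {simulate(mandated=False):.1f}")
```

The exact numbers are meaningless; the point is the shape of the dynamic. With mandated controls, the gap settles to a bounded equilibrium; without them, it grows without limit.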

I don't think there's anything wrong with being the first mover, as long as you are *also* the first mover with controls (and I don't see that happening right now, in the absence of regulation). What do you think?


First of all, semi-shambolic for the win!!! My take is AI/ML has been methodically entering the market in lots of places. There have been good actors and bad actors. It has now moved to the phase where those late to the game are GAMBLING (like GPT and, to a lesser extent, MSFT) just to be noticed and stay relevant.

Taken outside of GPT, the Tesla example is perfect. We have an ARROGANT, RECKLESS late arrival placing an array of cameras in a car and ignoring all oversight. They have never been perceived as in the leading quadrant; they are merely reckless. Peddling a product for years and charging $10K for "Full Self-Driving" when independent analysis considers it Level 2 AT BEST. Thousands of people sleeping in their cars. Absurd! I genuinely believe the LLMs are in the same circumstance.

I think AI/ML is baked into so many Google products already, the horse is out of the barn. It is SO GOOD people don't know it's there. A chatbot and integration into the Assistant will probably happen very soon and just work. Once there, people won't want to live without it, the same way they can't live without Google, Gmail, and GMaps. I think Google chose not to explain how 200M proteins fold until they were confident it was correct. They weren't arrogant and didn't gamble credibility. ChatGPT is focused on cloning poems and 4GL code generation right now as a parlor trick. It seems to me that while it is cool that a chatbot can write a resume, it would be immensely more challenging to operate the ML workload to analyze the resumes. I have a feeling those analytics are not running on ChatGPT. If this were Reddit I would UPVOTE b/c of shambolic!!!

Apr 2, 2023 (edited)

I'd like to share in your optimism about regulation, but as a professional cynic, I see expanded regulation as more room for grift, incompetence, and having meetings instead of getting anything done.

There's something unnerving about the AI that doesn't quite tickle the "yuck" reaction the way things like gene-editing, cloning, and gain-of-function research do. It's so good at presenting as a ghost in the machine that even people who should know better end up believing there's a homunculus in there staring out from behind the chat box.

Combine our natural tendency to see agency in noise with the ease of integrating these tools into our built environment, and that's a recipe for hijinks.

I'm not convinced that a moratorium on one *variety* of machine learning model is going to do much. The LLMs are spooky, as are the deep-learning methods in general, but they're not the only game on the market by far.

AI's real threat is death by ten thousand cuts as new techniques out-compete us like an army of invasive species in the back garden. I'm not sure that any political efforts can stem that tide.
