A few weeks before ChatGPT’s release, I published an essay about possible paths for AI regulation. Now seems like a good time to revisit it and gauge where we stand with regard to AI risk, since ChatGPT, Sydney/Bing, and Bard have lots of people talking about it.
My opinion is there are an awful lot of things like AI we can worry about. The world is simply configured such that cooperation of the sort required is an illusion. As tough as it sounds, we are in a position to merely hope for the best. There are plenty of governmental systems for which cooperation and consensus are nearly impossible. Here at home in the US, the reality is that via courts and philosophy we've empowered corporations to exceed the power of nation-states almost without consequence. Our "best" hope, upon which we gamble our future, is that multinationals will discover solutions with such great ROI that we adopt them as the way to deal with problems of all sorts. Beyond that we have little or no say at all. AI is merely the latest case in which we hope the next innovation will keep up with the unintended consequences of the last one. The behavior of Airbus and its internal controls have had no impact on the latest debacle at Boeing. This is the system we believe in and stake our future on.
I'm an optimistic realist, I guess; I've seen regulation have impacts (both good and bad) on the financial system - and financial firms are among the most politically connected firms in the US. I believe that good/smart regulation can work for high-risk technology, too. Cooperation and consensus are hard and time-consuming but not impossible, especially when it comes to urgent and important developments. One of the hardest things is that most people want to take credit, when the most important thing is just reducing the risk.
I don't know much about the aviation industry and its oversight, though I'm aware of the Boeing 737max debacle. I agree, though, that regulation in isolation isn't great; it's best to identify leverage points where controls will have high impact and then get that coordination across industry through regulation, information-sharing groups, etc. The risk is mis-regulation (mandating ineffective or bad controls), so there should be lively debate over proposed regulations, but fear of mis-regulation shouldn't end up resulting in essentially no regulation.
I'm optimistic that things will work out. We've managed in the last 75 years to avoid some big existential risks. A lot of that has been luck and some cooperation. The world was simpler during the Cold War, and we avoided the worst consequences with a few close calls. With ten nuclear-capable nations, the time ahead is much more difficult to navigate. We managed to get Russia and the US to cooperate at some level; that is state-level regulation. It will be a remarkable thing if we get through the next 25 years without the use of a nuclear weapon. The complexity of diplomacy with ten nations at the table is likely unmanageable. We made it through with the Soviet Union, and that required only one consensus agreement. Today a comparably comprehensive arrangement would require 45 bilateral agreements, one for every pair of the ten nations. Beyond the realm of possibility IMO.

I think most of us don't even pay attention to the big stuff anymore. Well under 25% of the people I know are even aware that at the tail end of the Clinton administration, Pakistan and India went on high alert and fueled their nukes. For most people it isn't even on the radar, and for most of the tough stuff ahead the same is true.

China has more coal plants than the rest of the world combined! In the absence of cooperation, nothing we or frankly any other nation on earth does will put a significant dent in fossil fuel emissions, for example. It has been a monumental effort to reduce coal usage worldwide, and China alone, through inaction, makes all of that effort largely for naught. Global consensus in the context of 200 nations is quite a parlor trick!
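The jump from one agreement (US–USSR) to 45 is just pairwise combinatorics: among n parties there are C(n, 2) distinct bilateral pairs. A quick sketch, with the function name being my own illustration:

```python
from math import comb

def bilateral_pairs(n: int) -> int:
    """Number of distinct bilateral agreements possible among n parties: C(n, 2)."""
    return comb(n, 2)

print(bilateral_pairs(2))   # US + USSR during the Cold War: 1 agreement
print(bilateral_pairs(10))  # ten nuclear-capable nations: 45 agreements
```

The count grows quadratically, which is the core of the pessimism here: doubling the number of parties roughly quadruples the diplomacy required.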