I’m sated with Thanksgiving leftovers, catching up on sleep, and I’ll just say: the OpenAI drama this past week gives me massive empathy for people who enjoy keeping up with the Kardashians.
Many years ago (late '90s, early '00s) I fell under the spell of Ray Kurzweil's books and enjoyed the speculative predictions he made. At the time, especially in the '90s, the crossover point where machine intelligence would exceed human intelligence seemed a bit much. But the greater theme of the writing was not that exact moment; it was what the following decade would bring: an utterly unrecognizable world might be ushered in. I believe products and initiatives like Neuralink already have almost unlimited volunteers, and the implants have been approved for human trials. I would expect that within 3-5 years each of us will know someone with a Neuralink-type device implanted. That seemed so unlikely to me in the '90s but seems all but inevitable now. Kurzweil marked 2029 or earlier as the landmark year when a machine intelligence would exceed a human intelligence; in the subsequent decade, machines would begin to approach the CUMULATIVE intelligence of all humans. I figure the last week was mostly about Microsoft getting value for its $14B investment thus far. It seems inevitable that Google's Gemini project will choose to push the envelope and ignore the barriers, just as OpenAI has.
I read a Ray Kurzweil book in the late 1990s! I also thought his ideas were pretty out there, and I doubted things would work out the way he envisioned them (I still harbor those doubts, though in different ways). But I have respect for him perhaps getting timeframes broadly correct. I think Google has shown admirable restraint in what they choose to release and when.
My judgment is reduced after a night at Scotch Club. Tonight's theme was bourbon mixed with fall flavors like cranberry. I am amenable to new opinions :) -- my only comments on what you shared: (1) all quite sensible; (2) futurists deserve admiration for their willingness to speculate and accept the Monday-morning quarterbacking; (3) Google is aiming for a multisensory model (audio and video coupled with text), much closer to the human condition than text prediction alone. Their training dataset is richer. I think they will trail GPT for a bit, but the long game is to model human sensory perception. The 3D modeling of proteins in AlphaFold illustrates the difference in approach. 50%+ of the human experience is visual -- I think that describes the human experience best. Parts of Kurzweil always made me uneasy.
I could not post something half as literate after even half a Scotch ;-) AlphaFold is amazing!
In the moment all of us in the club are sure we are making sense. I'm not so sure :)
I've been following the train wreck at OpenAI with interest, largely because the BBC World Service was interested and kept reporting on it. It sounds like there are some real governance issues with the company - in fact, the whole company is an analogy for AI in general: it's become much bigger and more complicated than the governance/regulation structure was designed for.
I'm actually speaking with someone who has been looking at AI ethics and regulation this week, because the whole thing terrifies me and the nature of the internet means that national borders are largely meaningless.
I think this is an important point: national borders don't necessarily apply to AI impacts, and we are all stakeholders in the future of this tech. An international cooperation approach may end up being the only path to reasonable regulation, but the focus should be on the biggest, most impactful frontier models, since it's best to apply regulation at bottlenecks. National-level laws and regulations can probably deal with smaller models and their use cases (for example, we really need laws about how AI can and can't be used within the justice system, and that'll need to be different for each country based on its own system; Biden's new Executive Order lays some groundwork here for the US).
I'm so glad you explained everything, as I'm not up to date with the AI news. The unknown is always a bit daunting, but in this case hopefully Altman will allow proper regulations so humanity isn't destroyed. (I can't believe I wrote that last bit, but there it is.) One other issue, though -- who will regulate? Folks in government know nothing about advanced tech.
*Many* folks in government know nothing about advanced tech, but there are exceptions. The US Digital Service (https://www.usds.gov/) is excellent, and NIST folks (https://www.nist.gov/) are largely tech-savvy. At one point, people in government knew nothing about flight, and now we have the FAA, so I remain hopeful.
They'll have to pay more than customary government salaries, but there are organizational/agency structures that can make that happen or offer benefits that make up for some of the difference (a quasi-governmental organization structure, excellent pensions with short vesting periods to balance out salary shortfalls, remote work, etc.).
Great info. Thx. 🙏
Thanks for your thoughts, I really appreciate hearing your insights from the viewpoint of risk management. That's sadly absent most of the time now.
Also, sorry for typing your name wrong in my initial reply! I just woke up from a nap :-/
No worry! It's an unusual name and autofill/autocorrect can trip it up too LOL
Thanks, Monette! I find risk - and its management - endlessly interesting, but too often those stories are told in dry/boring ways, or risk management gets maligned by people who want to go full speed ahead. Most great advances involve finding the right balance of risk-taking and risk management: the right amount of either is neither 0% nor 100%.