TED 2024 starts today. Last year was the first year in which AI was a central focus at TED, with talks by Greg Brockman, Yejin Choi, Eliezer Yudkowsky, Sal Khan, Alexandr Wang, and myself in one session, and other talks, like Imran Chaudhri’s launch of the Humane AI pin, a live demo of voice cloning, and more, sprinkled throughout. This year I am especially looking forward to Demis Hassabis, Rumman Chowdhury, and Daniela Rus.
In my view, some of what we heard last year has held up, some not:
• Greg Brockman, OpenAI’s president and co-founder, talked about ChatGPT’s “limitless potential”, discussed wanting to steer the field in a positive direction, and demoed the multimodal capabilities of GPT-4, showing how it could be used to generate not just text but images, a seeming revolution at the time. I think it’s fair to say that the multimodal stuff is still quite buggy, having trouble with basics like the English alphabet. (Example via Ryan Katz-Rosene, yesterday):
What really bothers me, though, is Brockman’s closing: “Together, I believe that we can achieve the OpenAI mission of ensuring that artificial general intelligence benefits all of humanity.” I am just not seeing that as OpenAI’s mission anymore, especially when they have continued to fail to compensate most of the creators whose work they are drawing on, largely without consent.
• In a talk called “Why AI is incredibly smart and shockingly stupid”, Yejin Choi worried that LLMs were not reliable. They still aren’t. She was right.
• I warned that LLMs would be used by bad actors, and they increasingly are. I also urged the world to focus on global AI governance, and am pleased to see some genuine progress in that direction, especially led by the UN and the Council of Europe, with the US State Department actively involved as well.
• Eliezer Yudkowsky warned that AI was (inevitably!) going to kill us all. I am happy to report that so far it hasn’t.
• Sal Khan promised that LLMs would revolutionize education, and introduced Khanmigo. A recent, scathing review in the Wall Street Journal suggests this is going to be harder than it looks; hallucinations (e.g., on basic math) were again the key liability. You can’t teach if your students don’t know whether or not to believe you. I love Khan Academy, but continue to have doubts about whether LLMs are really the right tool for the next step of the job. Earlier today on X, Ben Riley wrote (not specifically about Khanmigo, but about AI and education more generally), “The cognitive scientist @GaryMarcus describes the behavior of generative AI as ‘frequently wrong, never in doubt,’ and that’s just about the worst quality I can imagine for an educator.”
• The Humane AI pin/virtual assistant just doesn’t work well, and a lot of the problem is the underlying AI, which I believe is powered by OpenAI. Reviews last week were scathing. As I put it to the Times, no company yet had A.I. technology that was sophisticated enough to make a virtual assistant answer questions reliably: “It’s almost like a broken watch being right twice a day. It’s right some of the time, but you don’t know which part of the time, and that greatly diminishes its value.”
• Voice-cloning looked scary last year, and a lot of us wondered whether it would be used for mischief. Spoiler alert: it has.
§
TED is a pretty optimistic place, on the whole. And yet, to their credit, they gave floor time to AI’s darkest pessimist (Yudkowsky) and a pair of realists (Choi and myself), in addition to many optimists. Whose talks best stood the test of time?
The optimists? Not so much. Their grand pronouncements haven’t borne out, at least not so far.
The pessimist? Not so much, either. I don’t think we are any closer to a machine with the wherewithal or the motivation to annihilate the human species than we were a year ago. (Which does not guarantee we will always be safe.)
So far, it is the realists from the Class of 2023 that seem to have been best calibrated. Bugs in LLMs have stymied Khanmigo and the AI pin. Adding visual data to LLMs hasn’t solved hallucinations.
I look forward to hearing this year’s talks, and to seeing which best stand the test of time.
Gary Marcus really hopes that Eliezer’s worst fears are never realized.