It’s kind of surreal to compare some of the talks at TED yesterday with reality. Microsoft’s Mustafa Suleyman promised that hallucinations would be cured “soon”, yet my X feed is still filled with examples like the ones Princeton Professor Aleksandra Korolova has been posting.
Suleyman promised, with enthusiasm, that we were going to live in a world full of digital aliens, arguing that “AI should best be understood as a new digital species.”
He didn’t give the slightest hint as to how we might solve the hallucination problem. If we can’t, I am not sure I want more of them.
§
The other thing that Suleyman, who focused strictly on AI, didn’t do was take seriously the problems that a deluge of alien intelligences might bring to the world. How might they be used by bad actors? Could they collude and outwit the human species? Even when Chris Anderson pressed him on that question after the talk, Suleyman scarcely acknowledged any of the real-world harms AI is already causing, or might cause in the future.
I don’t see how we get to a positive AI future if we sweep all the problems under the rug.
§
Vinod Khosla’s techno-optimistic talk was far more substantive, in my view, and to his credit, he understood that carving out a better future isn’t just about surrounding humanity with empathetic chatbots. Khosla made a dozen fairly specific predictions in multiple areas, ranging from medicine to agriculture to infrastructure. He posted a quick summary on X, and here are a couple of excerpts that speak for themselves.
I think Khosla’s timing (all this by 2049) might be off, perhaps by decades, but I don’t see why we can’t get to most of this eventually. I hope that we will.
§
To my mind, the most important question is how we get there.
Among the techno-optimist AI speakers, only Fei-Fei Li did something that I think is absolutely key for real progress: she engaged (albeit briefly) in an analysis of the limits of current AI, so that we might figure out how to get to the next stage. Her conclusion was that current AI needs better spatial intelligence (not unlike what Ernest Davis and I have argued for in our discussions of physical reasoning).
It is one thing for language models to predict sequences of words; it is another to have an AI that truly comprehends the world, and that can safely and reliably maneuver in it. We won’t get to a billion personal robots if they are as dodgy as driverless cars: frequently working, yet stymied often enough by outliers that we can’t fully count on them. A billion robots with only a superficial understanding of the physical world would be a recipe for disaster.
§
Though it may surprise some, I am actually with Khosla in wanting to see technologies that improve the world. But we can only get there if we can face the limitations and risks honestly.
Khosla is a venture capitalist, and I think venture capitalists will have a huge role in whether or not we get to the promised land. They help set the agenda, and they help set the culture.
Right now, I think the net effect of venture capital is not great. Dreamers who promise far more than they are likely to deliver are getting almost all the money, and (at least in AI) they are getting so much money for a path that is unlikely to work that they are leaving too little oxygen for the exploration of new ideas that might work.
Some of this is because most venture capitalists only fund projects in which there is a reasonable expectation of a relatively quick, massive return (e.g., getting a major product to market in 3 years, with an already reasonably mature technology). Startups might be allowed a year for research but rarely more.
The biggest venture-backed impact on AI has come, however, from two of the few projects that were not set up that way, DeepMind and OpenAI, both of which were given a very long leash. I doubt that anyone on a short leash can achieve the breakthroughs we will need if we are to get to the optimistic future so many of us would like to see.
There is also hostility toward regulation in some parts of Silicon Valley right now. This is a mistake. Cars would never have blossomed without streetlights and traffic laws, and too many people would have died without seatbelts. Former OpenAI board member Helen Toner’s talk, arguably the most sober of the day, was one of the few to think seriously about what the rules of the road ought to be, with an emphasis on transparency, auditing, and interpretability. It’s worrisome that someone with views like that was let go from OpenAI’s board, and even more worrisome that none of the new board members have a similar background in AI ethics. Rumman Chowdhury, who spoke yesterday about a kind of consumer’s right-to-repair for AI, would have been a good choice.
Policies around transparency, auditing, and interpretability might not be as sexy as alien intelligences, but without them we will see more and more public resistance to AI. We don’t want AI to wind up stigmatized the way nuclear energy was: a good solution that was rolled out with insufficient care and permanently tarnished.
Gary Marcus will keep writing these articles in hopes of nudging the world towards an AI that we can trust.
§
What is desirable about Khosla's visions #1 and #2?
AI doctors will be wrong for a long time. And speaking as a Stage IV cancer patient, how are AIs going to develop empathy anytime soon, especially if they are themselves disembodied? In my experience, even embodied human doctors are mostly bad at this. Similarly, tutors are going to be teaching incorrect material -- and why should any child need a tutor 24/7?
As for #2, it sounds horrifying. Labor will be free -- for those who pay for labor. What will happen to those who *get paid for* labor? "Training," or the same old nonsense? Teaching everyone to code, even if they hate it? (Oh, wait, even GPT-4 does that already.) If this is techno-optimism, it's clearly only so if you're a member of the right economic class.
It's really difficult to take any of these VC/influencer talks seriously. Predictions that are beyond the time horizon where anyone will remember them or care that he made them?! He just has no idea how or when any of that will happen. Sure, all those things will come true, but is this closer to the ancient Greeks saying one day man will set foot on the Moon, or Kennedy saying the same thing? The Greeks had no idea what they'd even have to learn to solve that problem; Kennedy knew it was just engineering by that point. It's the *perfect* TED Talk. Everyone claps and praises him, he notches another 'brave' TED Talk, and AI, such as it is, tells you it's a better mother than you are.