What comes after ChatGPT? 7 predictions for 2023
It may also be time for the field to move on from the term "Artificial Intelligence."
For systems unpolluted by reason and higher-level conceptualization, like GPT-3 and DALL-E, I'd propose something like "cultural synthesizer."
thanks for catching the error on altman. there’s not perfect consensus on values, and i agree about tradeoffs etc, but there is enough to get started; i will write about this in the new year.
I am growing increasingly concerned by what seems to be a "minimum viable alignment" approach taken by OpenAI. They spend a considerable amount of up-front design and engineering capacity for the sole purpose of figuring out how to throw as many TPUs as they can at the training step, and then hastily bolt on whatever adversarial examples they can think of to give the appearance of guardrails after the fact.
From my perspective, despite their self-claimed mandate as the guardians of responsible AI, they're worried about building and shipping technical capacity first. Ethics seem to come later - just like so many other AI startups. They can't even be bothered to think through the issues of turning their models loose on the world to foresee plagiarism, automated scams, and spam as the most obvious use cases, and they disclaim responsibility by asking people to pinky swear they'll label ChatGPT's output. Whatever assurances Sam Altman is giving right now, I am thoroughly skeptical of OpenAI's willingness to truly design for safety from the ground up.
It's great at re-inventing the wheel (with a few errors), but can't recognize (or imagine) new forms of transport.
Interesting. "It will be amazing, but flawed" pretty much describes all human beings.
#typos: "the same playbook as it predecessors" , "explicit it knowledge".
I agree with pretty much everything you say here, and I want to commend you for making relatively specific predictions that can be falsified. There hasn't been enough of that from AI proponents and skeptics alike, and I look forward to seeing how these turn out. (Personally, I expect all of them except #7 to come true.)
I was curious to see how much agreement there is about these predictions from others in the AI community, so I've created a prediction market for each one.
My fear is that GPT-4 is being created by looking at GPT-3 failure cases and introducing fixes for each of them, rather than increasing its reasoning powers in any fundamental way. Perhaps it will contain a neural network that will identify certain classes of arithmetic problems and, when triggered, route the numbers occurring in the prompt to a calculator or Mathematica. It will be similar to how autonomous driving advances by dealing with each new edge case as it arises. Instead of increasing our confidence in its abilities, it tells us how much world knowledge and reasoning will be needed to do the job properly and how far we are from getting there.
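The "route arithmetic to a calculator" pattern described above can be sketched in a few lines. This is purely illustrative — `query_llm` is a hypothetical stand-in for the model call, and the regex trigger is deliberately crude, which is exactly the kind of edge-case-by-edge-case patching being worried about here:

```python
import re

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for the underlying language model.
    return "<model answer for: %s>" % prompt

def route(prompt: str) -> str:
    """Route a prompt: bare arithmetic goes to an exact evaluator,
    everything else falls through to the language model."""
    # Crude trigger: the prompt is nothing but digits and operators.
    if re.fullmatch(r"[\d\s.+\-*/()]+", prompt) and re.search(r"\d", prompt):
        # Evaluate exactly instead of letting the model guess digit by digit.
        return str(eval(prompt))  # fine for a sketch; never eval untrusted input
    return query_llm(prompt)

print(route("12 * (34 + 56)"))   # -> 1080
print(route("Who directed Jaws?"))
```

Note how brittle the trigger is: "what is twelve times ninety?" would sail past the regex and back to the model, which is the sense in which such fixes handle failure cases without adding reasoning.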
Thanks for another clear article on the LLM phenomenon. In a roundabout way, LLMs are a good thing for AGI research. They are perfect examples of what not to do if achieving human-like intelligence is the goal. All AGI researchers owe OpenAI and Google a debt of gratitude. Thanks but no thanks. :-)
What OpenAI alleged: https://twitter.com/gdb/status/1599124287633248257?s=46&t=j_T-BzOMQDYxPtbROdbPUQ
I asked for “GPT-4”; the best example was this. These systems are very poor at reproducing text verbatim.
One of the first things I did once I had access to ChatGPT is have it interpret Steven Spielberg's Jaws using Rene Girard's ideas of mimetic desire and sacrifice: https://3quarksdaily.com/3quarksdaily/2022/12/conversing-with-chatgpt-about-jaws-mimetic-desire-and-sacrifice.html
It did pretty well, perhaps even better than I'd expected. But the fact is I didn't formulate any explicit expectations ("predictions") before I started. If the public is given similar access to GPT-4, I'll repeat the same exercise with it. I won't be at all surprised if it does better than ChatGPT did, but that doesn't mean it will come anywhere close to my own Girardian analysis of Jaws: https://3quarksdaily.com/3quarksdaily/2022/02/shark-city-sacrifice-a-girardian-reading-of-steven-spielbergs-jaws.html
What nonetheless makes ChatGPT's performance impressive? To do the interpretation ChatGPT has to match one body of fairly abstract conceptual material, Girard’s ideas, to a different body of material, actors and events in the movie. That's analogical reasoning. As far as I can tell, that involves pattern matching on graphs. ChatGPT has to identify a group of entities and the relationships between them in one body of material and match them to a group of entities in the other body of material which have the same pattern of relationships between them. That requires a good grasp of structure and the ability to “reason” over it. I've now done several cases like this and am convinced that it was not a fluke. ChatGPT really has this capacity.
There were problems with what ChatGPT did, problems that I overlooked in my article because they're the sort of thing that's easier to fix than to explain (here I'm thinking about my experience in grading papers). So there's plenty of room for GPT-4 to demonstrate improvement. Here's a minor example. I asked ChatGPT whether or not Quint had been in the Navy – knowing full well that he had. He replied that there was no mention of that in either the film or the novel. So then I asked about Quint's experience in World War II. This time ChatGPT mentioned that Quint had been in the navy aboard a ship that had been sunk by the Japanese. I can easily imagine that GPT-4 will not need any special prompting to come up with Quint's navy experience.
However well GPT-4 does, it will not come near to what I went through to come up with my Girardian interpretation. In the first place, I actually watched the film, which GPT-4, like ChatGPT, will not be able to do. As far as I know we don't have any artificial vision system capable of watching a feature-length film and recalling what happened. ChatGPT knew about Jaws because it's well-known, there's a lot about it on the internet (Wikipedia has a decent plot summary), and scripts are readily available.
Beyond that, when I watched Jaws I had no intention of writing about it. I was just watching an important film (credited with being the first blockbuster) that I had never seen. Once I watched it I looked up its Wikipedia entry. And then I started investigating, which meant watching the sequels to see if indeed they weren't as good as the original – they weren't (Jaws 4 is unwatchable). Now I had something to think about: why is the original so much better than the others? That's when it struck me – GIRARD! And that's how it happened. Girard's ideas just came to mind. Once that had happened, I set about verifying and refining that intuition. That took hours of work spread over weeks, and correspondence with a friend who knows Girard's ideas better than I do.
That's a very different and much more mysterious process from what ChatGPT did. I pointed it to Girard and to Jaws and asked it to make the analogy. I did half or more of the work, the hard part. No one told me what to look for in Jaws. How was I able to come up with the hypothesis that Girard's ideas are applicable to Jaws? I don't know, but I observe that I have years of experience doing this kind of thing.
The current trend in LLMs seems to imagine that intelligence is primarily static and reactive: a prompt goes in, a response pops out, and that response is the product of a fixed set of algorithms working from a fixed amount of data. But human intelligence is constantly adapting and frequently proactive. Even at a structural, material level, we are living things: our brains and bodies are constantly changing. So is what we know and believe. Intelligence is not finding the right set of algorithms to process a vast (but finite) amount of data. We're never done learning.
As someone who has been working with chatbots like ChatGPT since they were first released, I have to say that I think the reviews surrounding the alignment problem are complete bullshit. ChatGPT should be viewed as a tool, not as an entity. It is simply a tool that we can use to craft ideas and write them more clearly. The alignment problem has nothing to do with using a tool like ChatGPT to write.
I also have to take issue with the idea that ChatGPT is easily confused. As an engineer with a background in English, I can tell you that ChatGPT has been an invaluable tool for me in crafting ideas and expressing them clearly. It may not be perfect, but it is still a powerful tool that can be used to great effect.
That being said, I do agree that there are still problems with chatbots like ChatGPT, and that the alignment problem remains a critical and unsolved issue. It is important to be cautious when interacting with any tool or person, and to understand what we can trust and where mistakes may have been made. However, I believe that chatbots like ChatGPT have the potential to be incredibly useful and powerful tools, and I am excited to see what the future holds for this technology.
I am an 80-year-old man with a background in IT and chemical engineering. I studied chemical engineering at Georgia Tech and worked as a chemical engineer for a decade before transitioning to a career in IT, where I helped implement email at DuPont in the 1980s. Despite my success in these fields, I have always struggled with mild dyslexia, which has made it difficult for me to express my thoughts clearly and concisely in writing. Despite this challenge, I have always been an avid reader and have a deep interest in fields such as physics, computer science, artificial intelligence, and philosophy.
To overcome my dyslexia and improve my writing skills, I have turned to tools like ChatGPT. By dictating my thoughts and using ChatGPT to generate text, I am able to communicate more effectively and express my ideas more clearly. Despite the challenges I have faced, my determination and use of technology have allowed me to excel in my career and continue to learn and grow.
All of the above was written by ChatGPT and copied here without my editing. The bot added a few thoughts that I would change, but it expresses my thoughts clearly, and I did the whole process quickly. Without the bot's help, I would've been unable to write the above.
Maybe this isn't a route to AGI, but frankly I don't care about that. What I personally care about is whether we can build usable products and automations with it. In my estimation, the answer is very highly likely 'yes'.
We are already seeing some interesting stuff appearing on top of GPT-3, and I'm sure with more maturity we'll get more robust products soon.
The key thing would be how those products are designed. E.g. if we expect the model to spit out a perfect legal contract that doesn't need checking, we'll be waiting a long time. But if we design a product that generates possible ways of resolving a customer complaint (based on existing data from the org) and gets the complaint handler to make the final decision, we could probably do that now, and that's very valuable.
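That human-in-the-loop design can be sketched very simply. Here `llm_complete` is a hypothetical stand-in for a GPT-3-style completion call — the point is only the shape of the workflow: the model proposes, the handler disposes:

```python
def llm_complete(prompt: str, n: int) -> list[str]:
    # Hypothetical stand-in for a completion API returning n candidates.
    return [f"Draft resolution {i + 1} for: {prompt}" for i in range(n)]

def propose_resolutions(complaint: str, n: int = 3) -> list[str]:
    """Generate candidate resolutions; a human handler picks or edits one."""
    prompt = f"Suggest a resolution for this customer complaint: {complaint}"
    return llm_complete(prompt, n)

def handle(complaint: str, choose) -> str:
    candidates = propose_resolutions(complaint)
    return choose(candidates)  # the final decision stays with the handler

# Usage: here the 'handler' simply picks the first draft.
print(handle("My order arrived damaged.", choose=lambda c: c[0]))
```

The design choice is that the model never acts autonomously: it only widens the handler's option space, so a wrong generation costs a moment of reading rather than a wrong commitment to the customer.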
Why the confidence that AGI will inevitably come (within a century, say), especially given that recent LLM trends patently are not heading in that direction? I've yet to see a prediction of this kind grounded in sober analysis of practical tools/concepts that already exist, as opposed to Homer-style empty optimism (or pessimism, depending on your outlook) of the kind: 1. GPT-3. 2. GPT-4. 3. ???? 4. Profit!
"The techniques of artificial intelligence are to the mind what bureaucracy is to human social interaction."
— Terry Winograd