That’s not a joke, it’s a quote. And also a warning.
Over the last few hours, people have been reporting a variety of problems with ChatGPT:
Devin Morse, a philosophy PhD student, has collected more examples in this thread.
OpenAI itself has acknowledged the issue:
I won’t speculate on the cause; we don’t know. I won’t speculate on how long it will take to fix; again, we don’t know.
But I will quote something I said two weeks ago: “Please, developers and military personnel, don’t let your chatbots grow up to be generals.”
§
In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts that I discussed a few days ago, hoping that everything will work out right:
The reality, though, is that these systems have never been stable. Nobody has ever been able to engineer safety guarantees around them. We are still living in the age of machine learning alchemy that xkcd captured so well in a cartoon several years ago:
The need for altogether different technologies that are less opaque, more interpretable, more maintainable, and more debuggable — and hence more tractable — remains paramount.
Today’s issue may well be fixed quickly, but I hope it will be seen as the wakeup call that it is.
As ever, Gary Marcus longs for trustworthy AI. There is a fun profile of him today by Melissa Heikkilä in Technology Review, along with a terrific podcast today on Sora and society, with Jayme Poisson, at CBC’s Frontburner.
Isn't ChatGPT "Intelligent" enough to fix itself? I mean if I'm drunk and shouting at the toilet I can still fix myself.
"The need for altogether different technologies that are less opaque, more interpretable, more maintanable, and more debuggable — and hence more tractable—remains paramount." - exactly! We need a formal theory of intelligence that explains such things as reasoning, understanding and knowing, with algorithms that are mathematically provable to be sane and reliable. This has been the real holy grail of AI as a field since its inception, but because it's a very hard problem, somewhere along the way people decided to take the easy route of behaviourism which focuses on achieving practical results at the expense of understanding the theory. Neural networks are the pinnacle of that philosophy, being described as black boxes that "just work". We need to go back to square one and re-evaluate what new route to take. I am putting my money on Bayesian methods and variational inference as building blocks because at least theoretically they satisfy the requirement for mathematically provable saneness and reliability.
There is a counter-argument that a mathematical theory of intelligence might be impossible because there are a lot of non-computable problems. This is true: there are many non-computable problems, and there are also problems that, while computable in principle, are prohibitively expensive in practice. In fact, I would argue that most real-world problems cannot be solved exactly. I don't view this as an insurmountable obstacle, though, because we can usually use approximate solutions that are provably good enough. For example, unlike integer arithmetic, floating point math on a computer is approximate, because it uses a discrete representation of the continuous set of real numbers. Other beautiful examples of approximate solutions to otherwise hard or non-computable problems are Newton's method for finding roots, and Taylor/Fourier series expansions for approximating functions. Engineering is in fact full of approximations; nearly any powerful method for solving a hard engineering problem is actually an approximation. So my point is, we don't need to solve problems exactly; we can solve them approximately, as long as the approximation is provably good enough, and such approximations are computable and of relatively low complexity.
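To make that concrete (an illustrative sketch of the commenter's Newton's method example, not from the post), here is Newton's method approximating √2, a number that cannot be represented exactly in floating point, to a tolerance chosen in advance:

```python
# Illustrative sketch: Newton's method approximates sqrt(2), a root of
# f(x) = x**2 - 2, to within a chosen tolerance. The exact value is irrational,
# but the approximation is "provably good enough" for engineering purposes.

def newton_sqrt2(x0=1.0, tol=1e-12, max_iter=50):
    """Approximate sqrt(2) via Newton iteration on f(x) = x**2 - 2."""
    x = x0
    for _ in range(max_iter):
        f = x * x - 2.0           # f(x)
        f_prime = 2.0 * x         # f'(x)
        x_next = x - f / f_prime  # Newton update step
        if abs(x_next - x) < tol: # stop once the step size is below tolerance
            return x_next
        x = x_next
    return x

print(newton_sqrt2())  # approximately 1.4142135623731, within the requested tolerance
```

The error bound is something we pick up front, which is exactly the "provably good enough" property the comment appeals to.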