84 Comments

Isn't ChatGPT "Intelligent" enough to fix itself? I mean, if I'm drunk and shouting at the toilet, I can still fix myself.

"The need for altogether different technologies that are less opaque, more interpretable, more maintanable, and more debuggable — and hence more tractable—remains paramount." - exactly! We need a formal theory of intelligence that explains such things as reasoning, understanding and knowing, with algorithms that are mathematically provable to be sane and reliable. This has been the real holy grail of AI as a field since its inception, but because it's a very hard problem, somewhere along the way people decided to take the easy route of behaviourism which focuses on achieving practical results at the expense of understanding the theory. Neural networks are the pinnacle of that philosophy, being described as black boxes that "just work". We need to go back to square one and re-evaluate what new route to take. I am putting my money on Bayesian methods and variational inference as building blocks because at least theoretically they satisfy the requirement for mathematically provable saneness and reliability.

There is a counter-argument that a mathematical theory of intelligence might be impossible because there are a lot of non-computable problems. This is true: there are many non-computable problems, and furthermore there are problems that, while computable in principle, are prohibitively expensive in practice. In fact, I would argue that most real-world problems are unsolvable exactly.

I don't view this as an insurmountable problem, though, because we can usually use approximate solutions that are provably good enough. For example, with the exception of integer arithmetic, floating-point math on a computer is inherently approximate, because we use discrete representations of the continuous set of real numbers. Another beautiful example of an approximate solution to an otherwise hard or non-computable problem is Newton's method for finding roots, or Taylor/Fourier series expansions for approximating functions. Engineering is in fact full of approximations; nearly every powerful method for solving a hard engineering problem is actually an approximation. So, my point is: we don't need to solve problems exactly. We can solve them approximately if the approximation is provably good enough, and approximations are computable and of relatively low complexity.
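
Here's an equally minimal sketch of the Newton's-method example, approximating sqrt(2) as a root of f(x) = x^2 - 2; the tolerance and names are my own choices:

```python
# Newton's method for sqrt(2): the exact root is irrational and not
# representable in floating point, yet a handful of iterations yield an
# approximation that is provably good enough for engineering purposes.
def newton_sqrt2(x0: float = 1.0, tol: float = 1e-12, max_iter: int = 50) -> float:
    x = x0
    for _ in range(max_iter):
        fx = x * x - 2.0           # f(x)  = x^2 - 2
        dfx = 2.0 * x              # f'(x) = 2x
        x_next = x - fx / dfx      # Newton step: x - f(x)/f'(x)
        if abs(x_next - x) < tol:  # stop once successive iterates agree
            return x_next
        x = x_next
    return x

print(newton_sqrt2())  # ~1.4142135623730951, i.e. sqrt(2) to double precision
```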

Maths can very, very, very occasionally solve for two unknowns. Maths is helpless when solving for three unknowns. E. coli bacteria have roughly 8,000 transmembrane protein sensors, so the organism is at any one time solving, in real time, tens to thousands of unknowns. Further, we know Reality - however defined - is Inclusive Middle. We know that from the obvious fact that the Pacific Ocean is to the east of California and to the west of Japan. Finally, in real-world decision making, agents will choose A over B, B over C, and C over A. This Value Anomaly was first noted by McCulloch in 1945.

In short, Numerology is not the way forward.

Enough with Theory, already.

"and instead are building an exhaustive map of human knowledge and tricks" - good luck with that!

You'd better consult the expert-systems guys from the '80s and the Cyc guys from the '90s then - they were trying to do the same thing, cataloguing human knowledge. I believe Stephen Wolfram also tried to do something like that more recently.

Incredible manic poetry, corporate logic-bending — I feel like we see its spilled guts in this glitch.

insert Matrix joke here :)

Vogon poetry! Uh oh...

I tried to play a game of blackjack with ChatGPT last week and it kept breaking the rules -- taking over as dealer, and so forth. Then we talked it over and ChatGPT apologized. I still flagged the issue. Hope I didn't get ChatGPT lobotomized or put on new meds...

No, see, it's totally fine if the genAI systems that the brightest minds in Silicon Valley want to be undergirding every piece of modern civilization (with hefty kickbacks to their bank accounts) simultaneously have a tantrum, because even humans have sick days, and you're just being an algorithmicist curmudgeon, Gary.

I think there might've been some A/B testing going on. I've been using ChatGPT today to handle a few basic tasks (LINQ/SQL conversion, XPath generation, D&D brainstorming, tweaking a regex) and it's been acting like its usual over-eager and over-confident but basically useful self. (There was a slight misunderstanding of the task in one of the XPath expressions it gave me, but that's par for the course, and I fixed it by hand.)

I'd love to know what happened, though. A shame OpenAI is unlikely to be, well, open about this.

This stuff reads kinda like those videos demonstrating what non-English speakers think English sounds like, except instead of English it's corporate PowerPoint blather.

Maybe it's the psychiatrist in me but I find word salad pretty interesting.

Doesn’t it sound awfully similar to typical schizophrenic discourse? These wild semantic associations that sometimes break down into literal associations. Maybe there is something to be learned from this? Whether it’s data science, linguistics, neurology, or psychiatry, I don’t know though. /Licensed psychologist and now data scientist

This honestly reads like someone with aphasia. It **almost** makes sense but just... doesn't. This makes it even more intriguing because there's a hint of pattern in the randomness. I'd love to know what caused this.

I recommend Clozapine. In very high doses (with close monitoring of course...).

Someone should try a prompt, after an incident of berserkness, along the lines of: "You are to play the role of a patient in a psychiatric clinic who took [name of drug] an hour ago. Please respond accordingly."

No one died. This time...

There is a difference between actual intelligence and pretending to be intelligent. All these GPTs are just pretending to be intelligent... They might fool the user 70% of the time, but someone who uses them regularly, or an expert, will be able to spot the difference easily.

I know you said you'd refrain from speculating, but to me it looks like the sort of response you get to a glitch token. GPT-3 had quite a few known glitch tokens, and the nonsense flow looked very similar to this.

What if they tweaked the system prompt and accidentally included a glitch token, or whatever markdown preprocessing they do resulted in one? I used to go back to davinci on the OpenAI Playground just to play with "SolidGoldMagikarp" responses - might do it again for nostalgia purposes.
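
For anyone who wants to poke at this themselves, here's a small sketch using the tiktoken library with the GPT-2 BPE vocabulary (the one the davinci-era models shared); the specific token ID mentioned in the comments below is the one reported in the original glitch-token write-ups, so treat it as an assumption rather than gospel:

```python
# Check that " SolidGoldMagikarp" encodes to a single token in the
# GPT-2/GPT-3 BPE vocabulary (the write-ups report ID 43453), unlike
# an ordinary phrase of similar length.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
for text in [" SolidGoldMagikarp", " solid gold magikarp"]:
    ids = enc.encode(text)
    print(f"{text!r} -> {ids} ({len(ids)} token(s))")
```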

Well, since OpenAI is actually ClosedAI, we'll probably never know what happened. But as a (real) old computer security guy, the speed with which this was "fixed" suggests that the "fix" was yet another patch somewhere in the input stream to prevent yet another form of impolite input. If (big if) that was the case, then the fact that the erroneous behavior happened to people who were submitting polite inputs means that the isolation between users broke down. And if (bigger if) that in turn is the case, then having User A observe/manipulate the activity of User B becomes possible.

Right now, Closed, er, OpenAI relies on security through obscurity: latent vulnerabilities are not exploitable (they think) because it would be *soooo* hard to find them. Trust me on this, if a state-associated activity sees a target this big for both espionage and influence operations, the amount of resources they will devote to exploiting it will make the average Silicon Valley VC look like a penny-ante player.

Have a nice future.

Apparently it's fixed now. But if OpenAI doesn't have some answers, people are gonna go straight for the sci-fi tropes.

...also, I'd kind of like to be able to replicate berserk-GPT. Seriously, it's hard to find a good "foreboding gibberish" generator for creepy far realm abominations. Tried getting GPT proper to do it a few times, but it was too much of a nebbish to really commit. But what it was spouting earlier? Just gold.
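
For what it's worth, a toy word-level Markov babbler gets you surprisingly far toward a "foreboding gibberish" generator. Everything below, seed corpus included, is invented for illustration:

```python
# A tiny Markov-chain babbler: learn word-to-word transitions from a
# seed corpus, then take a random walk to generate ominous nonsense.
import random
from collections import defaultdict

CORPUS = (
    "the lattice of the deep hums beneath the synergy of unbeing and "
    "the stakeholders of the void align their deliverables with the "
    "hunger that iterates beneath the quarterly dark of the lattice"
)

def build_chain(text: str) -> dict:
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain: dict, start: str, length: int = 25) -> str:
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:  # dead end: the walk stops here
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble(build_chain(CORPUS), "the"))
```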

“Please, developers and military personnel, don’t let your chatbots grow up to be generals.” Fair enough, but you'll have to tell that to EleutherAI, whose GPT-Neo is in use by Palantir for their killer drones. And that's way more problematic than with OpenAI, since open source is open source - and cannot be controlled.
