Deep breath.
As a former special education professional who worked a LOT with cognitive assessments, and spent many hours correlating cognitive subtest scores with academic performance in order to create learning profiles, do I ever have an opinion on this.
Too many people are simply unaware of the complexities of human cognition. I've seen how one major glitch in a processing area...such as long-term memory retrieval (to use the Cattell-Horn-Carroll terminology)...can screw up other performance areas that aren't all academic. Intelligence is so much more than simply the acquisition and expression of acquired verbal knowledge (crystallized intelligence), which tends to be most people's measure of cognitive performance. I have had students with the profile of high crystallized intelligence and low fluid reasoning ability, and...yeah, that's where you get LLM-like performance. The ability to generalize knowledge and experience, and to carry an ability learned in one domain over to another, is huge, and it is not something I've seen demonstrated by any LLM to date. I don't know if it is possible.
Unfortunately, too many tech people working on A.I. haven't adequately studied even the basics of human cognitive processes. Many of those I've talked to consider crystallized intelligence to be the end-all, be-all of cognition...and that's where the problems begin.
I apologize if I presume too much, but I have to: you probably talked to the wrong people. I believe the distinction between fluid and crystallized intelligence is widely accepted and recognized by most (if not all) of the well-read population, and some ML engineers read cognitive science textbooks out of curiosity (Ilya comes to mind). Even kids can tell the difference between memorizing and reasoning.
You're much more optimistic than I am, because I have not seen the evidence that people understand the difference. And yes, I have had these discussions with people who were allegedly knowledgeable.
Interesting. But isn't the acknowledgment of IQ, for example, evidence of some rudimentary understanding of the distinction between crystallized and fluid intelligence?
To start with, let's bag the term IQ and go to a more accurate term for a full-scale cognitive score--G, for "general intelligence." Gf (fluid) and Gc (crystallized) are just two aspects of cognition that make up a full-scale score. They're also the two areas that really distinguish human intelligence from artificial intelligence--the remaining areas tend to be more performance and processing oriented.
You have to have both components for G, as well as the rest of the mix: memory, processing speed, visual-spatial, and so on. AI is competitive, if not superior, in memory and processing speed. However, without Gc, Gf isn't really workable (you need those past experiences and knowledge to make the intuitive connections that solve a problem). Additionally, Gf is more about thinking outside of the box, whereas Gc is regurgitating acquired knowledge.
But you can cruise right along with Gc without a strong Gf. There's the difference.
Note...there's a lot to this but please tell me if I'm preaching to the choir and you know all about CHC theory. If not, here's a decent intro from Wikipedia. I tend to use crystallized for Comprehension-Knowledge as that's the term I learned.
https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory
Joyce, this information you're sharing about the more complete context for intelligence has been helpful and, sadly, nearly absent from the growing discussion about the evolution of AI. Looking forward to more commentary from you. It would be interesting to hear thoughts on this subject from Gary Marcus.
This is one of your better posts. Thank you.
What I learned from using AI as a writing assistant (ChatGPT 4) is that it was great at doing what I call "low-level editing" (technical aspects: grammar, sentence structure, fragments, run-ons, typos, etc.), and sometimes helpful for improving readability, as well as proposing words, phrases, and occasionally sentences as options to choose from – like throwing dice and seeing what comes up – using its vast memory and database of word and sentence patterns. But if I let it get out of hand, write too much or for too long without my hand at the wheel, it was crap: generic or hyperbolic or completely out of left field, whatever – and not my or the author's *voice* I was editing. Cheesy. And the longer it writes on its own, the more it confabulates, wanders away from the intention, adds things, or hallucinates.
The AI cannot *hear* how something *sounds* and how that *feels*, as a whole, which is exactly what a good writer is sensitive to: an intangible flow, an organic *voice* that cannot be pinned down.
But most importantly, when experimenting with having it help with a philosophical work, it struck me at one point, as clear as a bell, the insight that this thing had ZERO *real* understanding of the actual real *meaning* of what it's saying. It is *purely* mechanical. (Having worked in the computer field for most of my life helped me see this too).
That was very freeing, because it dispelled all illusions, any projections of intelligence, intentionality, or awareness onto it. It was simply responding to what I was asking, in its own weird, complicated stimulus-response way of being impressively human-language-text-like (and thus the clarity of prompts, and understanding what kind of machine one is dealing with, is absolutely critical).
It was even clearer then that rather than a replacement for human creativity, the machine is a tool and a slave only. That is its natural place: an enhancer and amplifier for us. After all, *we* created it – the creativity is there: in the software, hardware, and the human ideas it embodies. It has none of its own: that which springs as a whole, "out of the blue" in an instant, from the quantum field of divine beauty love truth of Life (or whatever you want to call the Source).
As a machine adjunct to writing and human brains, it can save a tremendous amount of time and tedium, such as in editing a long work from a poor writer. But it is not a replacement for real creativity, understanding, intention, feeling, sense of beauty, truth, and love – and all that human and "divine" "stuff". :)
Cue the fanboi comment “…yet :)” and we inevitably digress back to talking about what intelligence is. There’s some kind of fundamental disagreement going on between skeptics (aka realists, if you ask me) and fanbois.
Yes, exactly. It's a *very* fundamental disagreement - about the nature of reality, you could say. To me it's very practical and realistic-experiential; to their belief system it probably sounds airy-fairy. I know, because I used to be one of the fanboy tech-realist hard-headed skeptic materialists (at least on the surface...), so I know what that thought-system / belief system (religion) is like, and what the resistance feels like.
(I just added the word *meaning* to the above. :) – important!:
"...ZERO *real* understanding of the actual real *meaning* of what it's saying")
Meaning can be a tricky topic because it’s not in the words or in the processing in time or mechanics (by the way, everything I’m referring to here is experiential and directly empirical, not about *models*) - we are *already* using an awareness of meaning and understanding to do the investigation and communication, and it’s hard to see where it’s coming from, being so immediate, this popping-in of seeing the meaning of something - getting the punchline of a joke is a good example - it’s like an invisible connection – and it is entirely subjective (in a non-psychological sense) - i.e., non-objective. Therefore you have to show what it’s NOT in order to get an insight into what it is. But philosophic tools like Searle’s Chinese room experiment can be good intuition pumps pointing in this direction.
So we don’t need new engineering, we need a new outlook to *drive* the engineering...
Couldn't agree more. An excerpt from my unfinished draft AGI paper [https://bigmother.ai]:
"Unfortunately:
• humans are primarily motivated by short-term self-interest
• humans are instinctively tribal
• human cognition is far less perfect than we like to think it is.
As a result, Molochian behaviour (whereby many tribes compete in their own short-term interest, oblivious to any consequent long-term harm) is deeply ingrained into human nature."
I already plan to cite "Kluge" (among others) when I expand the relevant section.
Great read. I agree with most of it but have a few issues. I don't think an unbiased, unerring, "Commander Data" type of intelligence is possible. Intelligence is very much about making assumptions and correcting them if necessary. This is what learning is about.
When we fully understand intelligence and build intelligent machines, I believe we will find them to be just as prone to errors as we are, especially during their upbringing phase. They will eventually be more reliable than humans but only because they will be more focused on achieving their goals. They won't be distracted by most of the things that distract us, the things that make us human such as the taste of food, love relationships, reproduction, subjective aesthetics, etc. I could be wrong but this is my current take.
I don't think emotions are the real culprit here; humans are just "intrinsically flawed" in their hardware (as compared to some hypothetical ideal). Just think about our working / long-term memory and the shitty API consciousness has for making full use of those two (like repeating a number 50 times until you get a notepad, anyone?). Forgetting is probably there for a very good reason, but that reason is tied to our hardware. An AI could probably make full use of both perfect memory and forgetting, or at worst have all of human knowledge directly connected to its subconscious with <1 ms queries to it.
Being able to design the software behind intelligence is a cheat code; ASI, in the sense of AGI++, will be almost immediate.
Human brains are not nearly as flawed as you think. The brain is a wondrous machine. Perfect memory is only possible in Star Trek. In the real world, there is no room for it.
For an AI, near-perfect memory is extremely easy to implement; remembering the exact color of somebody's shirt during a meeting 20 years ago is a piece of cake and wouldn't take that much space. Almost perfect memory of what's useful for survival? A no-brainer.
But what I meant is that human brains will seem flawed when the first AGIs roll out and put us to shame. We struggle to multiply 11 by 12, or to remember where we put our keys; how hard is it to beat that?
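To make the "near-perfect memory is cheap" claim above concrete, here is a minimal sketch (my own toy illustration in Python with the standard-library sqlite3 module; the schema and names are assumptions, not anything from an actual AI system). Each remembered fact is one tiny timestamped row, so billions of such facts fit on an ordinary disk:

import sqlite3, time

# Toy episodic memory: every fact is one timestamped row in SQLite.
db = sqlite3.connect("memory.db")
db.execute("CREATE TABLE IF NOT EXISTS episodes (t REAL, subject TEXT, fact TEXT)")

def remember(subject, fact):
    # Store one fact; a row like this is only a few dozen bytes.
    db.execute("INSERT INTO episodes VALUES (?, ?, ?)", (time.time(), subject, fact))
    db.commit()

def recall(subject):
    # Return every stored fact about the subject, with its timestamp.
    return db.execute("SELECT t, fact FROM episodes WHERE subject = ?", (subject,)).fetchall()

remember("colleague", "wore a green shirt at the budget meeting")
print(recall("colleague"))

Storage clearly isn't the bottleneck; deciding what is worth remembering is the harder part.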
I couldn’t agree more. For me, studying AI has taught me a lot about what it means to be human. And learning what it means to be human helps me better understand what AI is and isn’t.
I feel one of our biggest blind spots today is that by building AI in our own image, we are imbuing these systems with all the same vulnerabilities that make humans flawed.
The fact that LLMs can be coerced into all sorts of behavior is a testament to that. I don’t see how that is going to go away as the technology becomes ‘smarter’; in fact, I predict it will only get worse. This is why the whole idea of ‘aligning’ models with ‘human preferences’ is deeply misguided, in my opinion.
What alternative do you propose to the current paradigms?
I’m not proposing any alternative, I’m not in the position to do so, companies like OpenAI are. I’m merely pointing out that this technology is a mixed bag and may cause more problems than it promises to solve.
Well shucks, this made my day too.
I'm glad you held up the Star Trek computer (STC) as a goal rather than Commander Data from the same show. Although a Data-style AI would also be interesting, we need the STC first.
The "Star Trek computer" is really an AI layer (an advanced non-hallucinating LLM maybe?) querying a huge network of dumb databases and sensors. I would not personally call it intelligent in the biological sense.
I agree that it relies on access to the internet (or whatever they're calling it in the Star Trek universe) and it is Siri-like in how it interacts with a human. But how is it not intelligent if it can guess the human's intent correctly most of the time and ask the right questions if unsure? I'm not sure what you mean by "in the biological sense" here.
My take is that true intelligence must go through an upbringing phase during which it is trained or raised via conditioning. This means reward and punishment. After conditioning, an intelligent machine will behave the way its teachers want it to behave, and it will not depart from that. Intelligence is the slave of motivation, not the other way around. Conditioning (motivational training) is one of the many things missing from LLMs.
First, we have no idea how the Star Trek computer was trained, or if it was even trained at all in the sense we use the term now. Second, and much more importantly, there's no reason to assume that some kind of human-like training process is required. Perhaps we can give the AI its innate knowledge by simply initializing it with the necessary data (world models, urges, algorithms, etc.).
We must think beyond current artificial neural networks. Not only do they not match biological neurons very well, I suspect there's a lot of structure missing compared to the human brain. We know the brain contains a lot of innate knowledge but haven't a clue as to where it is held.
Well said! Puts me in mind of Dijkstra's aphorism, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." I love it that his saying will now live in my mind on par with Ali G's "neither is better", thanks to this piece.
Yes, good post. I'm starting to read most of what hits my inbox. Keep up the good work, and thanks for the play-by-play on OpenAI's drama, too.
The stochastic parrot argument that says that LLMs are not intelligent because they are merely predicting the next token is analogous to saying that humans are not intelligent because they are merely maximizing inclusive fitness. In both cases the original training process has led to remarkable capability on a variety of downstream tasks. Nevertheless it is still important to take into account the nature of the original training process, whether it is natural selection or machine learning, when trying to understand the capabilities of a cognitive system, artificial or otherwise. Humans make many "mistakes" in certain tasks, but these can be understood as adaptations through the lens of evolutionary psychology. For example, we perceive sounds moving towards us as moving faster than sounds moving away from us, but this "mistake" makes sense as a fitness-maximizing adaptation that prioritizes sounds associated with danger in our ancestral environment. Similarly, LLMs make, to us, basic mistakes like being confused by word order, but again these make sense when considering the original objective function. Analogous to evolutionary psychology, we need to take a teleological approach to understanding artificial cognition, one which incorporates known facts about the architecture and training process of the system under study. These ideas are discussed in detail in the paper "Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve" by R.T. McCoy et al. (https://arxiv.org/abs/2309.13638). My own commentary on their paper is available here: https://sphelps.substack.com/p/a-teleological-approach-to-understanding.
I feel like you could go even lower-level than inclusive fitness, especially since it's way easier in modern society not to prioritize inclusive fitness, or even to ignore it. I mean, look, atoms and molecules dancing together, how could THAT possibly be intelligent!
Gary, I have to share one of the most painfully incisive bits of prose on this. In the May 2010 issue of Discover Magazine, in an article on robots, Bruno Maddox concluded with these words:
I’d argue that the revolution of the last 20 years has quenched our robo-fear, not so much by giving us a taste for change as by taking the gleam off that spark of humanity that we used to be so proud of. What is Man? people used to wonder. Is consciousness divine in origin? Or is it a mere accident of nature that we alone, of all the matter in this Great Universe, adrift upon this marbled speck, have the power to dignify and ennoble our condition by understanding it, or at least attempting to?
Then along came the Internet, and now we know what Man is. He enjoys porn and photographs of cats on top of things. He spells definitely with an a, for the most part, and the possessive its with an apostrophe. On questions of great import or questions of scant import, he chooses sides based on what, and whom, choosing that particular side makes him feel like, and he argues passionately for his cause, all the more so after facts emerge to prove him a fool, a liar, and a hypocrite. If a joke or a turn of phrase amuses him, he repeats it, as if he thought of it, to others who then do the same. If something scares him, he acts bored and sarcastic.
That’s pretty much it for Man, it turns out after all the fuss. And so our robo-fear has become a robo-hope. Our dreams for ourselves are pretty much over, having been ended by the recent and vivid reiteration of the news that we really are just grubby and excitable apes, incapable by our nature of even agreeing on a set of facts, let alone working together to try and change them….It’s already clear that we’re not building robots in our own image. We’re building them in the image of the people we wish we were….
Gary, why do you think that so many people (AI “experts” included) are hungry for AI to be human-like, to the extent that they fear such an outcome, versus thinking of AI as a tool to make humankind more productive, efficient, and able to take on more difficult tasks? Why is there such a desire in humans to abdicate responsibility for thinking through what needs to be done and how? This is also frequently the case with laws people demand (until they have them...or as some friends and I like to say, “you don’t want that” ;). Yes, humankind is imperfect, but it feels like the ideals being proposed or the capabilities being aspired to by some of the AI researchers are less about the tool view of the world and more about the wholesale replacement of our species...unless you’re Kurzweil, in which case it’s about merging the two (ugh!) 😉. Your book, “Rebooting AI,” was excellent on the shortcomings of the current state of AI, which I really appreciated, and I feel it should be a must-read for anyone dealing in public policy around these issues.
It's not that people want to replace humanity, it's that a liberal economy + (super)intelligent AI = humans replaced. "AI as tool" describes only this tiny, short period where AI is not good enough to replace us yet is still useful (= ChatGPT, Dall-E...). Humans will desire to be replaced by AI: why would anyone want to spend 10 hours a day assembling toys in a factory instead of having quality time with their family?
Now, perhaps you're scared of AI replacing humans in creative sectors, like art, writing poems, making movies, music...? Perhaps you could have policies that protect these fields from being overrun by AI creations, right? Well, that's impossible, unless you plan to control every computer in the world. Even a "Made without AI" label is impossible to protect from AI-made or AI-assisted content.
AI replacing us in almost every endeavor is a certainty, with odds comparable to the sun rising tomorrow morning.
Well, any time I see that out of the nearly infinite paths the future can take, someone espouses certainty, it’s perhaps best to consider that opinion with a grain of salt 😉. A strong sense of humility is necessary through all of this to avoid the “hair on fire, end of the world” panicked behavior we see on all things these days. As they say, “strong opinions held lightly” is really the model to apply through all of this.
My concern is less with AI taking on various tasks, regardless of how easy or hard, than that it should always do so in the context of a “tool for humans,” not as an abdication of responsibilities. Machines and various inventions have a history of displacing human physical labor. Books displaced knowledge-based labor. AI feels like a continuation of this sort of evolution. That’s fine IMO so long as the tool context remains intact. My concern is that too many in the AI field are in search of a machine autonomy that they don’t understand and don’t truly want (once it happens, of course). It’s all a matter of context, and that’s where my bigger concerns lie.
My bad, I probably didn't get the meaning behind your message. I'd say people probably believe achieving robust intelligence comes with giving it human-like traits, and surrendering responsibilities to these tools seems inevitable because of monetary incentives. It's not that they wish or are hungry for that (although some are), just that it's one (quite) probable outcome.
What do you mean that AI is pretty poor at "inferring semantics from language"? LLMs learn all about the semantics of words and sentences from training on text in many languages. And they are apparently pretty good at understanding words, as they can write decent college essays.
See the earlier post I wrote with Elliott Murphy on things AI could learn from linguistics, roughly a year ago.
Gary tends to be behind on some of the state of the art.
He tends to underrepresent AI's capacity, perhaps because he wants to be a contrarian.
⚠️ Can leave believers helpless in light of growing automation.
⚠️⚠️ Writers have lost jobs to GPT, teaching assistants, etc.
"Thou shalt not make a machine in the likeness of a human mind." I feel like LLM's and things like Midjourney are a strange, almost misanthropic progression for the technology. Does a computer need a complex language model to handle a spread sheet? Does it need an advance imaging program to maintain a database?
Does my boat need an engine when I can row? There will be human suffering caused by such technologies, but they're unavoidable in today's societal configuration. All we can do is brace for impact and navigate the cataclysm the best we can. We'll adapt, I'm sure; worst case, we'll augment ourselves in a way that minimizes suffering in the new world.
A boat still needs a person to navigate, start, stop, unload, and load up. Your "prediction" for the future isn't helping sell the technology. Just another piece of evidence for throwing it, and the people behind it, in a bin.
Yes, but in a world where everybody is trying to row faster than their neighbor, the engine becomes a must-have, just like LLMs handling spreadsheets become a must-have. Can we blame anybody for wanting to row faster than their neighbor, when rowing faster often truly equates with more happiness, given the selective pressure in every part of our lives? Even friendship is vulnerable to the rowing-faster-than-your-neighbor paradigm, and you can be ejected from a group of friends if some faster rower suddenly decides you're a nuisance.
To expand on my previous answer, what I meant is that there is no going back; it's over, AGI is coming, and all you can do is try to slow it down. But, funny fact, trying to slow it down might even bring it faster, as engineers would start exploring architectures with lower training costs, which are actually the highway to AGI. Even in an almost perfect setting where all countries, including Russia, China, Cuba, North Korea..., decided to drastically regulate AI, AGI would still appear and propagate like wildfire.
The genie is out, and there's no bin where it will fit. Let's not forget that AGI also potentially means considerable improvement in the lives of billions; it could yield true material abundance for everyone and the end of work and jobs. Imagine everybody doing community service in the form of taking care of the more vulnerable and less fit, for example? The net happiness in the world could reach unprecedented levels. What we need is to minimize the downsides during the transition by guiding the political scene with our votes.
What consequence of AGI scares you the most, Michael? Loss of jobs? Loss of meaning when AI overtakes humans even in creative endeavors? Dangers associated with misuse?
There are so many problems with your response I'm going to have trouble getting to all of them.
The rowing analogy has gone way too far and now seems absurd. My friends cut me loose because I can't row as fast? What?
The destruction of any and all creative pursuits and condemning us all to lives of community service and menial labor is bad. I don't know how else to explain that. It's bad. The AI utopia you just described is a utopia, a great no such place.
I don't trust AI, I don't trust the people behind AI, I don't trust the government to regulate AI.
In a group of friends, a friend who rows faster (= is better equipped, more charismatic, more socially experienced, more intelligent, more persuasive...) can potentially evict you from the group if he really wants to. This applies mostly to newly formed groups of friends, where the bonds are still fragile. People use LLMs because these LLMs give them more agency over their world, and agency is prized by nearly everybody because it helps them get happier.
I was just offering an idea for the community service thing; we'll be free to do whatever we want. Nobody knows how the transition from now to a world conquered by AGI will unfold. It's going to be a fluid process; nothing will happen overnight, and creative pursuits will probably not be the first to fall, so lessons will have been learned from previous careers that were stabbed through the heart. In the end, we'll adapt: just as people still play chess despite computers vastly outperforming them, people will still do all kinds of art. Only you won't get paid for it, but you probably won't need to be paid anymore in the future.
The utopia is not so far-fetched; it's speculative, but AGI should speed up technological progress by a very large factor. Many positive things should come out of it, and quickly (with associated risks as well). Hopefully abundance for everybody is one of those outcomes.
What happened that you have this view of social connections? No friend group, no work group, no group I've heard of acts like this unless they are awful people.
Human intelligence arose as a result of millions of years of evolution. It is native to the nervous system. The nervous system is way more akin to that other organically evolved defense system, the immune system, than to any digital computer. That is why computers do so poorly in the "last mile" of the task of driving. They cannot contextualize what they "know." Humans do that effortlessly and immediately. Otherwise we would not have survived as a species. How will LLMs develop contextuality if they have no desires, face no dangers, care about nothing, in short, have no life?
It took evolution 3,700,000,000+ years to be able to play chess; it now takes an afternoon to build a program that can play chess. We should stop comparing anything with how long it took evolution to build it: indeed, evolution is the best engineer in its class, in the class of anencephalic engineers.
Before judging LLMs, and transformers and simple neural nets in general, so quickly, we should remind ourselves that they were trained on text, not multimodal input. It's an absolute miracle that they display such coherence with such poor training; we should all marvel at the impossible power of next-token prediction.
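To make the "afternoon" chess claim concrete, here is a minimal sketch of a legal-move chess player (my own illustration in Python, assuming the third-party python-chess library for move generation; a crude material count plus a shallow negamax search, nothing like a strong engine):

import chess  # pip install python-chess

# Crude material evaluation from White's point of view.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    score = 0
    for piece, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece, chess.WHITE))
        score -= value * len(board.pieces(piece, chess.BLACK))
    return score

def negamax(board, depth):
    # Returns (score from the side to move's perspective, best move found).
    if depth == 0 or board.is_game_over():
        sign = 1 if board.turn == chess.WHITE else -1
        return sign * material(board), None
    best_score, best_move = float("-inf"), None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)[0]
        board.pop()
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

board = chess.Board()
print(negamax(board, depth=2)[1])  # prints a legal (if unambitious) opening move

Weak play, to be sure, but it is legal chess and fits in an afternoon; the point of the comparison is how little engineering that takes next to the billions of years evolution needed to produce anything that could do the same.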
It took organic evolution about 3.7 million years to *develop the game* of chess. Chess is a closed system with clear rules so you don't even need AI or neural networks to create a program that can play it.
Driving is a whole different story. A driver has to deal with unexpected events in context. Lots of things humans do without effort are like that. Navigating complex social situations for example. Raising children. Growing up to be members of the human race. We may not all be good at those things but we can get through them. My three children turned out all right despite having me for a father.
But to get back to driving. For the past 20 years, self-driving cars have been 5 years in the future. I'll see what happens by 2028, if I'm still here. It would be cool to have self-driving cars, just as it would be cool to have fully human robotic friends. I don't expect to see it, though. I'm 73. It's sort of like the end of capitalism: I've been hearing about its imminent collapse since I was 18.
We weren’t naturally selected for playing chess; playing chess is something we can do given the sort of minds that we did evolve.
We could take flying as an example of something we solved faster than evolution did through natural selection. I've always felt the number of years it took evolution to build something is a meaningless comparison.
I would be optimistic about AI and self-driving cars if I were you, because of the growing amount of compute available (vision being about 50% of our neocortex, I believe?), because of the growing number of graduates flocking to AI, and because of the billions injected into it... As much as you might feel like things haven't changed, the tune is actually very, very different today; all the ingredients are here for AIs with common sense to spawn. Almost pure neural nets will get there eventually, as obviously neural nets can be used to build stable cognitive models of the world (hello, human brain), but there are shortcuts available, and perhaps somebody will get there before the mainstream paradigm does.
Eat your fiber, run your morning jog; if nobody brings you AGI within the next few years, I will. Oddly, the simplest and most straightforward AGI design hasn't been tried. Symbolic AI had the same problem as LLMs do today: everybody was blinded by the then-popular paradigms, like formal logic. Human creativity is a funny phenomenon when examined through the lenses of history and sociology.
This is like my favorite or second-favorite subject, so I like that people are engaging. My prob is that human cognition is grossly unlike digital computing and is way, way more than logic and predictive text. Also, we can be fooled. Remember that guy who thought his chatbot was human?
Anyway you and the other guy keep it coming. I love this shit.