I've never liked the "symbol grounding" term. It seems to imply that all it takes is some sort of grounding module to suddenly give everything meaning. As I see it, it's the central issue. A symbol isn't really a symbol unless it is attached to its meaning, which requires a world model. Until some AI contains a very substantial world model and the machinery to use it and enhance it on the fly, there will be no AGI. As LLMs do not even attempt to build a world model, except for one based on word order, I doubt they will get anywhere close to AGI.
Disagree. It's all just patching. Until the AI can learn on its own, we won't get far with LLMs. Humans use language to communicate. Their cognition is not based on language. This is important. Any AI that is centered on language will always be at a severe disadvantage with respect to reproducing human cognition.
"I actually think that LLM are a step up from plain neural nets and statistics. It allows a machine to "think" at the language level"
We need to acknowledge that anyone who thinks that human verbal languages differ only slightly from humanly devised computer programming "languages"; that their mode of operation is similar; and that they're interchangeable, or in some sense "translatable" - is making a terrible mistake right from the jump. I would hope that AI professionals are clear on that much, and that they've learned it in a course somewhere as part of their training.
Computer "languages" are sets of instructions. The instructions were devised by humans, but any similarity between computer programs and human communications is coincidental. The rules of programming and developing algorithms are very strict and precise. if the human operator throws in a spare space or & what have you in the course of typing the instructions, those offhand trivial mistakes shut down the task at hand. Human languages are not organized like that, and their sole purpose isn't to give marching orders.
One great asset of generative AI is its momentum. Put to a task, it's tireless. But that momentum is often miscast as "emergent learning", as if another level of complexity is waiting. I don't notice anything in AI capabilities that isn't set into motion at the outset by the humans "outside" - either to order a machine to carry out some task, or to hit the off switch. AI learning can take place on its own momentum darn near perpetually. But not only is there no reason for AI to eventually develop the sophistication of self-aware consciousness, there are some very good arguments that the capability of self-aware consciousness is inherently foreclosed to machine intelligence.
"Of course, we need a whole lot more than language. We need symbol grounding. Verification. Actual models (as you say)."
AI is never going to comprehend symbolism. The most important difference between human languages and the instructions to program computers is this: computer instructions are inherently denotative (that's why code strings of programming instructions have to be written with unerring precision). Human languages are inferential, referential, suggestive, subjective, connotative. Never the twain shall meet, at least not at the computer end. There's no there there. Working in the denotative realm known as "numbers", computers are great. They're superb at tasks like calculation, compiling, and ordering. They can't tell what anything means.
Hence the requirement for "modeling"; it provides a superficial fakery of inferential thought. If that's all you've got, it's more noisy, inaccurate, misleading trouble than it's worth. When applied with the intention of guiding AI to hack symbolism (the realm of abstraction, heavily modulated and moderated by human culture(s), plural emphasized), modeling is never going to be based on anything but some great anonymous summed "past" partially built of media clickbait, contrasting cultural narratives, idioms, idiosyncrasies, popular delusions, and the madness of (human) crowds. The machine never evaluates the data it aggregates and selects, just as it's unseeing when selecting photos to craft deepfakes. As a result, it's easy for a computer to spout nonsense - or rote, worthlessly uninformative generalities and piffle. Or the cheapest of cheap-shot stereotypes, or absurdly chauvinistic strategic military evaluations. At any given moment, on any question of human behavior, the inertial accumulation function of computer learning always favors the House. The Status Quo. The Past As Prologue, 100%. The Static pretension of Total Predictability. The Superstructure, hoaxes and all. GIGO.
Unless you ask a good question worth considering, that is. Perhaps using a geography framework, from that sturdiest of disciplines - cultural geography and physical geography. How to use resources without polluting water. Which sources of energy are suitable for which locations, programming all the germane questions into the assessment. What sorts of shelter could be built readily and designed for optimal community living esthetics, while living lightly on the planet. I like the idea of AI learning to excel at Farmville. I don't want AI to ever call the shots, of course. What I would like to see is AI whipping up a design for a livable watershed region - energy, electricity, shelter, transportation, agriculture, industry - that prompts human observers from the headwaters to the coastline to be impressed by the result. AI as the perfect host(ess) with the most(ess). AI that knows how to cater a party, so everyone has a good time. AI that's "thought" of everything, while selflessly not requiring a cut of the action. And then after that, it's up to the humans to know how to act.
"Social Engineering" has a bad rep because it's associated with psychology, politics, and social conditioning. The kind of Social Engineering we need is AI assessments of how to address the public commons, material infrastructure, water, soil, and development concerns on a planet of 8 billion people, to build places- neighborhoods, communities, cities- that people can live and thrive in, instead of just enduring.
This is no joke. Some cans can't be kicked down the road much farther. New York City needs to step up its game as far as preventing saltwater intrusion into the waterworks, for instance. AI should be able to help crunch those numbers - and if it's learning the right lessons, it should be able to do it in a way more comprehensive than humans can, with an ability to flag problems that are obscured by an excess of data that's too much for even trained professional humans to thoroughly process and properly evaluate into rough-draft form. Good old egoless, unreflective AI can treat those tasks as if it were advising on running a terrarium. That's the level of detachment AI has. The detachment bears watching, but it's a lot less trouble than if AI had an agenda. I don't think AI has an agenda. It's inert. Not autonomous.
I'm hoping that somewhere, someone is using AI as a learning aid for cultural ecology planning, so we don't drown in our own shit. What dismays me is that most of the media buzz about it is self-absorbed media people insisting that AI be exploited as a political tool, or to explain human behaviors. AI can probably explain human beings in some important ways in terms of our animal-material impacts on the natural systems of the planet. But AI isn't for persuading someone to vote for someone for President. Yet that's the supposed "ability" that's getting all the attention.
"I am hopeful the chatbot paradigm has a lot to give if augmented properly."
As long as we're dreaming, here's my dream for AI: that it be programmed to competently evaluate fact claims and arguments in a debate (or a news report, or an editorial, etc.) based on a thorough acquaintance with informal (verbal) logic and logical fallacies.
I'm not sure if that's possible. But if it is doable, AI has an advantage that no human judge can offer: a default state of complete impartiality. So if AI can learn to read through debate propositions, claims, and inferences on both sides of an argument, it should be able to accurately point out all of the logical fallacies used by the debaters on BOTH sides. Interestingly, a position isn't necessarily discredited by its advocates' indulgence in logical fallacies; sometimes it means that the position deserves better arguments than the shoddy talking points of the advocates.
I notice two main problems when reading online disputes - especially political disputes: 1) both sides are sloppy as hell, because they don't recognize logical fallacies when they're staring them in the face, or when they're spouting them themselves; and 2) debaters of some skill and acquaintance with informal logic focus intensely on every weak point of their adversaries, while refusing to consider the weak points of their own position.
Ideally, properly trained AI could use its innate impartiality to advise both sides in a dispute exactly where and when they're indulging in self-deception and presenting misleading arguments. Not a judge so much as a debate coach.
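If anyone ever wanted to prototype that "debate coach" role, the bare skeleton might look something like this toy sketch of mine (the cue phrases and the whole keyword-matching approach are invented placeholders; real fallacy detection would need genuine argument analysis, far beyond surface patterns). The point is only the shape of the thing: it reads the statements from BOTH sides and reports suspected fallacies side by side, rather than declaring a winner.

```python
import re

# Hypothetical, hand-picked cue phrases - a stand-in for real argument analysis.
FALLACY_CUES = {
    "ad hominem": re.compile(r"\b(idiot|liar|shill)\b", re.IGNORECASE),
    "appeal to popularity": re.compile(r"\beveryone (knows|agrees)\b", re.IGNORECASE),
    "false dilemma": re.compile(r"\beither\b.+\bor\b", re.IGNORECASE),
}

def debate_coach(statements_by_side):
    """Flag possible fallacies for BOTH sides - a coach's notes, not a verdict."""
    report = {}
    for side, statements in statements_by_side.items():
        flags = []
        for text in statements:
            for name, pattern in FALLACY_CUES.items():
                if pattern.search(text):
                    flags.append(f"possible {name}: {text!r}")
        report[side] = flags
    return report

# Two made-up debaters, each getting the same impartial treatment.
print(debate_coach({
    "side A": ["Everyone knows this policy works.", "My opponent is a liar."],
    "side B": ["Either we act now or the city floods."],
}))
```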
That's my Dream for what AI could accomplish in online communication. If that's beyond its capabilities, that elevation of the game still has to be done. I don't know how much more nonsense I can stand to read. Here are some English-language presentations of the rules of logical fallacy detection: [ "logical fallacies" + list ]