Constant improvement. If you're not willing to question it and push it to its limits, it will never be as good as it could be. With something this important, you can't afford to let it slide by with a "good enough".
Good work, Mr. Marcus.
Keep it up, Gary. If winter comes, it's the fault of the hucksters, not the honest practitioners.
Eye-opening. Have just bought your book to read carefully. Like most people, I’ve been asleep at the wheel for the last decade on AI (other than reading every damned sci-fi book worth a candle) but this year, independently of the whole ChatGPT explosion, set myself a New Year resolution to “read six books about AI.” I’ve just finished Max Tegmark’s book and shall graduate to yours. Keep up the good work. We’re listening.
Plenty of us have your back! Your concerns and approach here are super valid.
We really need to get ahead of this stuff ASAP. These LLMs are going to bring on a DDoS attack on the communications around our elections. Misinformation is one thing, but the sheer volume of garbage, nonsense content is going to make it extremely difficult to find out what is actually happening. Reality will become harder and harder to detect. Fact will become the needle in a haystack.
Volume is very much the problem. I've been watching what's going on in the literary world and it's very worrying. Already one science fiction magazine has had to close submissions because of AI-generated spam. Some self-publishers are going for high-volume business models and are using AI to keep pace with their voracious readers. Amazon does not care at all about the quality of books that pass through its self-publishing platform, hasn't bothered to deal with the problem of fake books (scraped, plagiarised, etc.) that already exist, and certainly won't bother with AI-generated books, no matter how bad they are. So publishing is in for a world of pain, but seems generally unaware.
I've written about this problem in depth here: https://wordcounting.substack.com/p/can-publishing-survive-the-oncoming
I find it hard to get alarmed about these examples. Any person with a bit of literary skill can write the same thing and publish it. We don't need an AI model to write or disseminate incorrect information. The problem is people who can't spend a minute to learn, read history, and think critically. So where does that lead us: giving up control of the guardrails to some group of people who will inevitably use it to control the rest of us? Who gets to decide what is true and what is misinformation? I think of all the Covid information in the last few years. If I'm missing something, let me know.
What you are missing: volume, the difference between retail and wholesale.
But don't state-sponsored troll farms already have the resources to produce BS on a large scale?
I'll accept and consider that; the rest of my argument is still concerning.
You can lead people to information, but you can't make them think.
And let's never forget the millions of people who spent their youth sitting at the back of the classroom but were suddenly qualified, years later, to undertake their 'own research' into the science of COVID-19, and their 'own research' into the development of new vaccines.
"Who gets to decide what is true and what is misinformation?" Not so much "who gets to", but "how do we". We're already living in a world corrupted with mis/disinformation. Now multiply that corrupting influence ten fold, not by carefully constructed misinformation designed by a human, but at the click of button. We already employing AI backed countermeasures to check for us whether we're dealing with human generated text or AI generated text. This doesn't bode well.
Let me add a ray of hope from a cynical perspective. The fear is that disinformation will be produced wholesale by bots flooding the information world, thereby, I assume, muddying the good info currently being provided. And there's the rub. Does anyone believe the current info environment provides decent information? I doubt it. So the fear is that it will be made much worse, and this is bad. But here is my cynical ray of hope. What if the problem is not bad information but the credulity of the reading public (and here I especially include our intellectual elites)? In the West we tend to believe what we read even if it is garbage. Flood the zone with OBVIOUS RELENTLESS GARBAGE, and maybe we will stop doing this! Maybe we will act as people did in the old Soviet Union, where they had to read critically and evaluate the crap they were being fed. The biggest problem right now is not JUST the misinformation, but the way that otherwise intelligent minds believe what they are told. One way to possibly stop this is for the source to be seen as potentially very toxic. ChatGPT will do this. Oh, and btw, there is no stopping it now. All the demos in the world won't put this genie back in the bottle. The only thing to do is prepare ourselves, and one good step in that direction is to stop reflexively believing what we read and what we are told.
Interesting perspective, almost an uncanny valley in reverse.
Meanwhile, OpenAI says "Less hype would be good." (!) https://www.businessinsider.com/openais-cto-murati-wants-less-hype-around-gpt-4-chatgpt-2023-3
re: "One keen Twitter reader pointed me to two small but real examples of actual harm by ChatGPT, surely the tip of an unpleasant iceberg:"
The "harm" from the German example seems to be due to a poor quality education system, or worst case poor signup instructions on the part of OpenAI, regarding its fallibility. There are lots of sources of poor information online and offline that gullible people with poor reasoning skills will fall for. There are vast numbers of people getting nonsense from other humans via the net. That example illustrates nothing new in kind about the issue of AI. Its not doing much for your case to cite silly trivial examples that point to other problems rather than premature release of AI as being the concern.
In the second case a human spread misinformation: I'm sure that in the meantime Microsoft Word and Google Docs were used to create and spread vastly more misinformation than ChatGPT was. Yes, it may make it easier, just as computers made it easier. These examples don't relate to the issue of "scale" from the prior post. I added another comment there, replying to someone about "scale" being an overlapping issue that can often be addressed in ways unrelated to whether the information is generated by a sophisticated AI, by the tech they've used before now, or by, say, a Mechanical Turk-like pool of cheap human labor someplace.
There may be issues to be concerned about, but your reactions give the impression of someone who may know AI well but is only now giving superficial thought to issues of misinformation and society that others have been exploring for decades in more nuanced and in-depth ways. (I'm giving the benefit of the doubt that someone as prominent as this author is capable of in-depth, nuanced exploration of complex ideas, but merely hasn't taken the time to do so in this case. I haven't read his books or other writings.)
This is very interesting. And to me the main point is not that ChatGPT can, if properly prompted, output fake news, misinformation, etc. The main point is that these models are an excellent tool for the misinformation job, and that any organization or sovereign state with some determination and budget can train such a model and use it without restraint within a jurisdiction friendly to its goals.
I just saw the tweet exchange where Prof. Robin Hanson claimed there are no serious arguments for your regulatory proposals and you claimed there weren't serious arguments against. I'd suggest that the issue is that Hanson grasps that your arguments appear to ignore a whole body of literature regarding "government failures" and flaws in regulatory approaches, like regulatory capture. You seem to hand-wave away critique based on an implicit, axiomatic, unquestioned assumption of government competence that seemingly you won't examine, despite there being serious academic work questioning it.
It seems the issue is that you lack the background knowledge to grasp that some of the things you mindlessly trust as axioms are seriously questioned. For your argument to be "serious" you need to seriously address existing concerns regarding regulatory approaches to problems and justify why they don't apply in this case. You can't merely hand-wave them away. That's why some find it hard to take your case as "serious". Steelman your opponents' arguments and address them, rather than pretending they don't exist or attacking them as if they were strawmen not to be considered seriously.
For instance, you tweeted something about the FDA being better than there being no FDA as if that were an axiomatic fact, rather than something under serious academic debate. Some cite data and arguments from public choice theory showing that, due to flawed incentives, the FDA has slowed the spread of new treatments (despite the atypical rush during COVID) and therefore led to more deaths through lack of available treatment than would have occurred without it. Lawsuits, reputation risk, etc., already lead companies to try to avoid releasing treatments that kill people. If anything, the bigger problems with flawed treatments in the realm of alternative medicine are completely ignored by the FDA (e.g., people fall for worthless homeopathic treatments ubiquitous at drug stores, let alone more widespread serious quackery), since people assume the government is protecting them and therefore let their guard down, and special-interest pressure has led alternative medicine to creep into influencing legislators and regulators.
You aren't going to convince me to dismantle the FDA, but this is still a more serious attempt than Hanson's one-liner.
The noted classical liberal philosopher John Stuart Mill:
https://www.gutenberg.org/files/34901/34901-h/34901-h.htm
"He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion. The rational position for him would be suspension of judgment, and unless he contents himself with that, he is either[Pg 68] led by authority, or adopts, like the generality of the world, the side to which he feels most inclination. Nor is it enough that he should hear the arguments of adversaries from his own teachers, presented as they state them, and accompanied by what they offer as refutations. That is not the way to do justice to the arguments, or bring them into real contact with his own mind. He must be able to hear them from persons who actually believe them; who defend them in earnest, and do their very utmost for them. He must know them in their most plausible and persuasive form; he must feel the whole force of the difficulty which the true view of the subject has to encounter and dispose of; else he will never really possess himself of the portion of truth which meets and removes that difficulty. Ninety-nine in a hundred of what are called educated men are in this condition; even of those who can argue fluently for their opinions. Their conclusion may be true, but it might be false for anything they know: they have never thrown themselves into the mental position of those who think differently from them, and considered what such persons may have to say; and consequently they do not, in any proper sense of the word, know the doctrine which they themselves profess. They do[Pg 69] not know those parts of it which explain and justify the remainder; the considerations which show that a fact which seemingly conflicts with another is reconcilable with it, or that, of two apparently strong reasons, one and not the other ought to be preferred. All that part of the truth which turns the scale, and decides the judgment of a completely informed mind, they are strangers to; nor is it ever really known, but to those who have attended equally and impartially to both sides, and endeavoured to see the reasons of both in the strongest light. So essential is this discipline to a real understanding of moral and human subjects, that if opponents of all important truths do not exist, it is indispensable to imagine them, and supply them with the strongest arguments which the most skilful devil's advocate can conjure up."
https://plato.stanford.edu/archives/spr2017/entries/mill/
"John Stuart Mill (1806–73) was the most influential English language philosopher of the nineteenth century. He was a naturalist, a utilitarian, and a liberal, whose work explores the consequences of a thoroughgoing empiricist outlook."
On a prior post I referred to Thomas Jefferson's quote: "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others? Or have we found angels in the form of kings to govern him?"
Many people today seem to hold an implicit assumption that we have somehow found angels in the form of government bureaucrats to govern us. They assume that merely commanding government to regulate something will lead it to do a good job. Merely wishing to believe the FDA is necessarily better than the alternative of not having one doesn't magically make it so, given the reality that imperfect humans are involved in a system with flawed incentives.
There are private certification agencies in other realms, like Underwriters Laboratories. Competitive private certification agencies that provide insurance for the safety and/or efficacy of medical products would have incentives to do a good job in ways that a monopolistic government regulatory bureau doesn't.
Just as some companies do a good job and others do a poor job, the results of a government agency will vary greatly. Public choice economists study the realities of how imperfect humans operate within government, given the various incentives at play.
To be taken seriously you need to consider the potential problems with regulation, not rest on a simplistic assumption that it's magically guaranteed to be great just because you wish it to be and tell politicians to make it great.
I got into more specifics in prior comments on other posts. There are dangers with government regulation that you haven't acknowledged or addressed. Most people imagine government regulatory agencies as under the control of people like themselves (even if not angels); instead, imagine the politicians you most hate and fear. Conservatives fear control by the woke; progressives might fear a resurgent Trumpist populist taking control, or say a returning "moral majority" religious right getting control of the regulatory process. Or, most likely of all, big companies taking control to benefit themselves at the expense of startups.
I suspect you haven't read the literature on problems with the FDA from Stanford's Hoover Institution or in the academic economics literature, or from other sources like, I suspect, the Cato Institute, GMU's Mercatus Center, etc. Many people have no reason to realize what sort of academic work has been done in the realm of public choice economics and regulatory capture (whose founders won Nobel Prizes in economics). Of course I'm not going to convince you with a few sound bites, and you'd likely need to do a fair amount of reading before you'd have the background knowledge to even engage in a productive, informed debate about the issue. It's possible you are aware of the literature, but I suspect not.
Try reversing things to put yourself in the place of some of your critics who have background knowledge of problems with regulatory agencies in general. By analogy: how do comments on AI theory sound to you when they come from outsiders who haven't studied AI, cognitive science, or related disciplines but merely read newspapers and leap to conclusions? Do they sound like serious commentary? Do they often sound simplistic and not "serious"?
This is why hopes for regulation and similar measures, such as those you have been promoting and which I support, won't be nearly enough. This moves from technique, truth, and culture to interests and the weakness of human institutions. Unless there are absolute sanctions that disable rogue behaviour, and even then, there'll always be a few who will try to break through. One or two will. What then?
Yes, my point is that ultimately people will adapt to new tech since people and their institutions adapt.
Now GM wants ChatGPT in its vehicles. Madness. Like tulip mania.
"'ChatGPT is going to be in everything,' GM Vice President Scott Miller said"
From the AIAAIC website itself: «OpenCage CEO Ed FreyFogle believes the problem likely stems from ChatGPT picking up on YouTube tutorials in which people describe OpenCage providing a phone look-up service - a rumour they had rebutted in an April 2022 blog post.»
So, according to OpenCage themselves, the real culprits here are the people who made YouTube tutorials giving false information; ChatGPT merely believed them, as would any person who watched those tutorials.
How does this even remotely imply that "ChatGPT" is causing harm?
The internet is an echo chamber and AI is a megaphone. Two tools that multiply the magnitude of output, working together, can obviously be used for good or bad. I'm amazed by how amazed people are by this realization. I think two things are true: humans are incapable of not innovating with this technology, and it's going to change the world. This is the ultimate faith-in-humanity test. I think we'll figure it out; besides, being a naive optimist is way more fun.
Hi!
I would like to draw your attention to CyberPravda.com, a new service for analyzing the veracity of information and a blockchain project for the worldwide distribution of reliable information on the Internet. My name is Timur Sadekov and I'm the founder and CEO of the project.
We have found a way to mathematically determine the reliability of information and have developed a fundamentally new algorithm that requires neither cryptographic certificates from states and corporations, nor voting tokens with which any user can be bribed, nor artificial-intelligence algorithms that cannot understand the meaning of what a person said. The algorithm requires no external administration, expert review, or special content curators. We use neither semantics nor linguistics; those approaches have not proven themselves. Instead, we have found a unique and very unusual combination of mathematics, psychology, and game theory, and have developed a purely mathematical, multilingual, international correlation algorithm that gives a deeper scientometric assessment of the accuracy and reliability of information sources than the PageRank algorithm or the Hirsch index. The algorithm allows betting on different versions of events, with automatic determination of the winner, and creates a holistic structural and motivational frame in which users and news agencies can earn money by publishing reliable information, and a high reputation rating becomes a fundamentally new social elevator.
Our method is based on the fact that any news item or publication can be represented as a sequence of elementary events securely recorded and encrypted inside the blockchain: who, when, what, where, how much, etc. Users who want to report important events post them as a sequence of facts. These blocks are so simple that the facts are easy to check from all sides, and they enable automatic translation, making the system multilingual and fully international. Of great importance is the fact that fake news stories never have follow-ups and always contradict each other; conversely, any true fact or event will have supporters interested in publishing the most detailed information about it. All this makes it possible to assign each message in the system a credibility rating, and each author a reputation rating. The basis of credibility is a shared composition of event quanta; the basis of reputation is the constancy and stability of that composition across large, socially diverse groups. We call it "the chemistry of truth" and "the DNA of reputation".
The main know-how of CyberPravda is the combination of modern blockchain protocols with algorithms from logical-network analysis and graph theory, applied to complex logical chains consisting of hundreds of facts from many different sources. This makes it possible to mathematically evaluate the reliability rating of information on the Internet and assess the reputation of its authors, depending on how the facts and arguments published by various authors correlate with evidence and refutations from other users, the authors' scientific weight, their activity, account verification, the reliability of data sources, and many other factors. The algorithm is completely transparent and visible to all users of the system. Everyone will know the rules of the game and will be able to check them at any time. Thousands of authors will participate in the process, espousing different points of view.
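(Editorially speaking, CyberPravda's actual algorithm has not been published, so no code can claim to reproduce it. But the core idea sketched above, quantizing claims into simple event tuples and scoring each event's credibility by corroboration versus contradiction across independent authors, then scoring authors by agreement with the resulting consensus, can be illustrated in toy form. Every name and scoring rule below is a hypothetical stand-in, not the project's method.)

```python
# Toy illustration of the idea described above: represent claims as
# simple event tuples, score each event's credibility by corroboration
# vs. contradiction across authors, then score authors by agreement
# with the consensus. Entirely hypothetical; CyberPravda's real
# algorithm is not public.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    who: str
    what: str
    where: str
    when: str  # e.g. an ISO date string

@dataclass
class Claim:
    author: str
    event: Event
    asserts: bool  # True = confirms the event, False = disputes it

def credibility(claims: list[Claim]) -> dict[Event, float]:
    """Share of distinct authors confirming each event (0..1)."""
    votes: dict[Event, dict[bool, set[str]]] = defaultdict(
        lambda: {True: set(), False: set()}
    )
    for c in claims:
        votes[c.event][c.asserts].add(c.author)
    return {
        ev: len(v[True]) / (len(v[True]) + len(v[False]))
        for ev, v in votes.items()
    }

def reputation(claims: list[Claim], cred: dict[Event, float]) -> dict[str, float]:
    """Average agreement between an author's claims and the consensus."""
    scores: dict[str, list[float]] = defaultdict(list)
    for c in claims:
        consensus = cred[c.event]
        scores[c.author].append(consensus if c.asserts else 1.0 - consensus)
    return {author: sum(s) / len(s) for author, s in scores.items()}
```

In this toy version an event's credibility is just the fraction of distinct authors confirming it, and an author's reputation is the average consensus-agreement of their claims; the real system would presumably also weight by source reliability, account verification, and the other factors listed above.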
All this should help restore the former importance and value of online journalism for professionals who investigate, write, fact-check, and double-check information, and then have their work reviewed by experienced competitors. The CyberPravda rating does not, of course, claim to be the supreme truth, but it aims to come as close to it as possible, being inherently a reflection of the social consensus on any topic under discussion.
On the basis of the calculated rating, a digital certificate of the reliability of the published information is formed, protected from spoofing and falsification by cryptographic methods as NFTs (non-fungible tokens) that cannot be sold, bought, transferred, or hacked by other users; users who make a significant contribution to the knowledge database can earn money in proportion to the NFTs that express their reputation rating. These ratings should become the basis for new kinds of web-crawling algorithms and new services for ranking information in social networks, allowing them to filter out fake news, falsifications, lies, and irresponsible authors.
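(Setting aside the NFT framing, the minimal requirement described here, a reliability certificate that cannot be silently altered, is achievable with an ordinary detached digital signature over the content hash and its rating. A sketch, assuming the Python `cryptography` package; this is a generic construction for illustration, not CyberPravda's actual scheme.)

```python
# Minimal sketch of a tamper-evident reliability certificate: a detached
# Ed25519 signature over the content hash plus its rating. Generic
# construction for illustration, not CyberPravda's actual NFT scheme.
# Assumes: pip install cryptography
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # the certifier's key pair
verify_key = signing_key.public_key()

def issue_certificate(content: str, rating: float) -> dict:
    """Sign a (content hash, rating) payload so neither can be altered."""
    payload = json.dumps(
        {
            "sha256": hashlib.sha256(content.encode()).hexdigest(),
            "rating": rating,
        },
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": signing_key.sign(payload)}

def verify_certificate(content: str, cert: dict) -> bool:
    """Check the signature, then check the content still matches the hash."""
    verify_key.verify(cert["signature"], cert["payload"])  # raises if forged
    claimed = json.loads(cert["payload"])
    return claimed["sha256"] == hashlib.sha256(content.encode()).hexdigest()
```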
CyberPravda Data Room Dashboard: https://docs.google.com/spreadsheets/d/16-ySPw5vy2wUIvlsV_nx64jbJpL4_Ns5HxZ0SyfAUP0
YouTube video: https://youtu.be/jFZVhp_GJtY
@sadekovtimur