The game is afoot, but a lot of folks are still in denial
Constant improvement. If you're not willing to question it and push it to its limits - it will never be as good as it could be. With something this important - you can't afford to let it slide by with a "good enough".
Good work Mr. Marcus.
Keep it up, Gary. If winter comes, it’s the fault of the hucksters, not the honest practitioners
Eye opening. Have just bought your book to read carefully. Like most people, I’ve been asleep at the wheel for the last decade on AI (other than reading every damned sci-fi book worth a candle) but this year, independently of the whole ChatGPT explosion, set myself a New Year resolution to “read six books about AI.” I’ve just finished Max Tegmark’s book and shall graduate to yours. Keep up the good work. We’re listening.
Plenty of us have your back! Your concerns and approach here are super valid.
We really need to get ahead of this stuff ASAP. These LLMs are going to bring on a DDoS attack on the communications around our elections. Misinformation is one thing; the volume of garbage, nonsense content is going to make it extremely difficult to find out what is actually happening. Reality will become harder and harder to detect. Fact will become the needle in a haystack.
Volume is very much the problem. I've been watching what's going on in the literary world and it's very worrying. Already one science fiction magazine has had to close submissions because of AI-generated spam. Some self-publishers are going for high-volume business models and are using AI to keep pace with their voracious readers. Amazon does not care at all about the quality of books that pass through its self-publishing platform, hasn't bothered to deal with the problem of fake books (scraped, plagiarised etc.) that already exist, and certainly won't bother with AI-generated books, no matter how bad they are. So publishing is in for a world of pain, but seems generally unaware.
I've written about this problem in depth here: https://wordcounting.substack.com/p/can-publishing-survive-the-oncoming
I find it hard to get alarmed about these examples. Any person with a bit of literary skill can write the same thing and publish it. We don’t need an AI model to write or disseminate incorrect information. The problem is people who can’t spend a minute to learn, read history, and think critically. So where does that lead us: giving up control of the guardrails to some group of people who will inevitably use it to control the rest of us? Who gets to decide what is true and what is misinformation? I think of all the Covid information of the last few years. If I’m missing something, let me know.
Let me add a ray of hope from a cynical perspective. The fear is that disinformation will be produced wholesale by bots flooding the information world, thereby, I assume, muddying the good info currently being provided. And there’s the rub. Does anyone believe the current info environment provides decent information? I doubt it. So the fear is that it will be made much worse, and this is bad. But here is my cynical ray of hope. What if the problem is not bad information but the credulity of the reading public (and here I especially include our intellectual elites)? In the West we tend to believe what we read even if it is garbage. Flood the zone with OBVIOUS RELENTLESS GARBAGE, and maybe we will stop doing this! Maybe we will act as people did in the old Soviet Union, where they had to read critically and evaluate the crap they were being fed. The biggest problem right now is not JUST the misinformation, but the way that otherwise intelligent minds believe what they are told. One way to possibly stop this is for the source to be seen as potentially very toxic. ChatGPT will do this. Oh, and btw, there is no stopping it now. All the demos in the world won't put this genie back in the bottle. The only thing to do is prepare ourselves, and one good step in that direction is to stop reflexively believing what we read and what we are told.
Meanwhile, OpenAI says "Less hype would be good." (!) https://www.businessinsider.com/openais-cto-murati-wants-less-hype-around-gpt-4-chatgpt-2023-3
re: "One keen Twitter reader pointed me to two small but real examples of actual harm by ChatGPT, surely the tip of an unpleasant iceberg:"
The "harm" from the German example seems to be due to a poor-quality education system or, at worst, poor signup instructions on OpenAI's part regarding its fallibility. There are lots of sources of poor information, online and offline, that gullible people with poor reasoning skills will fall for. Vast numbers of people are getting nonsense from other humans via the net. That example illustrates nothing new in kind about the issue of AI. It's not doing much for your case to cite silly, trivial examples that point to other problems rather than to premature release of AI as the concern.
In the second case a human spread misinformation: I'm sure that in the meantime Microsoft Word and Google Docs were used to create vastly more misinformation than ChatGPT was. Yes, it may make it easier, just as computers made it easier. These examples don't relate to the issue of "scale" from the prior post. I added another comment there, replying to someone about "scale" being an overlapping issue that can often be addressed in ways unrelated to whether the information is generated by a sophisticated AI, by the tech people have used before now, or by, say, a Mechanical Turk-like pool of cheap human labor somewhere.
There may be issues to be concerned about, but your reactions give the impression of someone who may know AI well but is only now giving superficial thought to issues that others have explored for decades, in more nuanced and in-depth examinations of misinformation and society. (I'm giving the benefit of the doubt that someone as prominent as this author is capable of in-depth, nuanced exploration of complex ideas but merely hasn't taken the time to do so in this case. I haven't read his books or other writings.)
This is very interesting. And to me the main point is not that ChatGPT can, if properly prompted, output fake news, misinformation, etc. The main point is that these models are an excellent tool for the misinformation job, and that any organization or sovereign state with some determination and budget can train such a model and use it without restraint within a jurisdiction friendly to its goals.
I just saw the tweet exchange where Prof. Robin Hanson claimed there are no serious arguments for your regulatory proposals, and you claimed there weren't serious arguments against them. I'd suggest that the issue is that Hanson grasps that your arguments appear to ignore a whole body of literature regarding "government failures" and flaws in regulatory approaches to issues, like regulatory capture. You seem to hand-wave away critique based on an implicit, unquestioned axiomatic assumption of government competence that you seemingly won't examine, despite there being serious academic work questioning it.
It seems the issue is that you lack the background knowledge to grasp that some of the things you mindlessly trust as axioms are seriously questioned. For your argument to be "serious" you need to seriously address existing concerns regarding regulatory approaches to problems and justify why they don't apply in this case. You can't merely handwave them away. That's why some find it hard to take your case as "serious". Steelman your opponents' arguments and address them, rather than pretending they don't exist or attacking them as if they were strawmen not to be considered seriously.
For instance, you tweeted something about the FDA being better than no FDA as if that were an axiomatic fact, rather than something where there is serious academic debate. Some cite data and arguments from the world of public choice theory that, due to flawed incentives, the FDA has slowed the process of spreading new treatments (despite the atypical rush during Covid) and has therefore led to more deaths from lack of available treatment than would have occurred without it. Lawsuits, reputation risk, etc. lead companies to try to avoid releasing treatments that kill people. If anything, the bigger problems with flawed treatments in the realm of alternative medicine are completely ignored by the FDA (e.g. people fall for worthless homeopathic treatments ubiquitous at drug stores, let alone more widespread serious quackery), since they assume the government is protecting them and therefore let their guard down, and special-interest pressure has led alternative medicine to creep into influencing legislators and regulators.
This is why hopes for regulation and the like, such as you have been promoting and which I support, won’t be nearly enough. This moves us from technique, truth and culture to interests and the weakness of human institutions. Even with absolute sanctions that disable rogue behaviour, there’ll always be a few who try to break through. One or two will. What then?
Now GM wants ChatGPT in its vehicles. Madness. Like tulip mania.
"'ChatGPT is going to be in everything,' GM Vice President Scott Miller said"
From the AIAAIC website itself: «OpenCage CEO Ed FreyFogle believes the problem likely stems from ChatGPT picking up on YouTube tutorials in which people describe OpenCage providing a phone look-up service - a rumour they had rebutted in an April 2022 blog post.»
So, according to OpenCage themselves, the real culprits here are the people who made YouTube tutorials giving false information; ChatGPT merely believed them, as would anyone else who watched those tutorials and believed them.
How does this even remotely imply that "ChatGPT" is causing harm?
The internet is an echo chamber and AI is a megaphone. Two tools that multiply the magnitude of output, working together, can obviously be used for good or bad. I'm amazed by how amazed people are by this realization. I think two things are true: humans are incapable of not innovating with this technology, and it's going to change the world. This is the ultimate faith-in-humanity test. I think we'll figure it out; besides, being a naive optimist is way more fun.
I would like to draw your attention to a new service for analyzing the veracity of information, CyberPravda.com, a blockchain project for the worldwide distribution of reliable information on the Internet. My name is Timur Sadekov and I'm the founder and CEO of this project.
We have found a way to mathematically determine the reliability of information and have developed a fundamentally new algorithm that does not require cryptographic certificates from states and corporations, voting tokens (with which any user can be bribed), or artificial intelligence algorithms that cannot understand the meaning of what a person has said. The algorithm requires no external administration, expert review, or special content curators. We use neither semantics nor linguistics - those approaches have not justified themselves. We have found a unique and very unusual combination of mathematics, psychology and game theory, and have developed a purely mathematical, multilingual, international correlation algorithm that yields a deeper scientometric assessment of the accuracy and reliability of information sources than the PageRank algorithm or the Hirsch index. The algorithm allows betting on different versions of events with automatic determination of the winner, and it creates a holistic structural and motivational frame in which users and news agencies can earn money by publishing reliable information, while a high reputation rating becomes a fundamentally new social elevator.
Our method is based on the fact that any news item or publication can be represented as a sequence of elementary events securely recorded and encrypted inside the blockchain: who, when, what, where, how much, etc. All users who want to report important events post a sequence of facts. These blocks are so simple that it is easy to check the facts from all sides, enable automatic translation, and make the system multilingual and fully international. Crucially, fake news never holds together and always contradicts itself. Conversely, any true fact or event will have supporters interested in publishing the most detailed information about it. All this makes it possible to assign each message in the system a credibility rating, and each author a reputation rating. The basis of credibility is the shared composition of event quanta, and the basis of reputation is the constancy and stability of that composition across large, socially diverse groups. We call it “the chemistry of truth” and “the DNA of reputation”.
The main know-how of CyberPravda is the combination of modern blockchain protocols with algorithms for the mathematical analysis of logical networks and graph theory, applied to complex logical chains consisting of hundreds of facts from many different sources. This makes it possible to mathematically evaluate the reliability rating of information on the Internet and the reputation of its authors, depending on the correlations between facts and arguments published by various authors, evidence and refutations from other users, the authors' scientific weight, their activity, account verification, the reliability of data sources, and many other factors. The algorithm is completely transparent and visible to all users of the system. Everyone will know the rules of the game and will be able to check them at any time. Thousands of authors will participate in the process, espousing different points of view.
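To make the corroboration idea concrete, here is a minimal toy sketch (my own illustration, not CyberPravda's actual algorithm, which is not public): each claim is modeled as a set of elementary fact tuples ("event quanta"), a fact's credibility is the fraction of authors who corroborate it, and an author's reputation is the mean credibility of the facts they assert. All names and scoring rules here are hypothetical assumptions for illustration.

```python
from collections import defaultdict

def credibility(claims):
    """claims: dict mapping author -> set of elementary fact tuples.
    Returns (fact_credibility, author_reputation) as dicts of scores in [0, 1]."""
    # Count how many distinct authors assert each fact.
    support = defaultdict(int)
    for facts in claims.values():
        for fact in facts:
            support[fact] += 1
    n_authors = len(claims)
    # Fact credibility: fraction of authors corroborating the fact.
    fact_cred = {fact: count / n_authors for fact, count in support.items()}
    # Author reputation: mean credibility of the facts that author asserted.
    reputation = {
        author: (sum(fact_cred[f] for f in facts) / len(facts)) if facts else 0.0
        for author, facts in claims.items()
    }
    return fact_cred, reputation

# Toy data: two sources corroborate the fire; one asserts an outlier claim.
claims = {
    "reuters":  {("fire", "london", "tue"), ("deaths", 0)},
    "blogger1": {("fire", "london", "tue")},
    "troll":    {("aliens", "london", "tue")},
}
fact_cred, rep = credibility(claims)
# The corroborated fact outscores the uncorroborated one,
# and the corroborating author outranks the outlier author.
```

A real system along these lines would of course need to handle contradictory facts, weighting by source independence, and adversarial collusion, none of which this sketch attempts.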
All this should help restore the former importance and value of online journalism: professionals who investigate, write, fact-check, double-check their information, and then have their work reviewed by experienced competitors. The CyberPravda rating does not, of course, claim to be the supreme truth, but it aims to come as close to it as possible, being inherently a reflection of the social consensus on any topic under discussion.
On the basis of the calculated rating, a digital certificate of reliability for the published information is formed, protected from spoofing and falsification by cryptographic methods as NFTs (non-fungible tokens) that cannot be sold, bought, transferred or hacked by other users; users who make a significant contribution to the knowledge base can earn money in proportion to the NFTs that express their reputation rating. These ratings should become the basis for new types of web-crawling algorithms and new services for ranking information in social networks, making it possible to filter out fake news, falsifications, lies and irresponsible authors.
CyberPravda Data Room Dashboard: https://docs.google.com/spreadsheets/d/16-ySPw5vy2wUIvlsV_nx64jbJpL4_Ns5HxZ0SyfAUP0
YouTube video: https://youtu.be/jFZVhp_GJtY