Discussion about this post

Amy A:

Major publishers and aggregators are also under pressure to add chatbots to their products; Elsevier, for example, now offers Scopus AI. It uses RAG and a knowledge graph, and its answers appear to be generated from the summaries of the papers it selects, yet they can still be very misleading. The answers differ slightly on every run, depending on which papers are included, and this is not obvious to most users. Users have been asking for consistent summaries, and Elsevier's reps did not make it clear that this is not possible. The reps also said it was great for topics they did not understand, but found it underwhelming in their own areas of expertise. That is a red flag to me: nearly any source looks good when you know very little, and it is hard to know what you don't know. Scopus AI may be fine if users treat it as a place to generate ideas to explore further (and verify), but it is a problem if they assume it is trustworthy.
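The run-to-run variability described above can be sketched with a toy retrieval pipeline. Everything here (the paper corpus, the shuffle-based retriever, the seed parameter) is a hypothetical stand-in, not Elsevier's actual system; the point is only that an answer assembled from whichever documents get retrieved changes whenever the retrieval changes.

```python
import random

# Toy corpus of paper summaries (invented for illustration).
PAPERS = {
    "A": "Drug X reduces symptoms in mice.",
    "B": "Drug X shows no effect in humans.",
    "C": "Drug X is toxic at high doses.",
    "D": "Drug X improves outcomes in a small trial.",
}

def retrieve(query, k=2, seed=None):
    """Stand-in retriever. In real systems the ranking shifts with
    index updates, tie-breaking, and sampling; here we just shuffle."""
    rng = random.Random(seed)
    ids = list(PAPERS)
    rng.shuffle(ids)
    return ids[:k]

def answer(query, seed=None):
    """The 'answer' is built only from the selected summaries, so a
    different selection yields a different answer."""
    picked = retrieve(query, seed=seed)
    return " ".join(PAPERS[p] for p in sorted(picked))

# Two runs with different retrieval outcomes can disagree:
print(answer("Is drug X effective?", seed=1))
print(answer("Is drug X effective?", seed=2))
```

With the same retrieval the answer is reproducible; vary the retrieval (here, the seed) and the synthesized answer changes, which is the behavior users were noticing.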

Dr. Jason Polak:

In my opinion, this sort of thing is to be expected, given that science has degenerated significantly from a pure pursuit of knowledge. Much of science is now a game about securing more funding for research that won't really help anyone. It has also become a religion, not in how it operates, but in its societal role: people need something to believe in as religion is displaced.

So I wonder: does this AI fakery show the danger of AI (which is indeed a danger), or is it actually better at exposing the farce of modern science?

