So, it sounds like another Theranos situation, where the media never bother to really talk to or listen to the pathologists (or, in this case, the scientists and researchers in the AI trenches) and instead listen only to those with a vested interest.
I think the Theranos comparison is unfair. Theranos committed outright fraud. I don't hear the AI companies being accused of that. As far as I know, there's no law against unrealistic expectations or simply bad reasoning. Many companies fail because the stuff they're working on doesn't fulfill expectations. It's not fraud.
I largely agree with you. I mostly don't believe anyone is out-and-out perpetrating fraud. Still, when I hear Sam Altman or (in the past) Mira Murati claim in interviews that they're seeing signs of "intelligence" "emerging" from their next-gen models, I have to wonder. Could they possibly believe this? Obviously they have a lot of financial incentive to drink their own Kool-Aid. Do they believe it in the abstract and feel like they need to buy more time to make it happen? I don't know how to interpret it.
From my talking to people working at places like OpenAI, they truly believe it.
Who knows what they truly believe? It doesn't really matter, because they are free to hold whatever opinions they want. It is usual for CEOs and CTOs to hype their products. It is up to us to decide whether what they say is useful, with the help of experts like Gary Marcus, of course.
Until the algorithms change radically from the LLM technology they are all using now, I would dismiss all comments about "intelligence emerging". Whatever intelligence they output is derived from the human intelligence embodied in their training data, which is pretty much the entire internet. They are improving their products by scaling (i.e., more training data) and little tweaks to try to control their output (e.g., adhering to social norms, filtering in favor of truth). As far as I can tell, they have no plan to reach AGI other than a general desire to do so.
You are right, of course. However, I couldn't care less whether it's fraud or people choosing to believe whatever they want.
If people who have all the money in the world and every opportunity to listen to criticism want to lie to themselves, and it causes a lot of harm in the process, then it's just a clever way of gaming the system. Fraud at least allows them to be put in jail more easily.
I'd rather just follow my moral intuitions here than accept some arbitrary law.
Aside from that, LLMs/AI are a much bigger industry than Theranos, which was really just small potatoes compared to this.
Fair - I think it wasn't Theranos, but it might be about to tip over into that. If OpenAI et al. have discovered that what they are selling (AGI) is bunk, then they will be committing fraud when they go to market and sell it as such.
Certainly there is motivation to commit fraud when there's so much money riding on it. On the other hand, the nature of AI and LLMs is that a company has to release its product into the wild before people will regard it as real, regardless of the talk. That makes it hard to commit fraud.
Unrealistic expectations in and of themselves are not fraud. However, selling those unrealistic expectations while knowing they can't be met in any way is. That is precisely what Theranos was doing: selling something they knew they couldn't actually deliver. I fail to see the difference in what OpenAI is selling to the world. And if the OpenAI crowd isn't aware of this, then they are merely delusional.
I believe Theranos' fraud was telling people their blood was being processed by their invention while actually sending it to a regular lab. They stepped over the line. As far as I know, OpenAI hasn't done anything like this. What they are actually selling can be tried out by their customers. Telling people that AGI is just around the corner is just hype, and it is legal. Of course, if it is seen as unrealistic, it will damage their reputation and investors will bail. I think we are at that point now.
Actually, she was not convicted for that specifically; she was convicted for telling investors she could do something that she knew she couldn't. The big debate after the whole Theranos debacle was about the Silicon Valley idea of "fake it until you make it". OpenAI is also telling investors they can do something (in the near future) that they know they cannot. The only difference from Theranos is that the actual risk for people using the product is harder to understand (fake blood-test results vs. fake information).
Regardless of whether a particular case rises to the level of fraud in the legal sense, one thing is clear: the "fake it till you make it" crowd gives legitimate scientists and engineers a bad name.
No self-respecting scientist could or would knowingly work for an organization that was engaging in that.