There's another WSJ article on Copilot that essentially says the only useful bit is meeting summaries, and even those have confabulations (Bob talked about product development, when there was no Bob and no discussion of product development). The Excel tool (still not official) was especially called out as being useless, while event the PPT was not worth anyone's time. Microsoft said it planned to nudge users to use the tools.
It’s February so I assume the Q1 24 earnings calls are only halfway through, in which case the last bar is artificially small. I think the sentiment is correct though- the moral panic has largely subsided and the hype cycle is doing its Gartner thing.
Hmm ... on second thoughts there are quite a lot of problems with that bar chart. I would have gone for something like a line graph showing a 12-week trailing average of the proportion of reports mentioning "AI" or "machine learning". ("AI" already captures those mentioning "Generative AI"). The trailing average smooths out any short-term bumps (Thanksgiving, Xmas, random variation) and allows for any unknown effect like "tech savvy companies report earlier in the quarter" or something. Taking a proportion reduces any issues with missing data or otherwise missing or late reports and helps address the "part way through the quarter" issue identified in my previous post. That should show the trend more reliably.
Back in 2022 or even early 2023, even I would've taken a bet on 5% odds of AGI happening within the decade. (i.e. I wouldn't have bet on a double-or-nothing payout, but on 19:1, I might have put down money I could afford to lose).
It may not fit with the self-reinforcing belief of this particular echo-chamber here that it's all overhyped and can't possibly amount to anything useful, much less AGI.
But it can do a lot of good in the education space, where personalized curricula are impossible to reach with human teachers, but pretty straightforward with AI based tutors.
Sal Khan certainly understands tutoring very well, having authored thousands (I think he said 5,000+ in an interview with Bill Gates) of training videos and thought deeply about how students learn or where / why they trip up.
Again, why is it unfortunate if evidence shows that such AI tutors improve learning for a large number of students?
WaPo is very leftist and owned by a very vested player in all this. Clearly there's an agenda to have these "AI"s become oracles of truth, morlaity, and culture ... according to their values which they've decided are the best ones for everyone else.
Such ad-hominem attack about the article being published in WaPo and not WSJ says more about whoever makes the comment (and their political leanings). Should be irrelevant regarding the degree to which AI based tutors are shaping education.
And that's it. The problem isn't that it is useless, it's that it's become a tool in the Marxist revolution that is Bay Area. The other major problem is some of these so called 'God Fathers' of AI wanting to end their career being able to say that connectionist models can , end to end, do reasoning, planning, etc.; "we won!". If you just use LLMs for what they're good at they will create value.
There are certainly those who understand this and are building responsibly.
And yet, NVIDIA stock prices increased 4 times in the last 3 months in anticipation of the enormous market for chips due to AI revolution. Some people think it is just a hype and investors will sooner or later find out it wasn't worth it. I have a question for them: why don't you do the short selling to earn a lot of money when the stocks inevitably goes down?
There's an old saying that applies here, goes something like this: "the market can remain irrational longer than you can remain solvent". It's reasonable to bet against a hype bubble in the long term, but no one ever knows when exactly it will burst.
And yet, I find ChatGPT Plus to be one of the most useful tools I've come across in 30 years in web publishing.
I think what we're seeing is that many web writers have hyped up AI in order to build interest in their articles, and now the pendulum is swinging back in the other direction, and some are using anti-AI hype to build interest in their articles. The herd mentality group consensus breathlessly follows each swing of the hype pendulum, repeating the hype of the moment as if it were their own idea.
"So let me get this straight? Sora’s can’t reliably handle basic physics, ChatGPT had an unexplained meltdown, Gemini can’t even remember who was on Apollo 11, there are no formal guarantees that any of this will ever work, you yourself admit you don’t know how it works, or how much better it will get next time around, and you want another $7 trillion dollars?"
"The fundamental problem here is this: Large Language Models are gullible. Their only source of information is their training data combined with the information that you feed them. If you feed them a prompt that includes malicious instructions—however those instructions are presented—they will follow those instructions.
This is a hard problem to solve, because we need them to stay gullible. They’re useful because they follow our instructions. Trying to differentiate between “good” instructions and “bad” instructions is a very hard—currently intractable—problem.
No surprise Gemini doesn't know who was on Apollo 11. Gemini came before Apollo.
good one :)
There's another WSJ article on Copilot that essentially says the only useful bit is meeting summaries, and even those have confabulations (Bob talked about product development, when there was no Bob and no discussion of product development). The Excel tool (still not official) was especially called out as being useless, while event the PPT was not worth anyone's time. Microsoft said it planned to nudge users to use the tools.
Reminds me of The Atomic Age. People were given radioactive salts as a cure all, and we were supposed to all be driving atomic-powered cars soon. 😆
It’s February so I assume the Q1 24 earnings calls are only halfway through, in which case the last bar is artificially small. I think the sentiment is correct though- the moral panic has largely subsided and the hype cycle is doing its Gartner thing.
Hmm ... on second thoughts there are quite a lot of problems with that bar chart. I would have gone for something like a line graph showing a 12-week trailing average of the proportion of reports mentioning "AI" or "machine learning". ("AI" already captures those mentioning "Generative AI"). The trailing average smooths out any short-term bumps (Thanksgiving, Xmas, random variation) and allows for any unknown effect like "tech savvy companies report earlier in the quarter" or something. Taking a proportion reduces any issues with missing data or otherwise missing or late reports and helps address the "part way through the quarter" issue identified in my previous post. That should show the trend more reliably.
Back in 2022 or even early 2023, even I would've taken a bet on 5% odds of AGI happening within the decade. (i.e. I wouldn't have bet on a double-or-nothing payout, but on 19:1, I might have put down money I could afford to lose).
Looking back, even that feels wildly optimistic.
Unfortunately, Khanmingo got a rave review on Washington Post:
https://www.washingtonpost.com/opinions/2024/02/22/artificial-intelligence-sal-khan/
that is unfortunate, if the WSJ piece is remotely correct
I don't understand why this is unfortunate.
It may not fit with the self-reinforcing belief of this particular echo-chamber here that it's all overhyped and can't possibly amount to anything useful, much less AGI.
But it can do a lot of good in the education space, where personalized curricula are impossible to reach with human teachers, but pretty straightforward with AI based tutors.
Sal Khan certainly understands tutoring very well, having authored thousands (I think he said 5,000+ in an interview with Bill Gates) of training videos and thought deeply about how students learn or where / why they trip up.
Again, why is it unfortunate if evidence shows that such AI tutors improve learning for a large number of students?
WaPo is very leftist and owned by a very vested player in all this. Clearly there's an agenda to have these "AI"s become oracles of truth, morlaity, and culture ... according to their values which they've decided are the best ones for everyone else.
WSJ is still in the business of journalism.
Such ad-hominem attack about the article being published in WaPo and not WSJ says more about whoever makes the comment (and their political leanings). Should be irrelevant regarding the degree to which AI based tutors are shaping education.
Winter is coming.
I gave my family and friends the gift of perplexity.ai (Andoid and iOS app available, not connected, etc.) Genuinely better than Google
And that's it. The problem isn't that it is useless, it's that it's become a tool in the Marxist revolution that is Bay Area. The other major problem is some of these so called 'God Fathers' of AI wanting to end their career being able to say that connectionist models can , end to end, do reasoning, planning, etc.; "we won!". If you just use LLMs for what they're good at they will create value.
There are certainly those who understand this and are building responsibly.
And yet, NVIDIA stock prices increased 4 times in the last 3 months in anticipation of the enormous market for chips due to AI revolution. Some people think it is just a hype and investors will sooner or later find out it wasn't worth it. I have a question for them: why don't you do the short selling to earn a lot of money when the stocks inevitably goes down?
There's an old saying that applies here, goes something like this: "the market can remain irrational longer than you can remain solvent". It's reasonable to bet against a hype bubble in the long term, but no one ever knows when exactly it will burst.
Tesla is still valued for the driverless cars it doesn’t seem to be able to produce, for example.
In a gold rush, the most lucrative job is a shovel salesman.
Time to offload $NVDA :)
And yet, I find ChatGPT Plus to be one of the most useful tools I've come across in 30 years in web publishing.
I think what we're seeing is that many web writers have hyped up AI in order to build interest in their articles, and now the pendulum is swinging back in the other direction, and some are using anti-AI hype to build interest in their articles. The herd mentality group consensus breathlessly follows each swing of the hype pendulum, repeating the hype of the moment as if it were their own idea.
ChatGPT told me on Friday night that the mushroom cloud for the Trinity test was 183 metres
"So let me get this straight? Sora’s can’t reliably handle basic physics, ChatGPT had an unexplained meltdown, Gemini can’t even remember who was on Apollo 11, there are no formal guarantees that any of this will ever work, you yourself admit you don’t know how it works, or how much better it will get next time around, and you want another $7 trillion dollars?"
Well, when you say it like that...
Another AI Winter is coming?
In my country we have the saying "A miracle lasts three days", seems appropriate :)
I just saw this from Schneier on Security https://simonwillison.net/2023/Oct/14/multi-modal-prompt-injection/;
"The fundamental problem here is this: Large Language Models are gullible. Their only source of information is their training data combined with the information that you feed them. If you feed them a prompt that includes malicious instructions—however those instructions are presented—they will follow those instructions.
This is a hard problem to solve, because we need them to stay gullible. They’re useful because they follow our instructions. Trying to differentiate between “good” instructions and “bad” instructions is a very hard—currently intractable—problem.