The "pessimists" you named are just die-hard optimists in disguise, as in ludicrously optimistic about the capabilities of AI. Well, perhaps they ARE pessimists, in that they're pessimistic about the intelligence of human beings. Allow me to explain... The subtext behind AI apocalypse scenarios is "system architects are idiots." For example, taking nuclear weapon launch controls away from human hands and hooking them directly to AI. The surface message is "architects are idiots, except me, who will save the rest of you from dumb setups that I dream up." The real message is "AI is so powerful it's scary, and so obscure in its workings as to parallel witchcraft. Therefore, worship warlocks like me if you value your life."
This might be an extremely cynical view, but I think that one way people at the top of society / the tech industry can feel they are important is by thinking they will be the cause of its demise. The implication is that they are so smart they could create the Terminator. It's also a lot more interesting/fancy to say that you're making doomsday terminators than to say you work in applied stats.
> Therefore, worship warlocks like me if you value your life.
This nailed it for me. There's always a rather obvious ego / control motivation behind any extreme and cult-like devotion to... anything.
Warlock? Or nutjob?
https://www.linkedin.com/posts/simonay_largelanguagemodels-llms-generativeai-activity-7187495253367558145-l3F5?utm_source=share&utm_medium=member_desktop
Just about all of them are from the TESCREAL cult https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/
Any particular reason you're not speaking this year, Gary? The world is in need of sobering assessments more than ever.
hopefully Rumman and Meg Mitchell (not sure if she is on main stage?) will do some of that. good to have some fresh faces.
It seems like LLMs would take the discovery part out of education if they are used there. The feeling of discovery is the most important part of learning and of becoming motivated to learn. The fact that LLMs are often wrong is secondary to the fact that people are relying on them to think. That's how you turn people into passive receivers (the opposite of education).
I don't understand why/how Sal Khan would think they are useful for online education.
The scam being perpetrated on creators is enticing them to train ChatGPT to write in the creator's style and voice -- on the pretense that creators who do so will be able to use AI as a "writing assistant" to speed up their work. Big pile of bull chips ... what it will actually accomplish is enabling copyright violators and plagiarism scofflaws to instruct ChatGPT to write something -- anything -- in the style and voice of someone who has not authorized it. In other words, it enables the dumb and the lazy to steal the core, indeed only, unique product a creator has -- his or her style and voice. Bad shit, if you ask me.
ShatGPT, and you can quote me. ;)
I agree with all your points except the desirability of global governance for AI. That just means more layers of idiotic and corrupt politicians trying to steer AI in ways they hope will buy votes.
Regarding math... I guess before we let it solve Khan Academy problems, we need to solve the encryption issue. What are we going to do when these machines are so good at math that current encryption is useless? Have you considered that maybe we are already there but it's being held back?
"lady you can make baby without that much mental power you don t need to know math or biology" That was the first comment on the TEDx talk I gave a few weeks ago about the challenges to our continued relationship with AI. It was inspired by a lot of what you say, Gary; I hope you like it (and long live women in STEM!). https://www.youtube.com/watch?v=9DXm54ZkSiU
What very few people know: "Marcus on AI" is actually written by a non-public GPT-5 engine making fun of AI skeptics! :-D Love your work, Marcus!
Thanks for the shout out Gary. I'm writing this after attending (more like crashing) the ASU+GSV conference which devoted three full days to enthusiastic hyping of AI in education, largely through use of personalized tutors. I predict this won't end well. Additional thoughts can be found here: https://buildcognitiveresonance.substack.com/ and here: https://www.educationnext.org/generative-ai-in-education-another-mindless-mistake/
The culprit behind the poor performance of the Humane AI Pin is the AI model rather than the hardware. To me it should serve as a reality check for the hyped-up expectations of AI. Not sure everyone got that, since most people just blamed Humane, which really is only responsible for the hardware. Few blamed OpenAI for its model.
“Together, I believe that we can achieve the OpenAI mission of ensuring that artificial general intelligence benefits all of humanity”
2024 sees more people going to the polls worldwide than any other year in decades. If they really believed this, then they wouldn’t be releasing their buggy, unregulated, and unreliable products and upgrades this year of all years.
But seeing as their sole motivation is ‘can’t wait to get obscenely rich’, this is precisely what they are doing.
“‘Frequently wrong, never in doubt,’ and that’s just about the worst quality I can imagine for an educator.” Yeah, but perfect for a fascist dictator. Hey, perhaps we’ve found it a niche after all. 😂
There were a couple of articles regarding AI and MLLU in today's NYT. It would be great if you had the time and inclination to give us some feedback on both of them.
Humane: what if we put an Alexa on your shirt 🤔