"In a tweet that I cannot readily find but that got several million views, a professor used this technique with his students."
That was me. I ended up deleting my Twitter last year because of everything going on there, so I don't think the tweet's around anymore. I wrote about it in Scientific American, however: https://www.scientificamerican.com/article/to-educate-students-about-ai-make-them-use-it/
I appreciate the shout-out. And I agree it's not enough of a redesign to fully salvage the situation; it was just something I came up with on the fly as my class was being undone by ChatGPT. I didn't have much hope though, and I actually quit teaching after that semester and have mostly given up on higher education. Felt like fleeing a burning building.
thanks. adding the tweet in the online version now. i actually had a screenshot.
I think I might have one too. As well as a screencap of Elon tweeting at me...
But thanks so much for your continued work on this topic.
I left academia before AI, but for the same basic reason. I taught at good but not elite HBCUs and PWIs. Laptops, phones, now AI. The "social contract" between profs and students was shredded. High schools stopped requiring students to read and write. Helping students think more clearly and deeply is an impossible goal in college if students are mostly there for the credentials and networking. Motivated students are an increasingly rare breed. You'll always have a few in a lecture hall, but they will be grossly outnumbered by students who would need a one-on-one relationship to appreciate a good book or essay. Some temperaments can thrive in these situations. Most need a higher level of engagement. I agree that you have to be very creative in the classroom these days, and that some projects/assignments are largely AI-proof. I would add that the need to constantly innovate is also exhausting.
Can you elaborate? Why was it like fleeing a burning building?
Higher Ed has a lousy rep right now. But if it falls apart further something needs to take its place, and thoughts from someone so recently in the trenches would be welcome.
It's not any one thing. I was an adjunct and it was getting harder and harder to justify the long hours with the low pay. AI was in some sense the straw that broke the camel's back rather than the sole reason I left.
My field is religious studies (religion and science), and that department is already struggling to find enrollment and justify its existence to the neoliberal university system that only understands education for the purpose of "getting a job." So there are fewer and fewer opportunities out there, and fewer students who are interested. Those that are tend to treat the humanities as something they'd *like* to study but can't because it's "not practical." Post-Covid, this situation has only intensified. I'm not sure what can be done to fix it, because it would require our entire culture's priorities to be re-oriented in a pro-human direction.
One thing that is both concerning and gives me hope is a foundational flaw in the idea of LLMs as AI. Most human fields of study incorporate the idea that reality cannot be fully known or described. We build ideas and frameworks of ideas and hope that they do a decent job at explaining reality but also see their limitations. Even physicists have come face to face with this fact as the more grandiose unified theories of everything become more and more reliant on unobserved (and possibly unobservable) aspects of the universe.
For any human concerned with reality, it's pretty clear that any form of measurement collapses reality down to whatever the data collection method could capture. A lovely spring afternoon on a tree-lined street, with a breeze rustling the leaves and the murmur of conversations by passersby, gets reduced to a time series of temperatures, sound levels, wind speeds, and light frequencies, but they are not the same.
The LLM versions of AI are premised entirely on assembling data in ways that resemble other data tagged as being related to the prompt. But most human interactions, and even most of the kinds of work that AI boosters are claiming LLMs can "replace," actually require understanding that reality is only thinly represented by data. To make decisions and take action one frequently has to construct an understanding of reality out of experience, ideas, perceptions and information, and act on that construction.
I think people are finding that out as they experience the meaninglessness of this, both on the internet (sadly, LLMs are really well suited for shitposting) and in how unsatisfying digitally mediated interactions are. At some point all the remixing of Skibidi Toilet videos cannot sustain that much interest. I read a stat recently that university chaplains have seen a surge of interest in spirituality among young people. Perhaps it's Pollyannaish, but I think most humans really do care about having meaning in our lives.
So true and so utterly heartbreaking.
Sounds like that re-orientation needs to be one of the key aims, if not the key aim, for those of us who care about the future, if any, of our species. How do we go about this? Do we need another conference? Online workshops? Picket lines? Both strategy and tactics need to be dealt with by all who are willing to take on this project. What's the next step? If not by you and me, then by whom?
These discussions about LLMs writing essays seem disconnected from reality.... Do the authors not realize that professors don't actually want these essays? They're not collecting them from students and then selling them on the street corner. No one wants to read them. In fact, we actually have to PAY the people who do read them (TAs) to compensate them for the value destruction it causes.
Clearly, producing these essays is not the point of education. The reason we ask students to write them is that, for a human brain, the most efficient way to produce a good essay is to first understand the topic and then think critically about it. The classroom is a factory for producing human understanding and critical thinking skills. The essays, which have negative economic value, are actually a waste product.
The situation is as if someone saw a factory that was pumping waste into the river and decided to make a new factory that pumps waste into the river even faster and more cheaply, not noticing that the point of the factory was to produce bicycles not waste. If someone did that with a physical factory, they would get sued for polluting the river. It's a wonder that the same is not on the table here for those polluting the educational system.
Insights from two semesters grading papers as a GSI: I hated it so much that I didn't notice a neighbor's apartment on fire (I had to blast music and close the curtains to properly give the task the attention the students deserved), and it was so much easier to grade an A paper. I told my students the latter at the start of the second semester and received significantly fewer complaints about my grading. It's hard work to teach you why you didn't get an A, kids!
All of this was human attention to humans, learning from humans. AI-generated papers completely miss the point.
Maybe we should only do oral examinations. This would require having many more teachers, in particular at universities. While this contradicts measures of efficiency, maybe we have to go that way.
Excellent analogy for those who see only the surface, and try to emulate it, thinking it is the full reality. A lot like AI?
So true. I recently had the infuriating experience of a student using AI to plagiarize me and submitting it as his research paper for me to grade. I can’t prove it 100% but a mother knows her child.
It seems to follow that we shouldn't use essays anymore to assess human understanding and critical thinking skills. I hear that a lot. But it is not clear what the alternatives are.
Maybe we need to go back to the ability to make good conversation at cocktail parties as the true measure of an educated person.
😂
Some people thought that ELIZA made a pretty good conversation.
The root problem is not AI but the fact that well before AI turned up higher ed had become a transaction of tuition and time for credentials and connections. AI puts the lie to this situation.
It took way more than just tuition and time to earn my BS in Engineering in ‘06. What degree did you earn without effort?
I got my BA in 1978 and went on to teach college in political science. I’ve also taught general first-year seminar courses, writing, and logic. I have observed the shift in pedagogy and student attitudes and expectations first hand in the classroom, especially over the last five years. 2006 was another universe compared with today — remember the iPhone was not introduced until 2007.
Good point. The algorithmification of education happened before AI; AI is now making use of it. This raises the wider question: how can we make society and culture resistant to algorithms?
Part of the answer will be to make society less efficient. To introduce more friction into the digital world. We need to think about digital friction as a resource.
This does lead me to a point that I was recently thinking about: that overly credulous and insufficiently critical users of these models are as much of a problem as the claims made by the companies that produce them.
For instance, take OpenAI and ChatGPT. The company was recently bragging about having improved to only about 37% hallucinations on the SimpleQA test with its GPT-4.5 model, and 19% on PersonQA. Similarly, it reported a 33% PersonQA hallucination rate for o3, which is actually rather worse than o1.
One can probably call into question how cherry-picked the evaluations are: no one knows what their input data is, their confidence intervals, or even whether they have contaminated the training data.
Still, OpenAI chose to release results that say, in effect, "yes, a fifth of the time you will get inaccurate information about public figures, and a third of the time you will get inaccurate information about basic facts."
But the most dedicated users of these models go way beyond the already rosy portrayal of the companies and make claims that even they (mostly) don't have the audacity to make.
We see a lot of them in comments here:
People posting single correct results from an LLM as "proof" that any criticisms of their accuracy are invented.
People claiming consistent 0% hallucinations (i.e. 100% accuracy) in their long-form LLM outputs. Obviously, they don't produce any proof.
People claiming that within the next few years, AIs will make everyone smarter than Einstein. Obviously, they cannot prove this.
People claiming that AIs will soon become benevolent gods and make us all immortal. I don't think they even care about proof.
The risk with AI companies is that people who, by and large, understand how to do good science are, suborned by profit, not doing it well, or are even misrepresenting how representative their results are. But the risk from AI acolyte users is arguably even more dangerous: people who often never had any pretension to scientific thinking interpreting, and even sharing, their individual experiences and impressions as an authoritative perspective on what the models are capable of doing, rather than as only the starting point for a more careful analysis.
Just an aside. I think of hallucination as a good thing. LLMs become less valuable if we try to distort them by making hallucinations go away. LLMs are probability distributions distilled from data. That is what they are and what they should be. If we want to know facts, we should use other tools.
Hmmm. Well, the issue is that people want LLMs to give them correct responses. Hallucinations undermine everything from trying to use LLMs as search engines to attempts to use them as math solvers. If people were more modest about their capabilities, perhaps hallucinations would not be a problem.
I understand that people want correct responses. But that is not what LLMs can do by design. LLMs are probability distributions.
Of course one can try to work around this in various ways, but then LLMs stop being probability distributions that are an objective representation of their training data, which, in my opinion, lowers their value.
Btw, just found https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide which is a good example of what happens if one starts to mess with LLMs.
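To make the "probability distribution" point concrete, here is a toy sketch in plain Python. No real model is involved; the prompt, the vocabulary, the probabilities, and the sample_next_token helper are all invented for illustration. The point is only that sampling from a learned distribution rewards fluency, not truth.

```python
import random

# Toy next-token distribution a model might have learned for the prompt
# "The capital of Australia is". The numbers are made up for illustration;
# what matters is that plausible-but-wrong continuations carry real
# probability mass.
next_token_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.35,     # fluent, common in training data, but wrong
    "Melbourne": 0.09,  # also wrong
    "Perth": 0.01,
}

def sample_next_token(probs: dict) -> str:
    """Sample one token in proportion to its learned probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Even when the correct answer is the single most likely token, roughly
# 45% of samples here would assert something false. Fluency and factual
# accuracy are simply different properties.
print([sample_next_token(next_token_probs) for _ in range(10)])
```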
I recently took a math class that used the "flipped classroom" model (video lectures at home, team projects in class, peer-reviewed homework), no AI, and it was outstanding. Far more engaging and thorough than vanilla lectures with homework. Also without question much more work for the lecturer. Maybe more rewarding, if they're that way inclined? I think it would encourage students to make the best use of AI, which IMO is to support individual exploration. I could imagine that the class teamwork would expose any over-reliance on AI without penalizing its legitimate use.
LLMs are a crutch, not a tool.
They allow one to do things without putting in any time and effort.
And if one has never learned to do things without them, one is forever reliant — hooked — on them.
Which, of course, is precisely what the AI companies want. Once hooked, a person is theirs for life.
They allow one to APPEAR to do things. In reality they aren't doing anything at all.
Good point
Take-home written assignments have become all but worthless, but you can still do in-class discussions and participation, video assignments, and of course in-class quizzes and tests. Based on my experience teaching this past year, it’s actually good for students - teaching them to communicate, participate and project themselves will serve them well in most career paths.
Great point… you know, there was a time when grading was done only on an in-person basis. I admit it was a while back, and in Eastern Europe. But I always thought Western systems were inferior because they rely too much on trust. My university admittance consisted of 4 in-person exams, rather than averaging my high school grades. I think this is the answer to the very real ChatGPT threat to education.
In-person is definitely coming back. Just graded final exams which were in-person and had numerous essay questions. It was quite obvious which students can write effectively - and which have been dependent on AI. I don't say that lightly or happily - but it's pretty clear that many have been "hiding" behind screens and technology these past few years. As educators, there's an obligation to help fix that.
I ran into an obstacle to this approach: I couldn't require handwritten work due to conflicts with formal and informal accommodations. Take-home online or typed work is promoted as a way to achieve universal access. I was even penalized for writing on the board rather than having preset PowerPoint handouts. There are lots of directives from the disability office, but the onus is placed on the professors to accommodate via universal access rather than on the disability office to provide necessary supports at the individual level. I'm sympathetic to the necessity of accommodations, and high-quality universal access is fantastic when it is done well, but unfortunately the modern digitized-good-handwritten-bad paradigm has inadvertently created a modified environment for all rather than individualized accommodations and supports. I do hope that there will be a move toward hybrid technology as the means for universal access, such as e-ink scribe tablets that can provide digitized accommodations when needed but default to analog and handwritten as the primary interaction. I also think that grading of written work will have to go back to the expectations of analog days, whereby students are graded on ideas and broad structure with fewer penalties for grammatical errors and imperfect flow.
Some of what you describe is what I called the algorithmification of education. We have to resist that. We need to organize against it. But it will be difficult. It will need more resources for education, not less. And, btw, the most valuable resource in this is human attention.
But we also should teach the good use of LLMs. How can we combine what you say with that?
Good question that many are working on. The answer is at the intersection of human creativity and AI power - LLMs are complementary and not a replacement. If that fundamental idea takes hold I believe the students will find the way.
In the words of Pogo, "We have met the enemy, and he is ... us!"
We allowed the basically anti-education wunderkinder to build giant gamification enterprises that spent their time manipulating growing children into believing that general education and the ability to think for oneself were useless or meaningless endeavors.
At the same time, we much too readily accepted the preposterous premise that you can establish fact and truth via popularity voting. Should we be surprised that very few, if any, young people are left who value genuine learning enough to expend the energy?
This isn't one of the things I'm worried about. People's natural intellectual curiosity didn't get killed off because ChatGPT showed up. Those who want to learn, will still learn. Those who don't want to learn, probably never did anyway.
The problem is that, for those people who don't want to learn, it's now incredibly easy to cheat.
One problem is that it increases what Hannah Arendt called the atomization of society (in "The Origins of Totalitarianism").
"Democracy, which thrives on having an educated citizenry, will crumble."
Technofeudalism, on the other hand, will thrive on masses of uneducated citizenry.
Who is being cheated? From an academic's point of view the fundamental danger is that what we do (provide educational services) is undermined and eventually fades into irrelevance, taking us with it. But students don't really care about that. Instead, it is they who are cheating themselves - yet they don't care, because they want the qualification without doing the work. So, they pay tuition fees (where applicable), or they exchange their time for nothing except a piece of paper which will be (has been) rapidly devalued. They have learned nothing. It is back to kindergarten - but now they can drink and have sex - it's a party - a delaying of adulthood - a permanent adolescence. They have gamed the system, because we have turned the process into a game - and they want to win.

Instead, time needs to be spent teaching/sharing the alternatives. Yes, they can give us money and get a degree that proves nothing. Or, they can give us money, get a degree (which on its own means nothing) and learn something. The soon to be defunct idea that you get a degree to get a job will revert to you get a degree to become educated (i.e. capable of reasoned critical thought), something that used to be considered a virtue in itself (but only for the rich and clever). Higher education as a response to unemployment of young people will no longer be a tenable strategy. It was always a kind of grift. Inflation does not only affect prices.

Using AI is fine provided you know how to do the work first without it; then it is a great tool for testing ideas and improving flow. But without the confidence to edit its output based on a solid hypothesis in the first place, it is intellectual masturbation. Recently ChatGPT has been trying to convince me I am a genius, but I know it is a stupid machine and its opinion counts for shit.
" The soon to be defunct idea that you get a degree to get a job will revert to you get a degree to become educated" Just how soon, and who's going to do that defuncting?
"But students don't really care about that. Instead, it is they who are cheating themselves"
Students don't cheat because they want to cheat, but because the incentive structure of our education system encourages it. We also need to think systemically. It is impossible to work against the incentives.
This is "The Shadow Scholar" writ large. And it is devastating to the concept of education.
https://www.chronicle.com/article/the-shadow-scholar/
Students no longer learn the material. They learn how to cheat their way through the material. They aren't even bullshitting. They aren't even up to that level.
Schools and universities have to radically change their teaching and testing. Oral exams and in-room hand-written work. If universities don't, it will hollow out society and the nation will crash. This may already be happening.
Harvard's former president, with a Shadow Scholar-style thesis, will become the norm. All bullshit, not even comprehending their own so-called writing.
The flip side of this is that professors and reviewers of scholarly papers are using AI to evaluate them.
Stupid is its own reward at a national level. Usually, it means losing a major war and becoming a vassal nation.
LLMs are just hastening the inevitable transition to parrotocracy.
It is inevitable only given our current economic system which puts the wrong price on some important transaction costs: Digital time is cheap and human time is expensive.
I guess we could always reinvent exams, where there is no ability to use AI.
Maybe you should write an essay about the planned Visa and Mastercard initiatives to let companies, including OpenAI, develop tools for automated purchasing with credit cards based on LLMs.
What I am wondering about that... is whether the probability that the system buys you a several-thousand-dollar plane ticket instead of food for the month is 48% (like the hallucination rate of o4-mini on PersonQA), 37% (like the hallucination rate of GPT-4.5 on SimpleQA), or around 1% (like the hallucination rate in an open document summarization task from Hugging Face). I'm sure that all of these are acceptable risks.
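As a back-of-the-envelope sketch of why even the "good" end of that range compounds quickly (the per-transaction error rates are the ones quoted above; the assumption of roughly 30 automated purchases a month is mine):

```python
# Chance of at least one bad purchase in a month, assuming each automated
# transaction goes wrong independently with probability p.
def p_at_least_one_error(p: float, n_transactions: int = 30) -> float:
    return 1 - (1 - p) ** n_transactions

for p in (0.01, 0.37, 0.48):  # rates mentioned in the comment above
    print(f"per-transaction error {p:.0%} -> "
          f"chance of at least one bad purchase per month: "
          f"{p_at_least_one_error(p):.0%}")

# Even at 1% per transaction, roughly a quarter of months would include at
# least one mistaken purchase; at 37% or 48%, essentially every month would.
```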
Dedicate a few classroom sessions to essay-type, open-book (but closed computer/phone) tests, and make those the basis of grades. Anything else is a mockery of grades.