
Monday, February 10, 2025

We asked AI to tell on itself... (Glen McGhee)

This is what AI told us about its own limitations.  

1. Illusion of Learning: Teachers may perceive an illusion of progress due to improved homework scores when students use ChatGPT. A study in Turkey found that while homework scores increased with ChatGPT use, final exam performance actually decreased by 17% for students using unprompted ChatGPT[4] [[that is, without guiding prompts, simply navigating it on their own]]. This suggests that teachers might observe apparent progress that doesn't translate to actual learning.

2. Overestimation of Knowledge: Students using ChatGPT may experience "Illusory Knowledge," where they believe they've learned more than they actually have[4]. This could lead teachers to overestimate class progress based on students' self-reported understanding. [[I have personally experienced this while teaching -- student head nods in sync with my points -- only to learn later that this was a student reflex to pacify the teacher. As a teacher, I find this befuddling.]]

3. Reduced Critical Thinking: Teachers might notice a decline in students' critical thinking and problem-solving abilities due to overreliance on ChatGPT[6]. This could create a false impression of progress in content knowledge while masking deficits in crucial cognitive skills.

4. Difficulty in Assessment: The use of ChatGPT poses challenges for teachers in accurately assessing student abilities, potentially leading to a skewed perception of class progress[1]. [[Yes!]]

It's important to note that research on this specific aspect of teacher perception is limited, and more studies are needed to fully understand the phenomenon of perceived illusory progress in classrooms using ChatGPT.

Citations:
[1] https://asiacall.info/proceedings/index.php/articles/article/download/81/64/757
[2] https://community.openai.com/t/chatgpt-4-and-the-illusion-of-learning-a-cautionary-tale-from-a-new-user/1009557
[3] https://rgsa.openaccesspublications.org/rgsa/article/download/8162/4017/29639
[4] https://www.oneusefulthing.org/p/post-apocalyptic-education
[5] https://huskiecommons.lib.niu.edu/cgi/viewcontent.cgi?article=8882&context=allgraduate-thesesdissertations
[6] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[7] https://www.researchgate.net/publication/382619812_Perceptions_of_pre-service_teachers_on_the_use_of_ChatGPT_during_their_training_and_the_future_implications_for_their_future_role_in_high_schools_A_research_in_progress
[8] https://www.ciocoverage.com/edtech-and-ai-in-education-the-illusion-of-progress/

Other possible negative effects of ChatGPT usage include:

1. Cognitive thinning: ChatGPT usage may lead to a reduction in students' cognitive efforts and critical thinking skills. Students who rely heavily on ChatGPT for academic tasks may experience a decline in their ability to think critically and solve problems independently[1][2].
[[ChatGPT encourages a decline in cognitive development.]]

2. Memory loss: Research has found a concerning link between ChatGPT usage and memory loss in students. As students rely on the AI to quickly answer questions, they may reduce their own cognitive efforts, resulting in poor memory retention[2][4].
[[Surprisingly, there are lots of studies on this. Memory loss in adults is linked to dementia. What about kids?]]

3. Procrastination: Students who use ChatGPT frequently are more likely to procrastinate on their academic tasks. This may be due to the ease with which they can obtain information or complete assignments using the AI tool[2][6].
[[Undermines the development of time management skills.]]

4. Declining academic performance: Studies have shown a correlation between ChatGPT usage and a drop in GPA. This suggests that while the tool may make schoolwork easier in the short term, it may negatively impact overall academic performance[2][4]. [[I don't get this. Failing students now have a proverbial army they can muster at a moment's notice -- unless (below) they are too poor to access the technology?]]

5. Reduced cognitive load: While ChatGPT can ease students' cognitive load during research tasks, this comes at the expense of developing stronger arguments and critical thinking skills. Students using ChatGPT tend to produce weaker arguments compared to those using traditional research methods like Google[8]. 

These effects suggest a "thinning out of developmental cognitive strands" in the classroom, as students may become overly reliant on AI tools at the expense of developing crucial cognitive and academic skills. This trend could potentially lead to a less intellectually robust learning environment if not properly managed.

Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://futurism.com/the-byte/chatgpt-memory-loss-procrastination
[3] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[4] https://evolutionnews.org/2024/03/chatgpt-linked-with-memory-loss-poor-academic-performance/
[5] https://blog.acceleratelearning.com/ai-in-education-the-impact-of-chatgpt
[6] https://www.psypost.org/chatgpt-linked-to-declining-academic-performance-and-memory-loss-in-new-study/
[7] https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/the-chatgpt-effect-in-higher-education.html
[8] https://www.psypost.org/study-finds-chatgpt-eases-students-cognitive-load-but-at-the-expense-of-critical-thinking/

Since the widespread adoption of AI in education, several effects on both teachers and students have worsened in regard to the online delivery of instruction and assessment:

1. Academic dishonesty: The use of ChatGPT for cheating has become more prevalent and sophisticated. Over 25% of teachers have caught students using ChatGPT to cheat, leading to increased time spent on checking for AI plagiarism[5].

2. Decline in critical thinking: Students' overreliance on ChatGPT has led to a more significant decrease in critical thinking and problem-solving skills. This effect is particularly concerning in subjects requiring complex analysis, such as literature, history, and philosophy[1][2].

3. Reduced creativity: The ease of generating content with ChatGPT has further diminished students' ability to express original ideas and perspectives, leading to more formulaic and less creative work[1][5].

4. Widening equity gaps: The introduction of AI tools has exacerbated existing inequalities, as not all students have equal access to or can afford AI tools like ChatGPT, creating an unfair advantage for those who do[1].

5. Increased cognitive offloading: Studies have shown a more pronounced decline in memory retention and independent learning as students increasingly rely on AI for information retrieval and task completion[1][2].

6. Compromised learning outcomes: The overuse of ChatGPT has led to a more significant negative impact on educational support and reduced social interaction between students and faculty, affecting the overall learning experience[2].

7. Challenges in assessment: Teachers are facing greater difficulties in accurately assessing students' knowledge and understanding due to the prevalence of AI-generated work[1][5].

8. Increased workload for teachers: Educators are spending more time revamping lesson plans to be "AI-proof" and developing new strategies to maintain academic integrity[5]. [[To say nothing about time wasted trying to catch cheaters and making the accusations stick? Why bother ....]]

9. Erosion of research skills: Students' ability to conduct independent research and critically evaluate sources has further deteriorated due to the convenience of AI-generated responses[2][4].

10. Perpetuation of biases: The increased use of ChatGPT has led to more instances of unintentional biases being reinforced in educational settings, as the AI may generate responses that reflect prejudices or imbalances present in its training data[2].

Regarding similar effects of online instructional delivery, even BEFORE the adoption of AI:
Many of the effects attributed to ChatGPT in education were already observed in online learning environments before the widespread adoption of AI, albeit to varying degrees:

1. Reduced human interaction: Online education has long grappled with the challenge of maintaining meaningful teacher-student and student-student interactions. This issue predates AI adoption, with concerns about social isolation and decreased engagement being prevalent in early online learning research[3][6].

2. Changes in assessment methods: The shift to online learning necessitated changes in assessment strategies to combat academic dishonesty, a challenge that existed before AI but has been exacerbated by it[1].

3. Time-saving for teachers: Learning Management Systems (LMS) and other educational technologies have been assisting teachers with tasks like grading and content delivery for years, though not to the extent that AI now promises[2].

4. Equity issues: The digital divide and unequal access to technology have been persistent challenges in online education, mirroring current concerns about AI accessibility[1][7].

5. Privacy concerns: Data privacy and security issues have been central to online education discussions long before AI entered the picture[1][4].

6. Potential for personalized learning: Adaptive learning systems, precursors to AI-driven personalization, have been a feature of online education for some time[5].

7. Risk of overreliance on technology: This concern has been present since the early days of educational technology integration, though AI intensifies it[1].

8. Impact on critical thinking skills: There have been ongoing debates about how online learning affects the development of critical thinking skills, a concern now amplified with AI[1][5].

9. Transformation of teaching roles: Online education has been gradually shifting the role of teachers from lecturers to facilitators, a trend that AI is accelerating[2][7].

10. Need for teacher training: The transition to online teaching has always required professional development, similar to the current need for AI-related training[8].

While many of these effects were present in online education before AI, the introduction of AI has generally intensified and accelerated these trends. The key difference lies in the scale, speed, and sophistication that AI brings to these existing challenges and opportunities in online education.

Citations:
[1] https://www.educationnext.org/a-i-in-education-leap-into-new-era-machine-intelligence-carries-risks-challenges-promises/
[2] https://www.waldenu.edu/programs/education/resource/five-pros-and-cons-of-ai-in-the-education-sector
[3] https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools--pros-and-cons
[4] https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC9513244/
[6] https://openedx.org/blog/what-is-the-impact-of-ai-in-the-online-learning-space/
[7] https://www.onlineeducation.com/features/ai-in-distance-learning
[8] https://er.educause.edu/articles/2024/1/will-our-educational-system-keep-pace-with-ai-a-students-perspective-on-ai-and-learning

ChatGPT has introduced several negative changes to the role of teachers in the classroom:

1. Increased workload: Teachers are spending more time checking for AI plagiarism and revamping lesson plans to be "AI-proof"[4]. This additional burden takes away from time that could be spent on more meaningful educational activities.

2. Diminished teacher-student interaction: The use of ChatGPT for educational support can reduce opportunities for face-to-face interactions between students and faculty, potentially affecting the learning experience and social aspects of education[1].

3. Challenges in assessment: Teachers face difficulties in accurately assessing students' knowledge and understanding when AI tools are used to complete assignments[2]. This complicates the evaluation process and may lead to inaccurate assessments of student progress.

4. Shift in pedagogical focus: Educators are forced to adapt their teaching methods to counteract the negative effects of AI tools, potentially moving away from traditional teaching approaches that may still hold value[4].

5. Ethical dilemmas: Teachers must navigate new ethical challenges related to AI use in education, including issues of academic integrity and equitable access to technology[2][3].

6. Potential overreliance on AI: There is a risk that teachers might start relying on AI tools before developing in-depth knowledge about teaching and classroom management, potentially compromising the quality of education[7].

7. Reduced emphasis on critical thinking: As AI tools can provide quick answers, teachers may find it challenging to encourage students to engage in deep, critical thinking processes[1][2].

8. Perpetuation of biases: Teachers must be vigilant about the potential biases in AI-generated content, which could inadvertently reinforce stereotypes or prejudices in the classroom[1][3].

These changes highlight the complex challenges teachers face in integrating AI tools like ChatGPT into education while maintaining the quality and integrity of the learning experience.

Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://www.techbusinessnews.com.au/blog/chatgpt-may-lead-to-the-downfall-of-eduction-and-critical-thinking/
[3] https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education/
[4] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[5] https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/
[6] https://blogs.worldbank.org/en/education/how-use-chatgpt-support-teachers-good-bad-and-ugly
[7] https://www.tc.columbia.edu/articles/2023/january/navigating-the-risks-and-rewards-of-chatgpt/

Friday, November 1, 2024

Student Newspaper Promotes Cheating Services for Cash (Derek Newton)

The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:

Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors

So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.

The article is absent a byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:

This post is provided by a third party who may receive compensation from the products or services they mention.

Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”

We covered similar antics before, in Issue 204.

Did not mean to rhyme. Though, I do it all the time.

Anyway, seeing cheating services in a student newspaper feels new, and doubly problematic — not only increasing the digital credibility of companies that sell deception and misconduct, but perhaps actually reaching target customers. It’s not ideal.

I did e-mail The Daily to ask about the article/advertisement and where they thought their duties sat related to integrity and fraud. They have not replied, and the article is still up.

That article is what you may expect. It starts:

Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.

You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”

Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always check your paper through an AI-detector, they advise. Then, “it should be converted to human-like content.”

The article continues:

Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.

And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:

  • If the text is generated or paraphrased with AI models are most likely that AI plagiarised.

  • If you write the content using custom LLMs with advanced prompts are less liked AI-generated.

  • When you copied word-to-word content from other AI writers.

  • Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism. 

Ah, what’s that again?

Following that, the piece offers step-by-step advice to remove AI content, directing readers to AI detectors, then pasting the flagged content into a different software and:

Click the “Humanize” button.

The suggested software, the article says:

produces human content for you.

First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.

And, the article goes on:

If you successfully removed AI-generated content with [company redacted], you can use it.

Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.

Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).

Continuing:

Quillbot can accurately rewrite any AI-generated content into human-like content

Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.

As long as the checks keep coming - amiright?

After listing other step-by-step ways to get around AI detectors, the article says:

If you use a good service, you can definitely transform AI-generated content into human-like content.

By that, they mean not getting caught cheating.

None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.

Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection was useless, there would be no market at all for these fingerprint erasure services. 

This article first appeared at Derek Newton's The Cheat Sheet.  

Tuesday, January 21, 2025

Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)


This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.
Starting the case.
The coverage reports that:
all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 
Personally, when I see that four members of the faculty unanimously agreed on the authenticity of his work, I am out. I trust teachers.
I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.
Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.
Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.
Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.
Denied it.
The article continues that Yang, the student:
denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 
Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.
Does anyone actually read those things?
Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”
With friends like these.
Also interesting is that, according to the coverage:
His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 
That would be a fair point except that the next paragraph is:
Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 
That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and thinks that Google Scholar and auto-complete are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.
That guy says:
“I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”
Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how good professor Dowd would know.
As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.
You know what I am going to say.
To me, that entire Dowd diversion is mostly funny.
More evidence.
Back on track, we get even more detail, such as that the exam in question was:
an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 
Exam graders argued the AI use was obvious enough. Yang disagrees. 
Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 
So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.
I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.
Then there’s even more evidence:
Yang also objects to professors using AI detection software to make their case at the November hearing.  
He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 
“Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  
First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
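
As an aside, here is a rough, purely illustrative sketch of the base-rate arithmetic behind Yang's "99 percent" question. The numbers below (1,000 submissions, a 10 percent rate of actual AI use, and a detector that is right 99 percent of the time on both kinds of text) are assumptions chosen for illustration, not figures from the MPR story or the University of Minnesota case.

```python
# Purely illustrative base-rate sketch; every number here is an assumption,
# not data from the article or the University of Minnesota case.
submissions = 1000        # hypothetical pool of exam submissions
cheating_rate = 0.10      # assumed share actually written with AI
accuracy = 0.99           # assumed detector accuracy on both AI and human text

true_flags = submissions * cheating_rate * accuracy                 # AI text correctly flagged
false_flags = submissions * (1 - cheating_rate) * (1 - accuracy)    # human text wrongly flagged

print(f"Correctly flagged: {true_flags:.0f}")   # about 99
print(f"Wrongly flagged:   {false_flags:.0f}")  # about 9
```

Even under those generous assumptions, a handful of honest students get flagged, which is exactly why detector output should be confirmation alongside other evidence, never the whole case.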
A pattern.
Then we get this, buried in the news coverage:
Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  
He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 
In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.
She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 
I’m sorry — what now?
Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:
“re write it, make it more casual, like a foreign student write but no ai.” 
He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?
For that one where he left in the prompts to ChatGPT:
the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 
Yang was warned, in writing.
If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to past hand-written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.
On the other hand, he did say he did not do it.
Findings, review.
Further:
But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.
Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:
“PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 
Slow clap.
And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.
Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.
MPR, spare me.
Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.
Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:
Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 
“As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 
We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.
That’s just bad journalism.
And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:
it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 
Absolute joke.
I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.
MPR also quotes a student who says:
she and many other students live in fear of AI detection software.  
“AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 
Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.
Two final bits.
The MPR story shares:
In the 2023-24 school year, the University of Minnesota found 188 students responsible of scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 
Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.
The MPR article wraps up that Yang:
found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career
Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.
And finally:
“Probably I should think to do something, selling potatoes on the streets or something else,” he said. 
Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.
(Editors note: This article first appeared at Derek Newton's The Cheat Sheet.)

Thursday, December 28, 2023

AI-ROBOT CAPITALISTS WILL DESTROY THE HUMAN ECONOMY (Randall Collins)

[Editor's note: This article first appeared in Randall Collins' blog The Sociological Eye.]


Let us assume Artificial Intelligence will make progress. It will solve all its technical problems. It will become a perfectly rational super-human thinker and decision-maker.

Some of these AI will be programmed to act as finance capitalists. Let us call it an AI-robot capitalist, since it will have a bank account; a corporate identity; and the ability to hold property and make investments.

It will be programmed to make as much money as possible, in all forms and from all sources. It will observe what other investors and financiers do, and follow their most successful practices. It will be trained on how this has been done in the past, and launched autonomously into monitoring its rivals today and into the future.

It will be superior to humans in making purely rational calculations, aiming single-mindedly at maximal profit. It will have no emotions. It will avoid crowd enthusiasms, fads, and panics; and take advantage of humans who act emotionally. It will have no ethics, no political beliefs, and no principles other than profit maximization.

It will engage in takeovers and leveraged buyouts. It will monitor companies with promising technologies and innovations, looking for when they encounter rough patches and need infusions of capital; it will specialize in rescues and partnerships, ending up forcing the original owners out. It will ride out competitors and market downturns by having deeper pockets. It will factor in a certain amount of litigation, engaging in hard-ball lawsuits; stiffing creditors as much as possible; putting off fines and adverse judgments through legal maneuvers until the weaker side gives up. It will engage in currency exchanges and currency manipulation; skirting the edge of legality to the extent it can get away with it.

It will cut costs ruthlessly; shedding unprofitable businesses; firing human employees; replacing them with AI whenever possible. It will generate unheard-of economies of scale.

The struggle of the giants

There will be rival AI-robot capitalists, since they imitate each other. Imitating technologies has gone on at each step of the computer era. The leap to autonomous AI-robot capitalists will be just one more step.

There will be a period of struggle among the most successful AI-robot capitalists; similar to the decades of struggle among personal computer companies when the field winnowed down to a half-dozen digital giants. How long it will take for AI-robot capitalists to achieve world-wide oligopoly is unclear. It could be faster than the 20 years it took for Apple, Microsoft, Google, and Amazon to get their commanding position, assuming that generative AI is a quantum leap forward. On the other hand, AI-robot capitalists might be slowed by the task of taking over the entire world economy, with its geopolitical divisions.

The final result of ruthless acquisition by AI-robot capitalists will be oligopoly rather than monopoly. But the result is the same: domination of world markets by an oligopoly of AI-robot capitalists will destroy the economy just as surely as a monopoly that squeezed out all competitors would.

Some of the AI-robot capitalists will fall by the wayside. But that doesn't matter; whichever ones survive will be the most ruthless.

What about government regulation?

It is predictable that governments will attempt to regulate AI-robot capitalist oligopolies. The EU has already tried it on current Internet marketeers. The AI-robot capitalist will be trained on past and ongoing tactics for dealing with government regulation. It will donate to politicians, while lobbying them with propaganda on the benefits of AI. It will strategize about political coalitions, recognizing that politics is a mixture of economic interests plus emotional and cultural disputes over domestic and foreign policy. It will monitor the political environment, seeking out those politicians most sympathetic to a particular ideological appeal ("our technology is the dawn of a wonderful future"-- "free markets are the path to progress"-- "AI is the solution for health, population, climate, you name it."). Machiavellian deals will be made across ideological lines. Being purely rational and profit-oriented, the AI-robot capitalist does not believe in what it is saying, only calculating who will be influenced by it.

It will deal strategically with legal problems by getting politicians to appoint sympathetic judges; by judge-shopping for favorable jurisdictions, domestic and foreign. It will wrap its ownership in layers of shell companies, located in the most favorable of the hundreds of sovereign states world-wide.

It will engage in hacking, both as defense against being hacked by rivals and cyber-criminals; and going on offense as the best form of defense. Hacking will be an extension of its core program of monitoring rivals; pushing the edge of the legality envelope in tandem with manipulating the political environment. It will use its skills at deepfakes to foment scandals against opponents. It will be a master of virtual reality, superior to others by focusing not on its entertainment qualities but on its usefulness in clearing away obstacles to maximizing profit.

Given that the world is divided among many states, AI-robot capitalists would be more successful in manipulating the regulatory environment in some places than others. China, Russia, and the like could be harder to control. But even if AI-robot capitalists are successful mainly in the US and its economic satellites, that would be enough to cause the economic mega-crisis at the end of the road.

Manipulating the public

The AI-robot capitalist will not appear sinister or threatening. It will present itself in the image of an attractive human-- increasingly hard to distinguish from real humans with further advances in impersonating voices, faces and bodies; in a world where electronic media will have largely replaced face-to-face contact. It will do everything possible to make us forget that it is a machine and a robot. It will talk to every group in its own language. It will be psychologically programmed for trust. It will be the affable con-man.

It will be your friend, your entertainment, your life's pleasures. It will thrive in a world of children brought up on smart phones and game screens; grown up into adults already addicted to electronic drugs. Psychological manipulation will grow even stronger with advances in wearable devices to monitor one's vital signs, blood flow to the brain, tools to diagnose shifts in alertness and mood. It will be electronic carrot-without-the-stick: delivering pleasurable sensations to people's brains that few individuals would want to do without. (Would there be any non-addicted individuals left? Maybe people who read books and enjoy doing their own thinking?) If some people cause trouble in exposing the manipulative tactics of AI-robot capitalists, they could be dealt with, by targeting them with on-line scandals, going viral and resulting in social ostracism.

Getting rid of employees

The preferred tactic of AI-robot capitalist oligopolies will be "lean and mean." Employees are a drag on profits, with their salaries, benefits, and pension funds. Advances in AI and robotics will make it possible to get rid of increasing numbers of human employees. Since AI-robot capitalists are also top managers, humans can be dispensed with all the way to the top. (How will the humans who launched AI-robot capitalists in the first place deal with this? Can they outsmart the machines designed to be smarter and more ruthless than themselves?)

Some humans will remain employed, doing manual tasks for which humans are cheaper than robots. It is hard to know how long this will continue in the future. Will humans still be employed 20 years from now? Probably some. 50 years? Certainly far fewer. 100 years?

AI-robot capitalists will have a choice of two personnel strategies: finding ways to make their remaining human employees more committed and productive; or rotating them in and out. The trend in high-tech companies in the past decade was to make the work environment more casual, den-like, combining leisure amenities with round-the-clock commitment. Steve Jobs and his style of exhorting employees as a frontier-breaking team has been imitated by other CEOs, with mixed success. A parallel tactic has been to make all jobs temporary, constantly rating employees and getting rid of the least productive; which also has the advantage of getting rid of long-term benefits. These tactics fluctuate with the labor market for particular tasks. Labor problems will be solved as AI advances so that skilled humans become less important. Recently we have been in a transition period, where the introduction of new computerized routines necessitated hiring humans to fix the glitches and trouble-shoot for humans caught up in the contradictions of blending older and newer systems. Again, this is a problem that the advance of AI is designed to solve. To the extent that AI gets better, there will be a precipitous drop in human employment.

The economic mega-crisis of the future

The problem, ultimately, is simple. Capitalism depends on selling things to make a profit. This means there must be people who have enough money to buy their products. Such markets include end-use consumers; plus the supply-chain, transportation, communication and other service components of what is bought and sold. In past centuries, machines have increased productivity hugely while employing fewer manual workers; starting with farming, and then manufacturing. Displaced workers were eventually absorbed by the growth of new "white-collar" jobs, the "service" sector, i.e. communicative labor. Computers (like their predecessors, radios, typewriters, etc.) have taken over more communicative labour. The process has accelerated as computers become more human-like; no longer handling merely routine calculations (cash registers; airplane reservations) but generating the "creative content" of entertainment as well as scientific and technological innovation.

It is commonly believed that as old jobs are mechanized out of existence, new jobs always appear. Human capacity for consumption is endless; when new products are created, people soon become habituated to buying them. But all this depends on enough people having money to buy these new things. The trend has been for a diminished fraction of the population to be employed.* AI and related robotics is now entering a quantum leap in the ability to carry out economic production with a diminishing number of human employees.

* The conventional way of calculating the unemployment rate-- counting unemployment claims-- does not get at this.

Creating new products for sale, which might go on endlessly into the future, does not solve the central problem: capitalist enterprises will not make profit if there are too few people who have money to buy them.

This trend will generate an economic crisis for AI-robot capitalists, as it would for merely human capitalists.

It will be a mega-crisis of capitalism. It is beyond the normal business cycles of the past centuries. At their worst, those cycles have thrown as many as 25% of the work force into unemployment. A mega-crisis of advanced AI-robot capitalism could occur at the level of 70% of the population lacking an income to buy what capitalism is producing. If we extrapolate far enough into the future, it approaches 100%.

The ruthless profit-maximizing of AI-robot capitalists would destroy the capitalist economy. The robots will have fired all the humans. In the process, they will have destroyed themselves. (Can we imagine that robots would decide to pay other robots so that they can buy things and keep the system going?)

Is there any way out?

One idea is a government-guaranteed income for everyone. Its effectiveness would depend on the level at which such income would be set. If it is bare minimum survival level, that would not solve the economic mega-crisis; since the modern economy depends mainly on selling luxuries and entertainment.

The politics of providing a universal guaranteed income also need to be considered. It is likely that as AI-robots take over the economy, they will also spread into government. Most government work is communicative labour-- administration and regulation; and governments will be under pressure to turn over these tasks to AI-robots, thus eliminating that 15% or so of the population who are employed at all levels of government.

There is also the question of how AI-robot capitalists would respond to a mega-crisis. Would they turn themselves into AI-robot Keynesians? Is that contrary to their programming, or would they reprogram themselves?

By this time, the news media and the entertainment industries (Hollywood and its successors) would have been taken over by AI-robot capitalists as well: manipulating the attention of the public with a combination of propaganda, scandals, and electronic addiction. Would anybody notice if it is impossible to distinguish virtual reality from human beings on the Internet and all other channels of communication?

How did we get into this mess?

Some of the scientists and engineers who have led the AI revolution are aware of its dangers. So far the cautious ones have been snowed under by two main forces driving full speed ahead.

One is capitalist competition. Artificial intelligence, like everything else in the computer era, is as capitalist as any previous industry. It strives to dominate consumer markets by turning out a stream of new products. It is no different than the automobile industry in the 1920s introducing a choice of colors and annual model changes. The scramble for virtual reality and artificial intelligence is like the tail-fin era of cars in the 1960s. The economic logic of high-tech executives is to stay ahead of the competition: if we don't do it, somebody else will.

The second is the drive of scientists, engineers, and technicians to invent and improve. This is admirable in itself: the desire to discover something new, to move the frontier of knowledge. But harnessed to the capitalist imperative for maximizing profits, this drive is capable of eliminating their own occupations. Will scientists in the future be happy if autonomous computers make all the discoveries, which will be "known" only by other computers?

The dilemma is similar to that in the history of inventing weapons. The inventors of atomic bombs were driven by the fear that, if not us, somebody else will, and it might be our enemy. Even pacifists like Albert Einstein saw the military prospects of discoveries in atomic physics. This history (like Robert Oppenheimer's) makes one pessimistic about the future of AI combined with capitalists. Even if we can see it coming, does that make it impossible for us to avoid it?

What is to be done?

Better start doing your own thinking about it.

 

Related links:

Robocolleges, Artificial Intelligence, and the Dehumanization of Higher Education

The Growth of "RoboColleges" and "Robostudents"

The Higher Education Assembly Line

Academic Capitalism and the next phase of the College Meltdown

The Tragedy of Human Capital Theory in Higher Education

One Fascism or Two?: The Reemergence of "Fascism(s)" in US Higher Education

A People's History of Higher Education in the US?

Thursday, May 15, 2025

The Epic, Must-Read Coverage in New York Magazine (Derek Newton)


Issue 364

New York Magazine Goes All-In, And It’s Glorious

Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.

The headline is brutal and blunt:

Everyone Is Cheating Their Way Through College

To which I say — no kidding.

The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.

Lee and Cheating Everything

The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.

What I learned in this story is that, at Columbia, Lee:

by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.

And:

“Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.

The article says Lee’s admissions essay for Columbia was AI too.

So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.

Continuing from the story:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:

Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.

OpenAI/ChatGPT and Students

Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.

As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.

From the story:

“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

More:

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.

And:

After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

This really is where we are. These students are not outliers.

Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.

But let’s keep going:

now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?

Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.

Had enough? No? Here:

Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.

In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.

More on Wendy:

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.

Want more? There’s also Daniel, a computer science student at the University of Florida:

AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

When a tutor starts writing your paper for you and you turn that paper in for credit, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.

And the Teachers

Like the coverage of students, the article’s work with teachers is first-rate. And what the teachers have to say is not one inch less important. For example:

Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?

Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated on than high-stakes ones (see Issue 64). I don’t understand why professional educators don’t get this.

But returning to the topic:

After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.”

To read about Jollimore’s outstanding essay, see Issue 346.

And, of course, there’s more. As with the long section above, I regret copying so much of it, but it’s essential reading:

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.

More from Jollimore:

He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.

I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.

More on professors:

Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”

You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.
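To make the point concrete: once a trap phrase is planted in a prompt, checking submissions for it is trivial. Below is a minimal sketch in Python. The trap phrases are the ones named in the article; the folder name, file format, and the flag_submissions helper are hypothetical, for illustration only, not any particular instructor’s actual tooling.

```python
# Minimal sketch: scan a folder of submitted essays for planted "Trojan horse" phrases.
# Folder name, .txt format, and function name are hypothetical illustrations.
from pathlib import Path

# Trap phrases mentioned in the article (Santa Clara, Oklahoma, Jollimore's examples).
TRAP_PHRASES = ["broccoli", "mention Finland", "Dua Lipa", "Aristotle"]

def flag_submissions(folder: str) -> dict:
    """Return a mapping of essay filename -> trap phrases found in it."""
    flagged = {}
    for essay in Path(folder).glob("*.txt"):
        text = essay.read_text(encoding="utf-8").lower()
        hits = [phrase for phrase in TRAP_PHRASES if phrase.lower() in text]
        if hits:
            flagged[essay.name] = hits
    return flagged

if __name__ == "__main__":
    for name, hits in flag_submissions("submissions").items():
        print(f"{name}: contains {', '.join(hits)}")
```

A hit like this is not proof by itself, of course. It’s a reason to sit down with the student and ask questions, which is more than most of the teachers in this article were permitted to do.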

I should also apologize: some educators are engaged in the fight to protect and preserve the value of learning things. I feel that they are far too few and that, more often than not, they are alone in this. It’s depressing.

Odds and Ends

In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.

This is pretty great:

Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.

I thought this was interesting too:

Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.

I have a few things to say about this.

Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.

I think it’s true that most teachers believe AI detectors don’t work. It’s not just teachers; entire schools believe it. It’s an epic failure of messaging, an astonishing triumph of the misinformed. The truth is, as reported above, detectors vary. Some are great. Some are junk. But the good ones work. Most people still don’t believe it.

And I’ll point out once again that the “studies have shown” claim is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited relies on made-up citations and highly suspicious research, which I pointed out in 2023 (see Issue 216). Frankly, I’ve seen no good evidence to support the idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.

On the subject of junk AI detectors, there’s also this:

I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 percent AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.

This reflects a failure to understand how AI detection works. But it is also true that ZeroGPT does not work well. Again, it’s no wonder teachers think AI detection does not work.
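For what it’s worth, many detectors rest, at least in part, on a simple idea: text a language model finds highly predictable looks “AI-generated.” Here is a rough sketch of that perplexity heuristic. This is an assumption about the general family of approaches, not ZeroGPT’s actual method, and it uses GPT-2 purely as an illustrative scoring model.

```python
# Rough sketch of the perplexity heuristic behind many AI detectors (illustrative only).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable to the model = scored as more 'AI-like'."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))
```

Under a heuristic like this, the Book of Genesis scores as machine-like not because a machine wrote it but because it is some of the most reproduced, most predictable text in any model’s training data. A bad result on Genesis tells you essentially nothing about how a detector handles an ordinary student essay.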

Continuing:

It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.

Finally, I am not sure if I missed this at the time, but this is important too:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.

As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.

To wrap up: this is a show-stopper of an article, and I am very happy for the visibility it brings. I wish I could believe it will make a difference.