
Monday, February 10, 2025

We asked AI to tell on itself... (Glen McGhee)

This is what AI told us about its own limitations.  

1. Illusion of Learning: Teachers may perceive an illusion of progress due to improved homework scores when students use ChatGPT. A study in Turkey found that while homework scores increased with ChatGPT use, final exam performance actually decreased by 17% for students using unprompted ChatGPT[4] (that is, without guiding prompts, simply navigating the tool on their own). This suggests that teachers might observe apparent progress that doesn't translate to actual learning.

2. Overestimation of Knowledge: Students using ChatGPT may experience "Illusory Knowledge," where they believe they've learned more than they actually have[4]. This could lead teachers to overestimate class progress based on students' self-reported understanding. [[I have personally experienced this while teaching -- student heads nodding in sync with my points -- only to learn later that this was a student reflex to pacify the teacher. As a teacher, I find this befuddling.]]

3. Reduced Critical Thinking: Teachers might notice a decline in students' critical thinking and problem-solving abilities due to overreliance on ChatGPT[6]. This could create a false impression of progress in content knowledge while masking deficits in crucial cognitive skills.

4. Difficulty in Assessment: The use of ChatGPT poses challenges for teachers in accurately assessing student abilities, potentially leading to a skewed perception of class progress[1]. [[Yes!]]

It's important to note that research on this specific aspect of teacher perception is limited, and more studies are needed to fully understand the phenomenon of perceived illusory progress in classrooms using ChatGPT.

Citations:
[1] https://asiacall.info/proceedings/index.php/articles/article/download/81/64/757
[2] https://community.openai.com/t/chatgpt-4-and-the-illusion-of-learning-a-cautionary-tale-from-a-new-user/1009557
[3] https://rgsa.openaccesspublications.org/rgsa/article/download/8162/4017/29639
[4] https://www.oneusefulthing.org/p/post-apocalyptic-education
[5] https://huskiecommons.lib.niu.edu/cgi/viewcontent.cgi?article=8882&context=allgraduate-thesesdissertations
[6] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[7] https://www.researchgate.net/publication/382619812_Perceptions_of_pre-service_teachers_on_the_use_of_ChatGPT_during_their_training_and_the_future_implications_for_their_future_role_in_high_schools_A_research_in_progress
[8] https://www.ciocoverage.com/edtech-and-ai-in-education-the-illusion-of-progress/

Other possible negative effects of ChatGPT usage include:

1. Cognitive thinning: ChatGPT usage may lead to a reduction in students' cognitive efforts and critical thinking skills. Students who rely heavily on ChatGPT for academic tasks may experience a decline in their ability to think critically and solve problems independently[1][2].
[[ChatGPT encourages a decline in cognitive development.]]

2. Memory loss: Research has found a concerning link between ChatGPT usage and memory loss in students. As students rely on the AI to quickly answer questions, they may reduce their own cognitive efforts, resulting in poor memory retention[2][4].
[[Surprisingly, there are lots of studies on this. Memory loss in adults is linked to dementia. What about kids?]]

3. Procrastination: Students who use ChatGPT frequently are more likely to procrastinate on their academic tasks. This may be due to the ease with which they can obtain information or complete assignments using the AI tool[2][6].
[[Undermines the development of time management skills.]]

4. Declining academic performance: Studies have shown a correlation between ChatGPT usage and a drop in GPA. This suggests that while the tool may make schoolwork easier in the short term, it may negatively impact overall academic performance[2][4]. [[I don't get this. Failing students now have a proverbial army they can muster at a moment's notice -- unless (below) they are too poor to access the technology?]]

5. Reduced cognitive load: While ChatGPT can ease students' cognitive load during research tasks, this comes at the expense of developing stronger arguments and critical thinking skills. Students using ChatGPT tend to produce weaker arguments compared to those using traditional research methods like Google[8]. 

These effects suggest a "thinning out of developmental cognitive strands" in the classroom, as students may become overly reliant on AI tools at the expense of developing crucial cognitive and academic skills. This trend could potentially lead to a less intellectually robust learning environment if not properly managed.

Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://futurism.com/the-byte/chatgpt-memory-loss-procrastination
[3] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[4] https://evolutionnews.org/2024/03/chatgpt-linked-with-memory-loss-poor-academic-performance/
[5] https://blog.acceleratelearning.com/ai-in-education-the-impact-of-chatgpt
[6] https://www.psypost.org/chatgpt-linked-to-declining-academic-performance-and-memory-loss-in-new-study/
[7] https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/the-chatgpt-effect-in-higher-education.html
[8] https://www.psypost.org/study-finds-chatgpt-eases-students-cognitive-load-but-at-the-expense-of-critical-thinking/

Since the widespread adoption of AI in education, several effects on both teachers and students have worsened with respect to the online delivery of instruction and assessment:

1. Academic dishonesty: The use of ChatGPT for cheating has become more prevalent and sophisticated. Over 25% of teachers have caught students using ChatGPT to cheat, leading to increased time spent on checking for AI plagiarism[5].

2. Decline in critical thinking: Students' overreliance on ChatGPT has led to a more significant decrease in critical thinking and problem-solving skills. This effect is particularly concerning in subjects requiring complex analysis, such as literature, history, and philosophy[1][2].

3. Reduced creativity: The ease of generating content with ChatGPT has further diminished students' ability to express original ideas and perspectives, leading to more formulaic and less creative work[1][5].

4. Widening equity gaps: The introduction of AI tools has exacerbated existing inequalities, as not all students have equal access to or can afford AI tools like ChatGPT, creating an unfair advantage for those who do[1].

5. Increased cognitive offloading: Studies have shown a more pronounced decline in memory retention and independent learning as students increasingly rely on AI for information retrieval and task completion[1][2].

6. Compromised learning outcomes: The overuse of ChatGPT has led to a more significant negative impact on educational support and reduced social interaction between students and faculty, affecting the overall learning experience[2].

7. Challenges in assessment: Teachers are facing greater difficulties in accurately assessing students' knowledge and understanding due to the prevalence of AI-generated work[1][5].

8. Increased workload for teachers: Educators are spending more time revamping lesson plans to be "AI-proof" and developing new strategies to maintain academic integrity[5]. [[To say nothing about time wasted trying to catch cheaters and making the accusations stick? Why bother ....]]

9. Erosion of research skills: Students' ability to conduct independent research and critically evaluate sources has further deteriorated due to the convenience of AI-generated responses[2][4].

10. Perpetuation of biases: The increased use of ChatGPT has led to more instances of unintentional biases being reinforced in educational settings, as the AI may generate responses that reflect prejudices or imbalances present in its training data[2].

Regarding similar effects of online instructional delivery, even BEFORE the adoption of AI:
Many of the effects attributed to ChatGPT in education were already observed in online learning environments before the widespread adoption of AI, albeit to varying degrees:

1. Reduced human interaction: Online education has long grappled with the challenge of maintaining meaningful teacher-student and student-student interactions. This issue predates AI adoption, with concerns about social isolation and decreased engagement being prevalent in early online learning research[3][6].

2. Changes in assessment methods: The shift to online learning necessitated changes in assessment strategies to combat academic dishonesty, a challenge that existed before AI but has been exacerbated by it[1].

3. Time-saving for teachers: Learning Management Systems (LMS) and other educational technologies have been assisting teachers with tasks like grading and content delivery for years, though not to the extent that AI now promises[2].

4. Equity issues: The digital divide and unequal access to technology have been persistent challenges in online education, mirroring current concerns about AI accessibility[1][7].

5. Privacy concerns: Data privacy and security issues have been central to online education discussions long before AI entered the picture[1][4].

6. Potential for personalized learning: Adaptive learning systems, precursors to AI-driven personalization, have been a feature of online education for some time[5].

7. Risk of overreliance on technology: This concern has been present since the early days of educational technology integration, though AI intensifies it[1].

8. Impact on critical thinking skills: There have been ongoing debates about how online learning affects the development of critical thinking skills, a concern now amplified with AI[1][5].

9. Transformation of teaching roles: Online education has been gradually shifting the role of teachers from lecturers to facilitators, a trend that AI is accelerating[2][7].

10. Need for teacher training: The transition to online teaching has always required professional development, similar to the current need for AI-related training[8].

While many of these effects were present in online education before AI, the introduction of AI has generally intensified and accelerated these trends. The key difference lies in the scale, speed, and sophistication that AI brings to these existing challenges and opportunities in online education.

Citations:
[1] https://www.educationnext.org/a-i-in-education-leap-into-new-era-machine-intelligence-carries-risks-challenges-promises/
[2] https://www.waldenu.edu/programs/education/resource/five-pros-and-cons-of-ai-in-the-education-sector
[3] https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools--pros-and-cons
[4] https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC9513244/
[6] https://openedx.org/blog/what-is-the-impact-of-ai-in-the-online-learning-space/
[7] https://www.onlineeducation.com/features/ai-in-distance-learning
[8] https://er.educause.edu/articles/2024/1/will-our-educational-system-keep-pace-with-ai-a-students-perspective-on-ai-and-learning

ChatGPT has introduced several negative changes to the role of teachers in the classroom:

1. Increased workload: Teachers are spending more time checking for AI plagiarism and revamping lesson plans to be "AI-proof"[4]. This additional burden takes away from time that could be spent on more meaningful educational activities.

2. Diminished teacher-student interaction: The use of ChatGPT for educational support can reduce opportunities for face-to-face interactions between students and faculty, potentially affecting the learning experience and social aspects of education[1].

3. Challenges in assessment: Teachers face difficulties in accurately assessing students' knowledge and understanding when AI tools are used to complete assignments[2]. This complicates the evaluation process and may lead to inaccurate assessments of student progress.

4. Shift in pedagogical focus: Educators are forced to adapt their teaching methods to counteract the negative effects of AI tools, potentially moving away from traditional teaching approaches that may still hold value[4].

5. Ethical dilemmas: Teachers must navigate new ethical challenges related to AI use in education, including issues of academic integrity and equitable access to technology[2][3].

6. Potential overreliance on AI: There is a risk that teachers might start relying on AI tools before developing in-depth knowledge about teaching and classroom management, potentially compromising the quality of education[7].

7. Reduced emphasis on critical thinking: As AI tools can provide quick answers, teachers may find it challenging to encourage students to engage in deep, critical thinking processes[1][2].

8. Perpetuation of biases: Teachers must be vigilant about the potential biases in AI-generated content, which could inadvertently reinforce stereotypes or prejudices in the classroom[1][3].

These changes highlight the complex challenges teachers face in integrating AI tools like ChatGPT into education while maintaining the quality and integrity of the learning experience.

Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://www.techbusinessnews.com.au/blog/chatgpt-may-lead-to-the-downfall-of-eduction-and-critical-thinking/
[3] https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education/
[4] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[5] https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/
[6] https://blogs.worldbank.org/en/education/how-use-chatgpt-support-teachers-good-bad-and-ugly
[7] https://www.tc.columbia.edu/articles/2023/january/navigating-the-risks-and-rewards-of-chatgpt/

Friday, November 1, 2024

Student Newspaper Promotes Cheating Services for Cash (Derek Newton)

The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:

Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors

So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.

The article is absent a byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:

This post is provided by a third party who may receive compensation from the products or services they mention.

Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”

We covered similar antics before, in Issue 204.

Did not mean to rhyme. Though, I do it all the time.

Anyway, seeing cheating services in a student newspaper feels new, and doubly problematic — not only increasing the digital credibility of companies that sell deception and misconduct, but perhaps actually reaching target customers. It’s not ideal.

I did e-mail The Daily to ask about the article/advertisement and where they thought their duties sat related to integrity and fraud. They have not replied, and the article is still up.

That article is what you may expect. It starts:

Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.

You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”

Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always check your paper through an AI-detector, they advise. Then, “it should be converted to human-like content.”

The article continues:

Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.

And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:

  • If the text is generated or paraphrased with AI models are most likely that AI plagiarised.

  • If you write the content using custom LLMs with advanced prompts are less liked AI-generated.

  • When you copied word-to-word content from other AI writers.

  • Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism. 

Ah, what’s that again?

Following that, the piece offers step-by-step advice to remove AI content, directing readers to AI detectors, then pasting the flagged content into a different software and:

Click the “Humanize” button.

The suggested software, the article says:

produces human content for you.

First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.

And, the article goes on:

If you successfully removed AI-generated content with [company redacted], you can use it.

Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.

Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).

Continuing:

Quillbot can accurately rewrite any AI-generated content into human-like content

Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.

As long as the checks keep coming - amiright?

After listing other step-by-step ways to get around AI detectors, the article says:

If you use a good service, you can definitely transform AI-generated content into human-like content.

By that, they mean not getting caught cheating.

None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.

Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection was useless, there would be no market at all for these fingerprint erasure services. 

This article first appeared at Derek Newton's The Cheat Sheet.  

Tuesday, August 12, 2025

Stanford, Princeton, and MIT Among Top U.S. Universities Driving Global AI Research (Studocu)


  • U.S. leads in global AI research with 232,000+ publications in four years, followed by China (217,000) and the UK (109,000).

  • Stanford leads overall U.S. output with 12,019 AI publications; Princeton tops per-student rankings.

  • MIT and Johns Hopkins achieve some of the highest global citation scores, showing far-reaching research impact.

Artificial intelligence is no longer niche. AI is reshaping industries from healthcare and finance to the creative arts. To pinpoint where the most influential AI research is emerging, Studocu identified the top global universities for computer science and analyzed their academic output.

The study analyzed the top 500 computer science institutions worldwide and cross-referenced them with the Semantic Scholar database to see which schools have been researching AI the most in recent years. It assessed the total number of peer-reviewed AI papers each institution published and how many times those papers were cited in other studies to reveal which institutions are driving AI research.

Global Highlights

  • United States: 232,000+ AI-related publications in four years.

  • China: 217,000+ publications.

  • United Kingdom: 109,000+ publications.

  • Australia: 92,000+ publications.

The findings reveal that AI research is concentrated in a core of research powerhouses, with the United States firmly in first place. U.S. institutions published over 232,000 AI-related articles in the past four years, with China closely following at 217,000 publications.


Top 10 Global Universities for AI Research

The table below ranks the leading institutions using a weighted score that factors in computer science rankings, citation impact, total publications, and per-student output. The United States leads in AI research, taking seven of the top ten spots.


While U.S. universities dominate the list, each has distinct strengths:

  • Stanford University – Leads in total output with 12,019 AI publications and maintains a world-class Computer Science score of 93.76.

  • Princeton University – Outperforms all others on a per-student basis, with 1.406 publications per enrolled student, showing exceptional research focus relative to size.

  • MIT and Johns Hopkins University – Both excel in citation impact, with over 14,500 citations each from a sample of AI papers, reflecting global influence and relevance.

Dr. Clare Walsh, Director of Education at the Institute of Analytics, provided the following advice for those considering a career in AI, whether in academia or the professional world.

“There are many different roles in AI but it is not easy to break into higher salary AI jobs without suitable training. While there are a number of ‘tools’ on resumes which can help you get ahead, the soft skills are not optional. In general, we recommend anyone working with AI to have minimal ethics training and an understanding of the different technologies. In fact, some of the biggest AI research centers have a PhD as a minimum job entry requirement.”

ENDS

About Studocu:

StuDocu is a student-to-student knowledge exchange platform where students can share knowledge, college notes, and study guides.

Methodology

The Times Higher Education World University Rankings for Computer Science was used as a seedlist of the top 500 schools for computer science, which provided the overall score for the Computer Science ranking.
Using the Semantic Scholar API, we filtered AI-related research from universities on the seedlist.
To identify AI research, results were filtered for keywords such as “artificial intelligence,” “machine learning,” “LLM,” “generative AI,” and “NLP.” Data was normalized for university size to calculate publications per enrolled student.
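
As an illustration only, here is a minimal sketch of that kind of keyword filtering against the public Semantic Scholar search API. The query terms come from the methodology above, but the code is an assumption about the general approach, not Studocu's actual pipeline; it omits pagination, rate limiting, and matching papers to universities.

```python
# Illustrative only: keyword filtering against the public Semantic Scholar
# search API. An assumption about the general approach, not Studocu's
# pipeline; pagination, rate limits, and mapping papers to universities
# are all omitted.
import requests

KEYWORDS = ["artificial intelligence", "machine learning", "LLM",
            "generative AI", "NLP"]

def search_ai_papers(keyword, limit=100):
    """Return one page of papers matching a single keyword."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": keyword, "fields": "title,year,citationCount",
                "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

papers = [paper for kw in KEYWORDS for paper in search_ai_papers(kw)]
print(f"Fetched {len(papers)} papers across {len(KEYWORDS)} keywords")
```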

The final ranking was based on a weighted index (a rough sketch of how the weights combine appears after the list):

  • Computer Science average ranking (75%)

  • Citation score (15%)

  • Number of AI publications (5%)

  • AI publications per student (5%)
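
A minimal sketch of how those four weighted components might combine into a single score, assuming each metric has first been normalized to a common 0-100 scale (the field names and example values below are hypothetical):

```python
# A sketch of the weighted index described above. The weights come from the
# stated methodology; field names and example values are hypothetical.

WEIGHTS = {
    "cs_score": 0.75,        # Computer Science average ranking
    "citation_score": 0.15,  # Citation score
    "publications": 0.05,    # Number of AI publications
    "per_student": 0.05,     # AI publications per student
}

def weighted_index(metrics):
    """Combine metrics (each pre-normalized to a 0-100 scale) into one score."""
    return sum(weight * metrics[name] for name, weight in WEIGHTS.items())

# Hypothetical normalized scores for a single university:
example = {"cs_score": 93.76, "citation_score": 88.0,
           "publications": 95.0, "per_student": 70.0}
print(round(weighted_index(example), 2))  # -> 91.77
```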

Limitations: The analysis was based on the available data in the Semantic Scholar database, and keyword filtering may omit relevant work. The citation score was based on a sample of only 200 peer-reviewed AI papers.

Tuesday, July 15, 2025

Genio Launches AI Policy Template as Student AI Usage Surges to 92% While Institutional Guidance Lags

Free template addresses critical gap as nine in 10 students worry about breaking AI rules due to unclear institutional policies

CLEARWATER, Fla., July 15, 2025—Genio, a provider of learning tools that significantly improve student success, today announced the release of a comprehensive AI Policy Template to help higher education institutions develop clear, consistent guidelines for artificial intelligence use in academic settings. The launch comes as new data reveals that student AI adoption has surged to 92% in 2025, up dramatically from 66% in 2024, while 91% of learners remain concerned about inadvertently violating institutional rules due to ambiguous or inconsistent AI guidance.


The free template addresses a critical challenge facing higher education: the disconnect between rapidly increasing student AI usage and institutional preparedness. Despite nearly universal student adoption of AI tools, the vast majority of learners report that current guidance from their institutions is either lacking, unclear, or applied inconsistently across departments and courses.


The absence of clear, institution-wide AI policies leaves both educators and students uncertain about appropriate usage, creating potential for academic misconduct. Additionally, without standardized guidelines, different departments and instructors apply varying standards, creating confusion and potential unfairness. And, looking ahead, the lack of structured AI guidance fails to equip learners with the responsible AI skills increasingly essential in today's job market.


The AI Policy Template provides institutions with a downloadable document that education professionals can use to create their own policies, including:


  • Clear frameworks for defining appropriate AI tool usage across academic disciplines;

  • Guidelines for maintaining academic integrity while embracing technological advancement;

  • Strategies for reducing the digital divide rather than expanding it;

  • Protections for both educators and learners through consistent policy application; and

  • Structured approaches to teaching responsible AI usage as a critical workforce skill.


The template emphasizes what Genio calls "productive friction," a balanced approach that neither prohibits AI use nor allows unrestricted access, but instead creates structured guidelines that support learning while maintaining academic rigor. This approach recognizes that AI literacy is becoming as essential as traditional digital literacy skills.

For institutions facing the dual pressures of enrollment challenges and the need to prepare students for an AI-integrated workforce, the template offers a practical starting point for policy development. The resource is designed to be adapted to individual institutional needs, policies, and applicable legal requirements.


“AI has fundamentally changed how students approach their academic studies, but institutions have been caught off-guard by the speed of adoption,” said Josh Nesbitt, CTO at Genio. “Students want to use AI responsibly, but they need clear guidance on what this really means in practice. Our template provides institutions with a framework that maintains academic integrity while preparing students for an AI-driven workforce. The goal isn’t to prohibit AI use, but to create productive friction that ensures students develop critical thinking skills alongside AI fluency.”


The AI Policy Template is available for immediate download. The template is provided for informational and educational purposes only and does not constitute legal advice. Institutions should adapt the template in accordance with their own policies, protocols, and applicable laws.


About Genio

Genio (formerly Glean) creates beautifully simple learning tools that boost knowledge, skills, and confidence. Trusted by over 900 higher education institutions globally, our products support individuals of all abilities to learn more effectively. Best known for our lecture note taking tool, Genio now offers presentation rehearsal support, study skills courses, and more.


For more information on Genio, visit https://genio.co

Tuesday, January 21, 2025

Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)


This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.
Starting the case.
The coverage reports that:
all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 
Personally, when I see that four members of the faculty unanimously agreed that his work was not authentic, I am out. I trust teachers.
I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.
Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.
Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.
Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.
Denied it.
The article continues that Yang, the student:
denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 
Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.
Does anyone actually read those things?
Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”
With friends like these.
Also interesting is that, according to the coverage:
His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 
That would be a fair point except that the next paragraph is:
Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 
That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and thinks that Google Scholar and auto-complete are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.
That guy says:
“I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”
Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how good professor Dowd would know.
As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.
You know what I am going to say.
To me, that entire Dowd diversion is mostly funny.
More evidence.
Back on track, we get even more detail, such as that the exam in question was:
an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 
Exam graders argued the AI use was obvious enough. Yang disagrees. 
Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 
So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.
I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.
Then there’s even more evidence:
Yang also objects to professors using AI detection software to make their case at the November hearing.  
He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 
“Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  
First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
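For readers who do want the arithmetic, here is a back-of-the-envelope sketch; every number in it is a hypothetical illustration, not a figure from the case.

```python
# Back-of-the-envelope: what a "99% accurate" detector implies for honest
# students. Every number here is a hypothetical illustration, not a figure
# from the Minnesota case.

def expected_false_flags(num_submissions, honest_share, false_positive_rate):
    """Expected count of honestly written submissions wrongly flagged."""
    return num_submissions * honest_share * false_positive_rate

# Suppose 1,000 exam submissions, 90% written without AI, and a detector
# that wrongly flags 1% of human-written text (the flip side of "99%").
print(expected_false_flags(1000, 0.90, 0.01))  # -> 9.0 honest students flagged
```

A handful of honest students falsely flagged per thousand is precisely why detector output should be confirmation, not conviction.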
A pattern.
Then we get this, buried in the news coverage:
Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  
He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 
In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.
She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 
I’m sorry — what now?
Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:
“re write it, make it more casual, like a foreign student write but no ai.” 
He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?
For that one where he left in the prompts to ChatGPT:
the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 
Yang was warned, in writing.
If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to past hand-written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.
On the other hand, he did say he did not do it.
Findings, review.
Further:
But the range of evidence was sufficient for the U of M. In the final ruling, the panel — composed of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.
Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:
“PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 
Slow clap.
And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.
Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.
MPR, spare me.
Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.
Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:
Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 
“As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 
We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.
That’s just bad journalism.
And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:
it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 
Absolute joke.
I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.
MPR also quotes a student who says:
she and many other students live in fear of AI detection software.  
“AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 
Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.
Two final bits.
The MPR story shares:
In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 
Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.
The MPR article wraps up that Yang:
found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career. 
Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.
And finally:
“Probably I should think to do something, selling potatoes on the streets or something else,” he said. 
Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.
(Editors note: This article first appeared at Derek Newton's The Cheat Sheet.)