Higher Education Without Illusions
Sunday, December 14, 2025
Why the AI Bubble Is Actually a $60T Black Hole (GEN)
Tuesday, September 16, 2025
Should Elites Get Bailed Out Again?
In 1929, when the stock market crashed, millions of Americans were plunged into unemployment, hunger, and despair. Yet the elites of Wall Street—whose reckless speculation fueled the disaster—often landed softly. By 1933, as the Great Depression deepened, nearly a quarter of the U.S. workforce was unemployed, thousands of banks had failed, and working families bore the brunt of the collapse. Ordinary people endured soup lines, Dust Bowl migration, and generational poverty. The government of Franklin D. Roosevelt eventually stepped in with reforms and safeguards like the FDIC and Glass-Steagall, but not before working-class Americans had paid the heaviest price.
Fast forward to 2008, when the global financial system once again teetered on collapse. This time, instead of letting the failures run their course, the U.S. government rushed to bail out Wall Street banks, auto manufacturers, and other corporate giants deemed “too big to fail.” Banks survived, CEOs kept their bonuses, and investors were shielded. Meanwhile, millions of working-class families lost their homes, jobs, and savings. Student loan borrowers, particularly those from working-class and minority backgrounds, never got a bailout. Adjunct faculty, contract workers, and gig laborers were left to navigate economic insecurity without systemic relief.
The pandemic brought the same story in a new form. Corporate bailouts, Federal Reserve interventions, and stimulus packages stabilized markets far more effectively than they stabilized households. Wall Street bounced back faster than Main Street. By 2021, the wealth of America’s billionaires had surged by more than $1.8 trillion, while ordinary workers struggled with eviction threats, childcare crises, and medical debt.
But the stakes are even higher today. U.S. elites are not only repeating past mistakes—they are doubling down on mass speculation across Artificial Intelligence, crypto, real estate, and equity markets. The rise and collapse of speculative cryptocurrencies revealed how wealth can be created and destroyed almost overnight, with everyday investors bearing the losses while venture capitalists and insiders cashed out early. Real estate speculation has driven housing prices beyond the reach of millions of working families, fueling homelessness and displacement. Equity markets, inflated by cheap debt and stock buybacks, have become disconnected from the real economy, rewarding executives while leaving workers behind.
This speculative frenzy is not just an economic issue—it is an environmental one. Artificial Intelligence requires vast data centers that consume enormous amounts of electricity and water. Fossil fuel corporations and their financiers continue to reap profits from industries that accelerate climate change, deforestation, and resource depletion. The destruction of ecosystems, the intensification of climate disasters, and the burden of environmental cleanup all fall disproportionately on working-class and marginalized communities. Yet when markets wobble, it is these same polluting elites who position themselves first in line for government protection.
The Federal Reserve has played a decisive role in this cycle. By keeping interest rates artificially low for years, it fueled debt-driven speculation in housing, equities, and corporate borrowing. When inflation spiked, the Fed shifted gears, raising rates at the fastest pace in decades. This brought pain to households through higher mortgage costs, rising credit card balances, and job insecurity—but banks and investment firms continued to receive lifelines through emergency lending facilities. The Fed’s interventions have too often prioritized elite stability over working-class survival.
Political leadership has compounded the problem. During Donald Trump's first term, deregulation accelerated, with key provisions of the Dodd-Frank Act rolled back in 2018. Banks gained greater leeway to take risks, and oversight of mid-sized institutions weakened—a decision that later contributed to the collapse of Silicon Valley Bank in 2023. Trump's tax cuts overwhelmingly favored corporations and the wealthy, further concentrating wealth at the top while leaving the federal government less able to respond to future crises. In his second term, Trump and his allies have signaled that they will pressure the Fed to prioritize markets over workers and strip down remaining regulatory guardrails.
The logic of endless bailouts assumes that the survival of elites ensures the survival of the economy. But history proves otherwise. Whether in 1929, 2008, or 2020, the repeated subsidization of corporations and financial elites entrenches inequality, fuels reckless risk-taking, and leaves working families with the bill. The banks, crypto funds, and private equity firms that profit most during boom times rarely share their gains, yet they demand protection in busts.
And the problem is no longer just domestic—it is geopolitical. While U.S. elites depend on bailouts, rival powers are recalibrating. China is building alternative banking systems through the Asian Infrastructure Investment Bank and the Belt and Road Initiative. Russia, sanctioned by the West, is tightening its economic ties with China and other non-Western states. India and Brazil, key players in the BRICS bloc, are exploring alternatives to U.S. dollar dominance. If the U.S. continues to subsidize private failure with public money, it risks undermining its own global credibility and ceding economic leadership to rivals.
National security is directly tied to economic and environmental stability. A U.S. that repeatedly bails out elites while leaving ordinary citizens vulnerable erodes trust not only at home but abroad. Allies may question American leadership, while adversaries see opportunity in its fragility. If the U.S. financial system is perceived as permanently rigged—propping up elites while disempowering its workforce—it will accelerate the shift of global influence toward China, Russia, India, and Brazil.
Perhaps it’s time to let the system fail—not in the sense of mass suffering for ordinary people, but in the sense of refusing to cushion elites from the consequences of their own decisions. If banks gamble recklessly, let them face bankruptcy. If private equity firms strip-mine industries, let them collapse under their own weight. If universities chase speculative growth with predatory lending and overpriced credentials, let them answer for it in the courts of law and public opinion.
Failure, though painful, can also be cleansing. Without bailouts, institutions would be forced to reckon with structural flaws instead of papering them over. Alternatives could emerge: community-based credit unions, worker-owned cooperatives, public higher education funded for the public good rather than private profit, and serious investment in green energy and sustainable development.
The real question is not whether elites deserve another bailout. The real question is whether the United States can afford to keep subsidizing them while undermining its working class, its environment, and its national security. For too long, workers, students, and families have shouldered the costs of elite failure. The survival of the U.S. economy—and its place in the world—may depend not on saving elites, but on building something stronger and fairer in their place.
Sources:
- Congressional Budget Office, The 2008 Financial Crisis and Federal Response
- Federal Deposit Insurance Corporation, Bank Failures During the Great Depression
- Institute for Policy Studies, Billionaire Wealth Surge During COVID-19
- Federal Reserve, Monetary Policy and Emergency Lending Facilities
- Brookings Institution, Bailouts and Moral Hazard
- BRICS Policy Center, Alternative Financial Governance Structures
- Intergovernmental Panel on Climate Change (IPCC), Climate Change 2023 Synthesis Report
- National Association of Realtors, Housing Affordability Data
- Public Law 115-174, Economic Growth, Regulatory Relief, and Consumer Protection Act (2018)
Friday, August 15, 2025
The Rise of Ghost Students: AI-Fueled Fraud in Higher Education
Colleges across the United States are facing an alarming increase in "ghost students"—fraudulent applicants who infiltrate online enrollment systems, collect financial aid, and vanish before delivering any academic engagement. The problem, fueled by advances in artificial intelligence and weaknesses in identity verification processes, is undermining trust, misdirecting resources, and placing real students at risk.
What Is a Ghost Student?
A ghost student is not simply someone who drops out. These are fully fabricated identities—sometimes based on stolen personal information, sometimes entirely synthetic—created to fraudulently enroll in colleges. Fraudsters use AI tools to generate admissions essays, forge transcripts, and even produce deepfake images and videos for identity verification.
Once enrolled, ghost students typically sign up for online courses, complete minimal coursework to stay active long enough to qualify for financial aid, and then disappear once funds are disbursed.
Scope and Impact
The scale of the problem is significant and growing:
- California community colleges flagged approximately 460,000 suspicious applications in a single year—nearly 20% of the total—resulting in more than $11 million in fraudulent aid disbursements.
- The College of Southern Nevada reported losing $7.4 million to ghost student fraud in one semester.
- At Century College in Minnesota, instructors discovered that roughly 15% of students in a single course were fake enrollees.
- California's overall community college system reported over $13 million in financial aid losses in a single year due to such schemes—a 74% increase from the previous year.
The consequences extend beyond financial loss. Course seats are blocked from legitimate students. Faculty spend hours identifying and reporting ghost students. Institutional data becomes unreliable. Most importantly, public trust in higher education systems is eroded.
Why Now?
Several developments have enabled this rise in fraud:
- The shift to online learning during the pandemic decreased opportunities for in-person identity verification.
- AI tools—such as large language models, AI voice generators, and synthetic video platforms—allow fraudsters to create highly convincing fake identities at scale.
- Open-access policies at many institutions, particularly community colleges, allow applications to be submitted with minimal verification.
- Budget cuts and staff shortages have left many colleges without the resources to identify and remove fake students in a timely manner.
How Institutions Are Responding
Colleges and universities are implementing multiple strategies to fight back:
Identity Verification Tools
Some institutions now require government-issued IDs matched with biometric verification—such as real-time selfies with liveness detection—to confirm applicants' identities.
Faculty-Led Screening
Instructors are being encouraged to require early student engagement via Zoom, video introductions, or synchronous activities to confirm that enrolled students are real individuals.
Policy and Federal Support
The U.S. Department of Education will soon require live ID verification for flagged FAFSA applicants. Some states, such as California, are considering application fees or more robust identity checks at the enrollment stage.
AI-Driven Pattern Detection
Tools like LightLeap.AI and ID.me are helping institutions track unusual behaviors such as duplicate IP addresses, linguistic patterns, and inconsistent documentation to detect fraud attempts.
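The vendors named above do not publish their detection logic, but the duplicate-IP signal the article mentions is simple to illustrate. The following is a minimal, hypothetical sketch (field names, thresholds, and the `flag_suspicious_ips` helper are all assumptions for illustration, not any vendor's actual method): it flags IP addresses associated with an unusually large number of distinct applicant identities, a crude proxy for bulk fake enrollments.

```python
from collections import defaultdict

def flag_suspicious_ips(applications, max_identities_per_ip=3):
    """Return the set of IP addresses used by more than
    `max_identities_per_ip` distinct applicant identities.
    An identity here is a (name, date-of-birth) pair."""
    identities_by_ip = defaultdict(set)
    for app in applications:
        identities_by_ip[app["ip"]].add((app["name"], app["dob"]))
    return {
        ip
        for ip, identities in identities_by_ip.items()
        if len(identities) > max_identities_per_ip
    }

# Toy data: four distinct "applicants" sharing one IP is suspicious;
# a single applicant on their own IP is not.
apps = [
    {"name": "A. Lee",    "dob": "2001-02-03", "ip": "203.0.113.7"},
    {"name": "B. Cruz",   "dob": "1999-11-12", "ip": "203.0.113.7"},
    {"name": "C. Park",   "dob": "2000-05-21", "ip": "203.0.113.7"},
    {"name": "D. Nguyen", "dob": "1998-08-30", "ip": "203.0.113.7"},
    {"name": "E. Smith",  "dob": "2002-01-15", "ip": "198.51.100.4"},
]
print(flag_suspicious_ips(apps))  # flags the shared IP only
```

Real systems combine many such signals—device fingerprints, login timing, linguistic patterns in essays—and weigh them with statistical or machine-learning models rather than a single hard threshold.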
Recommendations for HEIs
To mitigate the risk of ghost student infiltration, higher education institutions should:
- Implement digital identity verification systems before enrollment or aid disbursement.
- Train faculty and staff to recognize and report suspicious activity early in the semester.
- Deploy AI tools to detect patterns in application and login data.
- Foster collaboration across institutions to share data on emerging fraud trends.
- Communicate transparently with students about new verification procedures and the reasons behind them.
Why It Matters
Ghost student fraud is more than a financial threat—it is a systemic risk to educational access, operational efficiency, and institutional credibility. With AI-enabled fraud growing in sophistication, higher education must act decisively to safeguard the integrity of enrollment, instruction, and student support systems.
Sources
- CrossHackers: https://www.crosshackers.com/fake-enrollments
- SF Chronicle: https://www.sfchronicle.com/bayarea/article/community-college-financial-aid-fraud-20325192.php
- SFGate: https://www.sfgate.com/bayarea/article/ghost-students-creating-problem-calif-colleges-20311708.php
- Eye on Yavapai College: https://www.eyeonyavapaicollege.com/administration/fraudsters-attacking-colleges-using-ghost-online-enrollment-schemes
- Star Tribune: https://www.startribune.com/ghost-students-the-new-enrollment-fraud-scheme-minnesota-two-year-colleges-are-fighting/601329524
- University Business: https://universitybusiness.com/what-can-institutions-do-about-the-rise-of-ghost-students
- JD Supra: https://www.jdsupra.com/legalnews/steps-colleges-should-take-to-bust-3949980
- Get VerifiNow: https://www.getverifinow.com/the-ghost-student-problem
- Intellicheck: https://www.intellicheck.com/resource-library/the-rise-in-ghost-students-how-fraudsters-use-fake-identities
Sunday, July 6, 2025
College grad unemployment surges as employers replace new hires with AI (CBS News)
Monday, June 30, 2025
Will Maximus and Its Subsidiary Aidvantage See Cuts?
Maximus Inc., the parent company of federal student loan servicer Aidvantage, is facing growing financial and existential threats as the Trump administration completes a radical budget proposal that would slash Medicaid by hundreds of billions of dollars and cut the U.S. Department of Education in half. These proposed changes could gut the very federal contracts that have fueled Maximus's revenue and investor confidence over the last two decades. Once seen as a steady player in the outsourcing of public services, Maximus now stands at the edge of a political and technological cliff.
The proposed Trump budget includes a plan to eliminate the Office of Federal Student Aid and transfer the $1.6 trillion federal student loan portfolio to the Small Business Administration. This proposed restructuring would remove Aidvantage and other servicers from their current roles, replacing them with yet-unnamed alternatives. While Maximus has profited enormously from servicing loans through Aidvantage—one of the major federal loan servicers—it is unclear whether the company has any role in this new Trump-led student loan regime. The SBA, which lacks experience managing consumer lending and repayment infrastructure, could subcontract to politically favored firms or simply allow artificial intelligence to replace human collectors altogether.
This possibility is not far-fetched. A 2023 study by Yale Insights explored how AI systems are already outperforming human debt collectors in efficiency, compliance, and scalability. The report examined the growing use of bots to handle borrower communication, account resolution, and payment tracking. These developments could render Maximus’s human-heavy servicing model obsolete. If the federal government shifts toward automated collection, it could bypass Maximus entirely, either through privatized tech-driven firms or through internal platforms that require fewer labor-intensive contracts.
On the health and human services side of the business, Maximus is also exposed. The company has long served as a contractor for Medicaid programs across several states, managing call centers and eligibility support. But with Medicaid facing potentially devastating cuts in the proposed Trump budget, Maximus’s largest and most stable contracts could disappear. The company’s TES-RCM division has already shown signs of unraveling, with anonymous reports suggesting a steep drop-off in clients and the departure of long-time employees. One insider claimed, “Customers are dropping like flies as are longtime employees. Not enough people to do the little work we have.”
Remote Maximus employees are also reporting layoffs and instability, particularly in Iowa, where 34 remote workers were terminated after two decades of contract work on state Medicaid programs. Anxiety is spreading across internal forums and layoff boards, as workers fear they may soon be out of a job in a shrinking and increasingly automated industry. Posts on TheLayoff.com and in investor forums indicate growing unease about the company’s long-term viability, particularly in light of the federal budget priorities now taking shape in Washington.
While Maximus stock (MMS) continues to trade with relative strength and still appears profitable on paper, it is increasingly reliant on government spending that may no longer exist under a Trump administration intent on dismantling large parts of the federal bureaucracy. If student loan servicing is eliminated, transferred, or automated, and Medicaid contracts dry up due to funding cuts, Maximus could lose two of its biggest revenue streams in a matter of months. The company’s contract with the Department of Education, once seen as a long-term asset, may become a political liability in a system being restructured to reward loyalty and reduce regulatory oversight.
The question now is not whether Maximus will be forced to downsize—it already is—but whether it will remain a relevant player in the new federal landscape at all. As artificial intelligence, austerity, and ideological realignment converge, Maximus may be remembered less for its dominance and more for how quickly it became unnecessary.
The Higher Education Inquirer will continue tracking developments affecting federal student loan servicers, government contractors, and the broader collapse of the administrative state.
Monday, June 2, 2025
“The Obsolete Man”: A Twilight Zone Warning for the Trump Era and the Age of AI
Rod Serling’s classic 1961 episode of The Twilight Zone, “The Obsolete Man,” offers a timeless meditation on authoritarianism, conformity, and the erasure of humanity. In it, a quiet librarian, Romney Wordsworth (played by Burgess Meredith), is deemed “obsolete” by a dystopian state for believing in books and God—symbols of individual thought and spiritual meaning. Condemned by a totalitarian chancellor and scheduled for execution, Wordsworth calmly exposes the cruelty and contradictions of the regime, ultimately reclaiming his dignity by refusing to bow to tyranny.
Over 60 years later, “The Obsolete Man” feels less like fiction and more like a documentary. The Trump era, supercharged by the rise of artificial intelligence and a war on truth, has brought Serling’s chilling parable into sharper focus.
The Authoritarian Impulse
Donald Trump’s presidency—and his ongoing influence—has been marked by a deep antagonism toward democratic institutions, intellectual life, and perceived “elites.” Journalists were labeled “enemies of the people.” Scientists and educators were dismissed or silenced. Books were banned in schools and libraries, and curricula were stripped of “controversial” topics like systemic racism or gender identity.
Like the chancellor in The Obsolete Man, Trump and his allies seek not just to discredit dissenters but to erase their very legitimacy. In this worldview, librarians, teachers, and independent thinkers are expendable. What matters is loyalty to the regime, conformity to its ideology, and performance of power.
Wordsworth’s crime—being a librarian and a believer—is mirrored in real-life purges of professionals deemed out of step with a hardline political agenda. Public educators and college faculty who challenge reactionary narratives have been targeted by state legislatures, right-wing activists, and billionaire-backed think tanks. In higher education, departments of the humanities are being defunded or eliminated entirely. Faculty governance is undermined. The university, once a space for critical inquiry, is increasingly treated as an instrument for ideological control—or as a business to be stripped for parts.
The Age of AI and the Erasure of the Human
While authoritarianism silences the human spirit, artificial intelligence threatens to replace it. AI tools, now embedded in everything from hiring algorithms to classroom assessments, are reshaping how knowledge is produced, disseminated, and controlled. In the rush to adopt these technologies, questions about ethics, bias, and human purpose are often sidelined.
AI systems do not “believe” in anything. They do not feel awe, doubt, or moral anguish. They calculate, replicate, and optimize. In the hands of authoritarian regimes or profit-driven institutions, AI becomes a tool not of liberation, but of surveillance, censorship, and disposability. Workers are replaced. Students are reduced to data points. Librarians—like Wordsworth—are no longer needed in a world where books are digitized and curated by opaque algorithms.
This is not merely a future problem. It's here. Algorithms already determine who gets hired, who receives financial aid, and which students are flagged as “at risk.” Predictive policing, automated grading, and AI-generated textbooks are not the stuff of science fiction. They are reality. And those who question their fairness or legitimacy risk being labeled as backwards, inefficient—obsolete.
A Culture of Disposability
At the heart of “The Obsolete Man” is a question about value: Who decides what is worth keeping? In Trump’s America and in the AI-driven economy, people are judged by their utility to the system. If you're not producing profit, performing loyalty, or conforming to power, you can be cast aside.
This is especially true for the working class, contingent academics, and the so-called “educated underclass”—a growing population of debt-laden degree holders trapped in precarious jobs or no jobs at all. Their degrees are now questioned, their labor devalued, and their futures uncertain. They are told that if they can’t “pivot” or “reskill” for the next technological shift, they too may be obsolete.
The echoes of The Twilight Zone are deafening.
Resistance and Redemption
Yet, as Wordsworth demonstrates in his final moments, resistance is possible. Dignity lies in refusing to surrender the soul to the machine—or the regime. In his quiet defiance, Wordsworth forces the chancellor to confront his own cowardice, exposing the hollow cruelty of the system.
In our time, that resistance takes many forms: educators who continue to teach truth despite political pressure; librarians who fight book bans; whistleblowers who challenge surveillance technologies; and students who organize for justice. These acts of courage and conscience remind us that obsolescence is not a matter of utility—it’s a judgment imposed by those in power, and it can be rejected.
Rod Serling ended his episode with a reminder: “Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man—that state is obsolete.”
The question now is whether we will heed the warning. In an age where authoritarianism and AI threaten to render us all obsolete, will we remember what it means to be human?
The Higher Education Inquirer welcomes responses and reflections on how pop culture can illuminate our present crises. Contact us with your thoughts or your own essay proposals.
Monday, February 10, 2025
We asked AI to tell on itself... (Glen McGhee)
This is what AI told us about its own limitations.
1. Illusion of Learning: Teachers may perceive an illusion of progress due to improved homework scores when students use ChatGPT. A study in Turkey found that while homework scores increased with ChatGPT use, final exam performance actually decreased by 17% for students using unprompted ChatGPT[4] (that is, without guiding prompts, simply navigating it on their own). This suggests that teachers might observe apparent progress that doesn't translate to actual learning.
2. Overestimation of Knowledge: Students using ChatGPT may experience "Illusory Knowledge," where they believe they've learned more than they actually have[4]. This could lead teachers to overestimate class progress based on students' self-reported understanding. [[I have personally experienced this while teaching -- student head nods in sync with my points -- only to learn later that this was a student reflex to pacify the teacher. As a teacher, I find this befuddling.]]
3. Reduced Critical Thinking: Teachers might notice a decline in students' critical thinking and problem-solving abilities due to overreliance on ChatGPT[6]. This could create a false impression of progress in content knowledge while masking deficits in crucial cognitive skills.
4. Difficulty in Assessment: The use of ChatGPT poses challenges for teachers in accurately assessing student abilities, potentially leading to a skewed perception of class progress[1]. [[Yes!]]
It's important to note that research on this specific aspect of teacher perception is limited, and more studies are needed to fully understand the phenomenon of perceived illusory progress in classrooms using ChatGPT.
Citations:
[1] https://asiacall.info/proceedings/index.php/articles/article/download/81/64/757
[2] https://community.openai.com/t/chatgpt-4-and-the-illusion-of-learning-a-cautionary-tale-from-a-new-user/1009557
[3] https://rgsa.openaccesspublications.org/rgsa/article/download/8162/4017/29639
[4] https://www.oneusefulthing.org/p/post-apocalyptic-education
[5] https://huskiecommons.lib.niu.edu/cgi/viewcontent.cgi?article=8882&context=allgraduate-thesesdissertations
[6] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[7] https://www.researchgate.net/publication/382619812_Perceptions_of_pre-service_teachers_on_the_use_of_ChatGPT_during_their_training_and_the_future_implications_for_their_future_role_in_high_schools_A_research_in_progress
[8] https://www.ciocoverage.com/edtech-and-ai-in-education-the-illusion-of-progress/
Other possible negative effects of ChatGPT usage include:
1. Cognitive thinning: ChatGPT usage may lead to a reduction in students' cognitive efforts and critical thinking skills. Students who rely heavily on ChatGPT for academic tasks may experience a decline in their ability to think critically and solve problems independently[1][2].
[[ChatGPT encourages a decline in cognitive development.]]
2. Memory loss: Research has found a concerning link between ChatGPT usage and memory loss in students. As students rely on the AI to quickly answer questions, they may reduce their own cognitive efforts, resulting in poor memory retention[2][4].
[[Surprisingly, there are lots of studies on this. Memory loss in adults is linked to dementia. What about kids?]]
3. Procrastination: Students who use ChatGPT frequently are more likely to procrastinate on their academic tasks. This may be due to the ease with which they can obtain information or complete assignments using the AI tool[2][6].
[[Undermines the development of time management skills.]]
4. Declining academic performance: Studies have shown a correlation between ChatGPT usage and a drop in GPA. This suggests that while the tool may make schoolwork easier in the short term, it may negatively impact overall academic performance[2][4]. [[I don't get this. Failing students now have a proverbial army they can muster at a moment's notice -- unless (below) they are too poor to access the technology?]]
5. Reduced cognitive load: While ChatGPT can ease students' cognitive load during research tasks, this comes at the expense of developing stronger arguments and critical thinking skills. Students using ChatGPT tend to produce weaker arguments compared to those using traditional research methods like Google[8].
These effects suggest a "thinning out of developmental cognitive strands" in the classroom, as students may become overly reliant on AI tools at the expense of developing crucial cognitive and academic skills. This trend could potentially lead to a less intellectually robust learning environment if not properly managed.
Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://futurism.com/the-byte/chatgpt-memory-loss-procrastination
[3] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[4] https://evolutionnews.org/2024/03/chatgpt-linked-with-memory-loss-poor-academic-performance/
[5] https://blog.acceleratelearning.com/ai-in-education-the-impact-of-chatgpt
[6] https://www.psypost.org/chatgpt-linked-to-declining-academic-performance-and-memory-loss-in-new-study/
[7] https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/the-chatgpt-effect-in-higher-education.html
[8] https://www.psypost.org/study-finds-chatgpt-eases-students-cognitive-load-but-at-the-expense-of-critical-thinking/
Since the widespread adoption of AI in education, several effects on both teachers and students have worsened in the online delivery of instruction and assessment:
6. Compromised learning outcomes: The overuse of ChatGPT has led to a more significant negative impact on educational support and reduced social interaction between students and faculty, affecting the overall learning experience[2].
7. Challenges in assessment: Teachers are facing greater difficulties in accurately assessing students' knowledge and understanding due to the prevalence of AI-generated work[1][5].
8. Increased workload for teachers: Educators are spending more time revamping lesson plans to be "AI-proof" and developing new strategies to maintain academic integrity[5]. [[To say nothing about time wasted trying to catch cheaters and making the accusations stick? Why bother ....]]
9. Erosion of research skills: Students' ability to conduct independent research and critically evaluate sources has further deteriorated due to the convenience of AI-generated responses[2][4].
10. Perpetuation of biases: The increased use of ChatGPT has led to more instances of unintentional biases being reinforced in educational settings, as the AI may generate responses that reflect prejudices or imbalances present in its training data[2].
Regarding similar effects of online instructional delivery, even before the adoption of AI: many of the effects attributed to ChatGPT in education were already observed in online learning environments, albeit to varying degrees:
1. Reduced human interaction: Online education has long grappled with the challenge of maintaining meaningful teacher-student and student-student interactions. This issue predates AI adoption, with concerns about social isolation and decreased engagement being prevalent in early online learning research[3][6].
2. Changes in assessment methods: The shift to online learning necessitated changes in assessment strategies to combat academic dishonesty, a challenge that existed before AI but has been exacerbated by it[1].
3. Time-saving for teachers: Learning Management Systems (LMS) and other educational technologies have been assisting teachers with tasks like grading and content delivery for years, though not to the extent that AI now promises[2].
4. Equity issues: The digital divide and unequal access to technology have been persistent challenges in online education, mirroring current concerns about AI accessibility[1][7].
5. Privacy concerns: Data privacy and security issues have been central to online education discussions long before AI entered the picture[1][4].
6. Potential for personalized learning: Adaptive learning systems, precursors to AI-driven personalization, have been a feature of online education for some time[5].
7. Risk of overreliance on technology: This concern has been present since the early days of educational technology integration, though AI intensifies it[1].
8. Impact on critical thinking skills: There have been ongoing debates about how online learning affects the development of critical thinking skills, a concern now amplified with AI[1][5].
9. Transformation of teaching roles: Online education has been gradually shifting the role of teachers from lecturers to facilitators, a trend that AI is accelerating[2][7].
10. Need for teacher training: The transition to online teaching has always required professional development, similar to the current need for AI-related training[8].
While many of these effects were present in online education before AI, the introduction of AI has generally intensified and accelerated these trends. The key difference lies in the scale, speed, and sophistication that AI brings to these existing challenges and opportunities in online education.
Citations:
[1] https://www.educationnext.org/a-i-in-education-leap-into-new-era-machine-intelligence-carries-risks-challenges-promises/
[2] https://www.waldenu.edu/programs/education/resource/five-pros-and-cons-of-ai-in-the-education-sector
[3] https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools--pros-and-cons
[4] https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC9513244/
[6] https://openedx.org/blog/what-is-the-impact-of-ai-in-the-online-learning-space/
[7] https://www.onlineeducation.com/features/ai-in-distance-learning
[8] https://er.educause.edu/articles/2024/1/will-our-educational-system-keep-pace-with-ai-a-students-perspective-on-ai-and-learning
ChatGPT has introduced several negative changes to the role of teachers in the classroom:
1. Increased workload: Teachers are spending more time checking for AI plagiarism and revamping lesson plans to be "AI-proof"[4]. This additional burden takes away from time that could be spent on more meaningful educational activities.
2. Diminished teacher-student interaction: The use of ChatGPT for educational support can reduce opportunities for face-to-face interactions between students and faculty, potentially affecting the learning experience and social aspects of education[1].
3. Challenges in assessment: Teachers face difficulties in accurately assessing students' knowledge and understanding when AI tools are used to complete assignments[2]. This complicates the evaluation process and may lead to inaccurate assessments of student progress.
4. Shift in pedagogical focus: Educators are forced to adapt their teaching methods to counteract the negative effects of AI tools, potentially moving away from traditional teaching approaches that may still hold value[4].
5. Ethical dilemmas: Teachers must navigate new ethical challenges related to AI use in education, including issues of academic integrity and equitable access to technology[2][3].
6. Potential overreliance on AI: There is a risk that teachers might start relying on AI tools before developing in-depth knowledge about teaching and classroom management, potentially compromising the quality of education[7].
7. Reduced emphasis on critical thinking: As AI tools can provide quick answers, teachers may find it challenging to encourage students to engage in deep, critical thinking processes[1][2].
8. Perpetuation of biases: Teachers must be vigilant about the potential biases in AI-generated content, which could inadvertently reinforce stereotypes or prejudices in the classroom[1][3].
These changes highlight the complex challenges teachers face in integrating AI tools like ChatGPT into education while maintaining the quality and integrity of the learning experience.
Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://www.techbusinessnews.com.au/blog/chatgpt-may-lead-to-the-downfall-of-eduction-and-critical-thinking/
[3] https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education/
[4] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[5] https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/
[6] https://blogs.worldbank.org/en/education/how-use-chatgpt-support-teachers-good-bad-and-ugly
[7] https://www.tc.columbia.edu/articles/2023/january/navigating-the-risks-and-rewards-of-chatgpt/
Monday, January 27, 2025
Tuesday, January 21, 2025
Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)
This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.
all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software.
denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect.
His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.”
Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable.
“I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”
an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. Exam graders argued the AI use was obvious enough. Yang disagrees. Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples.
Yang also objects to professors using AI detection software to make their case at the November hearing. He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. “Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.
Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored into his expulsion. He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor in the letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.” She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English. She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote.
“re write it, make it more casual, like a foreign student write but no ai.”
the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations.
But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.
“PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.”
Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. “As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades.
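Yang's question about what a claimed 99% accuracy rate actually means is, at bottom, a base-rate problem. A quick back-of-envelope sketch (the figures below are hypothetical illustrations, not numbers from the case) shows that even a highly accurate detector can flag mostly honest students when few students actually use AI:

```python
# Base-rate sketch with hypothetical numbers: how many flagged papers
# actually used AI, given a detector's hit rate and false-positive rate?
def flagged_precision(cheat_rate, sensitivity=0.99, false_positive_rate=0.01):
    """Return the share of flagged papers that genuinely used AI,
    assuming 10,000 submitted papers."""
    ai_papers = 10_000 * cheat_rate
    honest_papers = 10_000 - ai_papers
    true_flags = ai_papers * sensitivity            # real AI papers caught
    false_flags = honest_papers * false_positive_rate  # honest papers accused
    return true_flags / (true_flags + false_flags)

# If only 1% of students actually use AI, half of all flags hit honest work:
print(f"{flagged_precision(0.01):.0%}")  # 50%
# Even at a 5% cheating rate, roughly one flag in six is a false accusation:
print(f"{flagged_precision(0.05):.0%}")  # 84%
```

The point of the sketch: "99% accurate" says nothing about what fraction of *accusations* are correct; that depends on how rare cheating is among the papers being screened.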
it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions.
she and many other students live in fear of AI detection software. “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said.
In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus.
found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career.
“Probably I should think to do something, selling potatoes on the streets or something else,” he said.
Sunday, December 22, 2024
Friday, November 1, 2024
Student Newspaper Promotes Cheating Services for Cash (Derek Newton)
The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:
Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors
So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.
The article is absent a byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:
This post is provided by a third party who may receive compensation from the products or services they mention.
Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”
We covered similar antics before, in Issue 204.
Did not mean to rhyme. Though, I do it all the time.
Anyway, seeing cheating services in a student newspaper feels new, and doubly problematic — not only increasing the digital credibility of companies that sell deception and misconduct, but perhaps actually reaching target customers. It’s not ideal.
I did e-mail The Daily to ask about the article/advertisement and where they thought their duties sat related to integrity and fraud. They have not replied, and the article is still up.
That article is what you may expect. It starts:
Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.
You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”
Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always check your paper through an AI-detector, they advise. Then, “it should be converted to human-like content.”
The article continues:
Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.
And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:
If the text is generated or paraphrased with AI models are most likely that AI plagiarised.
If you write the content using custom LLMs with advanced prompts are less liked AI-generated.
When you copied word-to-word content from other AI writers.
Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism.
Ah, what’s that again?
Following that, the piece offers step-by-step advice to remove AI content, directing readers to AI detectors, then pasting the flagged content into a different software and:
Click the “Humanize” button.
The suggested software, the article says:
produces human content for you.
First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.
And, the article goes on:
If you successfully removed AI-generated content with [company redacted], you can use it.
Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.
Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).
Continuing:
Quillbot can accurately rewrite any AI-generated content into human-like content
Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.
As long as the checks keep coming - amiright?
After listing other step-by-step ways to get around AI detectors, the article says:
If you use a good service, you can definitely transform AI-generated content into human-like content.
By that, they mean not getting caught cheating.
None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.
Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection were useless, there would be no market at all for these fingerprint-erasure services.
This article first appeared at Derek Newton's The Cheat Sheet.