Sunday, July 6, 2025
College grad unemployment surges as employers replace new hires with AI (CBS News)
Monday, June 30, 2025
Will Maximus and Its Subsidiary Aidvantage See Cuts?
Maximus Inc., the parent company of federal student loan servicer Aidvantage, is facing growing financial and existential threats as the Trump administration completes a radical budget proposal that would slash Medicaid by hundreds of billions of dollars and cut the U.S. Department of Education in half. These proposed changes could gut the very federal contracts that have fueled Maximus's revenue and investor confidence over the last two decades. Once seen as a steady player in the outsourcing of public services, Maximus now stands at the edge of a political and technological cliff.
The proposed Trump budget includes a plan to eliminate the Office of Federal Student Aid and transfer the $1.6 trillion federal student loan portfolio to the Small Business Administration. This proposed restructuring would remove Aidvantage and other servicers from their current roles, replacing them with yet-unnamed alternatives. While Maximus has profited enormously from servicing loans through Aidvantage—one of the major federal loan servicers—it is unclear whether the company has any role in this new Trump-led student loan regime. The SBA, which lacks experience managing consumer lending and repayment infrastructure, could subcontract to politically favored firms or simply allow artificial intelligence to replace human collectors altogether.
This possibility is not far-fetched. A 2023 study by Yale Insights explored how AI systems are already outperforming human debt collectors in efficiency, compliance, and scalability. The report examined the growing use of bots to handle borrower communication, account resolution, and payment tracking. These developments could render Maximus’s human-heavy servicing model obsolete. If the federal government shifts toward automated collection, it could bypass Maximus entirely, either through privatized tech-driven firms or through internal platforms that require fewer labor-intensive contracts.
On the health and human services side of the business, Maximus is also exposed. The company has long served as a contractor for Medicaid programs across several states, managing call centers and eligibility support. But with Medicaid facing potentially devastating cuts in the proposed Trump budget, Maximus’s largest and most stable contracts could disappear. The company’s TES-RCM division has already shown signs of unraveling, with anonymous reports suggesting a steep drop-off in clients and the departure of long-time employees. One insider claimed, “Customers are dropping like flies as are longtime employees. Not enough people to do the little work we have.”
Remote Maximus employees are also reporting layoffs and instability, particularly in Iowa, where 34 remote workers were terminated after two decades of contract work on state Medicaid programs. Anxiety is spreading across internal forums and layoff boards, as workers fear they may soon be out of a job in a shrinking and increasingly automated industry. Posts on TheLayoff.com and in investor forums indicate growing unease about the company’s long-term viability, particularly in light of the federal budget priorities now taking shape in Washington.
While Maximus stock (MMS) continues to trade with relative strength and still appears profitable on paper, it is increasingly reliant on government spending that may no longer exist under a Trump administration intent on dismantling large parts of the federal bureaucracy. If student loan servicing is eliminated, transferred, or automated, and Medicaid contracts dry up due to funding cuts, Maximus could lose two of its biggest revenue streams in a matter of months. The company’s contract with the Department of Education, once seen as a long-term asset, may become a political liability in a system being restructured to reward loyalty and reduce regulatory oversight.
The question now is not whether Maximus will be forced to downsize—it already is—but whether it will remain a relevant player in the new federal landscape at all. As artificial intelligence, austerity, and ideological realignment converge, Maximus may be remembered less for its dominance and more for how quickly it became unnecessary.
The Higher Education Inquirer will continue tracking developments affecting federal student loan servicers, government contractors, and the broader collapse of the administrative state.
Monday, June 2, 2025
“The Obsolete Man”: A Twilight Zone Warning for the Trump Era and the Age of AI
Rod Serling’s classic 1961 episode of The Twilight Zone, “The Obsolete Man,” offers a timeless meditation on authoritarianism, conformity, and the erasure of humanity. In it, a quiet librarian, Romney Wordsworth (played by Burgess Meredith), is deemed “obsolete” by a dystopian state for believing in books and God—symbols of individual thought and spiritual meaning. Condemned by a totalitarian chancellor and scheduled for execution, Wordsworth calmly exposes the cruelty and contradictions of the regime, ultimately reclaiming his dignity by refusing to bow to tyranny.
Over 60 years later, “The Obsolete Man” feels less like fiction and more like a documentary. The Trump era, supercharged by the rise of artificial intelligence and a war on truth, has brought Serling’s chilling parable into sharper focus.
The Authoritarian Impulse
Donald Trump’s presidency—and his ongoing influence—has been marked by a deep antagonism toward democratic institutions, intellectual life, and perceived “elites.” Journalists were labeled “enemies of the people.” Scientists and educators were dismissed or silenced. Books were banned in schools and libraries, and curricula were stripped of “controversial” topics like systemic racism or gender identity.
Like the chancellor in The Obsolete Man, Trump and his allies seek not just to discredit dissenters but to erase their very legitimacy. In this worldview, librarians, teachers, and independent thinkers are expendable. What matters is loyalty to the regime, conformity to its ideology, and performance of power.
Wordsworth’s crime—being a librarian and a believer—is mirrored in real-life purges of professionals deemed out of step with a hardline political agenda. Public educators and college faculty who challenge reactionary narratives have been targeted by state legislatures, right-wing activists, and billionaire-backed think tanks. In higher education, departments of the humanities are being defunded or eliminated entirely. Faculty governance is undermined. The university, once a space for critical inquiry, is increasingly treated as an instrument for ideological control—or as a business to be stripped for parts.
The Age of AI and the Erasure of the Human
While authoritarianism silences the human spirit, artificial intelligence threatens to replace it. AI tools, now embedded in everything from hiring algorithms to classroom assessments, are reshaping how knowledge is produced, disseminated, and controlled. In the rush to adopt these technologies, questions about ethics, bias, and human purpose are often sidelined.
AI systems do not “believe” in anything. They do not feel awe, doubt, or moral anguish. They calculate, replicate, and optimize. In the hands of authoritarian regimes or profit-driven institutions, AI becomes a tool not of liberation, but of surveillance, censorship, and disposability. Workers are replaced. Students are reduced to data points. Librarians—like Wordsworth—are no longer needed in a world where books are digitized and curated by opaque algorithms.
This is not merely a future problem. It's here. Algorithms already determine who gets hired, who receives financial aid, and which students are flagged as “at risk.” Predictive policing, automated grading, and AI-generated textbooks are not the stuff of science fiction. They are reality. And those who question their fairness or legitimacy risk being labeled as backwards, inefficient—obsolete.
A Culture of Disposability
At the heart of “The Obsolete Man” is a question about value: Who decides what is worth keeping? In Trump’s America and in the AI-driven economy, people are judged by their utility to the system. If you're not producing profit, performing loyalty, or conforming to power, you can be cast aside.
This is especially true for the working class, contingent academics, and the so-called “educated underclass”—a growing population of debt-laden degree holders trapped in precarious jobs or no jobs at all. Their degrees are now questioned, their labor devalued, and their futures uncertain. They are told that if they can’t “pivot” or “reskill” for the next technological shift, they too may be obsolete.
The echoes of The Twilight Zone are deafening.
Resistance and Redemption
Yet, as Wordsworth demonstrates in his final moments, resistance is possible. Dignity lies in refusing to surrender the soul to the machine—or the regime. In his quiet defiance, Wordsworth forces the chancellor to confront his own cowardice, exposing the hollow cruelty of the system.
In our time, that resistance takes many forms: educators who continue to teach truth despite political pressure; librarians who fight book bans; whistleblowers who challenge surveillance technologies; and students who organize for justice. These acts of courage and conscience remind us that obsolescence is not a matter of utility—it’s a judgment imposed by those in power, and it can be rejected.
Rod Serling ended his episode with a reminder: “Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man—that state is obsolete.”
The question now is whether we will heed the warning. In an age where authoritarianism and AI threaten to render us all obsolete, will we remember what it means to be human?
The Higher Education Inquirer welcomes responses and reflections on how pop culture can illuminate our present crises. Contact us with your thoughts or your own essay proposals.
Monday, February 10, 2025
We asked AI to tell on itself... (Glen McGhee)
This is what AI told us about its own limitations.
1. Illusion of Learning: Teachers may perceive an illusion of progress due to improved homework scores when students use ChatGPT. A study in Turkey found that while homework scores increased with ChatGPT use, final exam performance actually decreased by 17% for students using unprompted ChatGPT[4] (that is, without guiding prompts, simply navigating it on their own). This suggests that teachers might observe apparent progress that doesn't translate to actual learning.
2. Overestimation of Knowledge: Students using ChatGPT may experience "Illusory Knowledge," where they believe they've learned more than they actually have[4]. This could lead teachers to overestimate class progress based on students' self-reported understanding. [[I have personally experienced this while teaching -- student heads nodding in sync with my points -- only to learn later that this was a student reflex to pacify the teacher. As a teacher, I find this befuddling.]]
3. Reduced Critical Thinking: Teachers might notice a decline in students' critical thinking and problem-solving abilities due to overreliance on ChatGPT[6]. This could create a false impression of progress in content knowledge while masking deficits in crucial cognitive skills.
4. Difficulty in Assessment: The use of ChatGPT poses challenges for teachers in accurately assessing student abilities, potentially leading to a skewed perception of class progress[1]. [[Yes!]]
It's important to note that research on this specific aspect of teacher perception is limited, and more studies are needed to fully understand the phenomenon of perceived illusory progress in classrooms using ChatGPT.
Citations:
[1] https://asiacall.info/proceedings/index.php/articles/article/download/81/64/757
[2] https://community.openai.com/t/chatgpt-4-and-the-illusion-of-learning-a-cautionary-tale-from-a-new-user/1009557
[3] https://rgsa.openaccesspublications.org/rgsa/article/download/8162/4017/29639
[4] https://www.oneusefulthing.org/p/post-apocalyptic-education
[5] https://huskiecommons.lib.niu.edu/cgi/viewcontent.cgi?article=8882&context=allgraduate-thesesdissertations
[6] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[7] https://www.researchgate.net/publication/382619812_Perceptions_of_pre-service_teachers_on_the_use_of_ChatGPT_during_their_training_and_the_future_implications_for_their_future_role_in_high_schools_A_research_in_progress
[8] https://www.ciocoverage.com/edtech-and-ai-in-education-the-illusion-of-progress/
Other possible negative effects of ChatGPT usage include:
1. Cognitive thinning: ChatGPT usage may lead to a reduction in students' cognitive efforts and critical thinking skills. Students who rely heavily on ChatGPT for academic tasks may experience a decline in their ability to think critically and solve problems independently[1][2].
[[ChatGPT encourages a decline in cognitive development.]]
2. Memory loss: Research has found a concerning link between ChatGPT usage and memory loss in students. As students rely on the AI to quickly answer questions, they may reduce their own cognitive efforts, resulting in poor memory retention[2][4].
[[Surprisingly, there are lots of studies on this. Memory loss in adults is linked to dementia. What about kids?]]
3. Procrastination: Students who use ChatGPT frequently are more likely to procrastinate on their academic tasks. This may be due to the ease with which they can obtain information or complete assignments using the AI tool[2][6].
[[Undermines the development of time management skills.]]
4. Declining academic performance: Studies have shown a correlation between ChatGPT usage and a drop in GPA. This suggests that while the tool may make schoolwork easier in the short term, it may negatively impact overall academic performance[2][4]. [[I don't get this. Failing students now have a proverbial army they can muster at a moment's notice -- unless (below) they are too poor to access the technology?]]
5. Reduced cognitive load: While ChatGPT can ease students' cognitive load during research tasks, this comes at the expense of developing stronger arguments and critical thinking skills. Students using ChatGPT tend to produce weaker arguments compared to those using traditional research methods like Google[8].
These effects suggest a "thinning out of developmental cognitive strands" in the classroom, as students may become overly reliant on AI tools at the expense of developing crucial cognitive and academic skills. This trend could potentially lead to a less intellectually robust learning environment if not properly managed.
Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://futurism.com/the-byte/chatgpt-memory-loss-procrastination
[3] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[4] https://evolutionnews.org/2024/03/chatgpt-linked-with-memory-loss-poor-academic-performance/
[5] https://blog.acceleratelearning.com/ai-in-education-the-impact-of-chatgpt
[6] https://www.psypost.org/chatgpt-linked-to-declining-academic-performance-and-memory-loss-in-new-study/
[7] https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/the-chatgpt-effect-in-higher-education.html
[8] https://www.psypost.org/study-finds-chatgpt-eases-students-cognitive-load-but-at-the-expense-of-critical-thinking/
Since the widespread adoption of AI in education, several effects on both teachers and students have worsened with respect to the online delivery of instruction and assessment:
6. Compromised learning outcomes: The overuse of ChatGPT has led to a more significant negative impact on educational support and reduced social interaction between students and faculty, affecting the overall learning experience[2].
7. Challenges in assessment: Teachers are facing greater difficulties in accurately assessing students' knowledge and understanding due to the prevalence of AI-generated work[1][5].
8. Increased workload for teachers: Educators are spending more time revamping lesson plans to be "AI-proof" and developing new strategies to maintain academic integrity[5]. [[To say nothing about time wasted trying to catch cheaters and making the accusations stick? Why bother ....]]
9. Erosion of research skills: Students' ability to conduct independent research and critically evaluate sources has further deteriorated due to the convenience of AI-generated responses[2][4].
10. Perpetuation of biases: The increased use of ChatGPT has led to more instances of unintentional biases being reinforced in educational settings, as the AI may generate responses that reflect prejudices or imbalances present in its training data[2].
Many of the effects now attributed to ChatGPT in education were already observed in online learning environments before the widespread adoption of AI, albeit to varying degrees:
1. Reduced human interaction: Online education has long grappled with the challenge of maintaining meaningful teacher-student and student-student interactions. This issue predates AI adoption, with concerns about social isolation and decreased engagement being prevalent in early online learning research[3][6].
2. Changes in assessment methods: The shift to online learning necessitated changes in assessment strategies to combat academic dishonesty, a challenge that existed before AI but has been exacerbated by it[1].
3. Time-saving for teachers: Learning Management Systems (LMS) and other educational technologies have been assisting teachers with tasks like grading and content delivery for years, though not to the extent that AI now promises[2].
4. Equity issues: The digital divide and unequal access to technology have been persistent challenges in online education, mirroring current concerns about AI accessibility[1][7].
5. Privacy concerns: Data privacy and security issues have been central to online education discussions long before AI entered the picture[1][4].
6. Potential for personalized learning: Adaptive learning systems, precursors to AI-driven personalization, have been a feature of online education for some time[5].
7. Risk of overreliance on technology: This concern has been present since the early days of educational technology integration, though AI intensifies it[1].
8. Impact on critical thinking skills: There have been ongoing debates about how online learning affects the development of critical thinking skills, a concern now amplified with AI[1][5].
9. Transformation of teaching roles: Online education has been gradually shifting the role of teachers from lecturers to facilitators, a trend that AI is accelerating[2][7].
10. Need for teacher training: The transition to online teaching has always required professional development, similar to the current need for AI-related training[8].
While many of these effects were present in online education before AI, the introduction of AI has generally intensified and accelerated these trends. The key difference lies in the scale, speed, and sophistication that AI brings to these existing challenges and opportunities in online education.
Citations:
[1] https://www.educationnext.org/a-i-in-education-leap-into-new-era-machine-intelligence-carries-risks-challenges-promises/
[2] https://www.waldenu.edu/programs/education/resource/five-pros-and-cons-of-ai-in-the-education-sector
[3] https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools--pros-and-cons
[4] https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC9513244/
[6] https://openedx.org/blog/what-is-the-impact-of-ai-in-the-online-learning-space/
[7] https://www.onlineeducation.com/features/ai-in-distance-learning
[8] https://er.educause.edu/articles/2024/1/will-our-educational-system-keep-pace-with-ai-a-students-perspective-on-ai-and-learning
ChatGPT has introduced several negative changes to the role of teachers in the classroom:
1. Increased workload: Teachers are spending more time checking for AI plagiarism and revamping lesson plans to be "AI-proof"[4]. This additional burden takes away from time that could be spent on more meaningful educational activities.
2. Diminished teacher-student interaction: The use of ChatGPT for educational support can reduce opportunities for face-to-face interactions between students and faculty, potentially affecting the learning experience and social aspects of education[1].
3. Challenges in assessment: Teachers face difficulties in accurately assessing students' knowledge and understanding when AI tools are used to complete assignments[2]. This complicates the evaluation process and may lead to inaccurate assessments of student progress.
4. Shift in pedagogical focus: Educators are forced to adapt their teaching methods to counteract the negative effects of AI tools, potentially moving away from traditional teaching approaches that may still hold value[4].
5. Ethical dilemmas: Teachers must navigate new ethical challenges related to AI use in education, including issues of academic integrity and equitable access to technology[2][3].
6. Potential overreliance on AI: There is a risk that teachers might start relying on AI tools before developing in-depth knowledge about teaching and classroom management, potentially compromising the quality of education[7].
7. Reduced emphasis on critical thinking: As AI tools can provide quick answers, teachers may find it challenging to encourage students to engage in deep, critical thinking processes[1][2].
8. Perpetuation of biases: Teachers must be vigilant about the potential biases in AI-generated content, which could inadvertently reinforce stereotypes or prejudices in the classroom[1][3].
These changes highlight the complex challenges teachers face in integrating AI tools like ChatGPT into education while maintaining the quality and integrity of the learning experience.
Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://www.techbusinessnews.com.au/blog/chatgpt-may-lead-to-the-downfall-of-eduction-and-critical-thinking/
[3] https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education/
[4] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[5] https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/
[6] https://blogs.worldbank.org/en/education/how-use-chatgpt-support-teachers-good-bad-and-ugly
[7] https://www.tc.columbia.edu/articles/2023/january/navigating-the-risks-and-rewards-of-chatgpt/
Tuesday, January 21, 2025
Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)
This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.
All four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software.
Yang denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect.
His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.”
Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable.
“I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered,” Dowd said.
At issue was an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. Exam graders argued the AI use was obvious enough. Yang disagrees.

Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples.
Yang also objects to professors using AI detection software to make their case at the November hearing. He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another.

“Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.
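Yang’s point about a hypothetical 99% accuracy rate comes down to base-rate arithmetic: even a small false-positive rate, applied across thousands of honest submissions, implicates many innocent writers. A minimal sketch, with purely illustrative numbers (none taken from the case):

```python
# Illustrative base-rate arithmetic for AI-detector accusations.
# All figures below are hypothetical assumptions, not data from the case.

honest_submissions = 10_000   # essays written entirely by students
false_positive_rate = 0.01    # a "99% accurate" detector still flags 1% of honest work

false_flags = honest_submissions * false_positive_rate
print(f"Honest students falsely flagged: {false_flags:.0f}")
```

At that scale, a detector that is wrong just one time in a hundred would falsely accuse roughly a hundred students, which is why a headline “accuracy rate” says little about the risk any individual writer faces.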
Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored into his expulsion. He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor in the letters explaining the decision to expel Yang or rejecting his appeal.

But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.” She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English. She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote.
The Office of Community Standards sent Yang a letter saying the case was dropped but that it might be taken into consideration in any future violations.
But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.
“PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.”
Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing.“As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades.
Rege said it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments that are hard for AI to complete — like personal reflections, project-based learning, and oral presentations — or by integrating AI into the instructions.
One student said she and many other students live in fear of AI detection software. “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said.
In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus.
After his expulsion, Yang found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career.
“Probably I should think to do something, selling potatoes on the streets or something else,” he said.
Friday, November 1, 2024
Student Newspaper Promotes Cheating Services for Cash (Derek Newton)
The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:
Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors
So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.
The article is absent a byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:
This post is provided by a third party who may receive compensation from the products or services they mention.
Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”
We covered similar antics before, in Issue 204.
Did not mean to rhyme. Though, I do it all the time.
Anyway, seeing cheating services in a student newspaper feels new, and doubly problematic — not only increasing the digital credibility of companies that sell deception and misconduct, but perhaps actually reaching target customers. It’s not ideal.
I did e-mail The Daily to ask about the article/advertisement and where they thought their duties sat related to integrity and fraud. They have not replied, and the article is still up.
That article is what you may expect. It starts:
Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.
You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”
Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always check your paper through an AI-detector, they advise. Then, “it should be converted to human-like content.”
The article continues:
Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.
And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:
If the text is generated or paraphrased with AI models are most likely that AI plagiarised.
If you write the content using custom LLMs with advanced prompts are less liked AI-generated.
When you copied word-to-word content from other AI writers.
Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism.
Ah, what’s that again?
Following that, the piece offers step-by-step advice to remove AI content, directing readers to AI detectors, then to paste the flagged content into different software and:
Click the “Humanize” button.
The suggested software, the article says:
produces human content for you.
First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.
And, the article goes on:
If you successfully removed AI-generated content with [company redacted], you can use it.
Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.
Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).
Continuing:
Quillbot can accurately rewrite any AI-generated content into human-like content
Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.
As long as the checks keep coming - amiright?
After listing other step-by-step ways to get around AI detectors, the article says:
If you use a good service, you can definitely transform AI-generated content into human-like content.
By that, they mean not getting caught cheating.
None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.
Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection were useless, there would be no market at all for these fingerprint erasure services.
This article first appeared at Derek Newton's The Cheat Sheet.
Thursday, October 10, 2024
Labor, Big Tech, and A.I.: The Big Picture (CUNY School of Labor and Urban Studies)
1:00pm - 2:30pm
Lunch will be served. Free and open to all.
25 West 43rd Street, 18th floor, New York, NY 10036
*In-person* only in Midtown Manhattan.
REGISTER:
https://slucuny.swoogo.com/30October2024/register
Join us for a conversation with Alex N. Press, staff writer at Jacobin magazine and Edward Ongweso Jr., senior researcher at Security in Context and a co-host of the podcast This Machine Kills; moderated by New Labor Forum Editor-at-Large Micah Uetricht.
The discussion will address major issues confronting the labor movement with the development and use of artificial intelligence, surveillance, automation of work generally, and the rise of Big Tech’s control over large segments of the U.S. workforce. This conversation is the first in what will be an ongoing series focusing on the impact of Big Tech and AI on the labor movement and strategies for organizing to build worker power.
Presented in collaboration with New Labor Forum (NLF), this program connects to the fall 2024 issue of NLF, which features the special section, “Labor and the Uncertain Future of Artificial Intelligence,” and includes the article, “How the U.S. Labor Movement Is Confronting A.I.,” by Alex N. Press.
Speaker Bios:
Edward Ongweso Jr. is a senior researcher at Security in Context and a co-host of This Machine Kills, a podcast about the political economy of technology. His work has appeared in The Guardian, Baffler, Logic(s), Nation, Dissent, Vice, and elsewhere.
Alex N. Press is a staff writer at Jacobin magazine. Her writing has appeared in New Labor Forum, the New York Times, the Washington Post, and the Nation, among other places, and she is currently writing her first book, What We Will: How American Labor Woke Up.
Micah Uetricht is Editor-at-Large of New Labor Forum, a national labor journal produced by the Murphy Institute at CUNY School of Labor and Urban Studies and host of SLU’s podcast Reinventing Solidarity. Uetricht is also the editor of Jacobin and the author of two books: Strike for America: Chicago Teachers Against Austerity; and Bigger than Bernie: How We Go from the Sanders Campaign to Democratic Socialism (co-authored by Meagan Day).
REGISTER:
https://slucuny.swoogo.com/30October2024/register
Saturday, September 14, 2024
Credential Inflation Makes College Degree Not Worth The Cost (Randall Collins)
The first big hit was the 2008 recession, when graduates found it hard to get jobs. But even as the economy recovered and grew, faith in college degrees has steadily declined.
In 2013, 53% of the population—a slim majority—agreed that a 4-year degree gives “a better chance to get a good job and earn more income over their lifetime.” In 2023, education-believers had fallen to 42%, while 56% said it was not worth the cost. Both women and men had turned negative in the latest survey—even though women had overtaken men in college enrollments in previous decades. The youngest generation was the most negative: 60% of those aged 18-34. Not surprisingly: they are the ones who had to apply to dozens of schools, endure a rat-race of test scores, scramble for grades, and amass extracurricular activities; most did not get into their school of choice, while paying constantly rising tuition and fees, and are burdened with student-loan debt into middle age. Not to mention the near-impossibility of buying a house at hugely inflated prices, with many still living with their parents; while all generations now agree that the younger will not enjoy the standard of living of their parents.
The only demographic that still thinks college has career value is men with a college degree or higher, who earn over $100,000 a year. They are the only winners in the tournament. Every level of education—high school, junior college, 4-year college, M.B.A., PhD, or professional credential in law, medicine, etc.—has value as an entry ticket to the next level of competition for credentials. The financial payoff comes when you get to the big time, the Final Four so to speak; striving through the lower levels is motivated by a combination of American cultural habits and wishful thinking.
The boom-or-bust pattern of rising education makes more sense in long-term perspective. For 100 years, the USA has led the world in the proportion of the population in schools at all levels. In 1900, 6% of the youth cohort finished high school, and less than 2% had a college degree. High school started taking off in the 1920s, and after a big push in the 1950s to keep kids in school, reached 77% in 1970. Like passing the baton, as high school became commonplace, college attendance rocketed, jumping to 53% at the end of the 1960s—there was a reason for all those student protests of the Sixties: they were suddenly a big slice of the American population. By 2017, 30% of those over age 24 had a college degree; another 27% had some years of college. It has been a long-time pattern that only about half of all college students finish their degree—dropping out of college has always been prevalent, and still is.
The growing number of students at all levels has been a process of credential inflation. The value of any particular diploma—high school, college, M.A., PhD—is not constant; it depends on the labor market at the time, the amount of competition from others who have the same degree. In the 1930s, only 12% of employers required a college degree for managers; by the late 1960s, it was up to 40%. By the 1990s, an M.B.A. was the preferred degree for managerial employment; and even police departments were hiring college-educated cops. In other words, as college attendance has become almost as common as high school, it no longer conveys much social status. To get ahead in the elite labor market, one needs advanced and specialized degrees. In the medical professions, the process of credential-seeking goes on past age 30; for scientists, a PhD needs to be supplemented by a couple of years in a post-doctoral fellowship, doing grunt-work in somebody else’s laboratory. In principle, credential inflation has no end in sight.
An educational diploma is like money: a piece of paper whose value depends inversely on how much of it is in circulation. In the monetary world, printing more money reduces its purchasing power. The same thing happens with turning out more educational credentials—with one important difference. Printing money is relatively cheap (and so is the equivalent process of changing banking policies so that more credit is issued). But minting a college degree is expensive: someone has to pay for the teachers, the administrators, the buildings, and whatever entertainments and luxuries (such as sports and student activities) the school offers—and which make up a big part of its attraction for American students. And all this degree-printing apparatus has been becoming more expensive over the decades, far outpacing the amount of monetary inflation since the 1980s. Colleges and universities (as well as high schools and elementary schools) keep increasing the proportion of administrators and staff. At the top end of the college market, the professors who give the school its reputation by their research command top salaries.
Credential-minting institutions have been able to charge whatever they can get away with, because of the high level of competition among students for admission. Not all families can afford it; but enough of them can so that schools can charge many multiples of what they charged (in constant dollars) even 30 years ago. The result has been a huge expansion in student debt: averaging $38,000 among 45 million borrowers; and including 70% of all holders of B.A. degrees. Total student debt tripled between 2007 and 2022.
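Those debt figures can be sanity-checked with simple arithmetic: the average balance times the number of borrowers implies the aggregate. A rough back-of-the-envelope sketch (the source does not break out federal versus private debt, so this is an assumption of the check, not a claim of the essay):

```python
# Back-of-the-envelope check on the student-debt figures above.
# Assumption: the $38,000 average and 45 million borrowers cover
# all student debt, federal and private combined.
borrowers = 45_000_000
avg_debt = 38_000

total_debt = borrowers * avg_debt
print(f"Implied total student debt: ${total_debt / 1e12:.2f} trillion")
# → Implied total student debt: $1.71 trillion
```

The implied $1.71 trillion is in line with the roughly $1.6 trillion federal student loan portfolio commonly cited, which suggests the two figures are mutually consistent.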
These three different kinds of inflation reinforce each other: inflation in the amount of credential currency chasing jobs in the job market; inflation in the cost of getting a degree; inflation in student debt. We could add grade inflation as a fourth part of the spiral: intensifying pressure to get into college and if possible beyond has motivated students to put pressure on their teachers to grade more easily; in public schools, to pass them along to the next grade no matter their performance (retardation in grade, which in the 1900s was common, has virtually disappeared); in college, GPA-striving has a similar effect. Grades are higher than ever, but the measured value of the contents of education, ranging from writing skills to how long the course material is remembered after the course is over, is low (Arum and Roksa 2011, 2014). College degrees are not only inflated as to job-purchasing power; they are also inflated as a measure of what skills they actually represent.
The remedies suggested for some of these problems—such as canceling student debt by government action—would temporarily relieve some ex-students of the burden of paying for not-so-valuable degrees. But canceling student debt would not solve the underlying dynamic of credential inflation, but exacerbate it. If college education became free (either by government directly picking up the tab, or by canceling student debts), we can expect even more students to seek higher degrees. If 100% of the population has a college degree, its advantage on the labor market is exactly zero; you would have to get some further degree to get a competitive edge.
Scandals in college admissions are just one more sign of the pressures corroding the value of education. College employees collude with wealthy parents to create fake athletic skills, in a time when students apply to dozens of schools, and even top grades don’t guarantee admission. Since athletics are a big part of schools’ prestige, and are considered a legitimate pathway to admission outside the grade-inflation tournament, it is hardly surprising that some try that side-door entry. There is not only grade inflation, but inflation in competition over the pseudo-credentials of extracurricular activities and community service. Efforts at increasing race and class equity in admissions increase the pressure among the affluent and the non-minority populations. Since sociological evidence shows that tests and grades favor children of the higher classes (whose families provide them with what Bourdieu called cultural capital), there are moves to eliminate test scores and/or grades as criteria of admission. What is left may be letters of recommendation and self-extolling essays—what we might call “rhetorical inflation”—plus skin color or other demographic markers; but the result will do nothing to reduce the inflation of credentials. The underlying hope is that giving everybody a college degree will somehow bring about social equality. In reality, it will just add another chapter to the history of credential inflation.
P.S. The advent of Artificial Intelligence to write students’ papers, and other AI to grade them (not to mention to write their application essays and read them for admission) will do nothing to raise the honesty and status of the educational credential chase.
References
“More Say Colleges Aren’t Worth the Cost.” Wall Street Journal, April 1, 2023 (NORC–Wall Street Journal survey).
U.S. Bureau of the Census.
Randall Collins. 2019. The Credential Society. 2nd edition. New York: Columbia University Press.
Richard Arum and Josipa Roksa. 2011. Academically Adrift: Limited Learning on College Campuses. Chicago: University of Chicago Press.
Richard Arum and Josipa Roksa. 2014. Aspiring Adults Adrift: Tentative Transitions of College Graduates. Chicago: University of Chicago Press.