FOR IMMEDIATE RELEASE:
|
|
#adjunct #AI #BDR #collegemania #collegemeltdown #edugrift #medugrift #nonviolence #QOL #rehumanization #robocollege #veritas
FOR IMMEDIATE RELEASE:
|
|
![]() |
Issue 364
Subscribe below to join 4,663 (+6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free. Although, patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.
The headline is brutal and blunt:
Everyone Is Cheating Their Way Through College
To which I say — no kidding.
The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.
Lee and Cheating Everything
The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.
What I learned in this story is that, at Columbia, Lee:
by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.
And:
“Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.
The article says Lee’s admissions essay for Columbia was AI too.
So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.
Continuing from the story:
Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.
Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:
Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”
I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.
OpenAI/ChatGPT and Students
Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.
As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.
From the story:
“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.
More:
Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.
And:
After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”
This really is where we are. These students are not outliers.
Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.
But let’s keep going:
now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?
Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.
Had enough? No? Here:
Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.
Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.
In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.
More on Wendy:
Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”
Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”
I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”
Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.
Want more? There’s also Daniel, a computer science student at the University of Florida:
AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.
When a tutor starts writing your paper for you, if you turn that paper in for credit you receive, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.
And the Teachers
Like the coverage of students, the article’s work with teachers is top-rate. And what they have to say is not one inch less important. For example:
Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”
Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?
Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated than high-stakes ones (see Issue 64). I don’t really get why professional educators don’t get this.
But returning to the topic:
After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”
To read about Jollimore’s outstanding essay, see Issue 346.
And, of course, there’s more. Like the large section above, I regret copying so much of it, but it’s essential reading:
Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.
Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”
By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”
The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.
To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.
More from Jollimore:
He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.
I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.
More on professors:
Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”
You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.
I guess also, I apologize. Some educators are engaged in the fight to protect and preserve the value of learning things. I feel that it’s far too few and that, more often than not, they are alone in this. It’s depressing.
Odds and Ends
In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.
This, is pretty great:
Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.
Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.
I thought this was interesting too:
Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.
I have a few things to say about this.
Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.
I think the idea that most teachers think AI detectors don’t work is true. It’s not just teachers. Entire schools believe this. It’s an epic failure of messaging, an astonishing triumph of the misinformed. Truth is, as reported above, detectors do vary. Some are great. Some are junk. But the good ones work. Most people continue to not believe it.
And I’ll point out once again that the “studies have shown” thing is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited has made-up citations and research that is highly suspicious, which I pointed out in 2023 (see Issue 216). Frankly, I’ve not seen any good evidence to support this idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.
On the subject of junk AI detectors, there’s also this:
I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.
This is a failure to understand how AI detection works. But also ZeroGPT does not work. Again, it’s no wonder that teachers think AI detection does not work.
Continuing:
It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.
Finally, I am not sure if I missed this at the time, but this is important too:
In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.
As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.
All wrapped up, this is a show-stopper of an article and I am very happy for the visibility it brings. I wish I could feel that it will make a difference.
China's higher education system is facing a profound crisis, marked by rampant credential inflation, a saturated academic job market, and growing inequality between domestic and international degree holders. A recent study published in Humanities and Social Sciences Communications provides empirical evidence of these trends, drawing from an extensive dataset of nearly 160,000 faculty resumes across 802 Chinese universities.
Credential inflation refers to the escalating academic qualifications required for positions that previously demanded less. In China, this phenomenon is particularly pronounced in elite institutions, especially those under the "Project 211" initiative. The study reveals that new faculty hires increasingly possess higher degrees and more publications than their predecessors, a trend driven by intensified competition and institutional prestige.
This inflationary pressure disproportionately affects domestically educated candidates. Despite holding advanced degrees, many find themselves overshadowed by peers with international qualifications, who are often favored for positions at top-tier universities. This preference underscores a systemic devaluation of domestic academic credentials.
The study highlights a growing bias towards candidates with overseas education. These individuals are not only more likely to secure positions at prestigious institutions but also benefit from a perception of superior academic training. This trend exacerbates existing inequalities and places additional pressure on domestic scholars to seek international credentials, often at significant personal and financial cost.
The implications of credential inflation extend beyond academia. China's youth unemployment rate has soared above 20%, leaving many graduates underemployed or reliant on parental support . This disconnect between educational attainment and employment opportunities fuels social discontent and challenges the narrative of higher education as a pathway to upward mobility.
Furthermore, the emphasis on international degrees may contribute to a brain drain, as talented individuals seek education and employment opportunities abroad. This trend could undermine China's efforts to cultivate a robust domestic academic and research environment.
Addressing this multifaceted crisis requires systemic reforms. Policymakers and educational institutions must reevaluate hiring practices, placing greater value on diverse academic experiences and competencies. Investments in domestic graduate programs, coupled with initiatives to enhance the global competitiveness of Chinese degrees, are essential.
Moreover, aligning higher education outcomes with labor market needs can help mitigate unemployment and underemployment among graduates. By fostering partnerships between academia and industry, China can ensure that its educational system produces graduates equipped with relevant skills and experiences.
The phenomenon of credential inflation in Chinese higher education reflects deeper structural challenges within the country's academic and employment landscapes. Without targeted interventions, these trends threaten to erode the value of domestic education, exacerbate social inequalities, and hinder China's aspirations for global academic leadership.
For a comprehensive understanding of this issue, refer to the full study: "Credential inflation and employment of university faculty in China"
Liberty University, one of the largest Christian universities in the world, presents a striking contrast between its largely white residential campus and a more diverse, working-class population studying online. This divide highlights ongoing questions about race, access, and culture in American higher education—especially in religious institutions that promote traditional values while navigating a changing demographic and social landscape.
As of 2021, Liberty’s Lynchburg, Virginia, residential campus remains overwhelmingly white. Seventy-four percent of students living and studying on campus are white, with only 4% identifying as Black or African American, 5% as Latino, and 2% as Asian or Pacific Islander. Less than 1% of residential students identify as Native American. In contrast to the national trend of increasing diversity on college campuses, Liberty appears to be growing whiter. In fact, the number of African American students on campus has declined in recent years, raising concerns about how welcoming the university is to students of color.
This demographic imbalance is not new. Liberty University has a long history of racial segregation and discrimination, particularly in its formative years under founder Jerry Falwell Sr., who defended segregation in the 1960s and opposed civil rights legislation. While Liberty’s public stance has changed over the decades, the legacy of those positions still casts a long shadow.
Meanwhile, Liberty University Online (LUO) paints a different picture. In 2017, only 51% of its undergraduate population identified as white, compared to 15.4% who were Black or African American. Hispanic and Latino students made up 1.7%, and students of two or more races, 2.3%. A significant 26.5% of LUO students were categorized as “race/ethnicity unknown,” potentially obscuring additional diversity. These students come from all 50 states, Washington, D.C., and 86 countries, with more than 30,000 military students and over 850 international students among them.
LUO students are also disproportionately older, more likely to be working full-time, and often seeking degrees for career advancement or personal growth rather than the traditional “college experience.” Many are first-generation college students or part of the educated working-class navigating life through faith, family, and financial constraints. In contrast to the traditional campus, LUO's virtual classrooms are where Liberty more closely resembles the multiracial and socioeconomically diverse America it often claims to serve.
This bifurcation between Liberty’s on-campus and online populations underscores a larger tension within the university: a cultural and racial divide that mirrors the broader fissures in U.S. society. The residential campus, steeped in conservative Christian traditions and a homogeneous student body, promotes a culture aligned with white evangelicalism. Meanwhile, its online division serves a more varied student population—many of whom are drawn to Liberty for its affordability, flexibility, and religious identity, but may not share in the campus culture or feel represented by its leadership and branding.
Reports of problems faced by Black students on campus—including concerns over campus climate, lack of representation among faculty, and curriculum that minimizes racial history—suggest that Liberty’s commitment to diversity is uneven at best. While the university has made modest gestures toward inclusion, critics argue that these efforts are often performative and fail to address systemic issues rooted in the institution’s founding principles.
Liberty University’s dual identity—as a white-dominated, conservative campus and a more diverse, online workforce training hub—raises difficult but necessary questions about race, class, and the role of religion in higher education. For an institution that claims to train “Champions for Christ,” the challenge remains whether it can reconcile these differences or if the divide will only grow starker in the years ahead.
After a five-year pandemic-related pause in federal student loan repayment and a temporary grace period, the student debt crisis has returned—arguably more severe than ever. According to the Federal Reserve Bank of New York’s Quarterly Report on Household Debt and Credit, nearly six million student loan borrowers—or 13.7 percent—are now seriously delinquent or in default on their loans. Even more troubling, nearly one in four borrowers required to make payments are behind, a figure masked by millions of others who remain in deferment, forbearance, or income-driven repayment plans requiring no immediate payment.
This dramatic increase in delinquency stems from the expiration of the federal "on-ramp" policy in October 2024, which had temporarily shielded missed payments from credit reporting after the repayment pause ended in September 2023. Now that reporting has resumed, the financial and personal consequences for borrowers are quickly becoming evident.
The NY Fed’s report reveals that while the total number of student loan borrowers has slightly decreased since 2020—from 44.6 million to 43.7 million—the number of borrowers behind on their payments is nearly the same. More striking is the conditional borrower delinquency rate—which excludes those without a current payment due. Among borrowers required to pay, 23.7 percent are delinquent, a reflection of a deepening affordability crisis and repayment system that continues to fail millions.
The burden is not equally distributed across the country. The highest rates of delinquency are concentrated in the South, with Mississippi leading at 44.6 percent, followed by Alabama, West Virginia, Kentucky, Oklahoma, Arkansas, and Louisiana—all states where more than 30 percent of borrowers with payments due are behind. In contrast, states like Illinois, Massachusetts, and Connecticut have delinquency rates under 15 percent.
Another notable shift is the aging of the delinquent borrower population. Delinquency is no longer confined to young graduates just entering repayment. Borrowers over age 40 now make up a significant portion of those falling behind, with more than one in four of these borrowers delinquent. The average age of a delinquent borrower rose from 38.6 in 2020 to 40.4 in 2025.
This is consistent with what higher education watchdogs have long observed: student loan debt is no longer just a young adult issue. Millions of older Americans—many of them parents who borrowed for their children or who returned to school later in life—are now in financial jeopardy.
The return of delinquency has immediate and potentially devastating impacts on borrowers’ credit health. Over 2.2 million borrowers saw their credit scores drop by more than 100 points in the first quarter of 2025. Over one million borrowers suffered drops of 150 points or more.
Of those who became newly delinquent, nearly 44 percent had credit scores above 620 before missing payments—scores that typically qualify for auto loans, mortgages, and credit cards. These borrowers now face steeply increased borrowing costs or total exclusion from credit markets, potentially compromising their ability to secure housing, transportation, and even employment in some cases.
The cascading effects of damaged credit and rising debt may not be limited to student loans. The NY Fed warns that it remains to be seen whether delinquencies will spill over into defaults in other types of debt. This is especially concerning in a macroeconomic environment marked by high interest rates and increasing cost-of-living pressures.
Adding to the pressure, federal collections have resumed. The U.S. Department of Education, working with the U.S. Treasury, began collecting on defaulted loans in May 2025, including garnishing wages, tax refunds, and Social Security payments. These harsh penalties, halted during the pandemic, are now back in full force—often hitting borrowers already in financial distress.
Millions of borrowers who once benefited from temporary protections now face permanent financial consequences, not only through collection actions but also through long-term credit damage.
The resurgence in student loan delinquency reflects not only the impact of resumed repayment but deeper systemic flaws in the American higher education and student loan systems. Despite well-publicized attempts at cancellation and reform, tens of millions remain trapped in a system that is neither affordable nor forgiving.
While much political attention has been directed toward one-time cancellation efforts and income-driven repayment plans, the growing delinquency rates suggest those efforts have not gone far enough—or fast enough. Borrowers in states with the highest delinquency rates tend to have lower incomes and fewer resources to navigate complex federal repayment options.
Without bold and comprehensive reform—including principal reduction, easier access to cancellation, and a robust safety net for vulnerable borrowers—millions of Americans will continue to suffer the consequences of educational debt they were told was an investment in their future.
We see this resurgence in delinquency not simply as a data point, but as a clear warning. The Biden administration’s incremental reforms and the Supreme Court’s rebuke of broader cancellation efforts have left the most financially vulnerable exposed.
As wage garnishment resumes and credit scores plummet, student loan debt is quickly becoming a national emergency—especially for Black borrowers, older Americans, and those in the South and Midwest. These are not isolated failures. They are structural, policy-driven failures—decades in the making.
For the U.S. to truly address its student loan crisis, it must go beyond payment pauses and cosmetic fixes. It must confront the predatory aspects of its higher education financing system, the ballooning cost of college, and the promise that higher education is a guaranteed path to prosperity.
Until then, expect these numbers—and the pain behind them—to grow.
Sources:
Federal Reserve Bank of New York, Quarterly Report on Household Debt and Credit, Q1 2025
New York Fed Center for Microeconomic Data Blog
Equifax Consumer Credit Panel data
Clayton Christensen’s theory of Disruptive Innovation—hailed by Silicon Valley executives and higher education reformers alike—presents itself as a neutral, even benevolent, framework for understanding technological and organizational change. Yet beneath its managerial gloss lies a lineage and logic deeply rooted in an (a)moral worldview: one that tolerates, if not encourages, alienation, economic insecurity, and the erosion of labor rights in the name of efficiency and market “progress.”
To understand the true implications of Disruptive Innovation, we must situate Christensen’s ideas within a broader intellectual history—one that includes Joseph Schumpeter, Frederick Winslow Taylor, and Herbert Spencer, each of whom advanced theories that exalted economic upheaval while devaluing human costs.
Christensen openly acknowledged his debt to Austrian economist Joseph Schumpeter, who coined the term “creative destruction” to describe the perpetual churn of capitalism—where new industries annihilate the old. Schumpeter viewed this cycle as the engine of economic development, but also one driven by elites: entrepreneurs and innovators were the “heroes” of economic evolution, regardless of the collateral damage.
Christensen adapted this logic but rebranded it in less violent terms. "Disruption" became the friendlier cousin of "destruction," but the underlying mechanism remained the same. When cheaper, simpler products or services overtake established incumbents, it is not just businesses that are disrupted, but the workers, communities, and public institutions tied to them. In higher education, this has meant the unbundling of the university, the rise of for-profits and MOOCs, and a managerial push for scalability over scholarship.
The ghost of Frederick Taylor—father of scientific management—also haunts Christensen’s framework. Taylor’s approach sought to maximize efficiency by breaking down labor into measurable units, stripping workers of autonomy and judgment in favor of systematized control. In Christensen’s world, similarly, incumbents are cast as bloated and inefficient, weighed down by tradition, professional norms, and tenured faculty. Disruptors are lean, data-driven, and contemptuous of established hierarchies.
This emphasis on efficiency over humanistic or moral values creates environments where workers (and students) are seen as inputs in a system, not stakeholders with rights or aspirations. The human costs—underemployment, job precarity, and burnout—are either ignored or reframed as necessary steps toward a more “innovative” future.
Christensen’s theory also carries echoes of Herbert Spencer, the 19th-century social theorist who popularized “survival of the fittest” as a way to naturalize social hierarchies under capitalism. Like Spencer, Christensen’s logic treats market competition as a force of nature rather than a human construct. Incumbents fail not because of policy failures or exploitation, but because they were not “fit” to survive disruption.
This Darwinian moral neutrality veils itself in the language of progress, but its effects are often regressive. When applied to higher education, it suggests that if small colleges close, if adjuncts replace professors, if students are reduced to customers—it is not a crisis, but evolution. But evolution, in this framework, comes without ethics, without responsibility, and without mourning for what is lost.
The consequences of this ideology are not confined to spreadsheets. They are lived out in alienation, anxiety, and a rising sense of meaninglessness in work and study alike. The relentless focus on disruption undermines stable institutions and communal knowledge, replacing them with temporary gigs and modular credentials. As careers give way to “side hustles” and degrees to “certificates,” students and workers alike are left unmoored.
This moral void is not an accident—it is intrinsic to the theory itself. Disruption is not guided by any vision of the good life, democratic values, or collective well-being. Its only metric is market success. It cannot ask whether the loss of a liberal arts college matters, whether an AI tool improves learning, or whether a precarious worker has a future. It can only ask: is it cheaper? Is it scalable?
In extreme cases, this sense of disposability has life-and-death consequences. Research across sectors shows that economic insecurity and job loss are linked to higher rates of suicide, depression, and addiction. The suicides of Uber drivers, the despair of indebted students, and the mental health crisis on campuses are not anomalies—they are the psychological toll of a system that celebrates disruption but discards the disrupted.
Against this backdrop, the weakening of labor rights is not just a policy issue—it is a direct consequence of the ideology of disruption. Tenure, unions, benefits, job security—these are seen as “barriers” to innovation. The ideal disruptor has no interest in negotiating with labor; it seeks flexibility, not fairness.
In higher education, this has meant an explosion of adjunct labor, the outsourcing of student services, and the dismantling of shared governance. Disruptive Innovation thus functions not merely as a theory, but as a strategy to sideline labor, redefine value, and transfer risk from institutions to individuals.
It is time to reckon with the (a)moral underpinnings of Christensen’s Disruptive Innovation. Behind its sleek presentation lies a worldview that rationalizes destruction, devalues dignity, and denies responsibility. Its philosophical lineage—from Schumpeter to Spencer—offers little comfort to those displaced, demoralized, or disappeared in its wake.
If higher education is to survive with its soul intact, it must reject the idea that all disruption is good, that all efficiency is progress, and that human costs are externalities. It must ask not just what works, but for whom—and at what cost.