Search This Blog

Showing posts sorted by date for query academic labor. Sort by relevance Show all posts
Showing posts sorted by date for query academic labor. Sort by relevance Show all posts

Wednesday, May 28, 2025

The Harvard Business School Paradox: Ethics, Elites, and the Theatre of Honesty

For the first time in nearly 80 years, Harvard University has taken the extraordinary step of revoking the tenure of a faculty member—Francesca Gino, a former professor at Harvard Business School (HBS) known for her widely publicized research on ethics, decision-making, and organizational behavior. The irony of her downfall—accused of academic dishonesty while researching honesty—has been noted by nearly every outlet covering the story. But a deeper question lingers: What does Gino’s story tell us about Harvard Business School and the neoliberal system it both serves and symbolizes?

Ethics as Performance in a Neoliberal Order

Gino, once celebrated for championing ethical behavior and "rebel talent," now stands accused of falsifying data in multiple academic papers. But HBS’s brand of ethics—delivered through expensive executive programs and best-selling books—is part of a larger performance in which corporate elites are taught to appear virtuous while perpetuating systems that concentrate wealth, exploit labor, and externalize harm.

In this context, ethics becomes less about justice or truth and more about managing perceptions. The fall of Francesca Gino is dramatic, but the real ethical crisis lies not in her alleged misconduct alone—it lies in the institutional contradictions embedded within HBS itself. Harvard Business School doesn’t just teach capitalism; it molds the gatekeepers of global capital. Its mission is not merely to educate but to replicate and legitimize a system that increasingly rewards the few at the expense of the many.

HBS: The Training Ground for Economic Power

From Wall Street executives to Silicon Valley disruptors, the alumni of Harvard Business School include some of the most powerful figures in global finance and industry—many of whom have presided over layoffs, environmental degradation, and financial schemes with far more damaging consequences than academic fraud.

The school’s ethos is rooted in neoliberal values: deregulation, privatization, shareholder primacy, and labor "flexibility." These principles have driven inequality to historic levels, eroded public trust in institutions, and created a permanent underclass of precarious workers—including the adjuncts and support staff who toil in the shadows of Harvard's gilded brand.

That Gino was one of Harvard’s highest paid employees, earning over $1 million a year, underscores the commercialization of academia. Her high-profile persona, media presence, and prolific publication record made her not just a scholar but a product—one the institution proudly marketed until it became inconvenient.

The Politics of Academic Accountability

The revocation of Gino’s tenure has been portrayed as a triumph of academic accountability. But it also reveals the selective nature of institutional justice. While Harvard moved swiftly to investigate and sanction Gino, other faculty members in elite institutions—some with clear ties to ethically questionable industries or discriminatory practices—remain unscathed, protected by the very power structures they serve.

Moreover, this case unfolds against a broader political backdrop in which Harvard, like other elite universities, is entangled in legal and ideological battles with the federal government. From fights over DEI initiatives and student visas to federal funding for research, the university’s moral posturing often masks a pragmatic calculus: preserving its endowment, its influence, and its brand.

A System that Rewards Deception

That Harvard Business School fostered—and then disowned—a figure like Francesca Gino should surprise no one. The institution’s most infamous alumni include architects of the 2008 financial crisis and leaders of corporations known for tax evasion, union busting, and regulatory capture. In such a system, the real problem isn’t dishonesty—it’s getting caught.

Gino’s downfall may satisfy the university’s need for a scapegoat, but it doesn't address the deeper malaise at the heart of elite business education. Harvard Business School produces managers, not moral leaders. It shapes ideologies, not communities. And in doing so, it offers up a sanitized vision of capitalism in which individual ethics can redeem systemic violence.

Conclusion: The Theatre of Respectability

Francesca Gino’s tenure revocation is a symbolic gesture—one that reinforces the illusion that elite institutions police themselves rigorously. But the real fraud is more abstract and far more consequential: it is the fraud of presenting institutions like Harvard Business School as guardians of ethical capitalism, while they actively reinforce the economic logic of exploitation.

In a just world, the moral bankruptcy of neoliberalism would be exposed not by a professor’s faked data, but by the suffering of workers laid off for shareholder gains, the communities displaced for private equity ventures, and the global inequities entrenched by the very graduates these schools send into the world.

Until then, we are left with what Gino herself once studied: the subtle science of dishonesty. Only now, the lab is Harvard—and the experiment is ongoing.


The Higher Education Inquirer continues to investigate the contradictions and inequalities embedded in American higher education, especially as they relate to labor, class, and power. Follow us for more independent, critical analysis.

Thursday, May 22, 2025

Mental Health for the Working Class: Who’s Behind the Therapy Boom?

The Affordable Care Act (ACA), commonly known as Obamacare, has significantly expanded access to mental health services in the United States, particularly for working-class individuals and families. The expansion of Medicaid and marketplace plans has made therapy and psychiatric care more accessible. However, the infrastructure supporting this mental health revolution is complex, under-resourced, and increasingly influenced by private equity. As more Americans seek care, questions arise about who is delivering that care—and whether the system prioritizes well-being or profits.

The Workforce Patchwork

The delivery of mental health services today relies on a varied network of professionals. In community clinics, federally qualified health centers, and outpatient networks, the bulk of therapeutic care comes from mid-level clinicians: Licensed Clinical Social Workers (LCSWs), Licensed Professional Counselors (LPCs), and Marriage and Family Therapists (MFTs). These are master's-level professionals who carry substantial educational and clinical training but are frequently underpaid and overworked.

Psychiatric Nurse Practitioners have also filled a critical gap, often handling medication management in lieu of psychiatrists, especially in rural and underserved areas. Meanwhile, case managers and peer support workers—some with minimal formal education—are tasked with providing wraparound services like housing support, job placement, and crisis management.

Psychiatrists and doctoral-level psychologists, though highly trained, are in short supply and are often unwilling to accept Medicaid or ACA plan reimbursements. This leaves many lower-income patients with few options for specialized care.

Enter Private Equity

In recent years, private equity (PE) firms have aggressively moved into the mental health space. Attracted by rising demand for services and relatively stable reimbursement streams from public insurance programs, PE investors have acquired numerous outpatient mental health clinics, telehealth platforms, and addiction treatment centers. Research indicates that PE firms now account for as much as a quarter of practices providing behavioral health services in some states (OHSU, 2024).

While this influx of capital has allowed for rapid expansion, it has also introduced new pressures on the workforce. To maximize returns, many PE-backed firms rely heavily on newly licensed clinicians or even graduate students under supervision. In some cases, providers are pushed into independent contractor roles to reduce labor costs and avoid benefit obligations.

Clinicians report being pressured to increase their patient loads, reduce session times, and adhere to standardized scripts or protocols designed for efficiency, not individualized care. Turnover is high, and burnout is common. A 2023 survey by the American Psychological Association found that over 60% of mental health practitioners reported experiencing symptoms of burnout (Therapy Wisdom, 2024).

The Role of Robocolleges in the Mental Health Pipeline

The rise of online, for-profit, and quasi-public "robocolleges"—such as Walden University, Purdue University Global, the University of Phoenix, Capella University, and others—has significantly shaped the labor pipeline for mental health services. These institutions mass-produce degrees in psychology, counseling, and social work, often catering to nontraditional and working adult students with limited time and financial resources.

Programs are designed for scale and efficiency, not necessarily for rigor or clinical depth. Courses are often asynchronous, adjunct-taught, and heavily standardized. Clinical placements and supervision, vital components of a therapist’s training, are sometimes outsourced or inadequately supported—leaving graduates with inconsistent real-world experience.

These institutions also disproportionately enroll students from lower-income and minority backgrounds, many of whom take on significant debt for degrees that may lead to low-paying, high-stress jobs in underfunded clinics or PE-owned mental health companies.

While robocolleges expand access to credentials, they may also contribute to a deprofessionalized, precarious workforce—one in which therapists are underprepared, underpaid, and overextended. Their graduates often fill the lower rungs of the mental health care ladder, working in environments where quality and continuity of care are compromised by systemic churn.

Quality and Equity in the Balance

The result is a mental health system that, while more accessible than in previous decades, is increasingly stratified. Working-class patients often receive care from entry-level or overburdened professionals, while wealthier clients can afford private practitioners who offer more time, continuity, and personalized care.

This imbalance is further complicated by a lack of oversight. Licensing boards and state agencies struggle to monitor the growing number of clinics and telehealth services, many of which operate across state lines or rely on algorithms to triage patients.

Meanwhile, the very people the ACA aimed to help—those juggling low-wage jobs, family stress, and systemic disadvantage—are left in a system where care may be quick, transactional, and occasionally substandard.

The Role of Traditional Higher Education

Traditional colleges and universities play a dual role: they continue to train therapists and counselors in more rigorous academic environments, but they also face growing pressure to "compete" with robocolleges in terms of cost, speed, and flexibility. At the same time, these institutions increasingly outsource student counseling services to external mental health platforms—some of them owned by private equity firms.

Thus, the cycle continues: higher education feeds the mental health system, while also adopting many of its structural compromises.

Conclusion

The expansion of mental health coverage under the ACA is a major public policy achievement. But access alone is not enough. The quality of care, the working conditions of providers, and the growing influence of profit-seeking investors and education mills all demand greater scrutiny.

For working-class Americans, mental health has become another arena where the promise of care often collides with the reality of austerity and privatization. And for those training to enter the profession, especially through robocolleges, the path forward may be just as precarious.


References:

Monday, May 19, 2025

Degrees of Discontent: Credentialism, Inflation, and the Global Education Crisis

In an era defined by rapid technological change, globalization, and economic precarity, the promise of higher education as a reliable path to social mobility is being questioned around the world. At the heart of this reckoning are two interrelated forces: credentialism and credential inflation. Together, they have helped fuel a crisis of discontent that spans continents, demographics, and generations.

The Age of Credentialism

Credentialism refers to the increasing reliance on educational qualifications—often formal degrees or certificates—as a measure of skill, value, and worth in the labor market. What was once a gateway to opportunity has, for many, become a gatekeeper.

In countries as diverse as the United States, Nigeria, South Korea, and Brazil, employers increasingly demand college degrees for jobs that previously required only a high school diploma or no formal education at all. These “degree requirements” often serve more as filters than as real indicators of competence. In the U.S., for example, nearly two-thirds of new jobs require a college degree, yet only around 38% of the adult population holds one. This creates a built-in exclusionary mechanism that hits working-class, first-generation, and minority populations hardest.

Credential Inflation: The Diminishing Value of Degrees

As more people earn degrees in hopes of improving their employment prospects, the relative value of those credentials declines—a phenomenon known as credential inflation. Where a bachelor’s degree once opened doors to managerial or professional roles, it now often leads to underemployment or precarious gig work. In response, students seek advanced degrees, fueling a “credential arms race” with diminishing returns.

In India and China, massive expansions of higher education have led to millions of graduates chasing a finite number of white-collar jobs. In places like Egypt, university graduates have higher unemployment rates than those with only a secondary education. In South Korea, a hyper-competitive education culture pushes students through years of tutoring and testing, only to graduate into a job market with limited high-status roles.

Tragedy in Tunisia: The Human Cost of Unemployment

Few stories illustrate the devastating impact of credentialism and mass youth unemployment more than that of Mohammed Bouazizi, a 26-year-old Tunisian university graduate whose life and death sparked a revolution.

Unable to find formal employment, Bouazizi resorted to selling fruit and vegetables illegally in the town of Sidi Bouzid. In December 2010, after police confiscated his produce for lacking a permit, he set himself on fire in front of a local government building in a final act of desperation.

Bouazizi succumbed to his injuries weeks later, but not before igniting a firestorm of protests across Tunisia. His self-immolation became the catalyst for mass demonstrations against economic injustice, corruption, and authoritarianism—culminating in the Tunisian Revolution and inspiring uprisings throughout the Arab world.

At his funeral, an estimated 5,000 mourners marched, chanting: “Farewell, Mohammed, we will avenge you.” Bouazizi’s uncle said, “Mohammed gave his life to draw attention to his condition and that of his brothers.”

His act was not just a protest against police abuse, but a powerful indictment of a system that had produced thousands of educated but unemployed young people, whose degrees had become symbols of broken promises.

Global Discontent and Backlash

This dynamic of broken promises and rising discontent is global. In China, the “lying flat” movement reflects a rejection of endless striving in a system that offers diminishing returns on educational achievement. In South Korea, the “N-po” generation has opted out of traditional life goals, seeing little reward for their academic sacrifices.

In the U.S., distrust in higher education is mounting, with many questioning whether the cost of a degree is worth it. At the same time, a growing number of companies are dropping degree requirements altogether in favor of skills-based hiring.

Yet these moves often come too late for millions already trapped in a debt-fueled system, forced to chase credentials just to qualify for basic employment.

The Future of Work, the Future of Education

As automation and AI disrupt industries, the link between formal education and stable employment continues to fray. Policymakers call for "lifelong learning" and “upskilling,” but these strategies often place the burden back on workers without addressing the deeper failures of economic and educational systems.

To move forward, we must consider:

  • Decoupling jobs from unnecessary credential requirements

  • Investing in vocational and technical education with real career pathways

  • Recognizing nontraditional forms of knowledge and skill

  • Reframing education as a public good, not a consumer transaction

Reclaiming the Meaning of Education

Mohammed Bouazizi's story is a tragic reminder that the crisis of credentialism is not theoretical—it’s lived, felt, and fought over in the streets. Around the world, millions of young people feel abandoned by systems that promised opportunity but delivered anxiety, debt, and instability.

Unless global societies reimagine the relationship between education, work, and human dignity, the "degrees of discontent" will only continue to deepen. And as Bouazizi’s legacy shows, discontent—when ignored—can become revolutionary.


Sources and References

  • BBC News. “Tunisia suicide protester Mohammed Bouazizi dies.” January 5, 2011. https://www.bbc.com/news/world-africa-12120228

  • Pew Research Center. “Public Trust in Higher Education is Eroding.” August 2023.

  • Brown, Phillip. The Global Auction: The Broken Promises of Education, Jobs, and Incomes. Oxford University Press, 2011.

  • Marginson, Simon. “The Worldwide Trend to High Participation Higher Education: Dynamics of Social Stratification in Inclusive Systems.” Higher Education, 2016.

  • The World Bank. “Education and the Labor Market.”

  • The Guardian. “Lying Flat: China's Youth Protest Culture Grows.” June 2021.

  • Korea Herald. “'N-po Generation' Gives Up on Marriage, Children, and More.” October 2022.

The Higher Education Racket

 "Every great cause begins as a movement, becomes a business, and turns into a racket." Eric Hoffer

American higher education, once a ladder to opportunity, has become a vast machine of wealth extraction. Debt burdens students for decades. Professors and campus workers are trapped in precarious jobs. Entire communities are pushed out by campus expansions. And a select few elite universities sit atop fortunes that rival hedge funds—all while claiming tax-exempt status and public goodwill.

This is the higher education racket: a sector that has turned away from its public mission and now operates with the logic of capital accumulation, enabled by deregulation, political influence, and privatization.


From Movement to Market: Postwar Expansion and Privatization

The 1944 G.I. Bill launched a golden age of public higher education, providing veterans access to tuition-free college and transforming American society. Enrollment surged, inequality shrank, and community colleges became lifelines for working-class students. Colleges were seen as civic institutions, essential to democratic life.

That vision began to erode in the 1980s, as neoliberal policymakers slashed state funding, forcing institutions to raise tuition, court corporate donors, and cut labor costs. By 2020, public universities received less than half the state funding (per student) they did in 1980, adjusted for inflation (Center on Budget and Policy Priorities).


Trump Administration: Deregulating the Racket

Under Donald Trump, the Department of Education, led by billionaire Betsy DeVos, launched an all-out campaign to roll back protections for students and favor the worst actors in higher ed:

  • Gutted Borrower Defense rules, making it harder for defrauded students to cancel loans.

  • Eliminated the Gainful Employment rule, allowing for-profit colleges to peddle useless degrees.

  • Weakened accreditors' oversight, enabling bad schools to access federal aid with little accountability.

  • Backed anti-union efforts, including trying to strip grad students at private universities of their employee status.

This deregulatory spree enriched predatory schools, student loan servicers, and debt collectors—while stripping students and workers of protections.


The Academic Underclass

While university presidents earn seven-figure salaries, and campuses build luxury dorms and biotech labs, the people doing the teaching are increasingly disposable. More than 70% of college faculty now work off the tenure track, many as adjuncts earning below minimum wage on a per-course basis (AAUP).

Campus workers—grad students, maintenance staff, food service employees—are organizing for better wages and benefits, but often face union-busting tactics. From Columbia to the University of California, administrators stall negotiations and outsource labor to avoid union contracts (The Guardian, 2022).


Universities as Urban Developers

Historian Davarian Baldwin has documented how universities function as engines of gentrification in cities like New Haven, Chicago, and Philadelphia. In In the Shadow of the Ivory Tower, Baldwin argues that universities have become "shadow governments", gobbling up real estate, policing their neighborhoods, and reshaping urban economies—all without democratic accountability.

These “anchor institutions” claim to uplift communities, but their expansion often displaces low-income Black and brown residents, raises housing costs, and erodes the local tax base—since universities are typically exempt from property taxes.

“Higher education is not just about learning anymore. It’s about real estate, policing, health care, and urban planning—all under the control of tax-exempt institutions.” —Davarian Baldwin


Endowment Empires

Nowhere is the inequality of U.S. higher education more glaring than in university endowments. Harvard, Yale, Stanford, and Princeton each have endowments exceeding $30 billion, managed like hedge funds with investments in private equity, real estate, and offshore accounts (NACUBO 2023 Endowment Study).

Despite their wealth:

  • These universities often provide limited financial aid to working-class students.

  • They pay no federal taxes on endowment income under $500,000 per student.

  • They resist efforts to contribute to municipal budgets, even as they consume city resources.

During the COVID-19 pandemic, many elite institutions furloughed workers and froze wages—despite posting strong investment returns and sitting on endowments worth more than the GDP of some nations.

Critics argue that these funds should be tapped for student debt relief, housing support, or public education reinvestment—not hoarded like private wealth.


The Price of the Racket

The numbers are staggering:

  • $1.7 trillion in student debt

  • Tens of thousands of adjuncts living in poverty

  • Campus police forces more militarized than local law enforcement

  • Communities displaced by campus-led gentrification

  • Universities with endowments larger than some countries' national budgets

The higher education racket isn’t just an economic problem. It’s a betrayal of public trust.


Reclaiming the Public Good

If higher education is to serve the people—not private interests—structural reforms are necessary:

  • Cancel student debt and offer tuition-free public college

  • Mandate living wages and fair contracts for all campus workers

  • Tax large endowments and require community reinvestment

  • Reinstate regulations to hold predatory institutions accountable

Higher education once expanded opportunity. It can again—but only if we dismantle the racket.


Sources:

Friday, May 16, 2025

The Watchdogs of Higher Education: Journalists Covering the College Meltdown

In an era of propaganda, PR masquerading as reporting, and shrinking newsroom budgets, a small cohort of journalists continues to ask the difficult questions about U.S. higher education. These writers are the watchdogs, skeptics, and truth-tellers who probe the system's contradictions—exposing corruption, inequality, and the commodification of learning.

While many mainstream outlets have reduced their education desks or opted for click-friendly content, these journalists persist in a more thankless task: investigating the deeper structures that shape college access, affordability, and legitimacy. Their work is essential in this Digital Dark Age, where universities are marketed like tech products and student debt chains millions to futures they did not choose.


Current Watchdogs

  • Josh Moody (Inside Higher Ed)
    Steady and detail-oriented, Moody explores enrollment cliffs, closures, and the survival of regional public colleges.

  • Natalie Schwartz (Higher Ed Dive)
    A sharp analyst of the robocollege sector, Schwartz highlights OPM contracts, predatory recruitment, and accountability gaps.

  • Michael Vasquez (The Chronicle)
    Known for hard-hitting investigations into for-profit schemes and enrollment deceptions.

  • Stephanie Saul (The New York Times)
    Tackles elite admissions, racial bias, and the mechanisms of legacy advantage.

  • Chris Quintana (USA Today)
    Examines the hidden costs of student debt, accreditation breakdowns, and federal oversight failure.

  • Derek Newton (Forbes)
    Unflinching in his critiques of online education scams, weak accreditation, and credential inflation.

  • David Halperin (Republic Report)
    Legal-minded and relentless, Halperin holds the Department of Education and the for-profit lobby to account.

  • Jon Marcus (Hechinger Report / NPR / The Atlantic)
    A veteran storyteller who humanizes systemic crises—affordability, public disinvestment, and policy drift.

  • Rick Seltzer (Chronicle of Higher Education)
    A seasoned reporter, Seltzer has focused on the intersection of state and federal policy, accreditation issues, and the financialization of higher education. His investigative pieces often highlight how policy shifts impact institutions serving the most vulnerable students, particularly in the community college sector. Seltzer’s ability to distill complex policy changes into accessible reporting has made him an essential voice in higher ed journalism.


Those Who’ve Left the Beat (But Not Forgotten)

  • Eric Kelderman (formerly The Chronicle of Higher Education)
    Kelderman offered deeply researched policy analysis and was one of the few who bridged the world of federal education policy and on-the-ground campus effects. His departure leaves a vacuum in longform institutional memory.

  • Katherine Mangan (formerly The Chronicle)
    Known for profiling marginalized students and faculty, Mangan brought empathy and nuance to her reporting. Her stories exposed how abstract policies hit real people—and her absence is deeply felt.

  • Jesse Singal (formerly The Chronicle / NY Mag)
    Though now better known for controversial takes in broader cultural debates, Singal once wrote incisively about the psychology of higher ed policy and the unproven assumptions behind new academic models.

  • Paul Fain (formerly Inside Higher Ed)
    A go-to source for OPMs and workforce ed, Fain had a unique grasp of the tension between labor markets and academic missions. He now writes the The Job newsletter for Work Shift, with a narrower focus.

  • Kelly Field (formerly The Chronicle / freelance)
    Field’s reporting on federal financial aid and for-profit lobbying was some of the most thorough in the industry. Her exit reflects a broader trend: that deeply informed journalists are often priced or pushed out.

  • Goldie Blumenstyk (semi-retired, The Chronicle)
    A longtime chronicler of innovation narratives and public-private partnerships, Blumenstyk now writes occasionally but is no longer on the frontlines. Her absence from regular coverage marks the end of an era.


Why This Matters

Many of these journalists left not because they lost interest—but because media economics, editorial shifts, or burnout drove them out. The result? Fewer people holding institutions accountable. Fewer watchdogs sniffing out robocollege fraud. Fewer investigations into how DEI is dismantled under political pressure. Less public understanding of how tens of millions became student loan serfs.

In their absence, we see the rise of sponsored content, consultant-driven “thought leadership,” and university propaganda dressed as reporting.

At The Higher Education Inquirer, we believe journalism is not just about reporting the news—it’s about building public memory and resisting amnesia. That’s what these current and former reporters have done. And that’s why we honor both those still in the trenches and those who left with their integrity intact.

If this is truly the Digital Dark Age, then we owe everything to those who kept the lights on—even if only for a while.

Thursday, May 15, 2025

The Epic, Must-Read Coverage in New York Magazine (Derek Newton)


The Epic, Must-Read Coverage in New York Magazine
 
READ IN APP
 

Issue 364

Subscribe below to join 4,663 (+6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.

The Cheat Sheet is free. Although, patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.


New York Magazine Goes All-In, And It’s Glorious

Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.

The headline is brutal and blunt:

Everyone Is Cheating Their Way Through College

To which I say — no kidding.

The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.

Lee and Cheating Everything

The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.

What I learned in this story is that, at Columbia, Lee:

by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.

And:

“Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.

The article says Lee’s admissions essay for Columbia was AI too.

So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.

Continuing from the story:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:

Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.

OpenAI/ChatGPT and Students

Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.

As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.

From the story:

“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

More:

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.

And:

After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

This really is where we are. These students are not outliers.

Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.

But let’s keep going:

now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?

Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.

Had enough? No? Here:

Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.

In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.

More on Wendy:

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.

Want more? There’s also Daniel, a computer science student at the University of Florida:

AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

When a tutor starts writing your paper for you, if you turn that paper in for credit you receive, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.

And the Teachers

Like the coverage of students, the article’s work with teachers is top-rate. And what they have to say is not one inch less important. For example:

Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?

Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated than high-stakes ones (see Issue 64). I don’t really get why professional educators don’t get this.

But returning to the topic:

After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”

To read about Jollimore’s outstanding essay, see Issue 346.

And, of course, there’s more. Like the large section above, I regret copying so much of it, but it’s essential reading:

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.

More from Jollimore:

He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.

I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.

More on professors:

Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”

You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.

I guess also, I apologize. Some educators are engaged in the fight to protect and preserve the value of learning things. I feel that it’s far too few and that, more often than not, they are alone in this. It’s depressing.

Odds and Ends

In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.

This, is pretty great:

Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.

I thought this was interesting too:

Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.

I have a few things to say about this.

Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.

I think the idea that most teachers think AI detectors don’t work is true. It’s not just teachers. Entire schools believe this. It’s an epic failure of messaging, an astonishing triumph of the misinformed. Truth is, as reported above, detectors do vary. Some are great. Some are junk. But the good ones work. Most people continue to not believe it.

And I’ll point out once again that the “studies have shown” thing is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited has made-up citations and research that is highly suspicious, which I pointed out in 2023 (see Issue 216). Frankly, I’ve not seen any good evidence to support this idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.

On the subject of junk AI detectors, there’s also this:

I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.

This is a failure to understand how AI detection works. But also ZeroGPT does not work. Again, it’s no wonder that teachers think AI detection does not work.

Continuing:

It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.

Finally, I am not sure if I missed this at the time, but this is important too:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.

As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.

All wrapped up, this is a show-stopper of an article and I am very happy for the visibility it brings. I wish I could feel that it will make a difference.