Showing posts sorted by relevance for query cheating.

Thursday, May 15, 2025

The Epic, Must-Read Coverage in New York Magazine (Derek Newton)



Issue 364

Subscribe below to join 4,663 (+6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.

The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.


New York Magazine Goes All-In, And It’s Glorious

Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.

The headline is brutal and blunt:

Everyone Is Cheating Their Way Through College

To which I say — no kidding.

The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.

Lee and Cheating Everything

The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.

What I learned in this story is that, at Columbia, Lee:

by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.

And:

“Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.

The article says Lee’s admissions essay for Columbia was AI too.

So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.

Continuing from the story:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:

Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.

OpenAI/ChatGPT and Students

Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.

As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.

From the story:

“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

More:

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.

And:

After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

This really is where we are. These students are not outliers.

Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.

But let’s keep going:

now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?

Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.

Had enough? No? Here:

Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.

In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.

More on Wendy:

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.

Want more? There’s also Daniel, a computer science student at the University of Florida:

AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

When a tutor starts writing your paper for you, if you turn that paper in for credit you receive, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.

And the Teachers

Like the coverage of students, the article’s work with teachers is top-rate. And what they have to say is not one inch less important. For example:

Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?

Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated than high-stakes ones (see Issue 64). I don’t really get why professional educators don’t get this.

But returning to the topic:

After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”

To read about Jollimore’s outstanding essay, see Issue 346.

And, of course, there’s more. Like the large section above, I regret copying so much of it, but it’s essential reading:

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.

More from Jollimore:

He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.

I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.

More on professors:

Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”

You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.

I guess also, I apologize. Some educators are engaged in the fight to protect and preserve the value of learning things. I feel that it’s far too few and that, more often than not, they are alone in this. It’s depressing.

Odds and Ends

In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.

This is pretty great:

Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.

I thought this was interesting too:

Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.

I have a few things to say about this.

Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.

I think the idea that most teachers think AI detectors don’t work is true. It’s not just teachers. Entire schools believe this. It’s an epic failure of messaging, an astonishing triumph of the misinformed. Truth is, as reported above, detectors do vary. Some are great. Some are junk. But the good ones work. Most people continue to not believe it.

And I’ll point out once again that the “studies have shown” thing is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited has made-up citations and research that is highly suspicious, which I pointed out in 2023 (see Issue 216). Frankly, I’ve not seen any good evidence to support this idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.

On the subject of junk AI detectors, there’s also this:

I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 percent AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.

This is a failure to understand how AI detection works. But also ZeroGPT does not work. Again, it’s no wonder that teachers think AI detection does not work.

Continuing:

It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.

Finally, I am not sure if I missed this at the time, but this is important too:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.

As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.

All wrapped up, this is a show-stopper of an article and I am very happy for the visibility it brings. I wish I could feel that it will make a difference.

Friday, November 1, 2024

Student Newspaper Promotes Cheating Services for Cash (Derek Newton)

The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:

Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors

So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.

The article is absent a byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:

This post is provided by a third party who may receive compensation from the products or services they mention.

Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”

We covered similar antics before, in Issue 204.

Did not mean to rhyme. Though, I do it all the time.

Anyway, seeing cheating services in a student newspaper feels new, and doubly problematic — not only increasing the digital credibility of companies that sell deception and misconduct, but perhaps actually reaching target customers. It’s not ideal.

I did e-mail The Daily to ask about the article/advertisement and where they thought their duties sat related to integrity and fraud. They have not replied, and the article is still up.

That article is what you may expect. It starts:

Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.

You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”

Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always check your paper through an AI-detector, they advise. Then, “it should be converted to human-like content.”

The article continues:

Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.

And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:

  • If the text is generated or paraphrased with AI models are most likely that AI plagiarised.

  • If you write the content using custom LLMs with advanced prompts are less liked AI-generated.

  • When you copied word-to-word content from other AI writers.

  • Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism. 

Ah, what’s that again?

Following that, the piece offers step-by-step advice to remove AI content, directing readers to run their text through AI detectors, then paste the flagged content into a different tool and:

Click the “Humanize” button.

The suggested software, the article says:

produces human content for you.

First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.

And, the article goes on:

If you successfully removed AI-generated content with [company redacted], you can use it.

Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.

Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).

Continuing:

Quillbot can accurately rewrite any AI-generated content into human-like content

Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.

As long as the checks keep coming - amiright?

After listing other step-by-step ways to get around AI detectors, the article says:

If you use a good service, you can definitely transform AI-generated content into human-like content.

By that, they mean not getting caught cheating.

None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.

Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection was useless, there would be no market at all for these fingerprint erasure services. 

This article first appeared at Derek Newton's The Cheat Sheet.  

Friday, May 5, 2023

Cheating Giant Chegg Shrinks (Derek Newton)

[Editor's Note: This article first appeared in The Cheat Sheet, the free newsletter on academic integrity and cheating.]


Yesterday, academic cheating company Chegg took yet another major hit on its stock value after the market closed, a decline that continues.

Today, Chegg - which is shockingly listed on the New York Stock Exchange - tumbled below $10 a share. In February 2021, Chegg shares were worth more than $113. In just over two years, Chegg shares have lost more than $100 in value - an Alpine decline of more than 91%.

Yikes.

The panic retreat by investors was initiated by Chegg’s quarterly earnings (Q1 2023), which were not good. The bullets, according to news coverage:

  • Total net revenue was down 7% year over year.

  • Subscription services, which represent 90% of Chegg’s business, were down 3% year over year.

  • Total subscribers were down 5.1 million year over year.

  • Further, continued declines in revenue, subscribers, and profit are projected.

The company and media blamed the decline on AI tools such as ChatGPT - the automated service that can answer academic questions faster than Chegg, and for free.

In the earnings announcement, Chegg’s CEO said:

"since March we saw a significant spike in student interest in ChatGPT. We now believe it’s having an impact on our new customer growth rate."

Two things.

To start, The Cheat Sheet could have saved Chegg’s investors some serious money. Or, made you some, had you shorted Chegg. Back in Issue 68, I wrote:

Bottom line: Chegg as a business is in trouble.

Yup.

This past February, in Issue 193, I wrote:

… Chegg thinks their earnings will be essentially unchanged for 2023 vs 2022. I think they’re dreaming.

They were.

I’d repeated the wisdom of some smart readers who said early, early on that the likes of ChatGPT were going to be a Chegg killer. I agreed and told EdSurge exactly that, also in February (see Issue 193):

Some instructors have opposed companies like Chegg and Course Hero, as trying to get content related to the courses they teach removed can cause a headache. The chatbots represent a new headache, for teachers and possibly also for homework-help companies.

That whole business could be threatened by free tools like ChatGPT, argues Derek Newton, who runs The Cheat Sheet, a newsletter that covers academic dishonesty.

For Newton, the primary motivation of a student using homework-help services is laziness or a lack of preparedness. And so having a free alternative that can give answers to questions — like ChatGPT — could shrink the number of students who are willing to pay.

In that Issue I wrote:

It’s too early to tell if ChatGPT will dent Chegg and its irresponsible ilk - but I can’t really see how it won’t.

And so it came to pass.

It is clear now that Chegg’s recent announcement of a partnership with ChatGPT (see Issue 203) was a desperate Hail Mary. And there’s no reason to think it will work, no reason to think Chegg’s decline won’t continue.

It also answers a question I’d been wrestling with for years - whether Chegg’s investors (see Issue 142) knew its core business was academic misconduct or not. This most recent investment retreat proved to me that they did. They only left when a better, more efficient cheater started eating their profits.

But a wise confidant and reader texted to say my question was academic - Chegg’s investors know now. He’s right.

When a free answer site takes away your customers, it becomes very clear very quickly what you’re actually selling.

Finally, a reminder that a collapsing valuation is not Chegg’s only problem.

As it happens, I checked in last week on the legal challenge by Pearson, against Chegg (see Issue 55). The suit is still active. And if Pearson wins, it could decapitate Chegg’s entire value proposition - selling the answers to questions they do not own. Chegg also continues to face investor legal challenges (see Issue 163). Since this recent stock evaporation essentially confirms that Chegg was a cheating provider all along, it’s hard to see how this recent news hurts investors’ claims.


Monday, December 30, 2024

2025 Will Be Wild!

2025 promises to be a disruptive year in higher education and society, not just in DC but across the US. While some can now see two demographic downturns, worsening climate conditions, and a Department of Education in transition, there are other less predictable and lesser-known trends and developments that we hope to cover at the Higher Education Inquirer.

The Trump Economy

Folks are expecting a booming economy in 2025. Crypto and AI mania, along with tax cuts and deregulation, mean that corporate profits should be enormous. The Roaring 2020s will be historic for the US, just as the 1920s were, with little time and thought spent on long-range issues such as climate change and environmental destruction, economic inequality, or the potential for an economic crash.  

A Pyramid, Two Cliffs, a Wall and a Door  

HEI has been reporting on enrollment declines since 2016. Smaller numbers of young people, and large numbers of elderly Baby Boomers with their health and disability concerns, spell trouble ahead for states that may not consider higher education a priority. We'll have to see how Republican promises of mass deportations turn out, but the threats alone could be chaotic. There will also be controversies over the Trump/Musk plan to increase the number of H-1B visas.

The Shakeup at ED

With Linda McMahon at the helm of the Department of Education, we should expect more deregulation, more cuts, and less student loan debt relief. Mike Rounds has introduced a Senate bill to close ED, but the bill does not appear likely to pass. Diversity, Equity, and Inclusion (DEI) efforts may take a hit. However, online K-12 education, robocolleges, and the surviving online program managers could thrive in the short run.

Student Loan Debt 

Student loan debt is expected to rise again in 2025. After a brief respite from 2020 to late 2024, and with some borrowers receiving debt forgiveness, untold millions will be expected to make payments they may not be able to afford. How this problem affects an otherwise booming economy has not been receiving much media attention.

Policies Against Diversity, Equity, and Inclusion

This semester at highly selective institutions, Black first-year student enrollment dropped by 16.9 percent. At MIT, the percentage of Black students decreased from 15 percent to 5 percent. At Harvard Law School, the number of Black law students has been cut by more than half.  Florida, Texas, Alabama, Iowa and Utah have banned diversity, equity and inclusion (DEI) offices at public universities. Idaho, Indiana and Kansas have prohibited colleges from requiring diversity statements in hiring and admissions. The resistance so far has been limited.

Failing Schools and Strategic Partnerships 

People should expect more colleges to fail in the coming months and years, with the possibility that the number of closures could accelerate. Small religious schools are particularly vulnerable. Colleges may further privatize their operations to save money and make money in an increasingly competitive market.

Campus Protests and Mass Surveillance

Protests may be limited by fear of persecution, even though there are a number of legitimate issues to protest, including human-induced climate change, genocide in Palestine, mass deportations, and the resurgence of white supremacy. Things could change if conditions become so extreme that a critical mass is willing to sacrifice. Other issues, such as the growing class war, could bubble up. But mass surveillance and stricter campus policies have been put in place at elite and name-brand schools to reduce the odds of conflict and disruption.

The Legitimization of Robocollege Credentials    

Online higher education has become mainstream despite questions about its efficacy. Billions of dollars will be spent on ads for robocolleges. Religious robocolleges like Liberty University and Grand Canyon University should continue to grow while more traditional religious schools shrink. Southern New Hampshire University, Purdue Global, and Arizona Global will continue to enroll folks with limited federal oversight. Adult students at this point are still willing to take on debt, especially if it leads to job promotions where an advanced credential is needed.


Apollo Global Management is still working to unload the University of Phoenix. The sale of the school to the Idaho Board of Education or some other state organization remains in question.

AI and Cheating 

AI will continue to affect society, promising to add some jobs and threatening to take others. One less visible way AI affects society is academic cheating. As long as there have been grades and competition, students have cheated. But now it's an industry. Even the concept of academic dishonesty has changed over the years. One could argue that cheating has been normalized, as Derek Newton of The Cheat Sheet has chronicled. Academic research can also be mass-produced with AI.

Under the Radar

A number of schools, companies, and related organizations have flown under the radar, but that could change. These include Maximus and other student loan servicers, Guild Education, EducationDynamics, South University, Ambow Education, National American University, Perdoceo, DeVry University, and Adtalem.

Related links:

Survival of the Fittest

The Coming Boom 

The Roaring 2020s and America's Move to the Right

Austerity and Disruption

Dozens of Religious Schools Under Department of Education Heightened Cash Monitoring

Shall we all pretend we didn't see it coming, again?: higher education, climate change, climate refugees, and climate denial by elites

The US Working-Class Depression: "Let's all pretend we couldn't see it coming."

Tracking Higher Ed’s Dismantling of DEI (Erin Gretzinger, Maggie Hicks, Christa Dutton, and Jasper Smith, Chronicle of Higher Education). 

Friday, November 22, 2024

Accreditor ACCSC Again Grants Maximum Renewal To Troubled For-Profit Colleges (David Halperin)

College accreditor ACCSC has renewed approval of four for-profit colleges owned by California-based International Education Corp. (IEC), a company that was forced to shut down many of its campuses in the past year after a U.S. Department of Education investigation revealed the schools were rigging student entrance exams and engaging in other fraudulent conduct.

In a memo dated November 15, ACCSC notified the public that it had renewed the accreditation of four IEC-owned schools for five years, the maximum renewal period that ACCSC grants to colleges. Three of the schools — in Gardena, Riverside, and Sacramento, California — are branded as UEI College, while the fourth, called United Education Institute, is in Las Vegas.

Abuses at IEC schools

In February, the Department of Education terminated financial aid eligibility to another IEC-owned chain called Florida Career College, and the school closed. As part of the resolution of that matter, the CEO of IEC, Fardad Fateri, stepped down. The Department acted because it found, as described in a detailed 38-page letter sent to FCC in April 2023, blatant cheating at FCC on “ability-to-benefit” entrance exams for students without a high school diploma.

Republic Report, relying on interviews with numerous FCC staff, had first exposed that long-running rampant misconduct, along with other blatant recruiting and financial abuses, at FCC. FCC’s misbehavior lured numerous students — veterans, single parents, immigrants, and other struggling Americans — into low-quality school programs that left them deep in debt and without the career advancement they sought.

The Department’s February settlement agreement with IEC indicated that the Department had an open investigation of potential violations at UEI similar to those found at FCC. The settlement barred UEI from administering ATB tests going forward. As part of the settlement, the Department agreed to end its investigation of UEI, if UEI complied with the settlement agreement. That investigation of UEI is apparently now over.

However, a September 2023 letter from ACCSC to IEC revealed that the company was also under investigation by California’s attorney general. It’s unclear whether that investigation remains open.

ACCSC was not the accreditor of Florida Career College, but it does accredit some of the UEI campuses. Soon after the Department announced in April 2023 that it was moving to cut off federal student aid to FCC, ACCSC placed UEI College and International Education Corp. on “System-Wide Warning” status, citing the Department’s findings that senior IEC leaders knew of and encouraged the cheating, and also citing IEC’s alleged failure to inform ACCSC of the Department’s investigation in a timely manner. ACCSC also noted that IEC had voluntarily halted ability-to-benefit testing and enrollment at UEI; the accreditor’s May 2023 order included a requirement that such testing and enrollment be suspended — suggesting already that there might be questions about ATB testing at UEI.

Yet now ACCSC has renewed accreditation for IEC/UEI schools for the maximum period, the same renewal it would grant to the best-behaved schools. The renewals are effective back to dates in 2020 and 2022, reflecting in part that ACCSC delayed decisions on renewal while the schools were being evaluated, so the schools must seek renewal again soon. But it's fair to ask whether the full five-year renewals were appropriate, or whether, instead, ACCSC continues to tolerate college abuses, to the detriment of both students and the U.S. taxpayers who fund the hundreds of millions of dollars in federal financial aid that have flowed to ACCSC schools.

ACCSC executive director Michale McComis did not respond to a request for comment regarding the renewal for the IEC schools.

Abuses at other ACCSC-accredited schools

The question of ACCSC’s tolerance for predatory college abuses is again squarely presented as ACCSC faces its own next review: its application to be renewed in 2026 by the Department of Education as a recognized accreditor, a status that allows schools it accredits to be eligible for federal student grants and loans. The maximum renewal period for this gatekeeper status is also five years. That review process is already underway at the Department.

The last time ACCSC was up for renewal, in 2021, the Department, citing failures by ACCSC in curbing long-running abuses at another awful predatory college operation, the Center for Excellence in Higher Education (owner of the now-shuttered Independence University), delayed renewal of recognition, required ACCSC to explain its conduct, and ultimately extended ACCSC for three years instead of five — although, given the way the process played out, the practical effect, disappointingly, was a five-year renewal.

Data shows many ACCSC schools have left students worse off than when they started.

Since ACCSC’s last review, the accreditor has engaged in other troubling behavior.

Most notably, as Republic Report first reported, in July 2023, ACCSC had watched while Atlantis University, a Miami-based for-profit school, acted in blatant violation of an ACCSC rule governing the use of “branch campuses” tied to a school’s central campus. Atlantis’s executive director was, at the time, the chair of ACCSC.

The Atlantis branch campus, called Florida Palms University, shut down soon after our report. ACCSC then put Atlantis on warning status, via a letter that, as we noted at the time, was heavily redacted in the version released to the public. Whatever problems the many blacked-out passages of the October 2023 letter concealed stood in sharp contrast to ACCSC’s unconditional five-year renewal of Atlantis in December 2022. ACCSC removed Atlantis from warning status by February 2024.

This year, after Republic Report had repeatedly learned valuable information about the bad behavior of some ACCSC-accredited schools through the public release of detailed letters from the accreditor to schools like UEI and Atlantis (at least the unredacted portions), ACCSC moved away from transparency and accountability. It started releasing, instead of the actual letters to schools, vague summaries that keep the public in the dark about what is actually happening.

ACCSC is also the accreditor of troubled Connecticut-based for-profit Paier College, which faces possible closure after losing access to federal student aid and having been sued for deceptive practices by the state’s attorney general. ACCSC placed Paier on warning status in June, citing low graduation rates and weak validation of faculty credentials. But that action by ACCSC came six months after the school, facing scrutiny from the U.S. Department of Education, voluntarily withdrew from eligibility for federal student grants and loans.

Another ACCSC school, Career College of Northern Nevada (CCNN), abruptly closed in February, replacing its website with a closure notice and literally locking students out of the building.

In June 2023, yet another ACCSC-accredited school, Hussian College, suddenly shut down. In June 2022, ACCSC had put Hussian on system-wide warning, citing concerns about student achievement at the schools. But ACCSC removed the warning and renewed Hussian’s accreditation in December 2022.

ACCSC also accredits Florida’s for-profit Southeastern College. There is much evidence suggesting that that school, owned by ultra-rich Floridians Arthur and Belinda Keiser, effectively receives improper subsidies from Keiser University, a non-profit college controlled by the Keisers.

ACCSC’s renewal application and the new Trump administration

Members of the public have until December 6 to submit written comments to the Department of Education regarding ACCSC’s bid for renewal.

The Biden administration, and U.S. Secretary of Education Miguel Cardona, to their credit, took much more seriously the Department's obligation to review accreditors for their vigilance in guarding against predatory college abuses than the first Trump administration and Secretary Betsy DeVos did. If the second Trump administration and new education secretary pick Linda McMahon truly want to help students, and truly want to implement the incoming administration's professed commitment to rooting out waste, fraud, and abuse in federal government programs, they should continue the Biden team's work of holding predatory colleges accountable — and also hold accountable the accreditors that allow such abuses to persist.

[Editor's note: This article originally appeared on Republic Report.] 

Thursday, July 1, 2021

The Growth of "RoboColleges" and "Robostudents"


In a previous Higher Education Inquirer article, I presented frightening full-time faculty numbers at some large online universities, which I call "robocolleges." Full-time faculty at these robocolleges are, in fact, nearly nonexistent. Bear in mind that all of them are regionally accredited, the highest level of institutional accreditation, and the list includes well-known public university systems as well as for-profit ones.

Robocolleges have deskilled instruction by paying teams of workers, some qualified and some not, to write content, while computer programs perform instructional and management tasks. Learning management systems with automated instruction programs go by different names, and their mechanisms are proprietary. As professor jobs are deskilled, tasks can be farmed out at reduced cost.

Besides the human content creators, who may be given instructional titles, other staff members at robocolleges are paid to communicate with students about their progress. The assumption is that managing work this way significantly reduces costs, and it does, at least in the short and medium terms. However, instructional costs are frequently replaced by marketing and advertising expenses to pitch the schools to prospective students and their families. Companies like EducationDynamics and Guild Education have filled the niche of promoting robocolleges to workers at reduced cost, but their overall impact is minimal.

Meanwhile, companies like Chegg profit from this form of learning, helping students game the system in greater numbers, in essence creating robostudents.

The higher education business model of reducing labor power and faculty costs is not limited to for-profit colleges. Community colleges also rely on a small number of full-time faculty and armies of low-wage contingent labor.

In some cases, colleges and universities, including many brand-name schools, use outside companies, online program managers (OPMs), to run their online programs, with OPMs like 2U taking as much as 60 percent of the revenues. OPMs can perform a variety of jobs but are best known for their work in enrollment and retention. Prospective students may believe they are talking to representatives of a particular university when in fact they are talking to someone from an outside firm. Noodle has disrupted the OPM model by selling its services à la carte, but only time will tell whether it has an impact, or whether schools will merely find less costly outsourced service providers.

Outsourcing has been a reality in US higher education for decades. Automation also has a place in education, as it should, when it performs menial tasks such as taking roll or doing preliminary checks for student cheating. It's likely that more schools will become more robotic in nature to reduce organizational expenses. But what are the long-term consequences for student outcomes when automation is used to perform higher-level tasks, and when outsourced individuals act in the name of brand-name colleges?

To get a small glimpse of this robocollege phenomenon: these schools cumulatively have about 3,000 full-time instructors for more than half a million students.

American Intercontinental University: 51 full-time instructors for about 8,700 students.
American Public University System: 345 full-time instructors for more than 50,000 students.
Aspen University: 34 full-time instructors for about 9,500 students.
Capella University: 216 full-time instructors for about 38,000 students.
Colorado State University Global: 34 full-time instructors for 12,000 students.
Colorado Technical University: 59 full-time instructors for 26,000 students.
DeVry University (online): 53 full-time instructors for about 17,000 students.
Grand Canyon University: 461 full-time instructors for 103,000 students.*
Liberty University: 1,072 full-time instructors for more than 85,000 students.*
Purdue University Global: 346 full-time instructors for 38,000 students.
South University: 0 full-time instructors for more than 6,000 students.
Southern New Hampshire University: 164 full-time instructors for 104,000 students.
University of Arizona Global Campus: 194 full-time instructors for about 35,000 students.
University of Maryland Global Campus: 193 full-time instructors for 60,000 students.
University of Phoenix: 127 full-time instructors for 96,000 students.
Walden University: 206 full-time instructors for more than 50,000 students.

*Most of these full-time instructors are faculty at the physical campuses.
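As a rough check on the tallies, the figures listed above can be totaled with a short script. The numbers are used exactly as listed; per the footnote, the Grand Canyon and Liberty counts include instructors at their physical campuses, so the online-only totals run lower than the raw sum.

```python
# Full-time (F/T) instructor and enrollment figures as listed above:
# school -> (F/T instructors, students)
schools = {
    "American Intercontinental University": (51, 8_700),
    "American Public University System": (345, 50_000),
    "Aspen University": (34, 9_500),
    "Capella University": (216, 38_000),
    "Colorado State University Global": (34, 12_000),
    "Colorado Technical University": (59, 26_000),
    "DeVry University (online)": (53, 17_000),
    "Grand Canyon University": (461, 103_000),   # includes campus-based F/T faculty
    "Liberty University": (1_072, 85_000),       # includes campus-based F/T faculty
    "Purdue University Global": (346, 38_000),
    "South University": (0, 6_000),
    "Southern New Hampshire University": (164, 104_000),
    "University of Arizona Global Campus": (194, 35_000),
    "University of Maryland Global Campus": (193, 60_000),
    "University of Phoenix": (127, 96_000),
    "Walden University": (206, 50_000),
}

total_ft = sum(ft for ft, _ in schools.values())
total_students = sum(students for _, students in schools.values())

print(f"{total_ft:,} F/T instructors for {total_students:,}+ students")
print(f"~{total_students / total_ft:.0f} students per F/T instructor overall")
```

The raw sum comes to roughly 3,500 full-time instructors for more than 730,000 students, or on the order of 200 students per full-time instructor; excluding the campus-based faculty at Grand Canyon and Liberty brings the instructor total down toward the "about 3,000" (or fewer) cited above.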