Showing posts sorted by date for query value.

Tuesday, June 10, 2025

The Misleading Myth of the College Premium

For decades, the so-called college premium—the idea that earning a college degree guarantees significantly higher lifetime earnings compared to a high school diploma—has been used to sell higher education to the American public. Politicians, economists, and university marketing teams alike have touted the promise of upward mobility through higher education. But this narrative is increasingly misleading, especially for working-class, first-generation, and marginalized students.

The College Premium: Averages vs. Reality

At its core, the college premium is based on averages. Federal and private data sources often claim that college graduates earn, on average, $1 million more over their lifetimes than those with only a high school diploma. But averages conceal enormous variation. They ignore who goes to college, where they go, what they study, and how much they borrow to get there.

That $1 million premium is skewed by high earners—doctors, lawyers, engineers, and business executives who often come from wealthier families and attend elite institutions. Meanwhile, a large and growing number of students graduate with low-paying degrees, insurmountable debt, and job prospects that resemble those of high school graduates from decades past.

Who Gets Misled—and Hurt

Students from working-class backgrounds often attend less selective colleges and universities—regional public schools, underfunded community colleges, or even predatory for-profit institutions. These students are more likely to work while enrolled, take longer to graduate, or drop out altogether. The result: little to no earnings gain, but significant debt burdens. For them, the college premium is often negative.

Systemic racism in the labor market erodes the supposed premium for Black and Latino graduates. According to the Economic Policy Institute, Black college graduates earn roughly 20% less than white peers with the same degrees. They also face higher unemployment rates, especially in economic downturns. When combined with higher average debt loads, the risk-to-reward ratio becomes starkly inequitable.

Not all degrees yield high returns. Many students major in education, social work, or the arts—not because these fields are unworthy, but because they are essential to society. Yet these professions are often undervalued and underpaid. Graduates may find themselves stuck with large student loans and salaries that barely cover basic living expenses. In these cases, the premium barely materializes.

Roughly 40% of college students in the U.S. fail to graduate within six years. These students take on debt but receive none of the (alleged) earnings boost associated with a degree. They are the most vulnerable population—often saddled with loans they can't discharge in bankruptcy and credentials that offer no labor market value.

A Shifting Landscape

The labor market has changed dramatically. Credential inflation means more jobs require degrees without necessarily offering higher pay. Meanwhile, automation, outsourcing, and gig work have made many once-stable jobs insecure. A bachelor’s degree is no longer the ticket to the middle class that it once was, especially for those without access to elite networks and institutions.

At the same time, the cost of college has skyrocketed. Student loan debt now tops $1.7 trillion, and repayment burdens are keeping young adults from buying homes, saving for retirement, or starting families. The financial risks of college now rival the benefits, especially for the very populations who are promised it will change their lives.

Toward a More Honest Conversation

Rather than clinging to the college premium as a universal truth, policymakers, educators, and the public must grapple with its limits. We need transparent data on outcomes by institution, major, race, and income. We must invest in alternative pathways, including apprenticeships, vocational training, and debt-free community college. We must hold bad actors accountable, including for-profit colleges and institutions with high debt-to-earnings ratios. And we must stop blaming individuals for “bad choices” when the system itself is rigged to benefit the privileged few.


The Higher Education Inquirer will continue to investigate and report on the disparities, disinformation, and systemic failures within U.S. higher education—because transparency and justice matter more than mythology.

Sunday, June 8, 2025

Liberty University Online: Master’s Degree Debt Factory


Liberty University, one of the largest Christian universities in the United States, has built an educational empire by promoting conservative values and offering flexible online degree programs to hundreds of thousands of students. But behind the pious branding and patriotic marketing lies a troubling pattern: Liberty University Online has become a master’s degree debt factory, churning out credentials of questionable value while generating billions in student loan debt.

From Moral Majority to Mass Marketing

Founded in 1971 by televangelist Jerry Falwell Sr., Liberty University was created to train “Champions for Christ.” In the 2000s, the school found new life through online education, transforming from a small evangelical college into a mega-university with nearly 95,000 online students, the vast majority of them enrolled in nontraditional and graduate programs.

By leveraging aggressive digital marketing, religious appeals, and promises of career advancement, Liberty has positioned itself as a go-to destination for working adults and military veterans seeking master's degrees. But this rapid expansion has not come without costs — especially for the students who enroll.

A For-Profit Model in Nonprofit Clothing

Though technically a nonprofit, Liberty University operates with many of the same profit-driven incentives as for-profit colleges. Its online programs generate massive revenues — an estimated $1 billion annually — thanks in large part to federal student aid programs. Students are encouraged to take on loans to pay for master’s degrees in education, counseling, business, and theology, among other fields. Many of these programs are offered in accelerated formats that cater to working adults but often lack the rigor, support, or job placement outcomes associated with traditional graduate schools.

Federal data shows that many Liberty students, especially graduate students, take on substantial debt. According to the U.S. Department of Education’s College Scorecard, the median graduate student debt at Liberty can range from $40,000 to more than $70,000, depending on the program. Meanwhile, the return on investment is often dubious, with low median earnings and high rates of student loan forbearance or default.

Exploiting Faith and Patriotism

Liberty’s marketing strategy is finely tuned to appeal to Christian conservatives, homeschoolers, veterans, and working parents. By framing education as a moral and patriotic duty, Liberty convinces students that enrolling in an online master’s program is both a personal and spiritual investment. Testimonials of “calling” and “purpose” are common, but the financial realities can be harsh.

Many students report feeling misled by promises of job readiness or licensure, especially in education and counseling fields, where state licensing requirements can differ dramatically from what Liberty prepares students for. Others cite inadequate academic support and difficulties transferring credits.

The university spends heavily on recruitment and retention, often at the expense of student services and academic quality.

Lack of Oversight and Accountability

Liberty University benefits from minimal federal scrutiny compared to for-profit schools, largely because of its nonprofit status and political connections. The institution maintains close ties to conservative lawmakers and was a vocal supporter of the Trump administration, which rolled back regulations on higher education accountability.

Despite a series of internal scandals — including financial mismanagement, sexual misconduct cover-ups, and leadership instability following the resignation of Jerry Falwell Jr. — Liberty has continued to expand its online presence. Its graduate programs, particularly in education and counseling, remain cash cows that draw in federal loan dollars with few checks on student outcomes.

A Cautionary Tale in Christian Capitalism

The story of Liberty University Online is not just about one school. It reflects a broader trend in American higher education: the merging of religion, capitalism, and credential inflation. As more employers demand advanced degrees for mid-level jobs, and as traditional institutions struggle to adapt, schools like Liberty have seized the opportunity to market hope — even if it comes at a high cost.

For students of faith seeking upward mobility, Liberty promises a path to both spiritual and professional fulfillment. But for many, the result is a diploma accompanied by tens of thousands in debt and limited economic return. The moral reckoning may not be just for Liberty University, but for the policymakers and accreditors who continue to enable this lucrative cycle of debt and disillusionment.


The Higher Education Inquirer will continue to investigate Liberty University Online and similar institutions as part of our ongoing series on higher education debt, inequality, and regulatory failure.

Saturday, June 7, 2025

Ivy Tech in Indiana to lay off 200 employees (WSBT-TV)

More than 200 jobs at Ivy Tech are being eliminated due to a cut in state funding, and some of those losses are hitting South Bend. The student interviewed in the story raises questions about the value of a community college education and about finding gainful employment after graduation.


Friday, June 6, 2025

Consumer Alert: Lead Generators Still Lurking for Bodies

Predatory lead generators are still lurking on the internet, looking for their next victims. Their ads continue to sell subprime online degrees from robocolleges like Purdue Global, Colorado Tech, Berkeley College, Full Sail, Walden University, and Liberty University Online. Once you provide your name and phone number, the calls begin. These programs may be of questionable value, and some may lead to a lifetime of debt. Buyer beware.

This ad and its lead generator originate from TriAd Media Solutions of Nutley, New Jersey.


Down the rabbit hole...
Wednesday, June 4, 2025

News that Salesforce is Buying Moonhub, an AI Hiring Company (Glen McGhee)

From the perspective of Maurizio Lazzarato’s concept of multi-dimensional financialization, Salesforce’s acquisition of Moonhub—a startup building AI tools for hiring—carries significance far beyond a simple business or technological transaction. Lazzarato’s framework invites us to see this move as a deepening of the financialized, machinic logic that now organizes work, subjectivity, and power relations under neoliberal capitalism.


Machinic Subjugation and Algorithmic Management
Lazzarato distinguishes between social subjection (the classic forms of subject formation, like interpellation) and machinic subjugation, in which humans and machines are integrated into assemblages that operate beyond conscious control [4]. The acquisition of Moonhub by Salesforce—an enterprise software giant—accelerates the deployment of AI-driven systems that automate and mediate hiring, evaluation, and onboarding. These systems function as machinic assemblages: they process data, sort candidates, and make decisions, often without transparent human oversight.
In Lazzarato’s terms, this is not just about efficiency or new tools; it is about the extension of machinic subjugation into the labor market. Workers, job seekers, and even HR professionals become nodes in a human–machine network, subject to algorithmic evaluation and control. This process depersonalizes and depoliticizes hiring decisions, shifting agency from individuals or collectives to automated systems [4][5].

Financialization of Work and Subjectivity
For Lazzarato, financialization is not merely the expansion of the financial sector or the growth of debt, but a regime that reorganizes all social relations—including labor—according to the logics of risk, speculation, and investment. The integration of AI into hiring, as exemplified by Moonhub, reflects this logic:
  • Labor as Human Capital: Workers are increasingly treated as assets to be evaluated, optimized, and traded, much like financial instruments.
  • Risk and Profiling: AI tools profile candidates, assessing their “fit” and potential risk for employers, mirroring the credit-scoring and risk-assessment practices of finance.
  • Continuous Evaluation: The boundary between work and non-work blurs, as data about individuals is continuously collected and analyzed to inform employment decisions, extending the logic of surveillance and control [5].

Subjectivation and the Erosion of Agency
A core concern in Lazzarato’s work is how new technologies of power erode the conditions for autonomous subjectivation. AI-driven hiring systems, like those developed by Moonhub, further restrict the space for workers to constitute themselves as subjects outside the logic of data-driven profiling and risk management. As Phoebe Moore notes, these systems create “structurally and objectively unequal conditions within subjective, and unequal, social relations,” threatening the “right to the subject”—the capacity for individuals to form themselves outside algorithmic governance [5].

Consolidation of Corporate Power and Social Ontology
Salesforce’s absorption of Moonhub is also a consolidation of infrastructural power in the hands of a few tech-finance giants. For Lazzarato, this is part of the broader process by which financialized corporations not only dominate markets but also shape the very ontology of work, value, and social relations. The acquisition means that the logic of machinic subjugation, financialization, and algorithmic management becomes further entrenched as the default mode of organizing labor across sectors.

Summary Table: Lazzarato’s Lens on Salesforce–Moonhub

Dimension        | Conventional View                | Lazzarato’s Multi-Dimensional View
Technology       | Efficiency, automation in hiring | Machinic subjugation, depersonalized control
Labor            | Improved matching, productivity  | Financialized subjectivity, continuous profiling
Power            | Market competition               | Corporate consolidation, infrastructural power
Subjectivity     | Empowered job seekers            | Erosion of agency, right to the subject at risk
Social Relations | Neutral innovation               | Reorganization of power, intensified inequalities

From Lazzarato’s perspective, Salesforce’s acquisition of Moonhub is emblematic of how financialization and machinic subjugation are reshaping the labor market and subjectivity itself. It marks a further step in the transformation of work into a domain governed by algorithms, risk management, and continuous evaluation, where human agency and collective resistance are increasingly circumscribed by the imperatives of digital, financialized capitalism [4][5].
  1. https://ppl-ai-file-upload.s3.amazonaws.com/web/direct-files/attachments/48581154/097bc5b0-064b-4500-bcfe-cdf3fdb9c6e2/paste-3.txt
  2. https://techcrunch.com/2025/06/02/salesforce-buys-moonhub-a-startup-building-ai-tools-for-hiring/
  3. https://techstrong.ai/agentic-ai/salesforce-picks-up-moonhub-team-but-says-it-isnt-an-acquisition/
  4. https://philarchive.org/archive/CHRDSA
  5. https://phoebevmoore.wordpress.com/2024/05/13/workers-right-to-the-subject-the-social-relations-of-data-production/
  6. https://economictimes.com/tech/artificial-intelligence/salesforce-acquires-ai-recruiting-startup-moonhub-weeks-after-informatica-deal/articleshow/121590582.cms
  7. https://www.techi.com/salesforce-acquires-moonhub-ai-hiring/
  8. https://www.maginative.com/article/salesforce-just-bought-a-stealthy-ai-hiring-startup-heres-why-it-matters/
  9. https://www.moonhub.ai/moonhub-team-joins-salesforce
  10. https://finance.yahoo.com/news/salesforce-buys-moonhub-startup-building-185543093.html
  11. https://thelettertwo.com/2025/06/02/salesforce-snaps-up-moonhub-team-as-ai-hiring-arms-race-escalates/
  12. https://www.academia.edu/69171494/FINANCING_PROGRAMSIN_THE_CONTEXT_OF_ARTIFICIAL_INTELLIGENCE_AT_THE_GLOBAL_LEVEL
  13. https://www.semanticscholar.org/paper/Dark-pools-:-the-rise-of-A.I.-trading-machines-and-Patterson/5995647eaf9ee62036054e06921febbb7cc18d79
  14. https://journals.openedition.org/ardeth/646?lang=it
  15. https://www.academia.edu/71441086/Algorithms_Creating_Paradoxes_of_Power_Explore_Exploit_Embed_Embalm?uc-sb-sw=4776224
  16. https://densem.edu/HomePages/book-search/466732/IstitutoTecnicoTecnologicoParitarioFrancescoBaracca.pdf
  17. https://journals.sagepub.com/doi/10.1177/2053951716662897
  18. https://www.linkedin.com/posts/pramod-gosavi-b32a71_salesforce-buys-moonhub-a-startup-building-activity-7335541612942434304-nfI8
  19. https://www.salesforce.com/news/stories/salesforce-signs-definitive-agreement-to-acquire-convergence-ai/
  20. https://booksrun.com/9780316414210-the-war-on-normal-people-the-truth-about-americas-disappearing-jobs-and-why-universal-basic-income-is-our-future-reprint-edition
  21. https://visbanking.com/revolutionizing-financial-hiring-how-ai-powered-talent-tools-transform-recruitment/

Monday, June 2, 2025

“The Obsolete Man”: A Twilight Zone Warning for the Trump Era and the Age of AI

Rod Serling’s classic 1961 episode of The Twilight Zone, “The Obsolete Man,” offers a timeless meditation on authoritarianism, conformity, and the erasure of humanity. In it, a quiet librarian, Romney Wordsworth (played by Burgess Meredith), is deemed “obsolete” by a dystopian state for believing in books and God—symbols of individual thought and spiritual meaning. Condemned by a totalitarian chancellor and scheduled for execution, Wordsworth calmly exposes the cruelty and contradictions of the regime, ultimately reclaiming his dignity by refusing to bow to tyranny.

Over 60 years later, “The Obsolete Man” feels less like fiction and more like a documentary. The Trump era, supercharged by the rise of artificial intelligence and a war on truth, has brought Serling’s chilling parable into sharper focus.

The Authoritarian Impulse

Donald Trump’s presidency—and his ongoing influence—has been marked by a deep antagonism toward democratic institutions, intellectual life, and perceived “elites.” Journalists were labeled “enemies of the people.” Scientists and educators were dismissed or silenced. Books were banned in schools and libraries, and curricula were stripped of “controversial” topics like systemic racism or gender identity.

Like the chancellor in The Obsolete Man, Trump and his allies seek not just to discredit dissenters but to erase their very legitimacy. In this worldview, librarians, teachers, and independent thinkers are expendable. What matters is loyalty to the regime, conformity to its ideology, and performance of power.

Wordsworth’s crime—being a librarian and a believer—is mirrored in real-life purges of professionals deemed out of step with a hardline political agenda. Public educators and college faculty who challenge reactionary narratives have been targeted by state legislatures, right-wing activists, and billionaire-backed think tanks. In higher education, departments of the humanities are being defunded or eliminated entirely. Faculty governance is undermined. The university, once a space for critical inquiry, is increasingly treated as an instrument for ideological control—or as a business to be stripped for parts.

The Age of AI and the Erasure of the Human

While authoritarianism silences the human spirit, artificial intelligence threatens to replace it. AI tools, now embedded in everything from hiring algorithms to classroom assessments, are reshaping how knowledge is produced, disseminated, and controlled. In the rush to adopt these technologies, questions about ethics, bias, and human purpose are often sidelined.

AI systems do not “believe” in anything. They do not feel awe, doubt, or moral anguish. They calculate, replicate, and optimize. In the hands of authoritarian regimes or profit-driven institutions, AI becomes a tool not of liberation, but of surveillance, censorship, and disposability. Workers are replaced. Students are reduced to data points. Librarians—like Wordsworth—are no longer needed in a world where books are digitized and curated by opaque algorithms.

This is not merely a future problem. It's here. Algorithms already determine who gets hired, who receives financial aid, and which students are flagged as “at risk.” Predictive policing, automated grading, and AI-generated textbooks are not the stuff of science fiction. They are reality. And those who question their fairness or legitimacy risk being labeled backward, inefficient—obsolete.

A Culture of Disposability

At the heart of “The Obsolete Man” is a question about value: Who decides what is worth keeping? In Trump’s America and in the AI-driven economy, people are judged by their utility to the system. If you're not producing profit, performing loyalty, or conforming to power, you can be cast aside.

This is especially true for the working class, contingent academics, and the so-called “educated underclass”—a growing population of debt-laden degree holders trapped in precarious jobs or no jobs at all. Their degrees are now questioned, their labor devalued, and their futures uncertain. They are told that if they can’t “pivot” or “reskill” for the next technological shift, they too may be obsolete.

The echoes of The Twilight Zone are deafening.

Resistance and Redemption

Yet, as Wordsworth demonstrates in his final moments, resistance is possible. Dignity lies in refusing to surrender the soul to the machine—or the regime. In his quiet defiance, Wordsworth forces the chancellor to confront his own cowardice, exposing the hollow cruelty of the system.

In our time, that resistance takes many forms: educators who continue to teach truth despite political pressure; librarians who fight book bans; whistleblowers who challenge surveillance technologies; and students who organize for justice. These acts of courage and conscience remind us that obsolescence is not a matter of utility—it’s a judgment imposed by those in power, and it can be rejected.

Rod Serling ended his episode with a reminder: “Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man—that state is obsolete.”

The question now is whether we will heed the warning. In an age where authoritarianism and AI threaten to render us all obsolete, will we remember what it means to be human?


The Higher Education Inquirer welcomes responses and reflections on how pop culture can illuminate our present crises. Contact us with your thoughts or your own essay proposals.

Wednesday, May 21, 2025

How the New Cryptocurrency Bill Could Accelerate a US Financial Collapse

The United States Congress is on the brink of passing a sweeping cryptocurrency bill that, under the guise of fostering innovation, may be paving the way for the next financial crisis. While crypto lobbyists and venture capitalists tout the legislation as a long-overdue framework for digital assets, critics warn that the bill’s deregulatory nature undermines consumer protections, enables fraud, and weakens the federal government’s ability to prevent a systemic collapse.

The proposed legislation—championed by a bipartisan coalition of lawmakers with significant donations from the crypto industry—shifts regulatory authority from the Securities and Exchange Commission (SEC) to the more industry-friendly Commodity Futures Trading Commission (CFTC). This move effectively reclassifies most cryptocurrencies as commodities rather than securities, shielding them from stringent disclosure and investor protection requirements.

The Bill’s Key Provisions: A Gift to Speculators

Among the most controversial elements of the bill:

  • Loosening of Know-Your-Customer (KYC) and Anti-Money Laundering (AML) safeguards for certain crypto entities;

  • Legalization of certain decentralized finance (DeFi) platforms, many of which operate without clear accountability;

  • Minimal oversight of stablecoins, despite their systemic risks as shown in the 2022 TerraUSD collapse;

  • Tax exemptions for certain crypto gains, incentivizing speculative investment.

Supporters argue these measures will solidify America’s dominance in financial innovation. But the bill’s leniency raises echoes of past financial debacles—from the dot-com bubble to the 2008 subprime mortgage crisis—where unregulated markets spiraled out of control.

A House Built on Sand

Cryptocurrency markets have already proven themselves to be volatile, largely unbacked, and susceptible to manipulation. The 2022 crash wiped out over $2 trillion in market value and exposed the fragility of companies like FTX, Celsius, and Voyager Digital—each of which left everyday investors devastated while insiders cashed out early.

Now, by codifying a legal gray zone as a financial free-for-all, the US government may be inviting a larger catastrophe. With trillions of dollars potentially flowing into underregulated crypto assets, a major crash could trigger a chain reaction through the broader financial system, especially as more institutional players and retirement funds are drawn into the space under the new law.

An Economy at Risk

The consequences of a crypto-induced financial collapse could be profound:

  • Working families—already crushed by student debt, housing inflation, and stagnant wages—may be lured into speculative investments out of desperation, only to lose their savings in the next collapse.

  • University endowments and public pension systems—some of which have already dabbled in crypto—could suffer catastrophic losses, compounding the higher education affordability crisis.

  • State and federal regulators, stripped of the tools needed to intervene effectively, will be unable to respond to crises in real-time, much as they were in the early days of the 2008 crash.

Moreover, this deregulatory trend sets a dangerous precedent: one in which the government abdicates its responsibility to protect the public in favor of appeasing Silicon Valley and Wall Street interests.

The Educated Underclass Will Pay the Price

As financial elites speculate with impunity, the economic fallout will disproportionately affect young people, especially recent college graduates burdened with debt and lacking stable employment. Many of these individuals are already being pushed into gig work, underemployment, or unpaid labor under the guise of "internship experience." A crypto-fueled crash could devastate whatever remaining economic foothold they have.

As the Higher Education Inquirer has chronicled, the rise of the educated underclass is not merely a generational shift—it is a structural consequence of policies that prioritize capital over community, markets over morals, and deregulation over democratic control. This bill is just the latest example.

A Crisis of Governance

Far from being a step forward, the new cryptocurrency bill reflects a larger crisis in American governance. It prioritizes short-term gains and corporate lobbying over long-term stability and social equity. By turning over the keys of financial regulation to the very industries that have proven incapable of self-regulation, the US may be steering itself into another devastating collapse.

The Higher Education Inquirer urges lawmakers, journalists, educators, and citizens to scrutinize this legislation with the urgency it deserves. A failure to act could turn today’s crypto dreams into tomorrow’s financial nightmare—one that once again leaves the working class holding the bag.


For further investigative reporting on the intersection of finance, higher education, and social equity, follow the Higher Education Inquirer.

Monday, May 19, 2025

Degrees of Discontent: Credentialism, Inflation, and the Global Education Crisis

In an era defined by rapid technological change, globalization, and economic precarity, the promise of higher education as a reliable path to social mobility is being questioned around the world. At the heart of this reckoning are two interrelated forces: credentialism and credential inflation. Together, they have helped fuel a crisis of discontent that spans continents, demographics, and generations.

The Age of Credentialism

Credentialism refers to the increasing reliance on educational qualifications—often formal degrees or certificates—as a measure of skill, value, and worth in the labor market. What was once a gateway to opportunity has, for many, become a gatekeeper.

In countries as diverse as the United States, Nigeria, South Korea, and Brazil, employers increasingly demand college degrees for jobs that previously required only a high school diploma or no formal education at all. These “degree requirements” often serve more as filters than as real indicators of competence. In the U.S., for example, nearly two-thirds of new jobs require a college degree, yet only around 38% of the adult population holds one. This creates a built-in exclusionary mechanism that hits working-class, first-generation, and minority populations hardest.

Credential Inflation: The Diminishing Value of Degrees

As more people earn degrees in hopes of improving their employment prospects, the relative value of those credentials declines—a phenomenon known as credential inflation. Where a bachelor’s degree once opened doors to managerial or professional roles, it now often leads to underemployment or precarious gig work. In response, students seek advanced degrees, fueling a “credential arms race” with diminishing returns.

In India and China, massive expansions of higher education have led to millions of graduates chasing a finite number of white-collar jobs. In places like Egypt, university graduates have higher unemployment rates than those with only a secondary education. In South Korea, a hyper-competitive education culture pushes students through years of tutoring and testing, only to graduate into a job market with limited high-status roles.

Tragedy in Tunisia: The Human Cost of Unemployment

Few stories illustrate the devastating impact of credentialism and mass youth unemployment more than that of Mohammed Bouazizi, a 26-year-old Tunisian widely described at the time as an unemployed university graduate (later reporting indicated he had not completed his schooling), whose life and death sparked a revolution.

Unable to find formal employment, Bouazizi resorted to selling fruit and vegetables illegally in the town of Sidi Bouzid. In December 2010, after police confiscated his produce for lacking a permit, he set himself on fire in front of a local government building in a final act of desperation.

Bouazizi succumbed to his injuries weeks later, but not before igniting a firestorm of protests across Tunisia. His self-immolation became the catalyst for mass demonstrations against economic injustice, corruption, and authoritarianism—culminating in the Tunisian Revolution and inspiring uprisings throughout the Arab world.

At his funeral, an estimated 5,000 mourners marched, chanting: “Farewell, Mohammed, we will avenge you.” Bouazizi’s uncle said, “Mohammed gave his life to draw attention to his condition and that of his brothers.”

His act was not just a protest against police abuse, but a powerful indictment of a system that had produced thousands of educated but unemployed young people, whose degrees had become symbols of broken promises.

Global Discontent and Backlash

This dynamic of broken promises and rising discontent is global. In China, the “lying flat” movement reflects a rejection of endless striving in a system that offers diminishing returns on educational achievement. In South Korea, the “N-po” generation has opted out of traditional life goals, seeing little reward for their academic sacrifices.

In the U.S., distrust in higher education is mounting, with many questioning whether the cost of a degree is worth it. At the same time, a growing number of companies are dropping degree requirements altogether in favor of skills-based hiring.

Yet these moves often come too late for millions already trapped in a debt-fueled system, forced to chase credentials just to qualify for basic employment.

The Future of Work, the Future of Education

As automation and AI disrupt industries, the link between formal education and stable employment continues to fray. Policymakers call for “lifelong learning” and “upskilling,” but these strategies often place the burden back on workers without addressing the deeper failures of economic and educational systems.

To move forward, we must consider:

  • Decoupling jobs from unnecessary credential requirements

  • Investing in vocational and technical education with real career pathways

  • Recognizing nontraditional forms of knowledge and skill

  • Reframing education as a public good, not a consumer transaction

Reclaiming the Meaning of Education

Mohammed Bouazizi's story is a tragic reminder that the crisis of credentialism is not theoretical—it’s lived, felt, and fought over in the streets. Around the world, millions of young people feel abandoned by systems that promised opportunity but delivered anxiety, debt, and instability.

Unless global societies reimagine the relationship between education, work, and human dignity, the "degrees of discontent" will only continue to deepen. And as Bouazizi’s legacy shows, discontent—when ignored—can become revolutionary.


Sources and References

  • BBC News. “Tunisia suicide protester Mohammed Bouazizi dies.” January 5, 2011. https://www.bbc.com/news/world-africa-12120228

  • Pew Research Center. “Public Trust in Higher Education is Eroding.” August 2023.

  • Brown, Phillip. The Global Auction: The Broken Promises of Education, Jobs, and Incomes. Oxford University Press, 2011.

  • Marginson, Simon. “The Worldwide Trend to High Participation Higher Education: Dynamics of Social Stratification in Inclusive Systems.” Higher Education, 2016.

  • The World Bank. “Education and the Labor Market.”

  • The Guardian. “Lying Flat: China's Youth Protest Culture Grows.” June 2021.

  • Korea Herald. “'N-po Generation' Gives Up on Marriage, Children, and More.” October 2022.

Friday, May 16, 2025

A Warning to Colorado State University: Proceed with Caution on Ambow’s HybriU Platform

Colorado State University (CSU), a respected public institution with a strong reputation in research and innovation, is reportedly considering a contract with Ambow Education Holding Ltd. to implement its “HybriU” platform, a hybrid learning technology promising to blend in-person and online education. On the surface, such a partnership might appear to align with CSU’s goals of expanding digital learning and staying competitive in the evolving higher education landscape. But a deeper look reveals serious concerns that warrant public scrutiny and administrative caution.

Ambow’s Controversial Background

Ambow Education, though now marketing itself as a U.S.-based edtech company, has deep and lingering connections to the People’s Republic of China (PRC). Founded in China and once listed on the New York Stock Exchange before being delisted in 2014 due to accounting irregularities and shareholder lawsuits, Ambow has struggled to shake off its past. Despite reincorporating in the Cayman Islands and operating out of a U.S. office, Ambow continues to raise red flags for investors and watchdogs alike.

According to public filings and investigative reports, key members of Ambow’s leadership maintain ties to Chinese state-affiliated organizations. Moreover, questions have emerged around data security, educational quality, and transparency in the firm’s current operations—especially through its HybriU platform.

Potential Risks to CSU and Its Stakeholders

  1. National Security and Data Privacy: Given Ambow’s ties to China and the ongoing concerns about intellectual property theft and surveillance, CSU may be exposing sensitive institutional and student data to foreign actors. Universities are high-value targets for cyber-espionage, particularly those with defense-related research contracts. Even the perception of a compromised platform could damage CSU’s credibility and funding.

  2. Regulatory and Reputational Risk: The U.S. Department of Education and other federal agencies have heightened scrutiny of foreign influence in American higher education, especially from China. Entering into a formal relationship with a company like Ambow could place CSU in the crosshairs of federal investigators, jeopardizing federal grants and inviting political backlash.

  3. Academic Integrity and Pedagogical Standards: The HybriU platform has yet to demonstrate proven results at scale in U.S. higher education. Partnering with a firm that has not established a strong record of academic excellence or technological reliability could compromise the learning experience for CSU students, particularly in a time when online education still faces skepticism.

  4. Precedents and Red Flags: Other universities and investors have backed away from Ambow in the past. Its prior delisting, financial opacity, and ownership structure should be viewed as warning signs. If CSU moves forward with this partnership, it may find itself entangled in legal or financial complications that were avoidable with proper due diligence.

A Call for Transparency and Accountability

The Higher Education Inquirer urges CSU’s Board of Governors, faculty leadership, and the broader CSU community to fully vet Ambow before committing to any partnership. This includes:

  • Demanding full disclosure of Ambow’s ownership, governance, and data-handling practices.

  • Consulting with cybersecurity experts and federal authorities about the risks of foreign influence.

  • Engaging students, faculty, and IT professionals in a transparent evaluation process.

  • Exploring domestic, more secure edtech alternatives that align with CSU’s values and strategic vision.

Conclusion

At a time when public trust in higher education is under strain and geopolitical tensions continue to rise, it is imperative for public institutions like Colorado State University to make decisions that are not only cost-effective but ethically and strategically sound. Partnering with a company like Ambow, without thorough investigation and community input, could pose unacceptable risks.

The CSU community—and the taxpayers of Colorado—deserve better than a gamble on a platform with questionable affiliations. We urge CSU to reconsider.

Thursday, May 15, 2025

The Epic, Must-Read Coverage in New York Magazine (Derek Newton)


Issue 364



New York Magazine Goes All-In, And It’s Glorious

Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.

The headline is brutal and blunt:

Everyone Is Cheating Their Way Through College

To which I say — no kidding.

The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.

Lee and Cheating Everything

The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.

What I learned in this story is that, at Columbia, Lee:

by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.

And:

“Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.

The article says Lee’s admissions essay for Columbia was AI too.

So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.

Continuing from the story:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:

Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.

OpenAI/ChatGPT and Students

Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.

As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.

From the story:

“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

More:

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.

And:

After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

This really is where we are. These students are not outliers.

Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.

But let’s keep going:

now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?

Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.

Had enough? No? Here:

Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.

In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.

More on Wendy:

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.

Want more? There’s also Daniel, a computer science student at the University of Florida:

AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

When a tutor starts writing your paper for you, if you turn that paper in for credit you receive, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.

And the Teachers

Like the coverage of students, the article’s work with teachers is top-rate. And what they have to say is not one inch less important. For example:

Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?

Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated than high-stakes ones (see Issue 64). I don’t really get why professional educators don’t get this.

But returning to the topic:

After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”

To read about Jollimore’s outstanding essay, see Issue 346.

And, of course, there’s more. Like the large section above, I regret copying so much of it, but it’s essential reading:

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.

More from Jollimore:

He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.

I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.

More on professors:

Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”

You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.

I guess also, I apologize. Some educators are engaged in the fight to protect and preserve the value of learning things. I feel that it’s far too few and that, more often than not, they are alone in this. It’s depressing.

Odds and Ends

In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.

This is pretty great:

Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.

I thought this was interesting too:

Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.

I have a few things to say about this.

Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.

I think the idea that most teachers think AI detectors don’t work is true. It’s not just teachers. Entire schools believe this. It’s an epic failure of messaging, an astonishing triumph of the misinformed. Truth is, as reported above, detectors do vary. Some are great. Some are junk. But the good ones work. Most people continue to not believe it.

And I’ll point out once again that the “studies have shown” thing is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited has made-up citations and research that is highly suspicious, which I pointed out in 2023 (see Issue 216). Frankly, I’ve not seen any good evidence to support this idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.

On the subject of junk AI detectors, there’s also this:

I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 percent AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.

This is a failure to understand how AI detection works. But also ZeroGPT does not work. Again, it’s no wonder that teachers think AI detection does not work.

Continuing:

It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.

Finally, I am not sure if I missed this at the time, but this is important too:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.

As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.

All wrapped up, this is a show-stopper of an article and I am very happy for the visibility it brings. I wish I could feel that it will make a difference.