Sub-bucket 2.5: The AI Distrust Vector (“Proper English” Psyop)

The information warfare doctrine of the Minimisation Plan has evolved beyond attacking the legitimacy of past events or present institutions to actively sabotaging a foundational pillar of the West’s future power: Artificial Intelligence. This campaign aims to neutralize the West’s technological advantage from within by engineering a “trust gap” between AI’s objective capabilities and its public perception. An analysis of technological benchmarks against public polling reveals a stark and anomalous divergence: as AI models have grown dramatically more capable, public trust in them has declined. This gap is not an organic societal reaction but the intended outcome of a sophisticated information war designed to slow AI adoption, erode institutional coherence, and undermine the West’s military-technical advantage.

The period since 2020 has been defined by a surge in generative AI, with the public release of transformative models such as OpenAI’s GPT-3 (June 2020), ChatGPT (November 2022), and GPT-4 (March 2023), alongside competing models from Anthropic (Claude) and Google (Gemini). This progress is quantifiable through standardized benchmarks. On the Massive Multitask Language Understanding (MMLU) test, which evaluates expert-level knowledge, GPT-3 scored 43.9% in 2020. By 2024, models such as Anthropic’s Claude 3 Opus achieved scores of 88.2%, more than doubling that performance in under four years and approaching the human-expert level [1].
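
The scale of that leap can be checked directly from the two cited figures. The short sketch below (Python; the scores are hard-coded from source [1] and nothing else is assumed) confirms that the 2020-to-2024 jump is slightly more than a doubling of measured performance:

```python
# MMLU scores cited above (source [1]): GPT-3 in 2020 vs. Claude 3 Opus in 2024.
gpt3_mmlu = 43.9   # percent correct, 2020
opus_mmlu = 88.2   # percent correct, 2024

ratio = opus_mmlu / gpt3_mmlu
print(f"Improvement factor: {ratio:.2f}x in under four years")  # -> 2.01x
```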

In stark contrast to this upward trend in capability, public trust has followed a negative trajectory. Polling by the Pew Research Center in late 2023 revealed that 52% of Americans were more concerned than excited about AI, a significant increase from 37% just a year prior [2]. This concern translates into deep institutional distrust. Data from the Edelman Trust Barometer shows a 15-point decline in trust in AI companies in the U.S. between 2019 and 2024, falling from 50% to 35% [3]. The skepticism is especially acute for AI-generated content: a 2025 survey found that 82% of users are at least somewhat skeptical of AI-powered search results [4].
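
Placing the two cited series side by side makes the divergence explicit. The sketch below is illustrative only: the “trust gap” metric (capability minus trust, both on a 0–100 scale) is a construction for this analysis rather than a figure from any cited report, and pairing the 2019 trust reading with the 2020 benchmark is an approximation:

```python
# Cited endpoints only: MMLU capability [1] and U.S. trust in AI companies [3].
# The "trust gap" (capability minus trust, both on a 0-100 scale) is an
# illustrative construction, not a figure from any cited report; pairing the
# 2019 trust reading with the 2020 benchmark is approximate.
mmlu_2020, mmlu_2024 = 43.9, 88.2      # percent correct on MMLU
trust_2019, trust_2024 = 50.0, 35.0    # percent of U.S. respondents trusting AI firms

gap_start = mmlu_2020 - trust_2019     # -6.1 points: trust once exceeded capability
gap_end = mmlu_2024 - trust_2024       # +53.2 points: capability now far outruns trust

print(f"Gap at start: {gap_start:+.1f} pts")
print(f"Gap at end:   {gap_end:+.1f} pts")
print(f"Divergence over the period: {gap_end - gap_start:.1f} pts")  # 59.3
```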

At the heart of this campaign is an insidious psychological operation: the narrative that equates articulate, well-structured, and grammatically correct English with “soulless,” “untrue,” and untrustworthy AI-generated content. This cognitive attack vector inverts traditional markers of credibility and intelligence, transforming clarity of expression into a signifier of inauthenticity. The psyop propagates the idea that “good writing is automatically AI-generated,” specifically targeting and pathologizing the hallmarks of effective communication, such as sophisticated vocabulary and flawless grammar. It is amplified by the trope that AI text is inherently “soulless” or “sterile,” creating a false and damaging dichotomy: “perfect but soulless” (implying AI) versus “flawed but authentic” (implying human). The tactic is analogous to historical propaganda techniques that portrayed intellectualism as the trait of an “out-of-touch elite,” thereby fostering trust in more “common,” less polished forms of communication. The strategic intent is to degrade the quality of public discourse by making reasoned, evidence-based argumentation inherently suspect.

This is a fundamental assault on the intellectual traditions of the Enlightenment, which hold clarity, reason, and articulate expression to be the essential vehicles for conveying truth. If any well-articulated position can be summarily dismissed with the accusation “That sounds like ChatGPT,” the accusation becomes a powerful, low-effort tool for derailing debate, discrediting experts, and eroding trust in any institution that relies on clear, formal communication to convey its message. It is an epistemic attack: it targets not a fact but the very method by which facts are communicated and agreed upon. The pattern is not new; it mirrors the Minimiser tactic of amplifying organic Luddite-style movements during the Industrial Revolution, framing technological progress not as a benefit to society but as a threat to the common person’s livelihood and authenticity, a narrative that serves the strategic goal of slowing a rival’s economic development.

This trust gap is being actively cultivated by a multi-pronged information campaign from the Sino-Russian axis, with a clear strategic division of labor. Chinese state-controlled media, such as the Global Times, consistently frames Western-developed AI as an instrument of cultural and ideological dominance, arguing that these models are inherently biased and perpetuate Western values [5, 6]; in response, it advocates a “multipolar” approach to AI governance. Russian media outlets and influence operations focus on amplifying the most alarmist Western discourse surrounding AI, stoking fears of existential risk and AI-enabled propaganda in order to sow chaos and strategic paralysis [7, 8]. As documented in a May 2024 OpenAI threat report, covert influence operations from both Russia and China have been identified using AI models to generate and translate content critical of the West, demonstrating a clear intent to weaponize the very technology they publicly decry [9].

Works Cited

  1. “Claude 2 vs Claude 3 Opus: LLM Comparison.” AnotherWrapper, https://anotherwrapper.com/tools/llm-pricing/claude-2/claude-3-opus.
  2. “What the data says about Americans’ views of artificial intelligence.” Pew Research Center, 21 Nov. 2023, https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/.
  3. “Rebuilding Trust to Reach AI’s Potential.” Edelman, https://www.edelman.com/insights/rebuilding-trust-reach-ai-potential.
  4. “The AI Trust Gap: 82% Are Skeptical, Yet Only 8% Always Check Sources.” Exploding Topics, https://explodingtopics.com/blog/ai-trust-gap-research.
  5. “GT Voice: China’s inclusive AI approach a response to Western tech…” Global Times, Feb. 2025, https://www.globaltimes.cn/page/202502/1328269.shtml.
  6. “US ‘China AI threat’ narrative is a recycled tech containment strategy…” Global Times, Mar. 2025, https://www.globaltimes.cn/page/202503/1331030.shtml.
  7. “Russia Is Expanding Its Disinformation Tools Using AI.” Foreign Intelligence Service of Ukraine, https://szru.gov.ua/en/news-media/news/russia-is-expanding-its-disinformation-tools-using-ai.
  8. “Is Russia really ‘grooming’ Western AI?” Al Jazeera, July 2025, https://www.aljazeera.com/opinions/2025/7/8/is-russia-really-grooming-western.
  9. “OpenAI says it disrupted Chinese, Russian, Israeli influence campaigns.” Al Jazeera, 31 May 2024, https://www.aljazeera.com/economy/2024/5/31/openai-says-it-disrupted-chinese-russian-israeli-influence-campaigns.