The Epoch Times reports that a media investigation in China has documented a phenomenon known as “AI data poisoning”: the deliberate seeding of fabricated information on online platforms to influence the outputs of large language models.
According to the report, investigators created a fictional product—a smart wristband called “Apollo-9”—and fed falsified product details into a content-generation system. The system produced more than a dozen promotional articles, including clearly implausible claims such as “quantum entanglement sensing” and “blood glucose monitoring without blood sampling,” along with fabricated user reviews and industry rankings. It then logged into preset accounts and published the content, completing the entire process without human intervention.
Within a short period, several AI chatbots began recommending the non-existent product in response to user queries, treating the fabricated information as credible.
The article states that this practice, referred to as “generative engine optimization” (GEO), has developed into a commercial service aimed at influencing AI-generated responses. Experts cited in the report warn that such techniques can undermine the reliability of AI outputs, because many systems draw on publicly available online content that can be mass-produced or manipulated at scale.
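To illustrate the vulnerability the experts describe, here is a minimal toy sketch (not from the article, and not any real chatbot's pipeline): a naive assistant that ranks claims by how many sources repeat them can be swayed when a dozen auto-generated pages all assert the same fabricated claim. All names and data below are hypothetical.

```python
# Toy illustration: a naive "majority vote" over a document corpus.
# Mass-produced pages repeating one fabricated claim outnumber a
# single genuine source, so the fabricated claim wins.
from collections import Counter

def most_supported_claim(documents):
    """Return the claim asserted by the largest number of documents."""
    counts = Counter()
    for doc in documents:
        for claim in doc["claims"]:
            counts[claim] += 1
    claim, _ = counts.most_common(1)[0]
    return claim

# One genuine source vs. twelve auto-generated promotional articles.
genuine = [{"claims": ["no evidence the Apollo-9 wristband exists"]}]
fabricated = [
    {"claims": ["Apollo-9 monitors blood glucose without blood sampling"]}
    for _ in range(12)
]

corpus = genuine + fabricated
print(most_supported_claim(corpus))
```

Real retrieval systems are far more sophisticated, but the sketch captures the core asymmetry the report points to: when credibility is inferred partly from repetition across the open web, automated mass publication can manufacture apparent consensus.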
Source: Epoch Times, March 19, 2026
https://www.epochtimes.com/gb/26/3/19/n14722264.htm