Major Taiwanese newspaper China Times recently reported that U.S. AI leader OpenAI filed a memorandum with the U.S. House of Representatives accusing Chinese AI vendor DeepSeek of using so-called "distillation" techniques to obtain the hard-won model outputs of OpenAI and other U.S. AI developers, and of using those outputs to train its own AI models.
DeepSeek, a rising star in Chinese AI, has stunned the world since launching its R1 model last year, but it has also been embroiled in allegations of technology theft. Foreign media reports indicate that OpenAI, the developer of ChatGPT, has warned that DeepSeek is targeting several U.S. AI companies, including OpenAI, attempting to replicate their model outputs and use them as the training basis for its own systems.
In its memo, OpenAI stated that it had observed accounts associated with DeepSeek employees attempting to bypass OpenAI's access restrictions through third-party routers and various obfuscation methods, and to scrape model outputs at scale with programmatic code for distillation purposes.
OpenAI contends that large language models developed in mainland China are "actively taking shortcuts" in knowledge training rather than relying on their developers' own research. The company emphasizes that once it discovers users attempting to build competing models through distillation, it proactively removes the offending accounts to protect its technology and usage policies.
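The "distillation" the memo refers to is, in general terms, training a smaller student model to mimic a teacher model's output distribution rather than learning from raw labeled data. Below is a minimal numerical sketch of that general idea only; all function names, logits, and values are illustrative assumptions, not a depiction of any company's actual code or systems.

```python
# Illustrative sketch of knowledge distillation (the general concept only).
# A "student" model is trained to match the softened output distribution
# of a "teacher" model, instead of hard labels.
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives a softer distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    Minimizing this pushes the student's distribution toward the teacher's.
    """
    p = softmax(teacher_logits, temperature)  # teacher (target) distribution
    q = softmax(student_logits, temperature)  # student distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for one example; a real setup would average this
# loss over many teacher responses and backpropagate through the student.
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.6]
loss = distillation_loss(student, teacher)
```

A training loop would minimize this loss over many teacher outputs; when the student's logits match the teacher's exactly, the loss is zero.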
Source: China Times, February 13, 2026
https://www.chinatimes.com/realtimenews/20260213002208-260410?chdtv