Cover Story: AI Accelerates Old Threats, Doesn’t Create Sci-Fi Crises
In October 2025, OpenAI, the world’s leading AI company, released a stark threat report. According to the report, since launching its public threat reporting in February 2024 the company has disrupted and reported more than 40 networks that were maliciously using its models, including ChatGPT.
The central message of the report is clear: AI has not yet created entirely new offensive capabilities straight out of science fiction, but it is dramatically accelerating the speed and scale of existing cybercrime, scams, and geopolitical coercion.
“Threat actors are merely ‘bolting AI onto’ old playbooks to move faster,” the report notes. This acceleration severely compresses defenders’ reaction time while significantly lowering the barriers and costs of launching attacks.
State-Level Coercion: The Weaponization of AI, From Social Media Surveillance to Information Warfare
The boldest and most sensitive section of the report points the finger at state-level malicious activity, which OpenAI treats as serious violations of its policies governing national security.
🇨🇳 China: Social Media Surveillance and Intelligence Gathering
OpenAI revealed that several accounts linked to Chinese entities, including suspected government or intelligence agencies, had made highly sensitive requests to ChatGPT:
• Social Media Monitoring Systems: Asking the model for proposals and architectures for large-scale “social media listening” tools, intended to scan platforms such as X, Facebook, and TikTok for political dissent, extremist speech, and religious content.
• Cyber Espionage: These accounts were also found using AI to refine malware components (such as Remote Access Trojans, or RATs) and to optimize phishing content, with activity patterns aligning with known Chinese intelligence requirements.
These activities expose how authoritarian regimes are attempting to use general-purpose AI to push digital surveillance capabilities to a new frontier.
🇷🇺 Russia: Using AI to Craft “Deepfake” Propaganda
The report also names threat actors of suspected Russian origin. These entities used ChatGPT to generate video scripts and prompts intended for other models, aiming to produce deepfake videos and propaganda styled as fake news, which they then disseminated across social media in covert influence operations (IO). This highlights how AI’s role in information warfare has evolved from simple text generation to the mass production of multimedia content.
Transnational Scams: AI as the Criminal Pipeline’s “Efficiency Tool”
Beyond geopolitical threats, large-scale organized scams and cybercrime remain among the hardest-hit areas of AI misuse.
The report details several transnational crime syndicates that have integrated AI into their criminal “assembly line”:
• Southeast Asian Scam Hubs: Criminal networks based in Cambodia and Myanmar, along with scam groups operating from Nigeria, were major participants in AI abuse. They used AI for multilingual translation, bulk generation of fake social media content, creation of fraudulent investment personas, and even construction of professional-looking company websites and recruitment ads.
• Evasion of Detection: Aware that AI-generated text might be detectable, these criminals instructed models to remove telltale markers (such as specific em-dashes) from the output, obfuscating its origin and increasing the realism of the scams. The sketch after this list shows how trivial that post-processing step is.
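To illustrate how little effort this evasion step requires, here is a minimal Python sketch of the kind of stylistic scrubbing the report describes. The function name and the list of markers are illustrative assumptions, not details taken from OpenAI’s report.

```python
# Illustrative sketch only: the marker list below is an assumption for the
# example, not a list documented in OpenAI's report.

def scrub_stylistic_markers(text: str) -> str:
    """Replace punctuation commonly associated with AI-generated prose
    with plainer ASCII equivalents."""
    replacements = {
        "\u2014": ", ",   # em-dash -> comma and space
        "\u201c": '"',    # left curly double quote -> straight quote
        "\u201d": '"',    # right curly double quote -> straight quote
        "\u2019": "'",    # curly apostrophe -> straight apostrophe
    }
    for marker, plain in replacements.items():
        text = text.replace(marker, plain)
    return text


sample = "Act now\u2014this \u201copportunity\u201d won\u2019t last."
print(scrub_stylistic_markers(sample))
# Output: Act now, this "opportunity" won't last.
```

That the entire technique fits in a dozen lines of string replacement underscores the report’s point: detection that relies on surface-level stylistic markers is easy to defeat.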
OpenAI’s data shows that crime rings are leveraging the speed and low cost of AI to execute global financial fraud at an unprecedented scale and efficiency.
Conclusion and Challenges: The Boundaries of Responsible AI
OpenAI reiterated its mission in the report: to build “Democratic AI” grounded in “common-sense rules” to protect humanity. However, the report itself sparks a deeper discussion about the company’s capacity for global governance:
• Who Defines “Common Sense”? By banning accounts linked to the Chinese government, OpenAI is effectively exercising a form of global ethical and political regulatory power. The transparency and accountability of its policy-making will remain under intense international scrutiny.
• Defense in the AGI Era: Today’s AI-enabled threats merely accelerate old problems. If Artificial General Intelligence (AGI) truly matures, will it, as the report implicitly warns, unleash “novel offensive capabilities”? This is the ultimate challenge that every AI giant must continuously prepare for and address.
This report is a milestone in the field of AI safety, clearly marking the current front lines of AI misuse: on one side, efficiency-driven cybercriminals; on the other, powerful entities attempting to weaponize AI for state interests.
(This report is compiled based on the OpenAI publication Disrupting malicious uses of AI from October 2025 and related media coverage.)
