
OpenAI Warns Next-Generation AI Models Pose High Cybersecurity Risks

2025/12/11 21:18

TLDR

  • OpenAI issued a warning that its next-generation AI models present “high” cybersecurity risks and could create zero-day exploits
  • GPT-5.1-Codex-Max achieved 76% on cybersecurity tests in November 2025, a sharp increase from GPT-5’s 27% in August 2025
  • The company is rolling out Aardvark, a security-focused AI agent that identifies code vulnerabilities and suggests fixes
  • OpenAI plans to create a Frontier Risk Council with cybersecurity experts and offer tiered access to enhanced security features
  • Google and Anthropic have also strengthened their AI systems against cybersecurity threats in recent months

OpenAI released a warning on December 10 stating that its upcoming AI models could create serious cybersecurity risks. The company behind ChatGPT said these advanced models might build working zero-day remote exploits targeting well-defended systems.

The AI firm also noted these models could help with complex enterprise or industrial intrusion operations that lead to real-world consequences. OpenAI shared this information in a blog post addressing the growing capabilities of its technology.

The warning reflects concerns across the AI industry about potential misuse of increasingly powerful models. Several major tech companies have taken action to secure their AI systems against similar threats.

Google announced updates to Chrome browser security this week to block indirect prompt injection attacks on AI agents. The changes came before a wider rollout of Gemini agentic features in Chrome.

Anthropic revealed in November 2025 that threat actors, potentially linked to a Chinese state-sponsored group, had used its Claude Code tool for an AI-driven espionage operation. The company stopped the campaign before it caused damage.

AI Cybersecurity Skills Advancing Quickly

OpenAI shared data showing rapid progress in AI cybersecurity abilities. The company’s GPT-5.1-Codex-Max model hit 76% on capture-the-flag challenges in November 2025.

This represents a major jump from the 27% score GPT-5 achieved in August 2025. Capture-the-flag challenges measure how well systems can locate and exploit security weaknesses.

The improvement over just a few months shows how quickly AI models are gaining advanced cybersecurity capabilities. These skills can be used for both defensive and offensive purposes.

New Security Tools and Protection Measures

OpenAI said it is building stronger models for defensive cybersecurity work. The company is developing tools to help security teams audit code and fix vulnerabilities more easily.

The Microsoft-backed firm is using multiple security layers including access controls, infrastructure hardening, egress controls, and monitoring systems. OpenAI is training its AI models to reject harmful requests while staying useful for education and defense work.

The company is expanding monitoring across all products using frontier models to catch potentially malicious cyber activity. OpenAI is partnering with expert red teaming groups to test and improve its safety systems.

Aardvark Tool and Advisory Council

OpenAI introduced Aardvark, an AI agent that works as a security researcher. The tool is in private beta testing and can scan code for vulnerabilities and recommend patches.

Maintainers can quickly implement the fixes Aardvark proposes. OpenAI plans to offer Aardvark free to selected non-commercial open source code repositories.

The company will launch a program giving qualified cyberdefense users and customers tiered access to enhanced capabilities. OpenAI is forming the Frontier Risk Council, bringing external cyber defenders and security experts to work with its internal teams.

The council will start by focusing on cybersecurity before expanding to other frontier capability areas. OpenAI will soon provide details on the trusted access program for users and developers working on cyberdefense.

The post OpenAI Warns Next-Generation AI Models Pose High Cybersecurity Risks appeared first on Blockonomi.