Novee launches AI red teaming tool for LLM app risks
SecurityBrief reports on Novee’s launch of autonomous AI red teaming for LLM applications.
“The launch reflects a wider shift in cyber security as companies try to secure software that uses generative AI models. Unlike conventional applications, LLM-based tools can be influenced by natural language prompts, hidden instructions, manipulated context windows and interactions between multiple agents. That creates routes for misuse that differ from older software flaws.
The agent is intended to work across applications built on different model providers and architectures, including OpenAI, Anthropic and open-source systems. It can also connect to existing security testing processes and CI/CD pipelines, allowing teams to run tests as part of software development and release cycles.
Ido Geffen, Chief Executive Officer and Co-Founder of Novee, linked the launch to the speed at which attackers now move.”
“I’ve spent twenty years on the offensive side of cyber, inside government operations, protecting critical infrastructure, and now building AI systems that think like real attackers,” Geffen said.
“What we see consistently is that attackers compress timelines dramatically. The window between vulnerability and exploitation can shrink to minutes. Defending against that requires continuous testing, not periodic assessments.”
Read the full article at SecurityBrief →
Originally published in SecurityBrief on March 25, 2026 by Mark Tarre.