Novee introduces autonomous AI red teaming for LLM applications
SecurityWeek reports on Novee’s launch of autonomous AI red teaming for LLM applications.
“Novee has launched autonomous AI red teaming for LLM applications, designed to uncover flaws in chatbots, copilots, and agents. It tests for real AI-specific attack paths such as prompt injection, jailbreaks, and agent manipulation rather than relying on traditional appsec approaches that were not built for AI behavior.”
Read the full article at SecurityWeek →
Originally published in SecurityWeek on March 23, 2026 by SecurityWeek News.