Your AI Apps Don’t Pentest Themselves

See How Novee AI Red Teams Your LLMs


Training an AI agent to attack LLM applications like a real adversary

Help Net Security reports on Novee’s launch of autonomous AI red teaming for LLM applications.

Novee Marketing



“Most enterprise software development teams now ship AI-powered applications faster than traditional penetration testing can keep up with. A security team with 500 applications may test each one once a year, or less. In the time between tests, the underlying models, integrations, and behaviors can change, with no corresponding security review.

Novee launched a product it calls AI Red Teaming for LLM Applications, an AI pentesting agent built specifically to probe LLM-powered software. The company introduced the product at the RSAC 2026 Conference in San Francisco and is demonstrating it at booth S-0262.”

Read the full article at Help Net Security →


Originally published in Help Net Security on March 25, 2026 by Mirko Zorz.
