Forum Discussion
j_weller_nv
Feb 20, 2026Iron Contributor
Agentic AI security: Prompt injection and manipulation attacks
As AI apps and autonomous agents gain more reasoning and independence, they also open new pathways for adversarial attacks. Join this webinar and hear how the most critical risks are broken down—prompt injection, goal hijacking, and memory poisoning—and how they the impact real AI applications.
Learn practical defenses your teams can implement today, including input validation, behavioral detection, and robust architectural patterns that keep agentic systems aligned and secure.
Learn more and sign up to attend this webinar or watch the recording after.
Agentic manipulation: Prompt injection, goal hijacking & memory poisoning | Microsoft Community Hub
No RepliesBe the first to reply