Your team will learn to:
- Identify LLM attack surfaces in chatbots, RAG, and agents
- Execute and recognize prompt injection, jailbreaks, and data-exfil patterns
- Detect and mitigate RAG poisoning and retrieval misuse
- Harden agent tool use, function calling, and workflow permissions
- Instrument monitoring, logging, and abuse-detection signals
- Map controls to your governance and risk frameworks
- Run AI-specific incident response and tabletop drills
- Produce repeatable playbooks and checklists for ongoing validation
Day 1 – Foundations, Theory & Attack Surface
-
Introduction to AI in Cybersecurity – AI adoption in the enterprise, unique threat dimensions of LLMs
-
LLM Fundamentals & Threat Implications – Tokenization, transformers, context windows, and inherent vulnerabilities
-
Enterprise AI Architecture Patterns – Security risks in RAG pipelines and agentic workflows
-
The LLM Threat Landscape – Prompt injection, data exfiltration, model extraction, jailbreaks, and agent exploitation
-
Live Attack Demonstrations – Prompt injection, RAG poisoning, Copilot data leakage, and malicious agent tool calls
-
Day 1 Labs: Red Teaming AI – Conduct adversarial attacks against vulnerable chatbots, RAG systems, and agents
Day 2 – Defenses, Governance & Incident Response
-
Technical Mitigations – Prompt sanitization, secure RAG design, content filters, and agent hardening
-
Organizational Controls & Policy – Acceptable use frameworks, RBAC, data classification, shadow AI monitoring
-
Incident Response for AI-driven Security Events – Detection, response workflows, post-mortem forensic analysis
-
Red Teaming & Continuous Testing – OWASP LLM Top 10, MITRE ATLAS, adversarial testing frameworks
-
Outlook & Wrap-Up – Future AI risks, compliance landscape (NIST AI RMF, EU AI Act)
-
Day 2 Labs: Defending Against LLM Attacks – Implement content moderation, harden RAG, restrict agent tool usage with human-in-the-loop safeguards
Security engineers, blue-team/red-team practitioners, AppSec leaders, architects, and platform owners responsible for AI-enabled systems. Ideal for teams evaluating or operating chatbots, RAG pipelines, and agents who need practical defenses, governance alignment, and incident-response readiness without testing against production environments or sensitive data.
Delivered as a two-day program (6–8 hours/day), available on campus, onsite, or virtual. Timing, labs, and examples are tailored to your stack and policies. Optional add-ons: pre-work primers, post-course clinics, or a tabletop incident-response exercise to reinforce adoption.
-
Day 1: Foundations of LLM vulnerabilities, attack demonstrations, red-team labs
-
Day 2: Defensive architectures, governance, incident response, continuous testing, hardening labs
How does this differ from standard cybersecurity training?
Traditional training does not address LLM-specific risks such as prompt injection, RAG poisoning, and agent exploitation. This program bridges that gap with enterprise-focused AI threat scenarios.
Can this be tailored to our organization’s systems?
Yes. Labs, demos, and case discussions can be adapted to your enterprise’s AI stack, security tools, and compliance requirements.
Does the program cover governance as well as technical defenses?
Yes. Participants will learn both engineering mitigations and organizational controls, ensuring a layered defense strategy.
What outcomes should we expect?
Your teams will leave with hands-on experience red-teaming AI systems, defensive playbooks, governance frameworks, and an AI-specific incident response model.