AI for Security Professionals: LLM Threats & Defense

AI Training for Cybersecurity Professionals in Defense and Enterprise

ALLM adoption expands the attack surface across chatbots, RAG pipelines, and agent workflows. This two-day program builds threat intuition with live demonstrations, then turns to practical defenses, governance, and AI-specific incident response. Security teams leave with tested playbooks, controls mapping, and repeatable procedures tailored to enterprise policies.

  • Learners Foundational
  • Time Client definable
  • Duration Client definable
  • Program Type Customizable Programs
  • Certificate Type Certificate
  • Format
    Any Format/Location
  • CEUS Available
  • PDUS Available
  • Program Number AI4SP-Custom
  • Fees Group Rate
  • See full course info

ALLM adoption expands the attack surface across chatbots, RAG pipelines, and agent workflows. This two-day program builds threat intuition with live demonstrations, then turns to practical defenses, governance, and AI-specific incident response. Security teams leave with tested playbooks, controls mapping, and repeatable procedures tailored to enterprise policies.

Print Page
AI for Security Professionals: LLM Threats & Defense

Program Experience

Across two days, Caltech instructors blend expert briefings, live demos, and hands-on labs to move teams from awareness to defense. Day 1 red-teams chatbots, RAG, and agent tools to reveal failure modes and abuse pathways. Day 2 implements mitigations, monitoring, and incident response, mapping controls to your environment. All labs run in sandboxed, non-production settings with sample data. Teams depart with prioritized threat/controls maps, checklists, and an AI incident-response mini-runbook ready for tabletop exercises.

View our instructors

Course Info

Benefits
Topics
Who Should Attend
Schedule
FAQ

Your team will learn to:

  • Identify LLM attack surfaces in chatbots, RAG, and agents
  • Execute and recognize prompt injection, jailbreaks, and data-exfil patterns
  • Detect and mitigate RAG poisoning and retrieval misuse
  • Harden agent tool use, function calling, and workflow permissions
  • Instrument monitoring, logging, and abuse-detection signals
  • Map controls to your governance and risk frameworks
  • Run AI-specific incident response and tabletop drills
  • Produce repeatable playbooks and checklists for ongoing validation

Day 1 – Foundations, Theory & Attack Surface

  • Introduction to AI in Cybersecurity – AI adoption in the enterprise, unique threat dimensions of LLMs

  • LLM Fundamentals & Threat Implications – Tokenization, transformers, context windows, and inherent vulnerabilities

  • Enterprise AI Architecture Patterns – Security risks in RAG pipelines and agentic workflows

  • The LLM Threat Landscape – Prompt injection, data exfiltration, model extraction, jailbreaks, and agent exploitation

  • Live Attack Demonstrations – Prompt injection, RAG poisoning, Copilot data leakage, and malicious agent tool calls

  • Day 1 Labs: Red Teaming AI – Conduct adversarial attacks against vulnerable chatbots, RAG systems, and agents

Day 2 – Defenses, Governance & Incident Response

  • Technical Mitigations – Prompt sanitization, secure RAG design, content filters, and agent hardening

  • Organizational Controls & Policy – Acceptable use frameworks, RBAC, data classification, shadow AI monitoring

  • Incident Response for AI-driven Security Events – Detection, response workflows, post-mortem forensic analysis

  • Red Teaming & Continuous Testing – OWASP LLM Top 10, MITRE ATLAS, adversarial testing frameworks

  • Outlook & Wrap-Up – Future AI risks, compliance landscape (NIST AI RMF, EU AI Act)

  • Day 2 Labs: Defending Against LLM Attacks – Implement content moderation, harden RAG, restrict agent tool usage with human-in-the-loop safeguards

Security engineers, blue-team/red-team practitioners, AppSec leaders, architects, and platform owners responsible for AI-enabled systems. Ideal for teams evaluating or operating chatbots, RAG pipelines, and agents who need practical defenses, governance alignment, and incident-response readiness without testing against production environments or sensitive data.

  •  
  •  

Delivered as a two-day program (6–8 hours/day), available on campus, onsite, or virtual. Timing, labs, and examples are tailored to your stack and policies. Optional add-ons: pre-work primers, post-course clinics, or a tabletop incident-response exercise to reinforce adoption.

  • Day 1: Foundations of LLM vulnerabilities, attack demonstrations, red-team labs

  • Day 2: Defensive architectures, governance, incident response, continuous testing, hardening labs

How does this differ from standard cybersecurity training?
Traditional training does not address LLM-specific risks such as prompt injection, RAG poisoning, and agent exploitation. This program bridges that gap with enterprise-focused AI threat scenarios.

Can this be tailored to our organization’s systems?
Yes. Labs, demos, and case discussions can be adapted to your enterprise’s AI stack, security tools, and compliance requirements.

Does the program cover governance as well as technical defenses?
Yes. Participants will learn both engineering mitigations and organizational controls, ensuring a layered defense strategy.

What outcomes should we expect?
Your teams will leave with hands-on experience red-teaming AI systems, defensive playbooks, governance frameworks, and an AI-specific incident response model.

Instructors