AI-Assisted Development

Generative AI is reshaping software engineering, but organizations face a critical challenge: ensuring developers use large language models (LLMs) responsibly, efficiently, and in ways that accelerate—not compromise—delivery. Caltech CTME’s one-day workshop, AI-Assisted Development, equips technical teams to apply LLMs as disciplined collaborators in coding, testing, and DevOps.

This hands-on experience shows enterprises how to embed a repeatable “LLM-in-the-loop” workflow, balancing innovation speed with maintainability, compliance, and operational safety.

  • Learners: Foundational
  • Time: Client definable
  • Duration: 1 Day
  • Program Type: Customizable Programs
  • Certificate Type: Certificate
  • Format: Any Format/Location
  • CEUs: Available
  • PDUs: Available
  • Program Number: P4D-Custom
  • Fees: Group Rate


Program Experience

Guided by Caltech expert practitioners, this one-day sprint concentrates best practices into focused labs on real codebases. Teams practice structured prompting, verification, and policy-aware workflows while completing Python refactors, FastAPI test generation, React scaffolding, and CI/CD failure triage. We tailor examples, tools, and guardrails to your stack and governance. Formats include on-campus, onsite, or virtual. Teams leave with reusable checklists, starter templates, and an “LLM-in-the-loop” operating pattern they can apply immediately in their organization’s AI-augmented SDLC.

View our instructors

Course Info

Benefits
Topics
Who Should Attend
Schedule
FAQ

Your team will learn to:

  • Design prompt patterns for coding, testing, and DevOps tasks

  • Decide when to trust, verify, or override model output

  • Refactor legacy Python safely with invariants and review gates

  • Generate FastAPI tests to raise coverage and catch edge cases

  • Scaffold React features with accessibility and QA checks

  • Triage CI/CD failures via log analysis and model triangulation

  • Implement guardrails (policies, logging, approvals, traceability)

  • Align LLM workflows to security, compliance, and audit needs
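
The “trust, verify, or override” discipline above can be illustrated with a minimal sketch: a characterization test that pins legacy behavior in place before an LLM-generated refactor is accepted. The functions and pricing logic here are hypothetical examples for illustration, not workshop materials.

```python
# Illustrative sketch (hypothetical example): a characterization test acts
# as a verification gate for an LLM-assisted refactor of legacy Python.

def legacy_discount(price: float, tier: str) -> float:
    """Legacy pricing logic we want to refactor safely."""
    if tier == "gold":
        return price * 0.8
    if tier == "silver":
        return price * 0.9
    return price

def refactored_discount(price: float, tier: str) -> float:
    """Candidate LLM-generated refactor under review."""
    rates = {"gold": 0.8, "silver": 0.9}
    return price * rates.get(tier, 1.0)

def test_refactor_preserves_invariants() -> None:
    # Review gate: the refactor must match legacy output on a
    # representative grid of inputs before it can be merged.
    for price in (0.0, 9.99, 100.0):
        for tier in ("gold", "silver", "bronze"):
            assert refactored_discount(price, tier) == legacy_discount(price, tier)
```

Running the gate (for example via pytest) before merging turns “trust the model” into “verify, then accept.”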

The workshop is structured around five integrated activity cycles, reinforced with guided prompts, checklists, and applied deliverables:

  • Python Refactoring in Legacy Codebases – Use LLMs for code navigation, safe refactoring, and invariant preservation

  • FastAPI Test-Driven Development – Apply AI-assisted test generation, edge-case validation, and coverage improvement

  • React Greenfield Applications – Scaffold modern front-end projects via stepwise prompting, with accessibility and QA guardrails

  • DevOps Troubleshooting – Diagnose CI/CD failures with log analysis, LLM triangulation, and rollback planning

  • Operational Guardrails – Build a sustainable “LLM-in-the-loop” workflow with prompt libraries, verification triggers, and documentation standards
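
As a small illustration of the log-analysis step in the DevOps troubleshooting cycle, the sketch below scans raw CI output for known failure signatures before candidates are handed to an LLM for triangulation. The signature patterns, causes, and sample log are hypothetical examples, not workshop materials.

```python
# Illustrative sketch (hypothetical example): extract candidate failure
# signatures from a CI log so they can be cross-checked against known
# causes and passed to an LLM for triangulation.
import re

KNOWN_SIGNATURES = {
    r"ModuleNotFoundError: No module named '(\w+)'": "missing dependency",
    r"AssertionError": "failing test assertion",
    r"ECONNREFUSED": "service unreachable during build",
}

def triage(log: str) -> list[tuple[str, str]]:
    """Return (matched line, suspected cause) pairs for a CI log."""
    findings = []
    for line in log.splitlines():
        for pattern, cause in KNOWN_SIGNATURES.items():
            if re.search(pattern, line):
                findings.append((line.strip(), cause))
    return findings

sample_log = """\
Collecting requests
ModuleNotFoundError: No module named 'fastapi'
FAILED tests/test_api.py::test_health - AssertionError
"""

for line, cause in triage(sample_log):
    print(f"{cause}: {line}")
```

A deterministic pre-filter like this narrows what the model sees, which keeps LLM triangulation cheap and auditable.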

Modules can be customized to align with your organization’s preferred languages, frameworks, and DevOps environments.

Engineering leaders, software architects, technical managers, and DevOps leads responsible for delivery quality and compliance; professional developers comfortable with Git and modern workflows who want to embed LLM-assisted practices into daily engineering. Mixed cohorts are welcome; split tracks for managers vs. developers are available.

Designed as a one-day intensive lab (6–8 hours) delivered on campus, onsite, or virtually. Cohorts can add optional clinics or spaced sessions for project-based reinforcement. We tailor timing, tools, and examples to your location, modality, stack, and governance requirements.

How does this workshop differ from consumer-level AI tutorials?
This is an enterprise program designed for software teams—coding, testing, and DevOps—emphasizing guardrails, compliance, verification, and adoption within existing toolchains.

What tools and environments are required?
Participants should have Git, Python 3.x, Node/npm, and a preferred LLM interface (ChatGPT, Claude, Copilot, etc.). Setup includes VS Code or Cursor and a shared GitHub repository.

Can the workshop be tailored to our environment?
Yes. Caltech CTME customizes examples, frameworks, and DevOps integrations to match your enterprise technology stack.

What deliverables do participants leave with?
Participants produce annotated code scaffolds, refactored modules, AI-assisted test suites, DevOps troubleshooting worksheets, and an operational LLM workflow checklist.

Our Educators

Our team of educators and guides are experts in their field – engineering pioneers, applied science visionaries, TED speakers, professional facilitators, pilots, problem solvers, marketing mavens, and award-winning authors – who bring academic knowledge, practical approaches, and proven solutions to their programs.

Collectively, they have decades of experience in aerospace, communications, defense, electronics, energy, government, high-tech, pharma/medical devices, and precision manufacturing. 


Joshua Cook

Artificial Intelligence, Machine Learning, Data Science
