Enterprise AI Engineering Foundations

AI Engineering Foundations: Enterprise Training for Technical Teams

Establish a durable AI foundation that enables your engineering teams to design and build state-of-the-art AI systems. Participants learn modern LLM concepts, prompting methods, and parameter-efficient fine-tuning, with options for closed-API or open-weight models in on-premises or air-gapped contexts. This course is the first tier of Caltech’s Enterprise AI Engineering series (Foundations, Intermediate, Advanced), a sequence that organizations can target to different skill levels and that cohorts can progress through over time, advancing from foundational literacy to applied prototyping and, ultimately, governed multi-agent design.

  • Learners: Foundational
  • Time: Client definable
  • Duration: 5 days; definable
  • Program Type: Customizable Programs
  • Certificate Type: Certificate
  • Format: Any format/location
  • CEUs: Available
  • PDUs: Available
  • Program Number: AIEngF-Custom
  • Fees: Group rate


Program Experience

This hands-on course blends instructor-led sessions with guided labs and team exercises grounded in your domain. We tailor examples to your processes so skills transfer to daily workflows. Activities culminate in a working prototype: adapting an open-weight model and deploying a private, internal code assistant for testing. Participants leave with reusable notebooks, reference checklists, and a light 30/60/90-day outline to sustain momentum internally. 


Course Info

Benefits

Your team will learn to:

  • Explain core LLM components (embeddings, attention) and when to use them

  • Apply zero/few-shot, chain-of-thought, and ReAct/Reflexion prompting for reliable behavior

  • Execute PEFT/LoRA-style fine-tuning to adapt open-weight models to enterprise context

  • Stand up a private code assistant for internal evaluation (non-production)

  • Differentiate demo versus production requirements and define validation gates

  • Draft a 30/60/90-day internal action plan
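As a taste of the prompting methods above, here is a minimal, self-contained sketch of a few-shot, chain-of-thought prompt assembled in Python. The task and exemplars are illustrative assumptions, not course materials; any LLM client would receive the final string as its input.

```python
# Sketch only: a few-shot prompt whose exemplars show their reasoning
# (chain of thought) before stating a final answer. The examples below
# are invented for illustration.

EXEMPLARS = [
    {
        "question": "A rack holds 4 servers; each server has 2 GPUs. How many GPUs?",
        "reasoning": "4 servers x 2 GPUs per server = 8 GPUs.",
        "answer": "8",
    },
    {
        "question": "3 teams each run 5 training jobs. How many jobs in total?",
        "reasoning": "3 teams x 5 jobs per team = 15 jobs.",
        "answer": "15",
    },
]

def build_prompt(question: str) -> str:
    """Return a few-shot prompt; the trailing 'Reasoning:' cue invites
    the model to continue with its own chain of thought."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"A: {ex['answer']}\n"
        )
    parts.append(f"Q: {question}\nReasoning:")  # model completes from here
    return "\n".join(parts)

prompt = build_prompt("2 clusters each host 6 nodes. How many nodes?")
print(prompt)
```

The same pattern underlies ReAct-style prompting, where exemplars interleave reasoning steps with tool actions and observations rather than ending in a single answer.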

Topics

  • LLM evolution; embeddings, attention; supervised/unsupervised/RL basics

  • Prompt engineering: zero-/few-shot, chain-of-thought, ReAct, Reflexion

  • Fine-tuning with LoRA/PEFT; working with APIs and open-source hubs

  • Private code-assistant setup with on-prem open-weight models

  • Capstone (education prototype): fine-tune an open-weight model; deploy a private code assistant for internal testing
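The LoRA idea referenced in the topics and capstone can be summarized in one equation: instead of updating a full weight matrix W, training learns a low-rank update so the adapted weight is W + (alpha/r)·B·A. The NumPy sketch below illustrates the arithmetic only; the dimensions and scaling are illustrative assumptions, and real fine-tuning would use a library such as Hugging Face PEFT.

```python
import numpy as np

# LoRA sketch: a frozen weight matrix W (d_out x d_in) is adapted by a
# low-rank update B @ A, where A is (r x d_in) and B is (d_out x r).
# Only A and B are trained, so trainable parameters drop from
# d_out * d_in to r * (d_in + d_out). All sizes here are illustrative.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # zero init: adapter starts as a no-op

def adapted_forward(x):
    """Forward pass with the low-rank update folded in."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted model matches the base model exactly.
assert np.allclose(adapted_forward(x), W @ x)

full = W.size            # 4096 parameters in the full matrix
lora = A.size + B.size   # 512 trainable adapter parameters
print(f"trainable fraction: {lora / full:.3f}")  # prints 0.125
```

The zero initialization of B is the standard trick that lets fine-tuning start from exactly the base model's behavior and drift only as the adapter is trained.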

Who Should Attend

Engineering leaders and practitioners in software, platform, and data engineering, product management, and DevSecOps, along with managers defining AI direction and governance, who need a rigorous baseline before internal productionization or vendor selection.

Schedule

This customizable program can be delivered remotely, on-site, or on the Caltech campus. It typically runs 40 hours but can be tailored to your organization’s needs.

FAQ

Do participants build a production model in this course? No. All artifacts are produced for educational purposes during the course.

Can our teams use enterprise data during the labs? Yes, if the data is non-confidential or appropriately anonymized. No production integrations occur during class.

Can this course run behind our firewall and use our internal enterprise model endpoints? Yes. We can deliver the course on-premises, in a VPC, or in air-gapped environments and connect to your enterprise-approved model endpoints (for example, Azure OpenAI or other hosted foundation models, Bedrock in a private VPC, or self-hosted open-weight deployments). In-class access is limited to research and evaluation; no production integrations or changes are performed. Any production access, promotion, or change management follows your enterprise policies and remains outside the scope of the course.

Which frameworks and tools are covered? We introduce open-weight and closed-API LLMs, Hugging Face resources, and prompt-engineering methods; we also cover the basics of LangChain, LlamaIndex, and LangGraph/LangSmith.

How is governance addressed in the Foundations tier? We cover policy guardrails, role-based access, audit trails, evaluation basics, and reliability considerations appropriate to foundational work.

Who owns the materials and outputs created in class? Your organization owns the creations your participants produce during class; Caltech retains ownership of course materials and templates.

How does this course relate to the Enterprise AI Engineering series? Foundations is the first tier of a three-course series designed to help companies target content to different skill levels.

Can learners progress through all three tiers over time? Yes. The series is structured so learners can begin with Foundations and move into Intermediate and Advanced as their roles and responsibilities evolve.

Our Educators

Our educators and guides are experts in their fields – engineering pioneers, applied science visionaries, TED talkers, professional facilitators, pilots, problem solvers, marketing mavens, and award-winning authors – who bring academic knowledge, practical approaches, and proven solutions to their programs.

Collectively, they have decades of experience in aerospace, communications, defense, electronics, energy, government, high-tech, pharma/medical devices, and precision manufacturing. 

Harish Kashyap

Artificial Intelligence, Machine Learning, Data Science

Mike Frantz, PhD

Artificial Intelligence, Machine Learning, Data Science
