
AI TRAINING

Claude Code for Engineering Workflows

Engineering teams gain hands-on fluency with Claude Code's agentic CLI to automate complex coding tasks safely.

Format
bootcamp
Duration
12–20h
Level
practitioner
Group size
6–16
Price / participant
€2K–€3K
Group price
€12K–€30K
Audience
Software engineers, DevOps engineers, and platform teams with active development workflows
Prerequisites
Proficiency in at least one programming language (Python, TypeScript, or similar) and familiarity with CLI tools and version control (Git)

What it covers

This practitioner-level training equips software engineers and DevOps professionals with deep, hands-on skills in Claude Code, Anthropic's CLI-based agentic coding tool. Participants learn to orchestrate subagents, configure Model Context Protocol (MCP) servers, write custom slash commands, and design safe autonomous loops with permission hooks. Sessions alternate between structured instruction and live lab exercises, covering real engineering scenarios such as automated code review pipelines, repository-wide refactoring, and CI/CD integration. By the end, teams can design, audit, and maintain Claude Code workflows that run reliably in production environments.
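As a flavor of the lab material, a project-level permission policy can be as small as a few allow/deny rules in the project's `.claude/settings.json` (the file location and rule syntax follow Claude Code's settings format; the specific rules below are illustrative assumptions, not course content):

```json
{
  "permissions": {
    "allow": [
      "Read(src/**)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Read(.env)"
    ]
  }
}
```

A policy like this is what lets a team start agents in a tightly scoped, mostly read-only mode before granting broader write or shell access.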

What you'll be able to do

  • Configure a Claude Code project with MCP servers, custom tools, and permission policies from scratch
  • Build a multi-step subagent workflow that autonomously completes a defined engineering task within safe boundaries
  • Write and publish custom slash commands that encapsulate team-specific coding patterns
  • Integrate Claude Code into a CI/CD pipeline to automate code review and test generation on pull requests
  • Design and validate permission hooks that prevent unsafe file modifications or external calls in autonomous loops
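For instance, a team-specific slash command is just a Markdown prompt file checked into the repository (the `.claude/commands/` location and `$ARGUMENTS` placeholder follow Claude Code's documented convention; the command name and body here are hypothetical):

```markdown
<!-- .claude/commands/review-pr.md — invoked as /review-pr <pr-number> -->
Fetch the diff for pull request $ARGUMENTS, then review it against our
team conventions: error handling, test coverage for changed code, and
naming consistency. Summarize findings as a bulleted list ordered by
severity, and do not modify any files.
```

Because the file lives in the repository, the command is versioned and reviewed like any other team asset.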

Topics covered

  • Claude Code CLI setup, authentication, and project configuration
  • Subagent orchestration and multi-step agentic task design
  • Model Context Protocol (MCP): server setup, tool registration, and context management
  • Custom slash commands and workflow automation scripting
  • Permission hooks and safe autonomous loop design
  • Integrating Claude Code into CI/CD pipelines (GitHub Actions, GitLab CI)
  • Automated code review, test generation, and repository-wide refactoring
  • Monitoring, observability, and failure recovery for agentic workflows
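A CI integration of the kind listed above can start as a single pipeline step that runs Claude Code in non-interactive mode (a sketch for GitHub Actions; the job layout and review prompt are illustrative, and an `ANTHROPIC_API_KEY` secret is assumed to be provisioned):

```yaml
# .github/workflows/ai-review.yml (illustrative)
name: ai-review
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code
      - name: Automated review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          git diff origin/${{ github.base_ref }}... > pr.diff
          claude -p "Review the changes in pr.diff for bugs and missing tests."
```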

Delivery

Delivered as a 2–3 day intensive bootcamp, available in-person or as live virtual instructor-led sessions. Each half-day block is split roughly 30% instruction and 70% hands-on lab work in participants' own repositories or provided sandboxes. Participants require a machine with Claude Code installed and API access provisioned in advance. Lab exercises use real codebases where possible; synthetic repositories are provided as fallback. A shared Slack or Teams channel is maintained for async Q&A during and after the bootcamp.

What makes it work

  • Start with tightly scoped, read-only agentic tasks before expanding to write or execute permissions
  • Establish a shared library of approved slash commands and MCP configurations in a team repository from day one
  • Instrument agentic workflows with logging and cost-tracking from the outset to catch anomalies early
  • Run a blameless retrospective after the first production Claude Code workflow to capture learnings and refine permission policies

Common mistakes

  • Granting overly broad file-system or shell permissions to agents without scoping hooks, leading to unintended destructive operations
  • Skipping MCP server validation and assuming Claude Code inherits safe defaults from the IDE environment
  • Building autonomous loops without circuit-breakers or human-in-the-loop checkpoints, causing runaway API cost spikes
  • Treating slash commands as one-off scripts rather than versioned, documented team assets stored in the repository
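The circuit-breaker idea in the third bullet can be sketched in a few lines of plain Python (hypothetical names; in practice `step` would wrap whatever function actually invokes the agent and read its usage metadata):

```python
class BudgetExceeded(RuntimeError):
    """Raised when an autonomous loop hits one of its safety limits."""

def run_with_circuit_breaker(step, max_steps=10, max_cost_usd=5.0):
    """Run an agentic step function until it reports completion.

    `step` is any callable returning (done: bool, cost_usd: float).
    The loop aborts on either an iteration cap or a cost budget,
    which is what keeps a runaway loop from burning API spend.
    """
    total_cost = 0.0
    for i in range(max_steps):
        done, cost = step()
        total_cost += cost
        if total_cost > max_cost_usd:
            raise BudgetExceeded(f"cost {total_cost:.2f} USD over budget")
        if done:
            return i + 1, total_cost
    raise BudgetExceeded(f"no completion after {max_steps} steps")
```

The same wrapper is a natural place to add a human-in-the-loop checkpoint, e.g. pausing for confirmation every N steps instead of raising.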

When NOT to take this

This training is not the right fit for teams that have not yet adopted Claude or any LLM tooling in their workflow. Without that context they will lack the grounding to evaluate agentic risks, and would benefit more from a general AI literacy or prompt engineering foundation first.


This training is part of a Data & AI catalog built for leaders serious about execution. Take the free diagnostic to see which trainings your team needs.