
AI TRAINING

Google Vertex AI for ML Teams

Build, deploy, and monitor ML models in production on Google Vertex AI.

Format
bootcamp
Duration
16–24h
Level
practitioner
Group size
6–16
Price per participant
€2K–€3K
Group price
€12K–€28K
Audience
ML engineers, data scientists, and MLOps practitioners already working within Google Cloud Platform
Prerequisites
Hands-on experience with Python and ML model training; familiarity with GCP basics (IAM, GCS, BigQuery); prior exposure to any ML framework (TensorFlow, PyTorch, or scikit-learn)

What it covers

This practitioner-level training guides GCP-focused ML engineers through the full Vertex AI platform: Workbench environments, Model Garden, Pipelines (built on Kubeflow), Feature Store, and model deployment patterns including online and batch serving. Participants work through hands-on labs covering realistic model-lifecycle scenarios, from experiment tracking to production drift monitoring. The training also compares Vertex AI against open-source alternatives such as MLflow, Airflow, and self-managed Kubernetes stacks. It combines instructor-led sessions with GCP sandbox environments and preconfigured lab notebooks.

By the end, you will be able to

  • Configure and launch a Vertex AI Workbench environment with experiment tracking integrated into a model training workflow
  • Build and run a multi-step ML pipeline using Vertex AI Pipelines and Kubeflow components
  • Register, version, and deploy a model to a Vertex AI online endpoint with autoscaling and traffic splitting
  • Set up Vertex AI Model Monitoring to detect feature skew and prediction drift in production
  • Evaluate trade-offs between Vertex AI managed services and open-source MLOps tooling for a given team context
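The deployment outcome above hinges on traffic splitting between model versions. As a conceptual sketch in plain Python (not the Vertex AI SDK — all names here are illustrative), this is how an endpoint routes requests across deployed versions by percentage weight, e.g. during a canary rollout:

```python
import random

def route_request(traffic_split: dict, rng: random.Random) -> str:
    """Pick a deployed model version according to its traffic weight.

    traffic_split maps model version IDs to integer percentages summing
    to 100 -- mirroring the traffic-split idea Vertex AI endpoints use
    when a new model version is deployed alongside an existing one.
    """
    assert sum(traffic_split.values()) == 100, "weights must sum to 100"
    roll = rng.uniform(0, 100)
    cumulative = 0
    for version, weight in traffic_split.items():
        cumulative += weight
        if roll < cumulative:
            return version
    return version  # fallback for the floating-point edge case roll == 100

# Canary rollout: 90% of traffic to the stable model, 10% to the candidate.
rng = random.Random(42)
counts = {"stable-v1": 0, "candidate-v2": 0}
for _ in range(10_000):
    counts[route_request({"stable-v1": 90, "candidate-v2": 10}, rng)] += 1
```

Shifting the weights over successive deployments (90/10, then 50/50, then 0/100) is the standard gradual-rollout pattern the lab exercises target.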

Topics covered

  • Vertex AI Workbench: managed notebooks and experiment tracking
  • Model Garden: foundation models, fine-tuning, and deployment
  • Vertex AI Pipelines: building and orchestrating Kubeflow-based ML pipelines
  • Feature Store: creating, sharing, and serving features at scale
  • Online and batch prediction endpoints: deployment patterns and autoscaling
  • Model monitoring: drift detection, skew detection, and alerting
  • Vertex AI vs open-source stacks: MLflow, Airflow, self-managed Kubernetes
  • Cost optimisation and resource management on GCP
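The drift- and skew-detection topic above is often introduced through the Population Stability Index, one common drift statistic. Vertex AI Model Monitoring uses its own distance measures, so treat this as a conceptual stand-in, not the service's implementation:

```python
import math

def psi(expected, actual, bins: int = 10) -> float:
    """Population Stability Index between a training (expected) and a
    serving (actual) sample of one numeric feature. Higher = more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1  # clamp values outside the training range
        # Smooth empty bins to avoid log(0) in the PSI formula.
        total = len(values)
        return [max(c, 1) / total for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score near 0; a shifted distribution scores high.
baseline = [i / 100 for i in range(1000)]   # roughly uniform on [0, 10)
shifted = [v + 3.0 for v in baseline]       # same shape, shifted mean
psi_same = psi(baseline, baseline)
psi_drift = psi(baseline, shifted)
```

A common industry rule of thumb treats PSI above roughly 0.25 as significant drift worth alerting on; the monitoring labs cover how to tune such thresholds per feature.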

Delivery

Typically delivered over 2–3 consecutive days, either on-site or via virtual instructor-led sessions using Google Meet or Zoom. Each participant requires a GCP project with billing enabled (or a trainer-provisioned sandbox account). Labs account for approximately 60% of total time; lecture and discussion make up the remaining 40%. Pre-reading materials covering GCP fundamentals are distributed one week in advance. A shared Git repository with starter notebooks is provided on day one.

What makes it work

  • Bring a real internal use case or dataset to the training so lab exercises map directly to participants' actual work
  • Assign a GCP champion within the team who maintains sandbox environments and propagates best practices after the training
  • Agree on a team-wide pipeline template and Feature Store naming convention before scaling beyond the first Vertex AI project
  • Schedule a follow-up review session 4–6 weeks post-training to address blockers encountered in real deployments

Common mistakes

  • Treating Vertex AI Pipelines as a direct drop-in for existing Airflow DAGs without redesigning task granularity and artifact passing
  • Ignoring IAM and VPC Service Controls until late in the project, causing blocked deployments in security-conscious environments
  • Over-relying on AutoML endpoints for all use cases without understanding latency, cost, and customisation limitations
  • Skipping model monitoring setup post-deployment, leaving production drift undetected until model quality degrades noticeably
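The first mistake above comes down to artifact passing: steps in a KFP-based pipeline exchange typed artifacts through files in storage rather than sharing in-process state or database-backed XComs the way Airflow tasks often do. A minimal pure-Python sketch of that contract (illustrative step names, not the KFP SDK):

```python
import json
import tempfile
from pathlib import Path

# Each step reads its inputs from files and writes its outputs to files --
# the contract Kubeflow-based pipelines enforce between containerized steps.

def preprocess(raw, out_path: Path) -> None:
    """Step 1: normalize raw values and persist them as an artifact."""
    hi = max(raw)
    out_path.write_text(json.dumps([v / hi for v in raw]))

def train(features_path: Path, model_path: Path) -> None:
    """Step 2: 'train' a trivial model (the mean) from the features artifact."""
    features = json.loads(features_path.read_text())
    model_path.write_text(json.dumps({"mean": sum(features) / len(features)}))

def evaluate(model_path: Path) -> float:
    """Step 3: read the model artifact and report a metric."""
    return json.loads(model_path.read_text())["mean"]

# The orchestrator wires steps together only through artifact paths.
with tempfile.TemporaryDirectory() as tmp:
    features = Path(tmp) / "features.json"
    model = Path(tmp) / "model.json"
    preprocess([2.0, 4.0, 6.0, 8.0], features)
    train(features, model)
    metric = evaluate(model)
```

Because no state crosses step boundaries except these artifacts, porting an Airflow DAG usually means regrouping tasks around artifact hand-offs rather than copying the task graph one-to-one.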

When NOT to take this training

Teams that are not yet committed to GCP as their primary cloud provider. Investing in deep Vertex AI expertise before the cloud strategy is settled creates rework risk if the organisation later migrates to AWS SageMaker or Azure ML.


This training is part of a Data & AI catalogue built for leaders serious about execution. Run the free diagnostic to see which trainings are priorities for your team.