
AI TRAINING

Understanding LLMs: Concepts, Limits, and Costs

Leave knowing precisely what LLMs can and cannot do, and what they really cost.

Format
Workshop
Duration
6–8h
Level
Literacy
Group size
8–20
Price per participant
€400–€800
Group price
€5K–€12K
Audience
Managers, product managers, and engineering leaders who make or influence AI investment and build decisions
Prerequisites
No machine learning background required; basic familiarity with software product development is helpful

What it covers

This training gives managers, product managers, and technical leaders a solid conceptual foundation in how large language models work, with no machine-learning prerequisites. Participants explore the transformer architecture at an intuitive level, tokenisation mechanics, context-window constraints, and the structural reasons behind hallucinations. The session alternates short presentations with guided case analyses so that decision-makers can confidently evaluate LLM vendors, scope realistic projects, and challenge technical assumptions.

By the end, you will be able to

  • Explain to a non-technical stakeholder why an LLM hallucinates and name two architectural reasons it cannot self-correct reliably
  • Calculate an approximate monthly API cost for a defined LLM use case using token estimates and published pricing
  • Identify at least three context-window constraints that would affect the design of a proposed product feature
  • Distinguish between benchmark performance and real-world task performance when evaluating a vendor's model claims
  • Decide whether a given use case requires a frontier model, a smaller open-weight model, or a fine-tuned model
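The cost-estimation outcome above can be sketched in a few lines. This is an illustrative model only: the prices and volumes below are placeholder assumptions, not any vendor's actual rates, and the function name is hypothetical.

```python
def monthly_api_cost(
    requests_per_month: int,
    input_tokens_per_request: int,
    output_tokens_per_request: int,
    input_price_per_mtok: float,   # $ per 1M input tokens (placeholder)
    output_price_per_mtok: float,  # $ per 1M output tokens (placeholder)
) -> float:
    """Estimate monthly API spend in dollars from token volumes and per-million-token prices."""
    input_cost = requests_per_month * input_tokens_per_request / 1e6 * input_price_per_mtok
    output_cost = requests_per_month * output_tokens_per_request / 1e6 * output_price_per_mtok
    return input_cost + output_cost

# Illustrative scenario: 100k requests/month, 1,500 input + 500 output
# tokens each, at assumed rates of $3 / $15 per million tokens.
cost = monthly_api_cost(100_000, 1_500, 500, 3.0, 15.0)
print(f"${cost:,.2f}/month")  # → $1,200.00/month
```

Note how output tokens, despite being a third of the volume here, dominate the bill because output pricing is typically several times input pricing — the asymmetry the "common mistakes" section warns about.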

Topics covered

  • Transformer architecture and next-token prediction explained intuitively
  • Tokenisation mechanics and token-cost economics (input vs output pricing)
  • Context window sizes, limits, and implications for application design
  • Hallucination failure modes: why they occur and how to detect them
  • Model families and capability tiers (frontier vs open-weight vs fine-tuned)
  • Overview of Retrieval-Augmented Generation as a mitigation strategy
  • Latency, throughput, and cost trade-offs across deployment options
  • Evaluating LLM vendor claims and benchmark literacy
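The context-window topic above can be made concrete with a rough fit check. This is a sketch under a stated assumption: the ~4-characters-per-token heuristic for English text is a coarse estimate, and real counts require the model's actual tokenizer; the function and numbers are hypothetical.

```python
def fits_context(prompt_chars: int, max_output_tokens: int, context_window: int) -> bool:
    """Roughly check whether a prompt plus its expected answer fits a model's context window."""
    estimated_input_tokens = prompt_chars // 4  # coarse heuristic for English text
    return estimated_input_tokens + max_output_tokens <= context_window

# A 400k-character document plus room for a 2k-token answer overflows
# a 100k-token window under this estimate.
print(fits_context(400_000, 2_000, 100_000))  # → False
```

A failing check like this is exactly the kind of design implication the workshop covers: it forces a choice between chunking, retrieval, or a larger (and costlier) model.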

Delivery

Delivered as a full-day in-person or live-virtual workshop (6–8 hours including breaks). Approximately 40% lecture with interactive Q&A, 40% structured case analysis using real-world product scenarios, and 20% guided cost-modelling exercise using public API pricing sheets. Participants receive a reference card summarising token economics, context window limits by major model, and a hallucination taxonomy. No laptop coding required; a spreadsheet tool is used for the cost exercise.

What makes it work

  • Bringing real internal use-case candidates into the cost-modelling exercise so learning is immediately applicable
  • Including both product and engineering leaders in the same session to align on shared vocabulary and assumptions
  • Following up with a short internal FAQ or decision checklist that leaders can use when reviewing LLM proposals
  • Revisiting pricing and model capability assumptions quarterly, given the speed of market change

Common mistakes

  • Treating LLM output as deterministic and reliable by default, leading to under-engineered validation layers
  • Underestimating token costs at scale by only modelling input tokens and ignoring output and context overhead
  • Assuming the largest frontier model is always the right choice without considering latency and cost trade-offs
  • Confusing benchmark accuracy scores with real task performance for the organisation's specific data and language

When NOT to take this training

This workshop is not the right fit if participants already build and deploy LLM pipelines in production — they need practitioner-level training on prompt engineering, RAG architecture, or MLOps, not a conceptual foundations session.


This training is part of a Data & AI catalogue built for leaders who are serious about execution. Take the free diagnostic to see which trainings are priorities for your team.