AI TRAINING
Understanding LLMs: Concepts, Limits and Costs
Walk away knowing exactly what LLMs can and cannot do, and what they cost your organisation.
What it covers
This training gives managers, product owners, and engineering leaders a rigorous conceptual grounding in how large language models work, without requiring a machine-learning background. Participants explore transformer architecture at an intuitive level, token economics, context window constraints, and the structural reasons why hallucinations occur. The session combines short lectures with guided case analysis so leaders can evaluate LLM vendors, scope realistic projects, and challenge engineering assumptions with confidence.
What you'll be able to do
- Explain to a non-technical stakeholder why an LLM hallucinates and name two architectural reasons it cannot self-correct reliably
- Calculate an approximate monthly API cost for a defined LLM use case using token estimates and published pricing (a worked sketch follows this list)
- Identify at least three context-window constraints that would affect the design of a proposed product feature
- Distinguish between benchmark performance and real-world task performance when evaluating a vendor's model claims
- Decide whether a given use case requires a frontier model, a smaller open-weight model, or a fine-tuned model
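To make the cost objective concrete, here is a minimal back-of-the-envelope model of the kind participants build in the spreadsheet exercise, expressed in Python for readability. Every price, traffic figure, and token count below is an illustrative assumption, not any vendor's actual rate:

```python
# Back-of-the-envelope monthly LLM API cost. All numbers are
# illustrative assumptions, not real vendor pricing.

PRICE_INPUT_PER_M = 3.00    # USD per 1M input tokens (assumed)
PRICE_OUTPUT_PER_M = 15.00  # USD per 1M output tokens (assumed; output usually costs more)

requests_per_day = 10_000            # assumed traffic
input_tokens_per_request = 1_500     # prompt + retrieved context overhead (assumed)
output_tokens_per_request = 400      # typical generated reply (assumed)

monthly_input = requests_per_day * 30 * input_tokens_per_request    # 450M tokens
monthly_output = requests_per_day * 30 * output_tokens_per_request  # 120M tokens

cost = (monthly_input / 1e6) * PRICE_INPUT_PER_M \
     + (monthly_output / 1e6) * PRICE_OUTPUT_PER_M
print(f"Estimated monthly spend: ${cost:,.0f}")
```

Under these assumptions, output tokens dominate the bill despite their lower volume, because output pricing is typically several times input pricing; this is exactly the trap described under Common mistakes below.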
Topics covered
- Transformer architecture and next-token prediction explained intuitively
- Tokenisation mechanics and token-cost economics (input vs output pricing); see the sketch after this list
- Context window sizes, limits, and implications for application design
- Hallucination failure modes: why they occur and how to detect them
- Model families and capability tiers (frontier vs open-weight vs fine-tuned)
- An overview of Retrieval-Augmented Generation (RAG) as a hallucination mitigation strategy
- Latency, throughput, and cost trade-offs across deployment options
- Evaluating LLM vendor claims and benchmark literacy
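The tokenisation and context-window topics can be previewed in a few lines of code. The sketch below uses OpenAI's open-source tiktoken tokeniser purely for illustration; other model families tokenise differently, and the window size and output estimate are assumed values, not a specific model's limits:

```python
# Illustrative only: the tokeniser choice, window size, and output
# estimate are assumptions, not a recommendation of any model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by GPT-4-era OpenAI models

text = "Large language models read text as tokens, not words or characters."
print(f"{len(text.split())} words -> {len(enc.encode(text))} tokens")

# Context-window fit check: prompt tokens plus expected output must fit.
CONTEXT_WINDOW = 128_000                      # assumed model limit, in tokens
prompt = "Summarise this contract:\n" + "lorem ipsum dolor sit amet " * 4_000
prompt_tokens = len(enc.encode(prompt))
expected_output = 800                         # assumed length of a one-page summary
print(f"prompt = {prompt_tokens:,} tokens; "
      f"fits: {prompt_tokens + expected_output <= CONTEXT_WINDOW}")
```

A useful rule of thumb: in English text, one token is roughly three-quarters of a word, but the only reliable count comes from the target model's own tokeniser.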
Delivery
Delivered as a full-day in-person or live-virtual workshop (6-8 hours including breaks). Approximately 40% lecture with interactive Q&A, 40% structured case analysis using real-world product scenarios, and 20% guided cost-modelling exercise using public API pricing sheets. Participants receive a reference card summarising token economics, context window limits by major model, and a hallucination taxonomy. No laptop coding required; a spreadsheet tool is used for the cost exercise.
What makes it work
- Bringing real internal use-case candidates into the cost-modelling exercise so learning is immediately applicable
- Including both product and engineering leaders in the same session to align on shared vocabulary and assumptions
- Following up with a short internal FAQ or decision checklist that leaders can use when reviewing LLM proposals
- Revisiting pricing and model capability assumptions quarterly, given the speed of market change
Common mistakes
- Treating LLM output as deterministic and reliable by default, leading to under-engineered validation layers
- Underestimating token costs at scale by only modelling input tokens and ignoring output and context overhead
- Assuming the largest frontier model is always the right choice without considering latency and cost trade-offs
- Confusing benchmark accuracy scores with real task performance for the organisation's specific data and language
When NOT to take this
This workshop is not the right fit if participants already build and deploy LLM pipelines in production; such teams need practitioner-level training on prompt engineering, RAG architecture, or MLOps, not a conceptual foundations session.
This training is part of a Data & AI catalog built for leaders serious about execution. Take the free diagnostic to see which trainings your team needs.