AI TRAINING
AI Video Generation with Runway and Sora
Master text-to-video workflows using Runway and Sora to produce polished, on-brand video content efficiently.
What it covers
Participants learn to operate Runway Gen-3 and OpenAI Sora for end-to-end video production, from prompt engineering and style control to post-production integration. The training covers practical workflows for marketing campaigns, social content, and branded storytelling, alongside honest benchmarking of what current AI video tools can and cannot reliably deliver. Legal topics including copyright, model terms of service, and deepfake disclosure obligations are addressed directly. Sessions combine live tool demonstrations with hands-on briefs so participants leave with finished assets and repeatable processes.
What you'll be able to do
- Write structured text-to-video prompts that reliably produce consistent style, motion, and composition in Runway and Sora
- Select the right AI video tool and generation mode for a given brief, budget, and quality threshold
- Integrate AI-generated footage into an existing post-production workflow without quality degradation
- Identify current hard limits of AI video generation (duration, character consistency, physics) and set client expectations accordingly
- Apply EU AI Act disclosure requirements and platform terms of service to a live campaign asset
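The "structured prompt" skill above can be pictured as assembling a prompt from labelled parts. A minimal sketch follows; the field names (subject, camera, motion, mood, grade) are teaching conventions assumed here for illustration, not an official Runway or Sora prompt schema:

```python
# Hypothetical prompt-builder sketch. The labelled fields are illustrative
# conventions for the workshop, not an official Runway or Sora schema.
def build_video_prompt(subject, camera, motion, mood, grade):
    """Assemble a structured text-to-video prompt from labelled parts."""
    parts = [
        subject,               # what is in frame
        f"camera: {camera}",   # e.g. slow dolly-in, handheld
        f"motion: {motion}",   # how subjects and objects move
        f"mood: {mood}",       # lighting and atmosphere
        f"grade: {grade}",     # colour treatment
    ]
    return ", ".join(parts)

prompt = build_video_prompt(
    subject="a barista pouring latte art in a sunlit cafe",
    camera="slow dolly-in at counter height",
    motion="steam rising, gentle hand movement",
    mood="warm morning light, calm",
    grade="soft film-like colour, muted highlights",
)
print(prompt)
```

Keeping the parts separate makes it easy to vary one dimension (say, camera motion) while holding the rest constant, which is how consistent style across shots is usually tested.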
Topics covered
- Runway Gen-3 interface, modes, and prompt structure
- OpenAI Sora capabilities, access tiers, and limitations
- Text-to-video and image-to-video prompt engineering
- Style control: camera motion, mood, colour grading via prompts
- Post-production integration with Premiere Pro, After Effects, and DaVinci Resolve
- Realistic capability benchmarking: quality, duration, and consistency limits
- Legal considerations: copyright, model ToS, deepfake disclosure rules (EU AI Act)
- Brand safety and approval workflows for AI-generated video
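The approval-workflow topic above can be made concrete as a simple checklist gate. This is a minimal sketch under assumed team conventions; the check names are illustrative, not a legal or platform-mandated standard:

```python
# Minimal sketch of an AI-video approval gate. The checklist items are
# illustrative team conventions, not a legal or platform-mandated standard.
REQUIRED_CHECKS = [
    "ai_disclosure_label_applied",  # e.g. EU AI Act transparency labelling
    "likeness_rights_cleared",      # no unconsented real-person likeness
    "music_sync_licensed",          # soundtrack cleared for sync use
    "brand_guidelines_reviewed",    # on-brand style, logo, claims
    "model_tos_compliant",          # within the tool's terms of service
]

def approval_status(completed_checks):
    """Return (approved, missing) for the set of completed check names."""
    done = set(completed_checks)
    missing = [c for c in REQUIRED_CHECKS if c not in done]
    return (len(missing) == 0, missing)

ok, missing = approval_status(["ai_disclosure_label_applied",
                               "music_sync_licensed"])
# ok is False here; the remaining checks are listed in `missing`
```

Encoding the checklist once, before the first asset ships, is what makes the "policy ready to enforce immediately after" point in the next section practical.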
Delivery
Typically delivered as a 1- or 2-day in-person or live-virtual workshop. Each participant needs a laptop with browser access; facilitators provide a shared Runway team workspace for the session. Sessions run roughly 60% hands-on practice to 40% instruction. Participants work through three real briefs (a social short, a product demo clip, and a brand film teaser) and leave with exported assets. A follow-up async review channel (Slack or equivalent) for 2 weeks post-workshop is recommended.
What makes it work
- Defining a clear content type scope (social clips, storyboards, product demos) before the workshop so exercises match real use cases
- Establishing an internal approval checklist for AI video before the training so policy is ready to enforce immediately after
- Pairing a creative and a production operations person in each team so both prompt craft and workflow integration are covered
- Revisiting tool capabilities quarterly — Runway and Sora release significant model updates every few months
Common mistakes
- Treating AI video as a replacement for video production rather than a rapid prototyping and augmentation layer
- Ignoring character and object consistency limitations, leading to unusable assets discovered only at delivery
- Skipping legal review of generated content before publishing — especially for likeness and music sync
- Over-investing in prompt complexity without first testing what the model defaults produce
When NOT to take this
This training is not the right fit for teams without an established baseline video production process. If the organisation cannot currently brief, shoot, and edit a simple video, foundational production skills should be addressed before layering in AI tooling.
This training is part of a Data & AI catalog built for leaders serious about execution. Take the free diagnostic to see which trainings your team needs.