The Problem: Fast-paced SaaS products change frequently—sometimes several times a week. Traditional screencast software is either video-input based (requiring manual re-recording after every release) or lacks essential features for polished marketing videos. We needed a development-driven solution that reacts to UI changes instantly.
The Solution: Playwright Marketing Videos lets you define screencasts as code. Update your UI? Your screencast updates automatically. No manual video editing, no re-recording—just run your test suite and deploy.
Playwright tools for creating polished marketing videos with realistic mouse movements, typing animations, audio voice-overs, AI video overlays, banners, and element highlights.
Drop-in replacement for @playwright/test that automatically enhances every page interaction with smooth, human-like animations — perfect for product demos, feature showcases, and marketing screencasts.
- Table of Contents
- Quick Example
- Installation
- Quick Start
- Playwright Config
- Examples
- Audio / Voice-Over
- Video Overlays
- Timeline (Segment-based Recording)
- API
  - `test` / `expect`
  - `showBanner(page, title, options?)`
  - `showChapter(page, title, options?)`
  - `startScreencast(page, options?)` / `stopScreencast(page)`
  - `showActionAnnotations(page, options?)`
  - `highlightElement(page, locator, options?)`
  - `moveMouse(page, options)`
  - `moveMouseInNiceCurve(page, start, end, options?)`
  - `animatedType(page, locator, text)`
  - `showClickAnimation(page, point)`
- Cursor Management
- Scroll Animations
- Types
- License
## Quick Example

A minimal example with voice-over narration — the timeline fixture records video in segments and composes them with audio in post-production, so narration is always perfectly synced regardless of page load times:
import { test, showBanner } from "playwright-marketing-videos";
test("quick product intro", async ({ page, timeline }) => {
// Setup — not recorded
await page.goto("https://your-app.com");
// Start recording
await timeline.start();
// Show a banner with voice-over
await showBanner(page, { text: "Acme — Ship Faster", duration: 3000 });
await timeline.playAudio({ text: "Welcome to Acme — the fastest way to ship." });
// Interactions are automatically animated (smooth cursor, typing, ripples)
await page.getByRole("button", { name: "Get Started" }).click();
await page.getByLabel("Email").fill("user@example.com");
});
// → marketing-videos/quick-product-intro.mp4

## Installation

npm install playwright-marketing-videos

Peer dependency: `@playwright/test` >= 1.59.0 must be installed in your project.
## Quick Start

Replace your usual `@playwright/test` import with this package:
import { test, expect, showBanner } from "playwright-marketing-videos";
test("product demo", async ({ page, timeline }) => {
// Setup — not recorded
await page.goto("https://your-app.com");
// Start recording
await timeline.start();
// Show a title banner with fade-in/out
await showBanner(page, { text: "My Awesome Feature", duration: 3000 });
// Narrate — audio is synced to video segments automatically
await timeline.playAudio({ text: "Let me show you this awesome feature." });
// All interactions are now automatically animated:
// - Mouse moves in smooth bezier curves
// - Clicks show ripple effects
// - Typing is character-by-character with realistic timing
await page.getByRole("button", { name: "Get Started" }).click();
await page.getByLabel("Email").fill("user@example.com");
});
// → marketing-videos/product-demo.mp4

## Playwright Config

Create a dedicated Playwright config for marketing videos.
Since v1.59, you can use the screencast API for higher-resolution recordings with more control:
// playwright.marketing.config.ts
import { defineConfig, devices } from "@playwright/test";
export default defineConfig({
testMatch: "**/*.marketing-video.ts",
use: {
...devices["Desktop Chrome"],
viewport: { width: 1920, height: 1080 },
screenshot: "off",
locale: "en-US"
},
timeout: 120_000,
outputDir: "marketing-videos"
});

Then use `startScreencast()` in your test for full control over resolution and quality:
import { test, startScreencast, stopScreencast } from "playwright-marketing-videos";
test("my marketing video", async ({ page }) => {
const recording = await startScreencast(page, {
path: "marketing-videos/demo.webm",
size: { width: 1920, height: 1080 },
});
// ... perform actions ...
await recording.dispose(); // or: await stopScreencast(page);
});

Alternatively, the classic video config still works:
use: {
viewport: { width: 1920, height: 1080 },
video: { mode: "on", size: { width: 1920, height: 1080 } },
},

Run with:
npx playwright test --config playwright.marketing.config.ts

## Examples

A complete example combining banners, voice-over narration, UI interactions, and highlights. Audio is synced to video segments automatically via the Timeline:
import {
test,
showBanner,
highlightElement,
} from "playwright-marketing-videos";
test("full product demo", async ({ page, timeline }) => {
// Setup — not recorded
await page.goto("https://acme.example.com");
// Start recording
await timeline.start();
await showBanner(page, { text: "Acme — Project Management", duration: 4000 });
await timeline.playAudio({ text: "Welcome to Acme — the fastest way to manage your projects." });
await page.waitForLoadState("networkidle");
await timeline.playAudio({ text: "Let me show you how easy it is to create a new project." });
await page.getByRole("button", { name: "New Project" }).click();
await page.getByLabel("Project name").fill("My First Project");
// Highlight a key feature
await highlightElement(page, page.locator(".template-picker"), {
borderColor: "#4f46e5",
zoomScale: 1.08
});
await timeline.playAudio({ text: "Choose from dozens of pre-built templates to get started instantly." });
await page.getByRole("button", { name: "Create" }).click();
});

Use a generated AI video as a cinematic intro before your product walkthrough:
import {
test,
generateVideoOverlay,
playVideoOverlay,
showBanner,
type VideoOverlay
} from "playwright-marketing-videos";
let introVideo: VideoOverlay;
test.beforeAll(async () => {
introVideo = await generateVideoOverlay({
prompt: "Cinematic zoom into a glowing laptop screen showing a beautiful dashboard, soft blue light, professional office background",
durationSec: 5,
aspectRatio: "16:9"
});
});
test("video intro demo", async ({ page, timeline }) => {
await page.goto("https://your-app.com");
await timeline.start();
await playVideoOverlay(page, introVideo);
await timeline.playAudio({ text: "Introducing the next generation of project analytics." });
// Continue with the product demo
await showBanner(page, { text: "Real-Time Analytics Dashboard", duration: 3000 });
await page.getByRole("link", { name: "Dashboard" }).click();
});

Play voice-over narration while performing animated interactions — the audio is synced to this segment in post-production:
import { test } from "playwright-marketing-videos";
test("background narration", async ({ page, timeline }) => {
await page.goto("https://your-app.com/settings");
await timeline.start();
// Narrate — audio plays over the interactions that follow
await timeline.playAudio({
text: "The settings page gives you full control over notifications, privacy, and appearance.",
});
// These interactions are recorded as the segment for the audio above
await page.getByRole("tab", { name: "Notifications" }).click();
await page.getByLabel("Email alerts").check();
await page.getByRole("tab", { name: "Appearance" }).click();
await page.getByLabel("Dark mode").check();
});

Chain multiple AI-generated video clips to create a story arc:
import {
test,
generateVideoOverlay,
playVideoOverlay,
showBanner,
type VideoOverlay
} from "playwright-marketing-videos";
let problemVideo: VideoOverlay;
let solutionVideo: VideoOverlay;
test.beforeAll(async () => {
problemVideo = await generateVideoOverlay({
prompt: "Frustrated person drowning in spreadsheets and sticky notes, messy desk, overwhelmed expression",
durationSec: 5
});
solutionVideo = await generateVideoOverlay({
prompt: "Clean modern workspace with a sleek app on screen, person smiling confidently, minimal design",
durationSec: 5
});
});
test("multi-scene video", async ({ page }) => {
await page.goto("https://your-app.com");
// Scene 1: Problem statement
await playVideoOverlay(page, problemVideo);
await showBanner(page, "There's a better way.");
// Scene 2: Solution reveal
await playVideoOverlay(page, solutionVideo);
await showBanner(page, "Meet Acme.", { duration: 3000 });
// Continue with live product demo...
await page.getByRole("button", { name: "Get Started" }).click();
});

Use a hosted video file as an overlay — great for brand intros, stock footage, or pre-rendered animations:
import {
test,
generateVideoOverlay,
playVideoOverlay,
showBanner,
UrlVideoProvider,
type VideoOverlay
} from "playwright-marketing-videos";
let brandIntro: VideoOverlay;
test.beforeAll(async () => {
brandIntro = await generateVideoOverlay({
prompt: "Brand intro video",
provider: new UrlVideoProvider("https://cdn.example.com/videos/brand-intro.mp4")
});
});
test("branded intro from URL", async ({ page, timeline }) => {
await page.goto("https://your-app.com");
await timeline.start();
await playVideoOverlay(page, brandIntro);
await timeline.playAudio({ text: "Built by developers, for developers." });
// Continue with the live product demo
await showBanner(page, { text: "Let's dive in.", duration: 3000 });
await page.getByRole("button", { name: "Get Started" }).click();
});

Create portrait-oriented videos for social media (TikTok, Reels, Shorts):
import { test, generateVideoOverlay, playVideoOverlay, type VideoOverlay } from "playwright-marketing-videos";
import { defineConfig, devices } from "@playwright/test";
// In your playwright config, use a vertical viewport:
// viewport: { width: 720, height: 1280 }
let video: VideoOverlay;
test.beforeAll(async () => {
video = await generateVideoOverlay({
prompt: "Vertical video of a hand swiping through a beautiful mobile app interface",
durationSec: 5,
aspectRatio: "9:16" // Vertical aspect ratio
});
});
test("mobile promo", async ({ page }) => {
await page.goto("https://your-app.com/mobile");
await playVideoOverlay(page, video);
});

Use ElevenLabs for higher-quality, multilingual narration:
import { test, showBanner } from "playwright-marketing-videos";
test("premium voice demo", async ({ page, timeline }) => {
await page.goto("https://your-app.com");
await timeline.start();
await showBanner(page, { text: "Productivity Reimagined", duration: 3000 });
await timeline.playAudio({
provider: "elevenlabs",
text: "Welcome to the future of productivity. Let us show you what's possible.",
voiceId: "21m00Tcm4TlvDq8ikWAM",
modelId: "eleven_multilingual_v2"
});
await page.getByRole("button", { name: "Start Tour" }).click();
});

## Audio / Voice-Over

Generate text-to-speech audio files and play them in your marketing videos. Two TTS providers are supported:
- Kokoro (default) — free, local, high-quality neural TTS via kokoro-js. No API key needed.
- ElevenLabs — cloud-based TTS with premium voices. Requires an API key.
Install the Kokoro package:
npm install kokoro-js

// Default provider — no API key needed!
// First call downloads an ~86MB model (cached after that)
await timeline.playAudio({
text: "Welcome to our product demo!",
voice: "af_sky", // Optional (default: "af_heart")
});

Available options:
- `voice` — Kokoro voice ID (default: `"af_heart"`)
- `dtype` — Model precision: `"fp32"`, `"q8"`, `"q4"` (default: `"q8"`)
- `modelId` — HuggingFace model ID (default: `"onnx-community/Kokoro-82M-v1.0-ONNX"`)
Install the ElevenLabs package and set your API key:
npm install @elevenlabs/elevenlabs-js
export ELEVENLABS_API_KEY="your-api-key-here"

await timeline.playAudio({
provider: "elevenlabs",
text: "Welcome to our product demo!",
voiceId: "21m00Tcm4TlvDq8ikWAM",
modelId: "eleven_multilingual_v2" // Optional (this is the default)
});

Get an API key at elevenlabs.io.
`generateAudioLayer(options)` generates an audio file from text using the configured TTS provider.
Returns: AudioLayer object with { filePath, text, voiceId? }.
Generated audio files are cached locally in an __audio_cache/ directory (created in the current working directory). Cache keys are SHA-256 hashes of the provider configuration, so:
- Identical requests are served instantly from disk
- Changing the text, voice, or model generates a new file
- The cache directory can be safely deleted to regenerate all audio
- Add `__audio_cache/` to `.gitignore` if you don't want to commit cached audio files, or commit them to avoid regenerating in CI
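As a rough illustration of this hashing scheme, a cache key can be derived by hashing a stable serialization of the request options (the field names and serialization below are assumptions for illustration, not the package's actual code):

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of a TTS request; the real option names may differ.
interface AudioRequest {
  provider: string;
  text: string;
  voice?: string;
  modelId?: string;
}

function audioCacheKey(req: AudioRequest): string {
  // Serialize with a fixed field order so identical requests always
  // produce identical JSON, and therefore identical hashes.
  const payload = JSON.stringify({
    provider: req.provider,
    text: req.text,
    voice: req.voice ?? null,
    modelId: req.modelId ?? null,
  });
  // SHA-256 hex digest: a stable 64-character cache key
  return createHash("sha256").update(payload).digest("hex");
}
```

Any change to the text, voice, or model changes the payload and therefore the key, which is why edits trigger regeneration while repeated requests hit the disk cache.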
If you were using ElevenLabs (the previous default), add provider: "elevenlabs" to your generateAudioLayer() calls:
// Before (v0.2.x)
const audio = await generateAudioLayer({ text: "Hello", voiceId: "..." });
// After (v0.3.x)
const audio = await generateAudioLayer({ provider: "elevenlabs", text: "Hello", voiceId: "..." });Generate AI video clips from text prompts — or use existing video files from any URL — and play them as full-screen overlays in your marketing videos. Videos are rendered directly in the browser viewport so Playwright's native video recording captures them — no external video editing required.
Set your API key:
export RUNWAYML_API_KEY="your-api-key-here"

import { generateVideoOverlay, playVideoOverlay } from "playwright-marketing-videos";
const video = await generateVideoOverlay({
prompt: "A smooth camera fly-through of a modern SaaS dashboard with charts animating in",
durationSec: 5, // Video length in seconds (default: 5)
aspectRatio: "16:9" // "16:9" (default) or "9:16" for vertical videos
});
await playVideoOverlay(page, video);

The default Runway provider uses the Gen-4 Turbo model. You can customize the model or API key by providing your own RunwayVideoProvider instance:
import { generateVideoOverlay, RunwayVideoProvider } from "playwright-marketing-videos";
const video = await generateVideoOverlay({
prompt: "Colorful particles forming a company logo",
provider: new RunwayVideoProvider({
model: "gen4_turbo",
apiKey: "rk-my-specific-key" // Override the environment variable
})
});

Get an API key at runwayml.com.
Use UrlVideoProvider to download and cache any hosted video file (from a CDN, S3, direct link, etc.) — no AI generation needed:
import { generateVideoOverlay, playVideoOverlay, UrlVideoProvider } from "playwright-marketing-videos";
const video = await generateVideoOverlay({
prompt: "Company brand intro", // Used only for logging/cache key
provider: new UrlVideoProvider("https://cdn.example.com/videos/brand-intro.mp4")
});
await playVideoOverlay(page, video);

The video is downloaded once and cached locally. Subsequent runs with the same URL serve the file from disk instantly.
`generateVideoOverlay(options)` generates a short AI video clip from a text prompt.
| Option | Type | Default | Description |
|---|---|---|---|
| `prompt` | `string` | required | Text prompt describing the video to generate |
| `durationSec` | `number` | `5` | Video duration in seconds |
| `aspectRatio` | `"16:9" \| "9:16"` | `"16:9"` | Aspect ratio — use `"9:16"` for vertical/mobile videos |
| `provider` | `VideoProvider` | `RunwayVideoProvider` | Video generation provider instance |
Returns: VideoOverlay object with { filePath, prompt, durationSec }.
`playVideoOverlay(page, video)` plays a video overlay as a full-screen layer on the Playwright page. The video is injected as a base64-encoded `<video>` element that covers the entire viewport.
// Play video and wait for it to finish (default)
await playVideoOverlay(page, video);
// Play video and continue immediately (e.g. to animate UI beneath)
await playVideoOverlay(page, video, false);

You can implement the VideoProvider interface to add support for other video generation APIs (e.g. Kling, Luma, Stability, Pika):
import type { VideoProvider } from "playwright-marketing-videos";
class MyCustomProvider implements VideoProvider {
readonly name = "my-provider";
async generate(options: {
prompt: string;
durationSec: number;
aspectRatio: string;
cacheDir?: string;
cacheKey?: string;
}): Promise<Buffer> {
// Call your preferred video generation API here
// Return the video file as a Buffer
const response = await fetch("https://my-video-api.com/generate", {
method: "POST",
body: JSON.stringify({ prompt: options.prompt, duration: options.durationSec })
});
return Buffer.from(await response.arrayBuffer());
}
}
const video = await generateVideoOverlay({
prompt: "An abstract gradient animation",
provider: new MyCustomProvider()
});

Generated videos are cached locally in a `__video_cache/` directory (created in the current working directory). Cache keys are SHA-256 hashes of the prompt + duration + aspect ratio + provider name, so:
- Identical requests are served instantly from disk
- Changing the prompt, duration, aspect ratio, or provider generates a new file
- If a generation task times out, the provider can store intermediate state (e.g. pending task IDs) in the cache directory so the next run resumes polling instead of creating a new task
- The cache directory can be safely deleted to regenerate all videos
- Add `__video_cache/` to `.gitignore` if you don't want to commit cached video files, or commit them to avoid regenerating in CI
## Timeline (Segment-based Recording)

Playwright cannot record audio, and page load times are non-deterministic (0-2 seconds). This makes it impossible to pre-compose a fixed audio timeline that stays in sync with the video.
The Timeline system solves this by recording video in discrete segments using the screencast API. Each call to timeline.playAudio() creates a "cut point" — it stops the current segment and starts a new one. Audio is attached to the segment it triggered, and everything is stitched together with ffmpeg in post-production. Since each audio clip is paired with its exact video segment, timing is always perfect regardless of load times.
Requirements: ffmpeg and ffprobe must be installed on your system for composition.
Destructure timeline from the test fixture. Recording does NOT start automatically — call timeline.start() when you're ready. This lets you set up test data without recording the setup:
import { test } from "playwright-marketing-videos";
test("product demo", async ({ page, timeline }) => {
// Setup phase — not recorded
await page.goto("https://app.example.com/setup");
await createDemoUser(page);
// Start recording
await timeline.start();
await page.goto("https://app.example.com");
await page.getByRole("button", { name: "New Project" }).click();
await page.getByLabel("Name").fill("My Project");
});
// → marketing-videos/product-demo.mp4 is composed automatically

The output file is derived from the test title (`"product demo"` → `product-demo.mp4`) and placed in `marketing-videos/`.
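The title-to-filename mapping can be approximated by a simple slug function (a sketch; the package's exact normalization rules may differ):

```typescript
// Approximate the test-title → output-filename mapping.
// Hypothetical sketch; the package's exact rules may differ.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

const file = `marketing-videos/${slugify("product demo")}.mp4`;
// → marketing-videos/product-demo.mp4
```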
timeline.playAudio() generates TTS audio and creates a cut point. The audio is attached to the segment that starts after the call — it plays over the upcoming actions:
import { test, showBanner } from "playwright-marketing-videos";
test("demo with narration", async ({ page, timeline }) => {
await page.goto("https://app.example.com/setup");
await seedTestData(page);
await timeline.start();
await showBanner(page, { text: "Acme — Ship Faster", duration: 3000 });
await timeline.playAudio({ text: "Welcome to Acme. Let me show you around." });
await page.goto("https://app.example.com");
await page.waitForLoadState("networkidle");
await timeline.playAudio({ text: "Creating a new project is easy." });
await page.getByRole("button", { name: "New Project" }).click();
await page.getByLabel("Name").fill("My Project");
await timeline.playAudio({ text: "And you're done!" });
await page.getByRole("button", { name: "Create" }).click();
});

This produces a video with three narrated segments, each perfectly synced regardless of how long page loads take.
Add per-segment fade-in/fade-out transitions via timeline.cut() or timeline.playAudio():
// Cut with a fade-out on the current segment and fade-in on the next
await timeline.cut({ fadeOutMs: 500, fadeInMs: 300 });
// Or attach fades to an audio cut point
await timeline.playAudio(
{ text: "Next chapter..." },
{ fadeInMs: 400, fadeOutMs: 400 }
});

Insert a banner as its own discrete segment using `timeline.addSegment()`; it is composited into the final video like any other segment:
await timeline.addSegment({
type: "banner",
bannerOptions: { text: "Chapter 2: Advanced Features", duration: 3000 },
durationMs: 3000,
fadeInMs: 300,
fadeOutMs: 300,
});

| Method | Description |
|---|---|
| `timeline.start()` | Start recording the first screencast segment |
| `timeline.stop()` | Stop recording and finalize the last segment |
| `timeline.playAudio(audioOptions, segmentOptions?)` | Generate TTS audio, cut the screencast, attach audio to the next segment |
| `timeline.cut(options?)` | Cut the current segment and start a new one. Options: `audio`, `fadeInMs`, `fadeOutMs` |
| `timeline.addSegment(segment)` | Insert a non-screencast segment (banner, video overlay) |
| `timeline.compose(outputPath)` | Stitch all segments + audio into a final mp4 using ffmpeg |
| `timeline.getSegments()` | Get all segments in order |
| `timeline.writeManifest(outputFile)` | Write a timeline.json manifest to disk |
The timeline.json manifest contains all segment metadata (paths, durations, offsets, audio references) for debugging or external tooling.
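For illustration, a manifest for a two-segment video might look roughly like this (the field names below are hypothetical and meant only to convey the shape; consult the actual timeline.json your run produces):

```json
{
  "segments": [
    {
      "index": 0,
      "type": "screencast",
      "path": "segments/segment-0.webm",
      "durationMs": 4200,
      "offsetMs": 0,
      "audio": { "filePath": "__audio_cache/3f2a9c.mp3", "text": "Welcome to Acme." }
    },
    {
      "index": 1,
      "type": "banner",
      "durationMs": 3000,
      "offsetMs": 4200,
      "fadeInMs": 300,
      "fadeOutMs": 300
    }
  ]
}
```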
## API

Extended Playwright test fixture. When you use `test` from this package, all page locator methods (`click`, `fill`, `locator`, `getByRole`, `getByText`, `getByTestId`, `getByLabel`, `getByPlaceholder`, `getByAltText`, `getByTitle`) are automatically wrapped with marketing animations:
- Clicks move the cursor in a smooth curve to the target, show a ripple animation, then click.
- Fill/type moves the cursor, shows a click animation, then types character-by-character with realistic timing (50-150ms per keystroke).
- Scrolling is handled automatically with a scroll indicator animation when elements are off-screen.
The cursor icon changes contextually: arrow (default), pointer (over buttons/links), text cursor (over inputs).
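The smooth-curve movement described above can be pictured as sampling points along a cubic bezier between the cursor's current position and the target. The sketch below is illustrative only; the control-point placement is an assumption, not the package's actual curve:

```typescript
// Illustrative cubic bezier sampling for a smooth cursor path —
// not the package's actual implementation.
interface Point { x: number; y: number; }

function cubicBezier(p0: Point, p1: Point, p2: Point, p3: Point, t: number): Point {
  const u = 1 - t;
  const b = (a: number, c: number, d: number, e: number) =>
    u * u * u * a + 3 * u * u * t * c + 3 * u * t * t * d + t * t * t * e;
  return { x: b(p0.x, p1.x, p2.x, p3.x), y: b(p0.y, p1.y, p2.y, p3.y) };
}

function cursorPath(start: Point, end: Point, steps = 60): Point[] {
  // Nudge the control points off the straight line so the cursor travels
  // in a gentle arc instead of a robotic straight segment.
  const c1 = { x: start.x + (end.x - start.x) / 3, y: start.y + (end.y - start.y) / 3 - 40 };
  const c2 = { x: start.x + (2 * (end.x - start.x)) / 3, y: start.y + (2 * (end.y - start.y)) / 3 + 40 };
  const points: Point[] = [];
  for (let i = 0; i <= steps; i++) {
    points.push(cubicBezier(start, c1, c2, end, i / steps));
  }
  return points;
}
```

Each sampled point becomes one small `mouse.move()` step, which is what makes the recorded motion look human rather than teleporting.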
`showBanner(page, title, options?)` displays a full-screen banner overlay with fade-in/out animations.
await showBanner(page, "Feature Showcase", {
duration: 3000, // Display duration in ms (default: 2000)
fadeInMs: 500, // Fade-in duration (default: 300)
fadeOutMs: 500, // Fade-out duration (default: 300)
backgroundColor: "#1e212b", // Background color (default: "#1e212b")
textColor: "#ffffff", // Text color (default: "#ffffff")
fontSize: "48px", // Font size (default: "48px")
callback: async () => {
// Optional: runs while the banner is shown (e.g. navigate to a page)
await page.goto("https://your-app.com");
}
});

`showChapter(page, title, options?)` shows a chapter overlay with a blurred backdrop, centered on the page — ideal for section titles.
Uses Playwright v1.59's built-in screencast.showChapter() API.
await showChapter(page, "Chapter 1: Getting Started", {
description: "Setting up the project",
duration: 3000, // default: 2000ms
});

`startScreencast(page, options?)` / `stopScreencast(page)` start and stop high-resolution video recording using Playwright v1.59's screencast API. Defaults to 1920x1080 for Full HD output.
const recording = await startScreencast(page, {
path: "output/demo.webm",
size: { width: 1920, height: 1080 },
quality: 90,
});
// ... perform actions ...
await recording.dispose(); // or: await stopScreencast(page);

`showActionAnnotations(page, options?)` enables visual action annotations on the recording. Each Playwright action is annotated with a label overlay.
const actions = await showActionAnnotations(page, {
position: "top-right",
fontSize: 20,
duration: 800,
});
// ... perform actions with visible annotations ...
await actions.dispose(); // stop showing annotations

Note on `showBanner` callbacks: when a callback is provided, the banner is injected before the callback runs and persists across page navigations (re-injected on every load event). This is useful for showing a banner during a page transition.
`highlightElement(page, locator, options?)` highlights a page element with a zoom-in effect and colored border.
await highlightElement(page, page.locator(".feature-card"), {
duration: 2000, // How long the highlight stays (default: 2000)
borderColor: "#ff6b35", // Highlight border color (default: "#ff6b35")
borderWidth: 4, // Border width in px (default: 4)
zoomScale: 1.05 // Zoom factor (default: 1.05)
});

`moveMouse(page, options)` moves the visible cursor in a smooth bezier curve to a target.
// Move to a locator (auto-scrolls into view)
await moveMouse(page, { to: page.getByText("Click me") });
// Move to specific coordinates
await moveMouse(page, { to: { x: 500, y: 300 } });
// Move from a specific starting point
await moveMouse(page, {
from: { x: 100, y: 100 },
to: page.getByRole("button"),
durationMs: 1000
});

`moveMouseInNiceCurve(page, start, end, options?)` is a lower-level function for moving between two specific points with bezier curves.
await moveMouseInNiceCurve(page, { x: 0, y: 0 }, { x: 500, y: 400 }, {
durationMs: 800, // Animation duration (auto-calculated from distance if omitted)
steps: 60, // Number of interpolation steps (auto-calculated if omitted)
seed: 42 // Deterministic randomness seed for reproducible curves
});

`animatedType(page, locator, text)` types text character-by-character with realistic timing. It moves the cursor to the field, clicks to focus, then types with 50-150ms delays between keystrokes.
await animatedType(page, page.getByLabel("Search"), "playwright marketing");

`showClickAnimation(page, point)` shows a ripple effect at the given coordinates. Used internally by the click wrapper, but available for manual use.

await showClickAnimation(page, { x: 640, y: 360 });

## Cursor Management

import { addVisibleCursor, hideCursor, showCursor, updateCursorPosition } from "playwright-marketing-videos";
await addVisibleCursor(page); // Inject the visible cursor (called automatically by test fixture)
await hideCursor(page); // Temporarily hide the cursor
await showCursor(page); // Show the cursor again
await updateCursorPosition(page, 100, 200); // Manually set cursor position

## Scroll Animations

import { showScrollAnimation, hideScrollAnimation } from "playwright-marketing-videos";
await showScrollAnimation(page); // Show a mouse-scroll indicator near the cursor
await hideScrollAnimation(page); // Remove the scroll indicator

## Types

All types are exported for use in your own code:
import type {
MousePoint,
MouseTarget,
AudioLayer,
VideoOverlay,
GenerateAudioLayerOptions,
GenerateVideoOverlayOptions,
KokoroOptions,
ShowBannerOptions,
ShowChapterOptions,
ScreencastOptions,
HighlightElementOptions,
MoveMouseOptions,
MoveMouseInNiceCurveOptions,
VideoProvider,
UrlVideoProvider,
Timeline,
TimelineOptions,
TimelineSegment,
TimelineManifest,
ScreencastSegment,
BannerSegment,
VideoOverlaySegment
} from "playwright-marketing-videos";

## License

MIT