# Assets Source: https://developers.heygen.com/assets Upload files for use across the HeyGen API. Upload images, videos, audio, or PDFs to get an `asset_id` you can reference in other endpoints — like `POST /v3/video-agents`, `POST /v3/videos`, or `POST /v3/avatars`. ## Upload an Asset ```bash theme={null} curl -X POST https://api.heygen.com/v3/assets \ -H "x-api-key: YOUR_API_KEY" \ -F "file=@./my-photo.png" ``` ```json Response theme={null} { "data": { "asset_id": "ast_abc123", "url": "https://files.heygen.com/asset/ast_abc123.png", "mime_type": "image/png", "size_bytes": 204800 } } ``` ## Supported File Types | Category | Formats | | -------- | --------- | | Image | PNG, JPEG | | Video | MP4, WebM | | Audio | MP3, WAV | | Document | PDF | Max file size: **32 MB**. MIME type is auto-detected from file bytes. ## Using Assets Once uploaded, reference the `asset_id` anywhere the API accepts asset inputs: ```json theme={null} // In POST /v3/video-agents (file attachments) { "prompt": "Explain this diagram", "files": [{ "type": "asset_id", "asset_id": "ast_abc123" }] } ``` ```json theme={null} // In POST /v3/avatars (photo avatar) { "type": "photo", "name": "My Avatar", "file": { "type": "asset_id", "asset_id": "ast_abc123" } } ``` Anywhere that accepts an asset also accepts a direct URL (`{"type": "url", "url": "https://..."}`) or base64 (`{"type": "base64", "media_type": "image/png", "data": "..."}`). Use `asset_id` when you need to reuse the same file across multiple requests. # Automated Broadcast Source: https://developers.heygen.com/automated-broadcast Build a pipeline that generates and distributes video content on a schedule — daily news, weekly updates, recurring series. ## The Problem Publishing regular video content — daily news roundups, weekly company updates, recurring educational series — is unsustainable without a production team. But consistency is what builds an audience. 
## How It Works ``` Schedule triggers → Aggregate content → LLM writes script → Video Agent renders → Auto-distribute ``` A fully automated pipeline that runs on a schedule, collects fresh content from your sources, generates a video, and delivers it to your audience — no human in the loop. ## Build It Pull content from whatever sources feed your broadcast. ```python theme={null} import requests from datetime import datetime def aggregate_content(): stories = [] # RSS feeds import feedparser feed = feedparser.parse("https://news.ycombinator.com/rss") for entry in feed.entries[:5]: stories.append({ "title": entry.title, "summary": entry.get("summary", ""), "source": "Hacker News", "url": entry.link, }) # APIs (example: your internal metrics) metrics = requests.get("https://api.yourapp.com/weekly-stats").json() stories.append({ "title": f"This week: {metrics['new_users']} new users, {metrics['revenue']} revenue", "summary": f"Growth of {metrics['growth_pct']}% week over week", "source": "Internal", }) return stories stories = aggregate_content() ``` ```python theme={null} import anthropic client = anthropic.Anthropic() story_text = "\n".join( f"- {s['title']} ({s['source']}): {s['summary']}" for s in stories ) message = client.messages.create( model="claude-sonnet-4-20250514", max_tokens=1500, messages=[{ "role": "user", "content": f"""Create a HeyGen Video Agent prompt for a 60-second news/update video. Date: {datetime.now().strftime('%B %d, %Y')} Stories to cover: {story_text} Structure: - Intro (5s): "Here's your [daily/weekly] update for [date]" - Stories (45s): Cover the top 3 stories with text overlays for key stats - Sign-off (10s): "That's your update. See you [tomorrow/next week]." Tone: Authoritative but approachable. Clean, news-desk style background. 
Keep pacing brisk — one story every 15 seconds."""
    }],
)
video_prompt = message.content[0].text
```

```python theme={null}
resp = requests.post(
    "https://api.heygen.com/v3/video-agents",
    headers={
        "X-Api-Key": HEYGEN_API_KEY,
        "Content-Type": "application/json",
    },
    json={"prompt": video_prompt},
)
video_id = resp.json()["data"]["video_id"]

# Poll until complete
import time

while True:
    status = requests.get(
        f"https://api.heygen.com/v3/videos/{video_id}",
        headers={"X-Api-Key": HEYGEN_API_KEY},
    ).json()["data"]
    if status["status"] == "completed":
        video_url = status["video_url"]
        break
    elif status["status"] == "failed":
        raise Exception(f"Video failed: {status.get('failure_message')}")
    time.sleep(15)
```

Deliver the video to your audience wherever they are.

```python theme={null}
# Telegram
import telegram

bot = telegram.Bot(token=TELEGRAM_TOKEN)
bot.send_video(chat_id=CHANNEL_ID, video=video_url, caption="Daily Update")

# Slack
requests.post(SLACK_WEBHOOK, json={
    "text": f"Daily update is ready: {video_url}",
})

# Email (via your ESP — most email clients strip embedded video, so link out)
send_email(
    to=subscriber_list,
    subject=f"Your Daily Update — {datetime.now().strftime('%B %d')}",
    html=f'<p>Your daily update is ready: <a href="{video_url}">watch it here</a></p>',
)
```

Run the pipeline on a schedule using cron, GitHub Actions, or a cloud function.
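On a plain server, the equivalent of the GitHub Actions schedule is a single crontab entry. A minimal sketch — the install path, interpreter path, and log location here are assumptions, not part of this guide:

```bash theme={null}
# Crontab entry (edit with `crontab -e`) — runs broadcast.py at 17:00 UTC on weekdays.
# /opt/broadcast and /var/log/broadcast.log are illustrative paths.
0 17 * * 1-5 cd /opt/broadcast && /usr/bin/python3 broadcast.py >> /var/log/broadcast.log 2>&1
```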
```yaml theme={null} # .github/workflows/daily-broadcast.yml name: Daily Video Broadcast on: schedule: - cron: '0 17 * * 1-5' # 5 PM UTC, weekdays jobs: broadcast: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - run: pip install -r requirements.txt - run: python broadcast.py env: HEYGEN_API_KEY: ${{ secrets.HEYGEN_API_KEY }} ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} TELEGRAM_TOKEN: ${{ secrets.TELEGRAM_TOKEN }} ``` ## Real-World Example STUDIO 47, a German broadcaster, reported these results after adopting HeyGen for automated video production (via [HeyGen customer stories](https://www.heygen.com/customer-stories/studio-47)): * Significantly faster content creation * 24/7 production capability * Substantial cost reduction vs traditional production * Expanded into multilingual content that wasn't feasible before ## Resilient Delivery Build fallbacks for when things go wrong: ```python theme={null} def deliver(video_url, caption): try: # Try primary: send video by URL bot.send_video(chat_id=CHANNEL_ID, video=video_url, caption=caption) except Exception: try: # Fallback: download and upload as file video_data = requests.get(video_url).content bot.send_video(chat_id=CHANNEL_ID, video=video_data, caption=caption) except Exception: # Last resort: send text with link bot.send_message(chat_id=CHANNEL_ID, text=f"{caption}\n\n{video_url}") ``` ## Broadcast Types | Type | Schedule | Content source | Duration | | --------------------- | -------------- | -------------------------------- | -------- | | **Daily news** | Every morning | RSS, APIs, web scrape | 45–60s | | **Weekly roundup** | Monday morning | Internal metrics + industry news | 90s | | **Product changelog** | Each release | Git commits, release notes | 30–45s | | **Company all-hands** | Weekly/monthly | Meeting notes, updates | 60–90s | | **Social digest** | Daily | Trending topics in your niche | 30s | ## Variations * **Multi-language:** Generate once, 
[translate](/cookbook/video-agent/multilingual-content) for regional audiences * **Different avatars per topic:** Use different presenters for different content categories * **Audience segmentation:** Generate different versions for different subscriber segments *** ## Next Steps Repurpose existing content instead of aggregating new content. Trigger video generation from code changes instead of a schedule. # Automated Video Pipeline Source: https://developers.heygen.com/automated-pipeline Generate videos programmatically from data — in CI/CD, on a schedule, or triggered by events. ## The Problem You need to generate the same type of video repeatedly with different data — weekly reports, personalized onboarding videos, per-customer dashboards, changelog announcements. Doing this manually doesn't scale. ## How It Works ``` Data event → Template composition + injected data → Hyperframes render → Distribute ``` Hyperframes compositions are just HTML files. You can template them, inject data, and render programmatically — no browser, no human, no AI agent in the loop. ## Build It Build one great composition with your AI agent, then extract the variable parts: ```html theme={null}
<div>{{ACTIVE_USERS}} active users this week</div>
<div>{{REVENUE}} revenue</div>
<div>{{GROWTH}} growth week over week</div>
```
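The injection step is plain string replacement on those placeholders. A minimal standalone sketch (the sample value is illustrative):

```python
# A single placeholder from the template, filled the same way the pipeline does it
template = "{{ACTIVE_USERS}} active users this week"
data = {"active_users": 12840}

# Format the raw number with thousands separators, then substitute
html = template.replace("{{ACTIVE_USERS}}", f"{data['active_users']:,}")
print(html)  # → 12,840 active users this week
```

The pipeline function below applies the same replace for each placeholder before invoking the renderer.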
```python theme={null} import subprocess import shutil from pathlib import Path def generate_report_video(data: dict, output_path: str): """Generate a weekly report video from data.""" # Copy template work_dir = Path(f"/tmp/report-{data['week']}") shutil.copytree("templates/weekly-report", work_dir, dirs_exist_ok=True) # Inject data into template html = (work_dir / "index.html").read_text() html = html.replace("{{ACTIVE_USERS}}", f"{data['active_users']:,}") html = html.replace("{{REVENUE}}", f"${data['revenue']:,.0f}") html = html.replace("{{GROWTH}}", f"{data['growth_pct']:.1f}%") (work_dir / "index.html").write_text(html) # Render subprocess.run([ "npx", "hyperframes", "render", "--output", output_path, "--quality", "standard", "--fps", "30", ], cwd=str(work_dir), check=True) # Cleanup shutil.rmtree(work_dir) return output_path ``` **GitHub Actions:** ```yaml theme={null} # .github/workflows/weekly-report.yml name: Weekly Report Video on: schedule: - cron: '0 9 * * 1' # Every Monday at 9am jobs: generate: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: '22' - run: sudo apt-get install -y ffmpeg - run: python scripts/generate_report.py - uses: actions/upload-artifact@v4 with: name: weekly-report path: renders/*.mp4 ``` **Webhook-triggered:** ```python theme={null} from flask import Flask, request app = Flask(__name__) @app.route("/webhook/new-signup", methods=["POST"]) def on_new_signup(): user = request.json generate_welcome_video( name=user["name"], company=user["company"], output=f"renders/welcome-{user['id']}.mp4" ) # Upload to CDN, send via email, etc. return {"status": "ok"} ```
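Whichever trigger you use, the pipeline's contract stays the same: a plain dict in, an MP4 out. A sketch of the data shape `generate_report_video` expects — keys are taken from the injection code above; the values are illustrative:

```python
report_data = {
    "week": "2024-W12",     # used to name the temp working directory
    "active_users": 12840,  # fills {{ACTIVE_USERS}}, formatted as 12,840
    "revenue": 48200,       # fills {{REVENUE}}, formatted as $48,200
    "growth_pct": 6.4,      # fills {{GROWTH}}, formatted as 6.4%
}

# generate_report_video(report_data, "renders/weekly-2024-W12.mp4")
```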
## Pipeline Patterns | Trigger | Data source | Output | Example | | ----------------- | ---------------------- | --------------------------- | ------------------------- | | **Cron schedule** | Database query | Weekly/monthly report video | Monday metrics recap | | **Webhook** | Event payload | Per-user personalized video | Welcome onboarding | | **Git push** | Changelog / commit log | Release announcement | "What's new in v2.4" | | **API call** | Request parameters | On-demand custom video | Customer dashboard export | ## Combine with Video Agent For the best of both worlds — motion graphics + avatar narration: ```python theme={null} def generate_narrated_report(data): # Step 1: Render the motion graphics with Hyperframes graphics_path = generate_report_video(data, "renders/graphics.mp4") # Step 2: Generate avatar narration with Video Agent narration = requests.post( "https://api.heygen.com/v3/video-agents", headers={"X-Api-Key": HEYGEN_API_KEY}, json={ "prompt": f"""Narrate this weekly report: {data['active_users']:,} active users (up {data['growth_pct']:.0f}%), ${data['revenue']:,.0f} revenue. Keep it under 15 seconds, upbeat and concise.""", }, ).json() # Step 3: Composite in Hyperframes (avatar PiP over graphics) # ... or use ffmpeg to overlay ``` Start simple — get one template working end-to-end, then add automation. A working pipeline that generates one video type reliably is more valuable than a complex system that handles everything. *** ## Next Steps Build the animated visualizations that feed into your pipeline. Similar automation pattern using Video Agent for avatar-based content. # Changelog Source: https://developers.heygen.com/changelog Product updates and announcements for the HeyGen API and developer platform. **Comprehensive API Documentation Updates** We have updated the endpoint descriptions across our entire V3 API to provide clearer guidance, better parameter context, and more precise functionality definitions. 
While the underlying API logic remains consistent, the improved documentation clarifies how to integrate with our latest engine versions and features. * **Video Generation**: `POST /v3/videos` now officially documents support for the Avatar IV engine and upcoming Avatar V. Note that Avatar III generation will be deprecated by the end of July 2026. * **Avatars**: Clarified workflows for `POST /v3/avatars` (asynchronous training) and added guidance on the mandatory consent flow for private avatars via `POST /v3/avatars/{group_id}/consent`. * **Video Agent**: Streamlined descriptions for session-based interactions, clearly distinguishing between `generate` (one-shot) and `chat` (multi-turn) modes. * **Lipsync & Translation**: Updated documentation for `POST /v3/lipsyncs` and `POST /v3/video-translations` to emphasize the `speed` vs. `precision` mode selection for output quality. * **Webhooks**: Clarified that `PATCH /v3/webhooks/endpoints/{endpoint_id}` performs a full replacement of the event types array. * **Assets**: Updated supported MIME types for `POST /v3/assets` to include refined file type lists. **Added caption\_url to Lipsync and Video Translation responses** You can now retrieve the `caption_url` for generated lipsyncs and video translations, providing direct access to the generated caption files. * `GET /v3/lipsyncs` and `GET /v3/lipsyncs/{lipsync_id}` * `PATCH /v3/lipsyncs/{lipsync_id}` * `GET /v3/video-translations` and `GET /v3/video-translations/{video_translation_id}` * `PATCH /v3/video-translations/{video_translation_id}` **Updated documentation for avatar consent** Clarified the implementation details for the avatar consent flow to ensure a smoother user experience. * `POST /v3/avatars/{group_id}/consent`: Updated documentation to clarify that the returned URL must be presented directly to the user in a browser to complete the consent process. 
**Support for avatar-default voices** You can now generate videos using an avatar's default voice without explicitly specifying a `voice_id`. When creating a video, if `voice_id` is omitted while `avatar_id` is present, the system will automatically use the avatar's default voice. * Updated `POST /v3/videos`: The `voice_id` requirement has been relaxed for both `CreateVideoFromAvatar` and `CreateVideoFromImage` schemas, allowing the system to fall back to the avatar's default voice. **Enhanced capabilities for Video Agent interactions** We have updated the description and scope of the `POST /v3/video-agents/{session_id}` endpoint to better reflect its versatility in managing agent-led workflows. * Updated the endpoint description to clarify support for answering agent-posed questions and requesting specific edits or revisions. * The request body schema has been updated to better align with these extended conversational and editing capabilities. **New 'thinking' status for Video Agents** We have introduced a new `thinking` state to the Video Agent response object to provide better visibility into agent processing workflows. * Updated `POST /v3/video-agents` * The `status` field in the response now includes the `thinking` enum value. * Integration note: Ensure your client-side parsers are prepared to handle this new status value in the response body. **Updated Video Agent session retrieval and new video listing** We have refactored how resource data is handled in Video Agent sessions to improve performance. Additionally, we have introduced a new endpoint to fetch videos associated with a session. * **Breaking Change:** The `resources` property has been removed from the response body of `GET /v3/video-agents/{session_id}`. * **Migration:** To access resource details previously found in the session object, please use the new `GET /v3/video-agents/{session_id}/resources/{resource_id}` endpoint. 
* **New Endpoint:** Added `GET /v3/video-agents/{session_id}/videos` to retrieve a list of videos generated within a specific agent session. **Breaking change: Restructured Video Agent session management** We have updated the Video Agent API to simplify session handling. Please note that the previous `/v3/video-agents/sessions` path structure is deprecated and removed. * **Removed endpoints:** `POST /v3/video-agents/sessions`, `GET /v3/video-agents/sessions/{session_id}`, `POST /v3/video-agents/sessions/{session_id}/messages`, `GET /v3/video-agents/sessions/{session_id}/resources`, and `POST /v3/video-agents/sessions/{session_id}/stop` have been removed. * **Migration:** Replace existing calls with the new flattened endpoints under `/v3/video-agents/{session_id}`. * **New endpoints added:** * `GET /v3/video-agents/{session_id}` * `POST /v3/video-agents/{session_id}` * `GET /v3/video-agents/{session_id}/resources/{resource_id}` * `POST /v3/video-agents/{session_id}/stop` **New configuration options for Video Agent sessions** The `POST /v3/video-agents` endpoint now supports advanced control over session flow. * Added `mode`: Supports `generate` (default, one-shot) and `chat` (multi-turn, allows revisions and follow-ups). * Added `auto_proceed`: Enables automated progression through storyboards. * Added `skip_agentic_stop`: Provides granular control over agent stopping behavior. **API Operation ID update** The operation ID for `GET /v3/users/me` has been updated from `getUserMeV3` to `getCurrentUserV3` to maintain consistency across our SDKs. **Added support for custom voice creation** We have introduced a new endpoint to allow developers to programmatically create and add new voices to their HeyGen account. * Added `POST /v3/voices` to the API. **Refactored POST /v3/videos request body** We have updated the `POST /v3/videos` endpoint to use a discriminated union for improved type safety and flexibility. 
This change replaces the legacy flat request structure with dedicated schemas for creating videos from avatars versus images.

* **Breaking Change:** The request body structure has been completely overhauled. You must now specify a type discriminator: use `CreateVideoFromAvatar` for digital twins/avatars or `CreateVideoFromImage` for custom image animation.
* **Migration:** All properties previously passed at the top level of the request (e.g., `avatar_id`, `image_url`, `voice_id`, `script`) must now be nested within the appropriate schema based on the video source.
* The operation ID for this endpoint has been updated from `createAvatarVideoV3` to `createVideo`.

**Enhanced error messaging across all endpoints**

We have updated the error response schemas and examples across the entire API suite. Developers can now expect more consistent and detailed error responses for common issues, including:

* Improved `400 Bad Request` messages with clearer parameter validation feedback.
* Standardized `401 Unauthorized` responses when API keys are missing or expired.
* Consistent `429 Rate Limited` responses that align with standard retry headers.
* Better descriptive error messages for resource-specific failures (e.g., `404 Not Found` for specific IDs).

These updates help your integrations handle exceptions gracefully and debug failures faster.

**HeyGen for Developers — New v3 API Surface**

We've launched a new set of v3 endpoints across the HeyGen API, bringing a consistent interface, cursor-based pagination, and a unified asset input model to all major resources. What's new:

* All v3 endpoints share a standard error format, cursor-based pagination (`has_more` / `next_token`), and consistent authentication via `X-Api-Key` or OAuth bearer token.
* Asset inputs now use a type-discriminated union — pass files as `{ "type": "url", "url": "..." }`, `{ "type": "asset_id", "asset_id": "..." }`, or `{ "type": "base64", "media_type": "...", "data": "..." }` across all endpoints.
* New and updated endpoints include: Video Agent (`POST /v3/video-agents`), Videos (`POST /v3/videos`), Voices (`GET /v3/voices`, `POST /v3/voices/speech`), Video Translations (`POST /v3/video-translations`), Overdub (`POST /v3/overdubs`), Avatars (`POST /v3/avatars`), Assets (`POST /v3/assets`), Webhooks (`/v3/webhooks/*`), and User (`GET /v3/users/me`). The v1/v2 endpoints continue to work, but we recommend migrating to v3 for all new integrations. # Overview Source: https://developers.heygen.com/cli Get from zero to a generated video in minutes — right from your terminal. The HeyGen CLI gives developers and AI agents command-line access to HeyGen's video platform. It wraps the v3 API, outputs structured JSON by default, and works out of the box in scripts, CI pipelines, and agent workflows. ## 1. Install the CLI ```bash theme={null} curl -fsSL https://static.heygen.ai/cli/install.sh | bash ``` This installs the latest stable release into `~/.local/bin`. Verify the installation: ```bash theme={null} heygen --version ``` The CLI ships as a single binary with no runtime prerequisites. macOS (Apple Silicon and Intel) and Linux (x64 and arm64) are supported. Windows support is coming soon — WSL is recommended in the meantime. ## 2. Authenticate Log in with your API key from [API dashboard](https://app.heygen.com/settings/api?nav=API): ```bash theme={null} heygen auth login ``` Paste your API key when prompted. The key is stored locally at `~/.heygen/credentials`. For CI/Docker/agent environments, set the environment variable instead — it takes precedence over stored credentials: ```bash theme={null} export HEYGEN_API_KEY=your-api-key ``` Verify your credentials: ```bash theme={null} heygen auth status ``` ## 3. 
Create a Video Send a prompt to the Video Agent and let it handle avatar, voice, and layout: ```bash theme={null} heygen video-agent create --prompt "A presenter explaining our product launch in 30 seconds" ``` ```json Output theme={null} { "data": { "session_id": "sess_abc123", "status": "generating", "video_id": "vid_xyz789", "created_at": 1711288320 } } ``` The CLI returns immediately with structured JSON. Your video is generating in the background. For full control over every parameter, use `video create` with a JSON body: ```bash theme={null} heygen video create -d '{ "type": "avatar", "avatar_id": "avt_angela_01", "script": "Welcome to our Q4 earnings call.", "voice_id": "1bd001e7e50f421d891986aad5e3e5d2" }' ``` Use `--request-schema` on any command to discover the expected JSON fields — no auth required: ```bash theme={null} heygen video create --request-schema heygen video-agent create --request-schema ``` ## 4. Check Status Poll for the result using the `video_id` returned from step 3: ```bash theme={null} heygen video get vid_xyz789 ``` ```json Output theme={null} { "data": { "id": "vid_xyz789", "title": "Product launch explainer", "status": "completed", "video_url": "https://files.heygen.com/video/vid_xyz789.mp4", "thumbnail_url": "https://files.heygen.com/thumb/vid_xyz789.jpg", "duration": 32.5, "created_at": 1711288320, "completed_at": 1711288452 } } ``` Status moves through `pending` → `processing` → `completed` or `failed`. If the video fails, the response includes `failure_code` and `failure_message` fields. **Tip:** Add `--wait` to the create command to block until the video is ready instead of polling manually. The default timeout is 20 minutes — override with `--timeout 30m`. On timeout, the CLI exits with code `4` and prints the last known resource state along with a hint to resume polling manually. ## 5. 
Download the Video

Once complete, download to a local file:

```bash theme={null}
heygen video download vid_xyz789 --output-path ./launch-video.mp4
```

```json Output theme={null}
{
  "asset": "video",
  "message": "Downloaded video to ./launch-video.mp4",
  "path": "./launch-video.mp4"
}
```

If the video was created with captions enabled, you can download the captioned version:

```bash theme={null}
heygen video download vid_xyz789 --asset captioned --output-path ./launch-captioned.mp4
```

# Commands

Source: https://developers.heygen.com/commands

Complete command reference for the HeyGen CLI, organized by resource. All commands follow the pattern `heygen <resource> <action>`. The command surface is auto-generated from HeyGen's OpenAPI specification — when new v3 endpoints ship, the CLI picks them up automatically. Run `heygen <command> --help` for detailed usage and examples on any command. Use `--request-schema` or `--response-schema` on any command to see the full JSON schema for its request or response — no auth required.

## Video Agent

Create videos from text prompts using AI. The agent picks avatar, voice, and layout automatically.
| Command | API Endpoint | Description |
| --- | --- | --- |
| `heygen video-agent create` | `POST /v3/video-agents` | Create a video from a prompt |
| `heygen video-agent styles list` | `GET /v3/video-agents/styles` | List available video styles |
| `heygen video-agent sessions create` | `POST /v3/video-agents/sessions` | Create an interactive session |
| `heygen video-agent sessions get <session-id>` | `GET /v3/video-agents/sessions/{session_id}` | Get session status and messages |
| `heygen video-agent sessions messages create <session-id>` | `POST /v3/video-agents/sessions/{session_id}/messages` | Send a follow-up message |
| `heygen video-agent sessions resources get <session-id>` | `GET /v3/video-agents/sessions/{session_id}/resources` | Get session resources |
| `heygen video-agent sessions stop <session-id>` | `POST /v3/video-agents/sessions/{session_id}/stop` | Stop an in-progress session |

### Flags for `video-agent create`

| Flag | Description |
| --- | --- |
| `--prompt <text>` | The message/prompt for video generation (required) |
| `--avatar-id <id>` | Specific avatar ID to use |
| `--voice-id <id>` | Specific voice ID to use for narration |
| `--style-id <id>` | Style ID from `video-agent styles list` |
| `--orientation <orientation>` | `landscape` or `portrait` (auto-detected if omitted) |
| `--incognito-mode` | Disable memory injection and extraction for this session |
| `--callback-url <url>` | Webhook URL for completion/failure notifications |
| `--callback-id <id>` | ID echoed back in the webhook payload |

## Videos

Create, list, retrieve, and delete avatar videos with full parameter control.
| Command | API Endpoint | Description |
| --- | --- | --- |
| `heygen video create` | `POST /v3/videos` | Create a video with explicit parameters |
| `heygen video list` | `GET /v3/videos` | List your videos |
| `heygen video get <video-id>` | `GET /v3/videos/{video_id}` | Get video details and status |
| `heygen video delete <video-id>` | `DELETE /v3/videos/{video_id}` | Delete a video |
| `heygen video download <video-id>` | Client-side | Download a video file to disk |

### Flags for `video create`

`video create` uses a discriminated union request body — the `type` field determines which fields are valid. Pass the full body with `-d`:

```bash theme={null}
# Avatar-based video
heygen video create -d '{
  "type": "avatar",
  "avatar_id": "avt_angela_01",
  "script": "Hello world",
  "voice_id": "1bd001e7e50f421d891986aad5e3e5d2"
}'

# Image-based video
heygen video create -d '{
  "type": "image",
  "image": {"type": "url", "url": "https://example.com/photo.jpg"},
  "script": "Hello",
  "voice_id": "1bd001e7e50f421d891986aad5e3e5d2"
}'
```

Run `heygen video create --request-schema` to see all available fields.

### Flags for `video list`

| Flag | Description |
| --- | --- |
| `--limit <n>` | Maximum items per page (1–100, default 10) |
| `--token <token>` | Pagination cursor from a previous response's `next_token` |
| `--folder-id <id>` | Filter videos by folder ID |
| `--title <text>` | Filter videos by title substring |

### Flags for `video download`

| Flag | Description |
| --- | --- |
| `--output-path <path>` | Output file path (default: `{video-id}.mp4`) |
| `--asset <type>` | `video` (default) or `captioned` |

## Avatars

Browse and manage avatars and their looks (outfits/styles).
| Command | API Endpoint | Description |
| --- | --- | --- |
| `heygen avatar create` | `POST /v3/avatars` | Create an avatar |
| `heygen avatar list` | `GET /v3/avatars` | List avatar groups |
| `heygen avatar get <group-id>` | `GET /v3/avatars/{group_id}` | Get avatar group details |
| `heygen avatar looks list` | `GET /v3/avatars/looks` | List avatar looks |
| `heygen avatar looks get <look-id>` | `GET /v3/avatars/looks/{look_id}` | Get avatar look details |
| `heygen avatar looks update <look-id>` | `PATCH /v3/avatars/looks/{look_id}` | Rename an avatar look |
| `heygen avatar consent create <group-id>` | `POST /v3/avatars/{group_id}/consent` | Initiate a consent flow |

### Filter flags for `avatar list`

| Flag | Description |
| --- | --- |
| `--ownership <type>` | `public` or `private` |
| `--limit <n>` | Maximum items per page (1–50, default 20) |
| `--token <token>` | Pagination cursor |

### Filter flags for `avatar looks list`

| Flag | Description |
| --- | --- |
| `--group-id <id>` | Filter by avatar group |
| `--avatar-type <type>` | `studio_avatar`, `video_avatar`, or `photo_avatar` |
| `--ownership <type>` | `public` or `private` |
| `--limit <n>` | Maximum items per page (1–50, default 20) |
| `--token <token>` | Pagination cursor |

The `id` field on a look is what you pass as `avatar_id` to `video create`. The look's `avatar_type` field determines which engines and request parameters are compatible.

## Voices

Browse voices and generate speech audio.
| Command | API Endpoint | Description |
| --- | --- | --- |
| `heygen voice list` | `GET /v3/voices` | List voices |
| `heygen voice create` | `POST /v3/voices` | Design a voice from a natural language description |
| `heygen voice speech create` | `POST /v3/voices/speech` | Generate speech audio from text |

### Filter flags for `voice list`

| Flag | Description |
| --- | --- |
| `--type <type>` | `public` (default) or `private` |
| `--engine <engine>` | Filter by voice engine (e.g. `starfish`) |
| `--language <language>` | Filter by language name (e.g. `English`) |
| `--gender <gender>` | `male` or `female` |
| `--limit <n>` | Results per page (1–100, default 20) |
| `--token <token>` | Pagination cursor |

### Flags for `voice create`

| Flag | Description |
| --- | --- |
| `--prompt <text>` | Natural language description of the desired voice (required) |
| `--gender <gender>` | `male` or `female` |
| `--locale <locale>` | BCP-47 locale tag (e.g. `en-US`) |
| `--seed <n>` | Increment to get a different batch of voice results (default 0) |

### Flags for `voice speech create`

| Flag | Description |
| --- | --- |
| `--text <text>` | Text to synthesize (required) |
| `--voice-id <id>` | Voice ID to use (required) |
| `--speed <n>` | Playback speed multiplier, `0.5`–`2.0` (default 1) |
| `--language <code>` | Base language code (auto-detected if omitted) |
| `--locale <locale>` | BCP-47 locale tag |
| `--input-type <type>` | `text` (default) or `ssml` |

## Lipsync

Dub or replace audio on existing videos.
| Command | API Endpoint | Description |
| --- | --- | --- |
| `heygen lipsync create` | `POST /v3/lipsyncs` | Create a lipsync job |
| `heygen lipsync list` | `GET /v3/lipsyncs` | List lipsync jobs |
| `heygen lipsync get <lipsync-id>` | `GET /v3/lipsyncs/{lipsync_id}` | Get lipsync details and status |
| `heygen lipsync update <lipsync-id>` | `PATCH /v3/lipsyncs/{lipsync_id}` | Update a lipsync title |
| `heygen lipsync delete <lipsync-id>` | `DELETE /v3/lipsyncs/{lipsync_id}` | Delete a lipsync |

`lipsync create` requires a complex request body (video and audio sources use discriminated unions). Use `-d`:

```bash theme={null}
cat request.json | heygen lipsync create -d -
```

Run `heygen lipsync create --request-schema` to see all available fields.

## Video Translate

Translate videos into other languages with lip-sync.

| Command | API Endpoint | Description |
| --- | --- | --- |
| `heygen video-translate create` | `POST /v3/video-translations` | Create a video translation |
| `heygen video-translate list` | `GET /v3/video-translations` | List translations |
| `heygen video-translate get <id>` | `GET /v3/video-translations/{id}` | Get translation details |
| `heygen video-translate update <id>` | `PATCH /v3/video-translations/{id}` | Update a translation title |
| `heygen video-translate delete <id>` | `DELETE /v3/video-translations/{id}` | Delete a translation |
| `heygen video-translate caption get <id>` | `GET /v3/video-translations/{id}/caption` | Get translation caption file |
| `heygen video-translate languages list` | `GET /v3/video-translations/languages` | List supported languages |
| `heygen video-translate proofreads create` | `POST /v3/video-translations/proofreads` | Create a proofread session |
| `heygen video-translate proofreads get <id>` | `GET /v3/video-translations/proofreads/{id}` | Get proofread status |
| `heygen video-translate proofreads generate <id>` | `POST /v3/video-translations/proofreads/{id}/generate` | Generate video from proofread |
| `heygen video-translate proofreads srt get <id>` | `GET /v3/video-translations/proofreads/{id}/srt` | Download proofread SRT |
| `heygen video-translate proofreads srt update <id>` | `PUT /v3/video-translations/proofreads/{id}/srt` | Upload edited SRT |

### Flags for `video-translate create`

| Flag | Description |
| --- | --- |
| `--output-languages <languages>` | Target language names, comma-separated (required). Use `video-translate languages list` for valid values |
| `--mode <mode>` | `speed` or `precision` |
| `--speaker-num <n>` | Number of speakers in source (improves separation) |
| `--translate-audio-only` | Translate audio without lip-sync |
| `--enable-caption` | Add captions to translated video |
| `--input-language <code>` | Source language code (auto-detected if omitted) |
| `--callback-url <url>` | Webhook URL for completion notifications |
| `--title <text>` | Title for the translation job |

### Flags for `video-translate caption get`

| Flag | Description |
| --- | --- |
| `--format <format>` | `srt` or `vtt` (required) |

## Webhooks

Manage webhook endpoints for event notifications.
| Command | API Endpoint | Description |
| --------------------------------------------- | ------------------------------------------------ | -------------------------- |
| `heygen webhook endpoints create` | `POST /v3/webhooks/endpoints` | Create a webhook endpoint |
| `heygen webhook endpoints list` | `GET /v3/webhooks/endpoints` | List webhook endpoints |
| `heygen webhook endpoints update <id>` | `PATCH /v3/webhooks/endpoints/{id}` | Update a webhook endpoint |
| `heygen webhook endpoints delete <id>` | `DELETE /v3/webhooks/endpoints/{id}` | Delete a webhook endpoint |
| `heygen webhook endpoints rotate-secret <id>` | `POST /v3/webhooks/endpoints/{id}/rotate-secret` | Rotate signing secret |
| `heygen webhook event-types list` | `GET /v3/webhooks/event-types` | List available event types |
| `heygen webhook events list` | `GET /v3/webhooks/events` | List delivered events |

### Flags for `webhook endpoints create`

| Flag | Description |
| ------------------- | ----------------------------------------------------------------- |
| `--url <url>` | Publicly accessible HTTPS URL (required) |
| `--events <events>` | Comma-separated event types to subscribe to (omit for all events) |
| `--entity-id <id>` | Scope this endpoint to a specific resource |

Store the `secret` returned by `endpoints create` and `endpoints rotate-secret` securely — it is used to verify webhook signatures and will not be shown again.

## Assets

Upload files for use in video creation.

| Command | API Endpoint | Description |
| --------------------- | ----------------- | -------------------------------- |
| `heygen asset create` | `POST /v3/assets` | Upload a file to get an asset ID |

### Flags for `asset create`

| Flag | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `--file <path>` | Local file to upload (required). Max 32 MB. Supported types: image (png, jpeg), video (mp4, webm), audio (mp3, wav), pdf |

## User

| Command | API Endpoint | Description |
| -------------------- | ------------------ | ------------------------------------------- |
| `heygen user me get` | `GET /v3/users/me` | Get current user info, credits, and billing |

## Authentication

| Command | Description |
| -------------------- | ------------------------------------------------ |
| `heygen auth login`  | Authenticate interactively (prompts for API key) |
| `heygen auth status` | Verify stored credentials and show account info  |

For CI/Docker, use the `HEYGEN_API_KEY` environment variable instead. It takes precedence over stored credentials.

## Utility Commands

| Command | Description |
| --------------------------------- | -------------------------------------------- |
| `heygen config set <key> <value>` | Set a persistent config value |
| `heygen config get <key>` | Read a config value |
| `heygen config list` | Show all config values and their sources |
| `heygen update` | Self-update to the latest version |
| `heygen update --version <ver>` | Update to a specific version (e.g. `v0.1.0`) |

### Config keys

| Key | Values | Description |
| ----------- | --------------- | ------------------------------------------- |
| `output` | `json`, `human` | Default output format (default: `json`) |
| `analytics` | `true`, `false` | Enable or disable anonymous usage analytics |

# Content Repurposing

Source: https://developers.heygen.com/content-repurposing

Turn blog posts, articles, and newsletters into video — reach audiences that don't read.

## The Problem

You invest hours writing a great blog post. It reaches your readers — but misses the much larger audience that consumes content through video. Manually converting articles to video takes almost as long as writing them.
## How It Works

```
Written content → LLM extracts key points → Video Agent renders → Distribute on video platforms
```

An LLM reads your content and writes a production-quality video prompt — extracting the most compelling points and restructuring them for video. The same article can become a 90-second YouTube explainer, a 30-second TikTok, and a 60-second LinkedIn post.

## Build It

Pull the article from your CMS, a URL, or a local file.

```python theme={null}
# From a file
with open("article.md") as f:
    article = f.read()

# Or from a URL (use a proper extraction library for production)
import requests

article = requests.get("https://yourblog.com/posts/your-article").text
```

The LLM acts as a producer — extracting the most engaging points and structuring them for video.

```python theme={null}
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"""You are a video producer converting a written article into a HeyGen Video Agent prompt.

Read this article and create a 60-second video prompt that:

1. Opens with the most compelling insight or stat (hook)
2. Covers the 3 most important points — not everything, the best bits
3. Uses specific visual descriptions — what the viewer sees on screen
4. Ends with a CTA to read the full article
5. Matches the tone of the original

Article:

{article}

Output ONLY the Video Agent prompt.""",
    }],
)

video_prompt = message.content[0].text
```

**Don't summarize — adapt.** The LLM shouldn't just compress the article. It should identify the most *visual* and *engaging* points and restructure them for video. A great blog point might be boring on video, and vice versa.

Submit the prompt. Attach any images or charts from the article as file inputs.
```python theme={null}
resp = requests.post(
    "https://api.heygen.com/v3/video-agents",
    headers={
        "X-Api-Key": HEYGEN_API_KEY,
        "Content-Type": "application/json",
    },
    json={
        "prompt": video_prompt,
        "files": [
            {"type": "url", "url": "https://yourblog.com/images/chart.png"},
        ],
    },
)

video_id = resp.json()["data"]["video_id"]
```

Then poll for completion — see [Video Agent docs](/docs/video-agent).

One article can become multiple videos for different platforms:

```python theme={null}
formats = [
    {"platform": "YouTube", "duration": "90s", "orientation": "landscape", "style": "in-depth"},
    {"platform": "TikTok/Reels", "duration": "30s", "orientation": "portrait", "style": "hook-driven"},
    {"platform": "LinkedIn", "duration": "60s", "orientation": "landscape", "style": "professional"},
]

for fmt in formats:
    # Regenerate the LLM prompt with platform-specific instructions
    platform_prompt = generate_prompt_for(article, fmt)

    # Submit to Video Agent with the right orientation
    submit_video(platform_prompt, orientation=fmt["orientation"])
```

## Content Types That Convert Well

| Content type | Video style | Tips |
| -------------------- | -------------------- | ------------------------------------------------- |
| **How-to articles** | Tutorial walkthrough | Step-by-step with text overlays |
| **Listicles** | Quick tips | One point every 5–7 seconds, great for short-form |
| **Opinion/analysis** | Thought leadership | Presenter-driven, conversational |
| **Case studies** | Story-driven | Before/after structure, stats as highlights |
| **Newsletters** | Weekly digest | Cover 3–5 highlights, keep it breezy |

## Automating the Pipeline

```
Blog CMS webhook → "New post published"
        ↓
Fetch article content
        ↓
LLM generates video prompt
        ↓
Video Agent renders
        ↓
Upload to YouTube / post to social
        ↓
Add video embed to original article
```

Trigger from a CMS webhook, cron job, or CI/CD.
See [Automated Broadcast](/cookbook/video-agent/automated-broadcast) for scheduling and distribution patterns.

## Variations

* **Teaser + full:** 15-second teaser for social, 90-second deep dive for YouTube
* **Multi-language:** Generate in English, then [translate](/cookbook/video-agent/multilingual-content) for global audiences
* **Podcast-to-video:** Extract audio highlights → write visual prompt → avatar presents the key takeaways

***

## Next Steps

Generate original social content, not just repurposed articles.

Automate the entire content → video → distribute pipeline.

# Data Visualization Videos

Source: https://developers.heygen.com/data-to-video

Turn datasets, metrics, and algorithms into animated videos — charts that move, dashboards that update, patterns that evolve.

## Examples