blindcast upload uploads encrypted segments and the manifest to an S3-compatible bucket. Run it after blindcast encrypt.

Usage

blindcast upload <directory> --bucket <name> [flags]

Example

blindcast upload ./segments/encrypted --bucket my-video-bucket --content-id my-video-001
Uploading 12 segments to s3://my-video-bucket/content/my-video-001/...
  [100%] 12/12 segments uploaded
Manifest uploaded.

Done: s3://my-video-bucket/content/my-video-001/manifest.m3u8
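Based on the example output above, the object keys follow a predictable layout, so a script can compute them before uploading. A minimal sketch, assuming the default prefix and the seg-N.ts naming shown above; segment_keys is a hypothetical helper, not part of the CLI:

```python
# Infer the object keys blindcast upload will produce, based on the
# example output above. This helper is illustrative only.
def segment_keys(content_id, segment_count, prefix=None):
    prefix = prefix if prefix is not None else f"content/{content_id}/"
    keys = [f"{prefix}seg-{i}.ts" for i in range(segment_count)]
    keys.append(f"{prefix}manifest.m3u8")
    return keys

print(segment_keys("my-video-001", 2))
```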

Flags

Flag                 Default                     Description
--bucket <name>      (required)                  S3 bucket name
--content-id <id>    (required)                  Content identifier (used as the key prefix)
--prefix <path>      content/<content-id>/       S3 key prefix
--region <region>    $AWS_REGION or us-east-1    AWS region
--endpoint <url>     (none)                      Custom S3 endpoint (for MinIO, R2, etc.)
--concurrency <n>    4                           Number of parallel uploads
--json               (off)                       Output uploaded URLs as JSON
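To illustrate what --concurrency controls: at most n segment uploads are in flight at once. A rough sketch using a thread pool, where upload_one is a stand-in for an actual S3 PUT, not the CLI's real implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# At most `concurrency` uploads run in parallel, mirroring --concurrency 4.
# upload_one is a placeholder for a real S3 PUT request.
def upload_all(segments, concurrency=4):
    def upload_one(name):
        return f"s3://my-video-bucket/content/my-video-001/{name}"
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(upload_one, segments))

urls = upload_all([f"seg-{i}.ts" for i in range(12)])
```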

AWS credentials

The CLI uses the standard AWS SDK credential chain:
  1. Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
  2. AWS credentials file: ~/.aws/credentials
  3. IAM instance role (EC2, ECS, Lambda)
There are no explicit credential flags; configure credentials the same way you would for the AWS CLI.
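The lookup order above can be sketched as a simple fall-through. This is a simplified illustration of the chain described here, not the real AWS SDK logic, which checks additional sources:

```python
import os
from pathlib import Path

# Simplified sketch of the credential chain listed above:
# env vars, then ~/.aws/credentials, then an IAM instance role.
def find_credentials():
    if os.environ.get("AWS_ACCESS_KEY_ID") and os.environ.get("AWS_SECRET_ACCESS_KEY"):
        return "environment"
    if (Path.home() / ".aws" / "credentials").exists():
        return "credentials-file"
    return "instance-role"  # EC2 / ECS / Lambda metadata endpoint
```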

S3-compatible storage

For non-AWS S3-compatible services (MinIO, Cloudflare R2, Backblaze B2), use the --endpoint flag:
# MinIO
blindcast upload ./encrypted --bucket videos --endpoint http://localhost:9000

# Cloudflare R2
blindcast upload ./encrypted --bucket videos --endpoint https://<account-id>.r2.cloudflarestorage.com

JSON output

blindcast upload ./encrypted --bucket videos --content-id demo --json
{
  "manifestUrl": "s3://videos/content/demo/manifest.m3u8",
  "segmentUrls": [
    "s3://videos/content/demo/seg-0.ts",
    "s3://videos/content/demo/seg-1.ts"
  ],
  "segmentCount": 2
}
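The --json output is intended for scripting. For example, parsing the sample document above with Python's standard json module:

```python
import json

# Parse the sample --json output shown above; in practice you would
# capture the stdout of `blindcast upload ... --json`.
raw = '''{
  "manifestUrl": "s3://videos/content/demo/manifest.m3u8",
  "segmentUrls": [
    "s3://videos/content/demo/seg-0.ts",
    "s3://videos/content/demo/seg-1.ts"
  ],
  "segmentCount": 2
}'''
result = json.loads(raw)
print(result["manifestUrl"])
```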