Fleet

Warm pools of pre-initialized cloud nodes — eliminate cold-start latency for machines and jobs.

Outpost Fleets are named pools of compute nodes that stay warm between jobs. Instead of waiting 1–3 minutes for instance provisioning every time you launch a machine or submit a job, a fleet keeps a configurable minimum number of nodes initialized and ready. Work targeted at a fleet starts in seconds.

Key features

  • Warm nodes — keep min_nodes instances booted and idle at all times. Jobs start immediately instead of waiting for provisioning.
  • Auto-scaling — fleet scales up to max_nodes under load and scales back down during idle periods.
  • Idle timeout — nodes above min_nodes are terminated after a configurable idle period, controlling standby cost.
  • Spot support — run warm spot instances at a 60–90% discount over on-demand pricing. Outpost replaces preempted nodes automatically.
  • Labels — tag fleet nodes for job targeting and organization.
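The settings above can be modeled as a small validated config object. This is a sketch only: `FleetSpec` and its checks are illustrative, not part of any Outpost SDK; the field names simply mirror the API payload used in the quick start.

```python
from dataclasses import dataclass, field

@dataclass
class FleetSpec:
    """Illustrative fleet configuration; field names mirror the API payload."""
    name: str
    cloud: str
    region: str
    gpus: str
    min_nodes: int = 0        # warm floor: nodes kept booted at all times
    max_nodes: int = 1        # auto-scaling ceiling
    idle_timeout_secs: int = 300
    spot: bool = False
    labels: dict = field(default_factory=dict)

    def __post_init__(self):
        if self.min_nodes < 0:
            raise ValueError("min_nodes must be >= 0")
        if self.max_nodes < max(self.min_nodes, 1):
            raise ValueError("max_nodes must be >= min_nodes (and >= 1)")

spec = FleetSpec(name="gpu-pool", cloud="aws", region="us-east-1",
                 gpus="A100:1", min_nodes=2, max_nodes=10)
```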

Quick start

```bash
# Create a GPU fleet with 2 warm nodes
curl -X POST https://outpost.run/auth/v1/seed/acme/fleets \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "gpu-pool",
    "cloud": "aws",
    "region": "us-east-1",
    "gpus": "A100:1",
    "min_nodes": 2,
    "max_nodes": 10
  }'

# List fleets
curl https://outpost.run/auth/v1/seed/acme/fleets \
  -H "Authorization: Bearer $TOKEN"
```

How it works

  1. Create — define a fleet with hardware requirements, min/max node counts, and idle timeout.
  2. Warm up — Outpost provisions min_nodes instances and keeps them ready.
  3. Submit work — jobs targeting the fleet are assigned to idle nodes immediately.
  4. Scale — fleet scales up when demand exceeds the pool, and scales down when nodes become idle past the timeout.
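The scaling behavior in steps 2–4 reduces to a clamp: the pool tracks demand, but never shrinks below the warm floor or grows past the ceiling. A minimal sketch (the function name and signature are illustrative, not Outpost internals):

```python
def desired_nodes(active_jobs: int, min_nodes: int, max_nodes: int) -> int:
    """Target pool size: enough nodes for current demand,
    clamped to the [min_nodes, max_nodes] range."""
    return max(min_nodes, min(active_jobs, max_nodes))

# With min_nodes=2, max_nodes=10:
#   0 jobs  -> 2 nodes stay warm
#   7 jobs  -> pool scales to 7
#  15 jobs  -> pool caps at 10; the remaining 5 jobs queue
```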

Node lifecycle

Fleet created
  ↓
min_nodes provisioned and kept warm
  ↓
Job submitted → idle node assigned → job runs
  ↓
Job completes → node returns to idle pool
  ↓
Idle > idle_timeout_secs AND nodes > min_nodes → node terminated
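The final transition is the only place a node is reclaimed, and both conditions must hold: the node has idled past the timeout, and terminating it would not drop the pool below the warm floor. A sketch of that rule (illustrative names, not Outpost internals):

```python
def should_terminate(idle_secs: float, idle_timeout_secs: float,
                     current_nodes: int, min_nodes: int) -> bool:
    """Reclaim a node only when it has idled past the timeout AND
    the pool would still hold at least min_nodes afterwards."""
    return idle_secs > idle_timeout_secs and current_nodes > min_nodes
```

Because of the second condition, a fleet with `min_nodes: 2` never bills you for fewer than two nodes, but also never bills you for an idle third node longer than `idle_timeout_secs`.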

min_nodes vs max_nodes

| Setting | Effect |
| --- | --- |
| `min_nodes: 0` | No warm nodes. The fleet exists but has no standing cost; nodes provision on demand. |
| `min_nodes: 2` | Two nodes are always warm. The first 2 concurrent jobs start immediately. |
| `max_nodes: 10` | The fleet auto-scales up to 10 nodes under load. Additional work is queued. |
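The two settings interact: warm nodes absorb the first jobs instantly, scale-up covers the next wave after a provisioning delay, and anything beyond `max_nodes` queues. A sketch of that split (the `dispatch` helper is illustrative, not an Outpost API):

```python
def dispatch(jobs: int, warm_nodes: int, max_nodes: int):
    """Split a burst of jobs into: immediate starts (warm nodes),
    scale-up starts (new nodes up to max_nodes), and queued jobs."""
    immediate = min(jobs, warm_nodes)
    scaled = min(jobs - immediate, max_nodes - warm_nodes)
    queued = jobs - immediate - scaled
    return immediate, scaled, queued

# min_nodes=2, max_nodes=10, burst of 15 jobs:
#   2 start in seconds, 8 start after provisioning, 5 queue
```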

Spot fleets

Set spot: true to keep warm spot instances. Outpost automatically replaces preempted nodes to maintain min_nodes. Avoid spot fleets for latency-sensitive workloads that cannot tolerate the cold start while a preempted node is being replaced.
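The replacement policy is the warm-floor rule applied after a preemption: provision exactly enough new spot nodes to restore min_nodes. A sketch (illustrative function, not an Outpost internal):

```python
def replacements_needed(healthy_nodes: int, min_nodes: int) -> int:
    """After preemptions, how many replacement spot nodes are needed
    to restore the warm floor. Never negative: a pool already at or
    above min_nodes needs no replacements."""
    return max(0, min_nodes - healthy_nodes)
```

During the replacement window the pool runs below its floor, which is exactly why the caveat above applies: work arriving in that gap pays the cold-start cost a fleet normally hides.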

Use cases

  • Batch training pipelines — keep GPU nodes warm for recurring training jobs to avoid provisioning delays.
  • Inference burst capacity — pre-warm a pool of GPU nodes for burst inference demand.
  • CI/CD compute — maintain a pool of test runners that start immediately on commit.
  • Multi-tenant compute — share a warm pool across team members in a namespace.
