
Load Balancer

Pingora-powered proxy gateway — deploy a managed reverse proxy to route and distribute traffic across services in a namespace.

Outpost Load Balancers are Pingora-based reverse proxies that sit in front of your services and distribute traffic across replicas. Each load balancer is a named, standalone resource in a namespace. It provides a stable endpoint that persists across service redeployments, making it the right choice when you need consistent routing regardless of what's happening to the backing services.

Key features

  • Pingora gateway — built on Cloudflare Pingora, a high-performance Rust proxy framework.
  • Health-aware routing — unhealthy replicas are automatically removed from the rotation.
  • Stable endpoint — the load balancer endpoint persists even when backing services are redeployed or scaled.
  • TLS termination — TLS is terminated at the proxy layer.
  • Namespace-scoped — each load balancer lives inside a namespace and routes to services in that namespace.

Quick start

# Launch a load balancer
curl -X POST https://outpost.run/auth/v1/seed/acme/proxies \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "main-gateway",
    "cloud": "aws",
    "region": "us-east-1"
  }'

# Check status
curl https://outpost.run/auth/v1/seed/acme/proxies/main-gateway \
  -H "Authorization: Bearer $TOKEN"

How it works

  1. Launch — provision a load balancer in a namespace with a name, cloud, and region.
  2. Route — the load balancer distributes incoming requests across healthy service replicas.
  3. Monitor — check status and readiness via the API or dashboard.
  4. Delete — terminate the load balancer when no longer needed.
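Steps 3 and 4 can be sketched with curl, reusing the status endpoint from the quick start. Note the assumptions: the `status` field and its `"ready"` value are a guessed response shape, and `DELETE` on the same resource path is not confirmed by this page.

```shell
# Poll until the load balancer reports ready
# (the "status": "ready" response field is an assumed shape)
until curl -s https://outpost.run/auth/v1/seed/acme/proxies/main-gateway \
        -H "Authorization: Bearer $TOKEN" | grep -q '"status": *"ready"'; do
  sleep 5
done

# Terminate the load balancer when no longer needed
# (DELETE on the resource path is an assumption)
curl -X DELETE https://outpost.run/auth/v1/seed/acme/proxies/main-gateway \
  -H "Authorization: Bearer $TOKEN"
```

A poll loop like this is typically gated with a timeout in CI scripts so a stuck provision does not hang the pipeline.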

Load Balancer vs Service endpoint

Every Outpost service gets its own auto-assigned endpoint. Use a load balancer when you need:

  • A single stable URL that persists across service redeployments.
  • A shared gateway consolidating traffic for multiple services in a namespace.
  • Custom routing logic managed at the proxy layer.

Use cases

  • Inference gateway — route LLM inference traffic across multiple model replicas.
  • API gateway — centralize traffic for a set of microservices behind one endpoint.
  • Canary routing — gradually shift traffic to a new service version.
  • Multi-model serving — route requests to different models based on path or headers.
