
Quickstart

Get up and running with Outpost in 5 minutes.

This guide walks you through installing the CLI, creating a repository, launching a GPU machine, and deploying a service.

Install the CLI

macOS

brew install outpostkit

Linux

curl -fsSL https://outpost.run/install.sh | sh

Then configure your remote and authenticate:

outpost config --set-remote origin https://outpost.run
outpost config --auth outpost.run YOUR_TOKEN

Getting your token
Generate a personal access token from your account settings at outpost.run. See [Access Tokens](/docs/teams/access-tokens) for details.

Create a repository and push data

Create a new repository on Outpost, then clone it locally:

outpost create-remote --name my-namespace/my-first-repo
outpost clone https://outpost.run/my-namespace/my-first-repo
cd my-first-repo

Add your files — models, datasets, code, anything — and push:

outpost add model.safetensors dataset/ train.py
outpost commit -m "initial model and training data"
outpost push

Your files are now versioned and available at outpost.run/my-namespace/my-first-repo. There are no file size limits: a 50 GB model checkpoint is handled the same way as a 2 KB script.

Launch a GPU machine

Spin up a dev environment with a GPU attached:

outpost dev launch --name dev-box --cloud aws --region us-east-1 --gpus A100

Once the machine is running, SSH in:

ssh dev-box

You can also attach VS Code Remote SSH or JetBrains Gateway. Machines come with CUDA and common ML libraries pre-configured.
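Since `ssh dev-box` works by bare alias, the CLI presumably writes a host entry into your SSH configuration. If you ever need to point an IDE at the machine manually, the entry would look something like the sketch below. The `HostName`, `User`, and `IdentityFile` values here are illustrative assumptions, not actual output of the CLI:

```
# Hypothetical ~/.ssh/config entry for the dev machine; the real values
# depend on what `outpost dev launch` provisions for your account.
Host dev-box
    HostName <machine-public-ip>
    User ubuntu
    IdentityFile ~/.ssh/outpost_dev_box
```

VS Code Remote SSH and JetBrains Gateway both read this file, so the same `dev-box` alias is usable from either IDE.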

To stop the machine when you're done:

outpost dev stop dev-box

Run a training job

Define a batch job that runs to completion and stops billing automatically:

outpost jobs launch --name train-v1 --gpus A100:4 --cloud aws --region us-east-1 --command "torchrun --nproc_per_node=4 train.py"

Monitor the job:

outpost jobs logs train-v1
outpost jobs status train-v1
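If you want a script to block until the job reaches a terminal state rather than watching logs interactively, a small polling wrapper works. This is a sketch: the `SUCCEEDED`/`FAILED` strings are assumptions about what `outpost jobs status` prints, so adjust the patterns to match the CLI's real output.

```shell
#!/bin/sh
# Poll a status command until it reports a terminal state (sketch).
# Usage: poll_until_done "outpost jobs status train-v1"
# SUCCEEDED/FAILED are assumed status strings; match your CLI's wording.
poll_until_done() {
  status_cmd="$1"
  while :; do
    status="$($status_cmd)"
    echo "status: $status"
    case "$status" in
      *SUCCEEDED*|*FAILED*) break ;;
    esac
    sleep 30
  done
}
```

Chaining this before a follow-up step (e.g. pulling artifacts back into the repo) keeps the whole pipeline in one script.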

Deploy a service

Deploy a model as an auto-scaling HTTP endpoint:

outpost serve launch --name llama-api --cloud aws --region us-east-1 --gpus A100 --port 8080

Outpost provisions the infrastructure, configures load balancing, and gives you a public endpoint. The service scales based on traffic and scales to zero when idle.

Check the service status:

outpost serve status llama-api
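Once the service reports ready, you can smoke-test it over HTTP. The hostname and request path below are placeholders (the actual endpoint URL comes from the service status output or the dashboard), and the JSON body depends entirely on what your server listening on port 8080 expects:

```shell
# Replace the URL with the endpoint Outpost assigned to llama-api;
# the path and payload here are illustrative, not part of Outpost itself.
curl -s https://llama-api.example.outpost.run/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello"}'
```

Note that a service scaled to zero may take a moment to cold-start on the first request.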

Next steps

  • Repositories — Branching, merging, and versioning large files
  • Machines — IDE integration, auto-stop, and SSH configuration
  • Services — Autoscaling, custom domains, and production deployments
  • Jobs — Distributed training and batch processing
  • CLI Reference — Complete command documentation
