Compute Commands

Complete reference for Outpost CLI compute commands — dev, serve, jobs, and sandbox.

Compute commands let you provision GPU machines, deploy inference services, run batch jobs, and manage workloads from the command line. Every compute resource can be created, inspected, and torn down without leaving your terminal.

Config-driven workflows
For reproducible setups, define your infrastructure in a `seed.toml` file and use the `--config` flag. This gives you a declarative specification that you can version-control alongside your code.
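
For illustration, a minimal `seed.toml` for a development machine might look like the sketch below (the field names mirror the machine configuration documented later in this reference):

```toml
# Minimal machine definition (illustrative sketch)
type = "machine"
name = "dev-box"
cloud = "aws"
region = "us-east-1"
gpus = "A100"
```

You would then launch it with `outpost dev launch --config seed.toml`.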

outpost dev launch

Launch a new development machine with GPU support.

Syntax

outpost dev launch [OPTIONS]

Options

Option Description
--name / -n Name for the machine
--cloud Cloud provider (e.g., aws, azure)
--region Cloud region (e.g., us-east-1)
--gpus GPU type and optional count (e.g., A100, V100:4, H100)
--cpus Number of CPUs to allocate
--memory Amount of memory in GB (e.g., 64)
--disk-size Disk size in GB (e.g., 200)
--image Container image
--spot Use spot instances
--config / -c Config file
--namespace / -s Namespace (global flag)
--help / -h Display help

Examples

# Launch a machine with an A100 GPU
outpost dev launch --name dev-box --cloud aws --region us-east-1 --gpus A100

# Launch with custom resource allocation
outpost dev launch --name training-env --cloud aws --region us-east-1 --gpus H100 --cpus 16 --memory 64 --disk-size 500

# Launch with a config file
outpost dev launch --config seed.toml

outpost dev list

List all machines.

Syntax

outpost dev list

Examples

# List all machines
outpost dev list

outpost dev status

Get the current status of a machine.

Syntax

outpost dev status [NAME]

Examples

# Check status of a specific machine
outpost dev status dev-box

# Check status of all machines
outpost dev status

The output includes the machine's state (running, stopped, creating), GPU type, resource allocation, and uptime.


outpost dev start

Start a previously stopped machine. The machine's disk is preserved, so you resume exactly where you left off.

Syntax

outpost dev start NAME

Examples

# Start a stopped machine
outpost dev start dev-box

outpost dev stop

Stop a running machine. The machine's disk is preserved, so you can start it again later without losing data.

Syntax

outpost dev stop NAME

Examples

# Stop a machine to save costs
outpost dev stop dev-box
Save on compute costs
Stop machines when you're not actively using them. You are not billed for GPU time while a machine is stopped, but disk storage charges still apply.

outpost dev delete

Delete a machine permanently.

Syntax

outpost dev delete NAME

Examples

# Delete a machine
outpost dev delete dev-box
Permanent deletion
`outpost dev delete` permanently removes the machine and all its data. Push your work before deleting. Use `outpost dev stop` to preserve the disk.

outpost dev ssh

SSH into a running machine.

Syntax

outpost dev ssh NAME

Examples

# SSH into a machine
outpost dev ssh dev-box

outpost dev exec

Execute a command on a running machine.

Syntax

outpost dev exec NAME -- COMMAND

Examples

# Run a command on the machine
outpost dev exec dev-box -- nvidia-smi

# Run a script
outpost dev exec dev-box -- python train.py
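
Note that everything after the `--` separator is passed to the machine verbatim, so flags that belong to your command are not consumed by the CLI itself. A sketch (the script name and `--epochs` flag are illustrative):

```shell
# Flags after -- are delivered to your command, not parsed by outpost
outpost dev exec dev-box -- python train.py --epochs 10
```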

outpost dev logs

View logs from a running or stopped machine.

Syntax

outpost dev logs NAME

Examples

# View machine logs
outpost dev logs dev-box

outpost serve launch

Deploy a new persistent inference service.

Syntax

outpost serve launch [OPTIONS]

Options

Option Description
--name / -n Name for the service
--cloud Cloud provider (e.g., aws, azure)
--region Cloud region (e.g., us-east-1)
--gpus GPU type and optional count (e.g., A10G, A100:4)
--cpus Number of CPUs to allocate
--memory Amount of memory in GB
--disk-size Disk size in GB
--port Port the service listens on
--command Command to start the HTTP server
--replicas Number of replicas
--image Container image
--config / -c Config file
--namespace / -s Namespace (global flag)
--help / -h Display help

Examples

# Deploy a model serving endpoint
outpost serve launch --name inference-api --cloud aws --region us-east-1 --gpus A10G --port 8080

# Deploy with custom resources
outpost serve launch --name llm-api --cloud aws --region us-east-1 --gpus A100 --memory 64 --port 8000

# Deploy with a config file
outpost serve launch --config seed.toml

outpost serve list

List all services.

Syntax

outpost serve list

Examples

# List all services
outpost serve list

outpost serve status

Get the current status of a deployed service.

Syntax

outpost serve status [NAME]

Examples

# Check status of a specific service
outpost serve status llama-api

# Check status of all services
outpost serve status

Shows the service's state, URL, GPU type, replica count, and request metrics.


outpost serve scale

Scale the number of replicas for a service.

Syntax

outpost serve scale --name NAME --replicas N

Examples

# Scale a service to 3 replicas
outpost serve scale --name llama-api --replicas 3

# Scale to zero
outpost serve scale --name llama-api --replicas 0

outpost serve delete

Delete a service.

Syntax

outpost serve delete NAME

Examples

# Delete a service
outpost serve delete llama-api

outpost jobs launch

Run a new batch processing job. Jobs run to completion on dedicated hardware and automatically terminate when finished, so you only pay for actual compute time.

Syntax

outpost jobs launch [OPTIONS]

Options

Option Description
--name / -n Name for the job
--cloud Cloud provider (e.g., aws, azure)
--region Cloud region (e.g., us-east-1)
--gpus GPU type and optional count (e.g., A100, A100:4)
--cpus Number of CPUs to allocate
--memory Amount of memory in GB
--disk-size Disk size in GB
--command Command to execute (required)
--image Container image
--spot Use spot instances
--config / -c Config file
--namespace / -s Namespace (global flag)
--help / -h Display help

Examples

# Run a training job
outpost jobs launch --name train-resnet --gpus A100 --cloud aws --region us-east-1 --command "python train.py"

# Run a multi-GPU job with extra disk
outpost jobs launch --name preprocess --gpus A100:4 --cloud aws --region us-east-1 --disk-size 200 --command "torchrun train.py"

# Run a job on spot instances
outpost jobs launch --name eval --gpus A10G --cloud aws --region us-east-1 --spot --command "python eval.py"
Self-terminating
Jobs automatically stop and release GPU resources when finished. You are only billed for the time the job is actively running.

outpost jobs list

List all jobs.

Syntax

outpost jobs list

Examples

# List all jobs
outpost jobs list

outpost jobs status

Get the current status of a job.

Syntax

outpost jobs status [NAME]

Examples

# Check status of a specific job
outpost jobs status train-resnet

# Check status of all jobs
outpost jobs status

Shows the job's state (running, completed, failed), GPU type, duration, and exit code.


outpost jobs logs

View logs from a running or completed job.

Syntax

outpost jobs logs NAME

Examples

# Stream logs from a running job
outpost jobs logs train-resnet

outpost jobs cancel

Cancel a running job.

Syntax

outpost jobs cancel NAME

Examples

# Cancel a running job
outpost jobs cancel train-resnet

outpost jobs delete

Delete a job.

Syntax

outpost jobs delete NAME

Examples

# Delete a completed job
outpost jobs delete train-resnet

outpost sandbox launch

Launch a throwaway sandbox environment.

Syntax

outpost sandbox launch [OPTIONS]

Options

Option Description
--image Container image (e.g., pytorch:latest)
--help / -h Display help

Examples

# Launch a sandbox with PyTorch
outpost sandbox launch --image pytorch:latest

outpost sandbox list

List all sandboxes.

Syntax

outpost sandbox list

outpost sandbox status

Get the current status of a sandbox.

Syntax

outpost sandbox status [ID]

outpost sandbox delete

Delete a sandbox.

Syntax

outpost sandbox delete ID

seed.toml configuration

For reproducible, version-controlled infrastructure, define your workloads in a seed.toml file at the root of your repository.

Machine configuration

type = "machine"
name = "dev-box"
cloud = "aws"
region = "us-east-1"
gpus = "A100"
cpus = 8
memory = 32
disk_size = 200

Service configuration

type = "service"
name = "inference-api"
cloud = "aws"
region = "us-east-1"
gpus = "A10G"
cpus = 4
memory = 16
disk_size = 100
port = 8080

Job configuration

type = "job"
name = "train-resnet"
cloud = "aws"
region = "us-east-1"
gpus = "A100:4"
cpus = 16
memory = 64
disk_size = 200
command = "torchrun train.py"
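
Because all three launch commands accept `--config`, the same pattern applies to every workload type. A sketch, assuming each config file declares the matching `type` (the per-workload filenames here are illustrative; the examples in this reference use a single `seed.toml`):

```shell
# Each launch command reads its workload definition from the given config file
outpost dev launch --config machine.toml
outpost serve launch --config service.toml
outpost jobs launch --config job.toml
```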
Version your configs
Store your `seed.toml` files in your Outpost repository alongside your code. This makes infrastructure reproducible across team members and environments.

Common workflows

Interactive development

# Spin up a GPU machine
outpost dev launch --name gpu-box --cloud aws --region us-east-1 --gpus A100 --disk-size 200

# Check status
outpost dev status gpu-box

# View logs
outpost dev logs gpu-box

# SSH in
outpost dev ssh gpu-box

# When done for the day, stop to save costs
outpost dev stop gpu-box

# Resume the next day
outpost dev start gpu-box

Train and deploy

# Run a training job
outpost jobs launch --name train-v3 --gpus A100 --cloud aws --region us-east-1 --command "python train.py"

# Monitor training progress
outpost jobs logs train-v3

# Check if the job completed
outpost jobs status train-v3

# Deploy the trained model as a service
outpost serve launch --name model-v3 --cloud aws --region us-east-1 --gpus A10G --port 8080

# Verify the service is running
outpost serve status model-v3
outpost serve list

Tear down resources

# Cancel a running job
outpost jobs cancel train-v3

# Delete a service
outpost serve delete model-v3

# Delete a machine
outpost dev delete gpu-box
