Quickstart
Get up and running with Outpost in 5 minutes.
This guide walks you through installing the CLI, creating a repository, launching a GPU machine, running a training job, and deploying a service.
Install the CLI
macOS
Linux
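The install commands are not reproduced here; a plausible sketch, assuming a Homebrew formula on macOS and an install script on Linux (both the formula name and the URL are assumptions, not documented endpoints):

```shell
# macOS — hypothetical Homebrew formula
brew install outpost

# Linux — hypothetical install script URL
curl -fsSL https://outpost.run/install.sh | sh
```

Check the CLI Reference linked below for the actual install instructions for your platform.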
Then configure your remote and authenticate:
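The setup might look like the following; the `login` and `config` subcommand names are assumptions, not the documented CLI:

```shell
# Hypothetical subcommands — see the CLI Reference for real names
outpost config set remote outpost.run
outpost login
```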
Create a repository and push data
Create a new repository on Outpost, then clone it locally:
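A sketch of these two steps, assuming a `repo create` subcommand and a git-compatible clone URL (the subcommand name is an assumption; the repository path matches the one referenced later in this guide):

```shell
# Hypothetical subcommand; my-first-repo is the example repo name
outpost repo create my-first-repo

git clone https://outpost.run/my-namespace/my-first-repo.git
cd my-first-repo
```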
Add your files — models, datasets, code, anything — and push:
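Assuming a git-compatible workflow (which the clone/push/branch vocabulary in this guide suggests), the push might look like this; the file names are purely illustrative:

```shell
# Illustrative file names — add whatever you want versioned
git add model.ckpt train.py
git commit -m "Add model checkpoint and training script"
git push origin main
```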
Your files are now versioned and available at outpost.run/my-namespace/my-first-repo. No file size limits — a 50GB model checkpoint is handled the same as a 2KB script.
Launch a GPU machine
Spin up a dev environment with a GPU attached:
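A minimal sketch of the launch command; the `machines create` subcommand, the `--gpu` flag, and the GPU type are all assumptions:

```shell
# Hypothetical subcommand and flags — machine name and GPU type are illustrative
outpost machines create my-dev --gpu a100
```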
Once the machine is running, SSH in:
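Connecting might look like either of the following, assuming the CLI offers an SSH wrapper or writes a host alias into your SSH config (both are assumptions):

```shell
# Hypothetical: CLI-managed SSH session
outpost machines ssh my-dev

# Or plain ssh, if the CLI adds a Host entry to ~/.ssh/config
ssh my-dev
```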
You can also attach VS Code Remote SSH or JetBrains Gateway. Machines come with CUDA and common ML libraries pre-configured.
To stop the machine when you're done:
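A sketch, assuming a `machines stop` subcommand (hypothetical name):

```shell
# Hypothetical subcommand — stops the machine and stops billing
outpost machines stop my-dev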
Run a training job
Define a batch job that runs to completion and stops billing automatically:
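Job submission might look like the following; the `jobs submit` subcommand and its flags are assumptions, not the documented interface:

```shell
# Hypothetical: run a command to completion on a GPU, then release it
outpost jobs submit --gpu a100 --command "python train.py"
```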
Monitor the job:
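A sketch of monitoring, assuming `jobs status` and `jobs logs` subcommands (hypothetical names; replace the placeholder with the ID returned at submit time):

```shell
# Hypothetical subcommands
outpost jobs status <job-id>
outpost jobs logs <job-id> --follow
```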
Deploy a service
Deploy a model as an auto-scaling HTTP endpoint:
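The deploy step might look like this; the `services deploy` subcommand and `--port` flag are assumptions:

```shell
# Hypothetical: expose a model server as an auto-scaling HTTP service
outpost services deploy my-model --port 8000
```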
Outpost provisions the infrastructure, configures load balancing, and gives you a public endpoint. The service scales based on traffic and scales to zero when idle.
Check the service status:
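A sketch, assuming a `services status` subcommand; the endpoint URL shown in the comment is illustrative, not a guaranteed format:

```shell
# Hypothetical subcommand — shows replicas, traffic, and the public endpoint
outpost services status my-model

# Then hit the endpoint it reports, e.g. (illustrative URL):
curl https://my-model.example.outpost.run/
```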
Next steps
- Repositories — Branching, merging, and versioning large files
- Machines — IDE integration, auto-stop, and SSH configuration
- Services — Autoscaling, custom domains, and production deployments
- Jobs — Distributed training and batch processing
- CLI Reference — Complete command documentation