Commands Reference
Complete reference for all GPU CLI commands. Run gpu --help for a quick overview, or gpu --help-all to see hidden commands.
gpu run
Execute a command on a remote GPU. This is the primary command for running workloads.
gpu run <COMMAND>
Basic Examples
# Run a Python script
gpu run python train.py
# Run with arguments
gpu run python train.py --epochs 100 --batch-size 32
# Run any command
gpu run uv run python inference.py
Flags
| Flag | Description |
|---|---|
| --attach, -a <JOB_ID> | Reattach to an existing job |
| --status, -s | Show pod status and recent jobs |
| --cancel <JOB_ID> | Cancel a running job |
| --gpu-type <TYPE> | Override GPU type (e.g., "RTX 4090") |
| --gpu-count <N> | Request multiple GPUs (1-8) |
| --min-vram <GB> | Minimum VRAM for GPU fallback |
| --detach, -d | Run in background (don't wait for completion) |
| --interactive, -i | Interactive mode (allocate PTY) |
| --publish, -p <[LOCAL:]REMOTE> | Port forwarding (docker-style) |
| --env, -e <KEY=VALUE> | Set environment variables (can be repeated) |
| --output, -o <PATHS> | Override output paths to sync back |
| --no-output | Disable output syncing |
| --sync | Wait for all output files to sync before exiting |
| --rebuild | Force pod recreation if Dockerfile changed |
| --force-sync | Full sync (ignore change detection) |
| --remote-path <PATH> | Override remote workspace path |
| --no-port-forward | Disable automatic port detection |
| --tail, -n <N> | Show last N lines when attaching |
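For scripting, a detached run pairs naturally with --job log filtering if you capture the job ID from the command's output. A minimal sketch, assuming `gpu run -d` prints an ID of the form `job_...` (the pattern is inferred from the IDs shown in this reference; the actual output format may differ):

```shell
# Hypothetical helper: pull the first job ID out of `gpu run -d` output.
extract_job_id() {
  grep -o 'job_[A-Za-z0-9]*' | head -n1
}

# Usage (requires the gpu CLI):
#   JOB_ID=$(gpu run -d python train.py | extract_job_id)
#   gpu logs --job "$JOB_ID" --follow

# Demonstration against a sample line:
echo "Started job_abc123 in background" | extract_job_id   # → job_abc123
```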
Advanced Examples
# Run in background
gpu run -d python long_training.py
# Port forward for web UI
gpu run -p 8080:8080 python server.py
# Multiple port forwards
gpu run -p 8080:8080 -p 6006:6006 python app.py
# Set environment variables
gpu run -e API_KEY=xxx -e DEBUG=true python app.py
# Reattach to a job
gpu run --attach job_abc123
# Check job status
gpu run --status
# Cancel a job
gpu run --cancel job_abc123
# Force specific GPU
gpu run --gpu-type "RTX 4090" python train.py
# Multi-GPU training
gpu run --gpu-count 4 python distributed_train.py
# Interactive shell
gpu run -i bash

gpu login
Authenticate with GPU CLI via browser.
gpu login
Flags
| Flag | Description |
|---|---|
| --timeout <SECONDS> | Browser auth timeout (default: 300) |
| --staging | Use staging environment |
gpu logout
Remove GPU CLI authentication.
gpu logout
Flags
| Flag | Description |
|---|---|
| --yes, -y, --force | Skip confirmation prompt |
gpu auth
Manage cloud provider and model hub credentials.
auth login
Add your RunPod API key.
gpu auth login
| Flag | Description |
|---|---|
| --profile <NAME> | Profile name for credential isolation |
| --generate-ssh-keys | Generate profile-specific SSH keys |
auth logout
Remove provider credentials.
gpu auth logout
| Flag | Description |
|---|---|
| --force | Skip confirmation |
auth status
Show current authentication status.
gpu auth status
auth add
Add model hub credentials (HuggingFace, Civitai).
gpu auth add hf # HuggingFace
gpu auth add civitai # Civitai
| Flag | Description |
|---|---|
| --token <VALUE> | Token/API key (prompts if not provided) |
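In CI, passing --token avoids the interactive prompt. A small sketch; reading the token from an environment variable keeps it out of shell history (the HF_TOKEN variable name is a convention of this example, not something the CLI requires):

```shell
# Refuse to call the CLI with an empty token, so a missing CI secret
# fails loudly instead of falling back to an interactive prompt.
require_token() {
  [ -n "$1" ] || { echo "error: token is empty" >&2; return 1; }
}

# Usage (requires the gpu CLI):
#   require_token "$HF_TOKEN" && gpu auth add hf --token "$HF_TOKEN"

require_token "hf_example_token" && echo "token present"
```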
auth remove
Remove model hub credentials.
gpu auth remove hf
auth hubs
List configured model hub credentials.
gpu auth hubs

gpu init
Initialize GPU CLI for the current project. Creates a gpu.jsonc configuration file.
gpu init
Flags
| Flag | Description |
|---|---|
| --gpu-type <TYPE> | Default GPU type for project |
| --max-price <PRICE> | Maximum hourly price |
| --profile <NAME> | Profile to use |
| --encryption | Enable LUKS encryption |
| --no-encryption | Disable LUKS encryption |
| --force, -f | Force reinitialization |
Example
cd my-ml-project
gpu init --gpu-type "RTX 4090" --max-price 0.50

gpu inventory
List available GPU types and their current availability.
gpu inventory
Flags
| Flag | Description |
|---|---|
| --available, -a | Only show GPUs with available stock |
| --min-vram <GB> | Minimum VRAM filter |
| --max-price <PRICE> | Maximum price per hour |
| --region <REGION> | Filter by region |
| --gpu-type <TYPE> | Filter by GPU type (fuzzy match) |
| --cloud-type <TYPE> | Cloud type: secure, community, all |
| --json | Output as JSON |
Examples
# Show available GPUs only
gpu inventory --available
# Filter by VRAM and price
gpu inventory --min-vram 24 --max-price 1.00
# JSON output for scripting
gpu inventory --json

gpu status
Show project status including pods, jobs, and costs.
gpu status
Flags
| Flag | Description |
|---|---|
| --project <PROJECT> | Filter to specific project |
| --json | Output as JSON |
gpu dashboard
Launch the interactive TUI dashboard for managing pods and jobs.
gpu dashboard
Keybindings
| Key | Action |
|---|---|
| j / Down | Navigate down |
| k / Up | Navigate up |
| Tab | Switch panel |
| Enter | Expand / View logs |
| Esc | Back / Close |
| a | Attach to job |
| c | Cancel job |
| s | Stop pod |
| e | Events view |
| ? | Help |
| q | Quit |
gpu logs
View job output (stdout/stderr) from the current project.
gpu logs [OPTIONS]
Flags
| Flag | Description |
|---|---|
| -j, --job <JOB_ID> | Filter to specific job |
| -f, --follow | Follow output in real-time |
| --tail <N> | Show last N lines (default: 100) |
| --type <TYPE> | Filter by type: output, lifecycle, hook, sync, agent, all |
| --json | Output as JSON |
| --include-agent | Include agent internal logs |
Examples
# Latest job output
gpu logs
# Follow running job
gpu logs -f
# Last 50 lines of specific job
gpu logs --job job_abc123 --tail 50
# Only lifecycle events
gpu logs --type lifecycle
# Agent logs (internal pod agent activity)
gpu logs --type agent

gpu volume
Manage network volumes for persistent storage.
volume list
List all network volumes.
gpu volume list
| Flag | Description |
|---|---|
| --detailed, -d | Show detailed volume information |
| --json | Output as JSON |
volume create
Create a new network volume.
gpu volume create --name my-models --size 200
| Flag | Description |
|---|---|
| --name, -n <NAME> | Volume name (default: gpu-cli-{username}) |
| --size, -s <GB> | Size in GB (default: 100) |
| --datacenter, -d <ID> | Datacenter ID (prompted if not specified) |
| --set-global | Set as global volume after creation |
| --yes, -y | Skip confirmation |
volume delete
Delete a network volume.
gpu volume delete <VOLUME>
The VOLUME argument can be the volume name or ID.
| Flag | Description |
|---|---|
| --force, -f | Force delete even if pods are attached |
volume extend
Increase the size of a volume.
gpu volume extend <VOLUME> --size 300
| Flag | Description |
|---|---|
| --size, -s <GB> | New size (must be larger than current) |
| --yes, -y | Skip confirmation |
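When extending a volume to fit a local dataset, it helps to compute the target --size up front. A sketch with a hypothetical helper; the 20% headroom is an arbitrary choice, and `du -sb` in the usage comment is GNU-specific:

```shell
# Round a byte count up to whole GB and add percentage headroom,
# to pick a --size value for `gpu volume extend`.
suggest_size_gb() {
  local bytes=$1 headroom_pct=${2:-20}
  echo $(( (bytes * (100 + headroom_pct) / 100 + 1073741823) / 1073741824 ))
}

# Usage (requires the gpu CLI; GNU du):
#   BYTES=$(du -sb ./models | cut -f1)
#   gpu volume extend my-models --size "$(suggest_size_gb "$BYTES")"

suggest_size_gb 107374182400   # 100 GB of data + 20% headroom → 120
```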
volume set-global
Set a volume as the global default for all projects.
gpu volume set-global <VOLUME>
volume status
Show volume usage statistics.
gpu volume status
| Flag | Description |
|---|---|
| --volume <VOLUME> | Specify volume (defaults to global) |
| --json | Output as JSON |
volume migrate
Migrate a volume to a different datacenter.
gpu volume migrate <VOLUME> --to <DATACENTER>
| Flag | Description |
|---|---|
| --to <DATACENTER> | Target datacenter ID (required) |
| --name <NAME> | Name for new volume (default: {original}-{datacenter}) |
| --skip-verify | Skip verification after migration |
volume sync
Sync data between two existing volumes.
gpu volume sync <SOURCE> <DEST>
| Flag | Description |
|---|---|
| --method <METHOD> | Force transfer method: tar or rsync (auto-detected by default) |
| --skip-verify | Skip verification after sync |
volume cancel-migration
Cancel an ongoing migration or sync operation.
gpu volume cancel-migration <TRANSFER_ID>
The transfer ID is displayed when you start a migration or sync.
Examples
# List all volumes with details
gpu volume list --detailed
# Create a 500GB volume and set as global
gpu volume create --name shared-models --size 500 --set-global
# Check usage of global volume
gpu volume status
# Extend a volume
gpu volume extend my-models --size 500

gpu config
Inspect and manage configuration.
config show
Show the merged configuration for current project.
gpu config show
config validate
Validate configuration against the JSON schema.
gpu config validate
config schema
Print the JSON schema for gpu.jsonc files.
gpu config schema
config set
Set a global configuration option.
gpu config set <KEY> <VALUE>
Supported keys:
- updates.channel - Update channel (stable, beta)
- updates.auto_update - Enable auto-updates (true/false)
- updates.check_interval_hours - Update check interval
- default_provider - Default cloud provider
- default_profile - Default profile name
config get
Get a global configuration value.
gpu config get <KEY>

gpu stop
Stop the active pod immediately.
gpu stop
Flags
| Argument/Flag | Description |
|---|---|
| [POD_ID] | Pod ID to stop (positional, optional; auto-detects from project if not specified) |
| --yes, -y, --force | Skip confirmation prompt |
| --no-sync | Don't sync outputs before stopping |
gpu use
Run a template or resume a session.
gpu use [TEMPLATE]
If no template is specified, resumes the last session.
Flags
| Flag | Description |
|---|---|
| --name <NAME> | Project/session name |
| --yes | Skip interactive prompts |
| --dry-run | Show what would be created |
| --input <KEY=VALUE> | Provide input values (can be repeated) |
gpu update
Update GPU CLI to the latest version.
gpu update
Flags
| Flag | Description |
|---|---|
| --check | Check for updates without installing |
| --target-version <VERSION> | Install a specific version |
| --force | Force reinstall even if up-to-date |
| --dismiss | Dismiss update notification for 24 hours |
gpu changelog
View the changelog for a specific version.
gpu changelog [VERSION]
Flags
| Flag | Description |
|---|---|
| --from <VERSION> | Show changes from this version (exclusive) |
| --to <VERSION> | Show changes up to this version (inclusive) |
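The --from/--to range is handy for catching up on everything between the installed version and an update target. A sketch with a hypothetical helper for deciding whether a jump crossed a minor version (assumes MAJOR.MINOR.PATCH version strings):

```shell
# Strip a semver-style string down to MAJOR.MINOR.
minor_of() { echo "$1" | cut -d. -f1-2; }

# Usage (requires the gpu CLI; OLD/NEW are placeholder versions):
#   if [ "$(minor_of "$OLD")" != "$(minor_of "$NEW")" ]; then
#     gpu changelog --from "$OLD" --to "$NEW"
#   fi

minor_of 0.8.1   # → 0.8
```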
Examples
# Latest changelog
gpu changelog
# Specific version
gpu changelog 0.8.0
# Range of versions
gpu changelog --from 0.7.0 --to 0.8.0

gpu daemon
Manage the GPU CLI daemon (background service).
daemon status
Show daemon status.
gpu daemon status
daemon start
Start the daemon.
gpu daemon start
daemon stop
Stop the daemon gracefully.
gpu daemon stop
daemon restart
Restart the daemon.
gpu daemon restart
daemon logs
View daemon logs.
gpu daemon logs
| Flag | Description |
|---|---|
| --follow, -f | Follow log output (like tail -f) |
| --tail, -n <N> | Show last N lines |
| --all, -a | Show all rotated logs |
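Scripts that depend on the daemon can poll until it is up after `gpu daemon start`. A minimal sketch, assuming `gpu daemon status` exits non-zero while the daemon is down (the exit-code behavior is an assumption, not documented above):

```shell
# Retry a command until it succeeds or the attempt budget runs out.
wait_until() {
  local tries=$1; shift
  local i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Usage (requires the gpu CLI):
#   gpu daemon start
#   wait_until 10 gpu daemon status

wait_until 3 true && echo "up"
```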
Global Flags
These flags work with all commands:
| Flag | Description |
|---|---|
| -v, --verbose | Increase logging verbosity (-v, -vv, -vvv) |
| -q, --quiet | Minimal output (command output only) |
| --progress-style <STYLE> | Progress display: panel, pipeline, minimal, verbose |
| --help-all | Show all commands including hidden ones |
| --no-auto-update | Disable auto-update check |