GPU CLI

Commands Reference

Complete reference for all GPU CLI commands. Run gpu --help for a quick overview, or gpu --help-all to see hidden commands.

gpu run

Execute a command on a remote GPU. This is the primary command for running workloads.

gpu run <COMMAND>

Basic Examples

# Run a Python script
gpu run python train.py

# Run with arguments
gpu run python train.py --epochs 100 --batch-size 32

# Run any command
gpu run uv run python inference.py

Flags

| Flag | Description |
| --- | --- |
| --attach, -a <JOB_ID> | Reattach to an existing job |
| --status, -s | Show pod status and recent jobs |
| --cancel <JOB_ID> | Cancel a running job |
| --gpu-type <TYPE> | Override GPU type (e.g., "RTX 4090") |
| --gpu-count <N> | Request multiple GPUs (1-8) |
| --min-vram <GB> | Minimum VRAM for GPU fallback |
| --detach, -d | Run in background (don't wait for completion) |
| --interactive, -i | Interactive mode (allocate PTY) |
| --publish, -p <[LOCAL:]REMOTE> | Port forwarding (docker-style) |
| --env, -e <KEY=VALUE> | Set environment variables (can be repeated) |
| --output, -o <PATHS> | Override output paths to sync back |
| --no-output | Disable output syncing |
| --sync | Wait for all output files to sync before exiting |
| --rebuild | Force pod recreation if Dockerfile changed |
| --force-sync | Full sync (ignore change detection) |
| --remote-path <PATH> | Override remote workspace path |
| --no-port-forward | Disable automatic port detection |
| --tail, -n <N> | Show last N lines when attaching |

Advanced Examples

# Run in background
gpu run -d python long_training.py

# Port forward for web UI
gpu run -p 8080:8080 python server.py

# Multiple port forwards
gpu run -p 8080:8080 -p 6006:6006 python app.py

# Set environment variables
gpu run -e API_KEY=xxx -e DEBUG=true python app.py

# Reattach to a job
gpu run --attach job_abc123

# Check job status
gpu run --status

# Cancel a job
gpu run --cancel job_abc123

# Force specific GPU
gpu run --gpu-type "RTX 4090" python train.py

# Multi-GPU training
gpu run --gpu-count 4 python distributed_train.py

# Interactive shell
gpu run -i bash

gpu login

Authenticate with GPU CLI via browser.

gpu login

Flags

| Flag | Description |
| --- | --- |
| --timeout <SECONDS> | Browser auth timeout (default: 300) |
| --staging | Use staging environment |

gpu logout

Remove GPU CLI authentication.

gpu logout

Flags

| Flag | Description |
| --- | --- |
| --yes, -y, --force | Skip confirmation prompt |

gpu auth

Manage cloud provider and model hub credentials.

auth login

Add your RunPod API key.

gpu auth login

| Flag | Description |
| --- | --- |
| --profile <NAME> | Profile name for credential isolation |
| --generate-ssh-keys | Generate profile-specific SSH keys |

auth logout

Remove provider credentials.

gpu auth logout

| Flag | Description |
| --- | --- |
| --force | Skip confirmation |

auth status

Show current authentication status.

gpu auth status

auth add

Add model hub credentials (HuggingFace, Civitai).

gpu auth add hf      # HuggingFace
gpu auth add civitai # Civitai

| Flag | Description |
| --- | --- |
| --token <VALUE> | Token/API key (prompts if not provided) |

auth remove

Remove model hub credentials.

gpu auth remove hf

auth hubs

List configured model hub credentials.

gpu auth hubs

gpu init

Initialize GPU CLI for the current project. Creates a gpu.jsonc configuration file.

gpu init

Flags

| Flag | Description |
| --- | --- |
| --gpu-type <TYPE> | Default GPU type for project |
| --max-price <PRICE> | Maximum hourly price |
| --profile <NAME> | Profile to use |
| --encryption | Enable LUKS encryption |
| --no-encryption | Disable LUKS encryption |
| --force, -f | Force reinitialization |

Example

cd my-ml-project
gpu init --gpu-type "RTX 4090" --max-price 0.50
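
For reference, the gpu.jsonc generated by a command like the one above might look roughly as follows. The exact key names are not documented in this reference, so treat them as illustrative, not as the real schema:

```jsonc
// Illustrative gpu.jsonc -- key names are assumptions, not the documented schema
{
  "gpuType": "RTX 4090",   // default GPU type (--gpu-type)
  "maxPrice": 0.50,        // maximum hourly price in USD (--max-price)
  "encryption": false      // LUKS encryption (--encryption / --no-encryption)
}
```

Run gpu config validate afterwards to check the file against the actual JSON schema.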

gpu inventory

List available GPU types and their current availability.

gpu inventory

Flags

| Flag | Description |
| --- | --- |
| --available, -a | Only show GPUs with available stock |
| --min-vram <GB> | Minimum VRAM filter |
| --max-price <PRICE> | Maximum price per hour |
| --region <REGION> | Filter by region |
| --gpu-type <TYPE> | Filter by GPU type (fuzzy match) |
| --cloud-type <TYPE> | Cloud type: secure, community, all |
| --json | Output as JSON |

Examples

# Show available GPUs only
gpu inventory --available

# Filter by VRAM and price
gpu inventory --min-vram 24 --max-price 1.00

# JSON output for scripting
gpu inventory --json
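
As one sketch of how the JSON output might be consumed from a script, the Python below picks the cheapest in-stock GPU. The field names in the sample payload are assumptions, since the actual schema is not shown in this reference:

```python
import json

# Hypothetical `gpu inventory --json` payload -- field names are illustrative.
sample = json.loads("""
[
  {"gpu_type": "RTX 4090", "vram_gb": 24, "price_per_hour": 0.44, "available": true},
  {"gpu_type": "A100 80GB", "vram_gb": 80, "price_per_hour": 1.64, "available": true},
  {"gpu_type": "H100", "vram_gb": 80, "price_per_hour": 2.99, "available": false}
]
""")

# Keep GPUs that are in stock and meet a 24 GB VRAM floor,
# then take the cheapest by hourly price.
candidates = [g for g in sample if g["available"] and g["vram_gb"] >= 24]
cheapest = min(candidates, key=lambda g: g["price_per_hour"])
print(cheapest["gpu_type"])  # RTX 4090
```

In practice you would feed real data in, e.g. gpu inventory --json | python pick_gpu.py, where pick_gpu.py is your own script reading stdin.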

gpu status

Show project status including pods, jobs, and costs.

gpu status

Flags

| Flag | Description |
| --- | --- |
| --project <PROJECT> | Filter to specific project |
| --json | Output as JSON |
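
The JSON output is intended for monitoring scripts. A minimal sketch of such a consumer, using a hypothetical payload shape (the real field names are not documented here and may differ):

```python
import json

# Hypothetical `gpu status --json` payload -- field names are assumptions.
status = json.loads("""
{
  "pods": [{"id": "pod_1", "gpu_type": "RTX 4090", "cost_per_hour": 0.44, "hours": 3.5}],
  "jobs": [{"id": "job_abc123", "state": "running"}]
}
""")

# Estimate accrued cost across all pods and list running jobs.
total_cost = sum(p["cost_per_hour"] * p["hours"] for p in status["pods"])
running = [j["id"] for j in status["jobs"] if j["state"] == "running"]
print(f"accrued: ${total_cost:.2f}, running jobs: {running}")
```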

gpu dashboard

Launch the interactive TUI dashboard for managing pods and jobs.

gpu dashboard

Keybindings

| Key | Action |
| --- | --- |
| j / Down | Navigate down |
| k / Up | Navigate up |
| Tab | Switch panel |
| Enter | Expand / View logs |
| Esc | Back / Close |
| a | Attach to job |
| c | Cancel job |
| s | Stop pod |
| e | Events view |
| ? | Help |
| q | Quit |

gpu logs

View job output (stdout/stderr) from the current project.

gpu logs [OPTIONS]

Flags

| Flag | Description |
| --- | --- |
| -j, --job <JOB_ID> | Filter to specific job |
| -f, --follow | Follow output in real-time |
| --tail <N> | Show last N lines (default: 100) |
| --type <TYPE> | Filter by type: output, lifecycle, hook, sync, agent, all |
| --json | Output as JSON |
| --include-agent | Include agent internal logs |

Examples

# Latest job output
gpu logs

# Follow running job
gpu logs -f

# Last 50 lines of specific job
gpu logs --job job_abc123 --tail 50

# Only lifecycle events
gpu logs --type lifecycle

# Agent logs (internal pod agent activity)
gpu logs --type agent

gpu volume

Manage network volumes for persistent storage.

volume list

List all network volumes.

gpu volume list

| Flag | Description |
| --- | --- |
| --detailed, -d | Show detailed volume information |
| --json | Output as JSON |

volume create

Create a new network volume.

gpu volume create --name my-models --size 200

| Flag | Description |
| --- | --- |
| --name, -n <NAME> | Volume name (default: gpu-cli-{username}) |
| --size, -s <GB> | Size in GB (default: 100) |
| --datacenter, -d <ID> | Datacenter ID (prompted if not specified) |
| --set-global | Set as global volume after creation |
| --yes, -y | Skip confirmation |

volume delete

Delete a network volume.

gpu volume delete <VOLUME>

The VOLUME argument can be the volume name or ID.

| Flag | Description |
| --- | --- |
| --force, -f | Force delete even if pods are attached |

volume extend

Increase the size of a volume.

gpu volume extend <VOLUME> --size 300

| Flag | Description |
| --- | --- |
| --size, -s <GB> | New size (must be larger than current) |
| --yes, -y | Skip confirmation |

volume set-global

Set a volume as the global default for all projects.

gpu volume set-global <VOLUME>

volume status

Show volume usage statistics.

gpu volume status

| Flag | Description |
| --- | --- |
| --volume <VOLUME> | Specify volume (defaults to global) |
| --json | Output as JSON |

volume migrate

Migrate a volume to a different datacenter.

gpu volume migrate <VOLUME> --to <DATACENTER>

| Flag | Description |
| --- | --- |
| --to <DATACENTER> | Target datacenter ID (required) |
| --name <NAME> | Name for new volume (default: {original}-{datacenter}) |
| --skip-verify | Skip verification after migration |

volume sync

Sync data between two existing volumes.

gpu volume sync <SOURCE> <DEST>

| Flag | Description |
| --- | --- |
| --method <METHOD> | Force transfer method: tar or rsync (auto-detected by default) |
| --skip-verify | Skip verification after sync |

volume cancel-migration

Cancel an ongoing migration or sync operation.

gpu volume cancel-migration <TRANSFER_ID>

The transfer ID is displayed when you start a migration or sync.

Examples

# List all volumes with details
gpu volume list --detailed

# Create a 500GB volume and set as global
gpu volume create --name shared-models --size 500 --set-global

# Check usage of global volume
gpu volume status

# Extend a volume
gpu volume extend my-models --size 500

gpu config

Inspect and manage configuration.

config show

Show the merged configuration for current project.

gpu config show

config validate

Validate configuration against the JSON schema.

gpu config validate

config schema

Print the JSON schema for gpu.jsonc files.

gpu config schema

config set

Set a global configuration option.

gpu config set <KEY> <VALUE>

Supported keys:

  • updates.channel - Update channel (stable, beta)
  • updates.auto_update - Enable auto-updates (true/false)
  • updates.check_interval_hours - Update check interval
  • default_provider - Default cloud provider
  • default_profile - Default profile name

config get

Get a global configuration value.

gpu config get <KEY>

gpu stop

Stop the active pod immediately.

gpu stop

Flags

| Argument/Flag | Description |
| --- | --- |
| [POD_ID] | Pod ID to stop (positional, optional; auto-detects from project if not specified) |
| --yes, -y, --force | Skip confirmation prompt |
| --no-sync | Don't sync outputs before stopping |

gpu use

Run a template or resume a session.

gpu use [TEMPLATE]

If no template is specified, resumes the last session.

Flags

| Flag | Description |
| --- | --- |
| --name <NAME> | Project/session name |
| --yes | Skip interactive prompts |
| --dry-run | Show what would be created |
| --input <KEY=VALUE> | Provide input values (can be repeated) |

gpu update

Update GPU CLI to the latest version.

gpu update

Flags

| Flag | Description |
| --- | --- |
| --check | Check for updates without installing |
| --target-version <VERSION> | Install a specific version |
| --force | Force reinstall even if up-to-date |
| --dismiss | Dismiss update notification for 24 hours |

gpu changelog

View the changelog for a specific version.

gpu changelog [VERSION]

Flags

| Flag | Description |
| --- | --- |
| --from <VERSION> | Show changes from this version (exclusive) |
| --to <VERSION> | Show changes up to this version (inclusive) |

Examples

# Latest changelog
gpu changelog

# Specific version
gpu changelog 0.8.0

# Range of versions
gpu changelog --from 0.7.0 --to 0.8.0
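
The --from/--to pair selects a half-open range: --from is exclusive, --to is inclusive. A small Python sketch of that semantics over a hypothetical release history:

```python
# `gpu changelog --from 0.7.0 --to 0.8.0` selects releases in (0.7.0, 0.8.0].
versions = ["0.7.0", "0.7.1", "0.7.2", "0.8.0", "0.8.1"]  # hypothetical history

# Plain string comparison is only safe for single-digit components like these;
# real version ordering would need a semver-aware comparison.
selected = [v for v in versions if v > "0.7.0" and v <= "0.8.0"]
print(selected)  # ['0.7.1', '0.7.2', '0.8.0']
```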

gpu daemon

Manage the GPU CLI daemon (background service).

daemon status

Show daemon status.

gpu daemon status

daemon start

Start the daemon.

gpu daemon start

daemon stop

Stop the daemon gracefully.

gpu daemon stop

daemon restart

Restart the daemon.

gpu daemon restart

daemon logs

View daemon logs.

gpu daemon logs

| Flag | Description |
| --- | --- |
| --follow, -f | Follow log output (like tail -f) |
| --tail, -n <N> | Show last N lines |
| --all, -a | Show all rotated logs |

Global Flags

These flags work with all commands:

| Flag | Description |
| --- | --- |
| -v, --verbose | Increase logging verbosity (-v, -vv, -vvv) |
| -q, --quiet | Minimal output (command output only) |
| --progress-style <STYLE> | Progress display: panel, pipeline, minimal, verbose |
| --help-all | Show all commands including hidden ones |
| --no-auto-update | Disable auto-update check |
