Cloud GPUs are easy to rent but painful to manage. We fixed the loop with GPU-CLI: a single tool that handles SSH keys, smart syncing, and tunnels automatically. It feels like the hardware is under your desk, but it lives in the cloud.
We built GPU-CLI because we were tired of the "Remote Tax."
Getting a cloud GPU is easy now (thanks to providers like RunPod), but the workflow is still painful: you manage SSH keys, scp files back and forth, sync environments by hand, and - worst of all - you pay the "ADHD tax" when you forget to shut down a pod for three days.
We are fixing the loop.
Why we built this (and why you need it):
No More Surprise Bills: This is our killer feature. Auto-Stop detects when your process goes idle and terminates the pod automatically. Walk away without anxiety.
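To make the idea concrete, here is a minimal sketch of how idle detection like this can work. The threshold, window size, and function name are illustrative assumptions, not GPU-CLI's shipped implementation:

```python
# Illustrative sketch of Auto-Stop's idle detection: sample GPU
# utilization periodically and stop the pod once an entire window of
# samples reads as idle. Threshold/window values are assumptions.

IDLE_UTIL_PCT = 5     # below this utilization %, a sample counts as idle
WINDOW_SAMPLES = 6    # consecutive idle samples required before stopping

def should_stop(samples, threshold=IDLE_UTIL_PCT, window=WINDOW_SAMPLES):
    """Return True when the last `window` utilization readings (0-100)
    are all below `threshold`."""
    if len(samples) < window:
        return False
    return all(u < threshold for u in samples[-window:])
```

In practice a tool like this would feed the function from periodic `nvidia-smi`-style utilization readings and call the provider's stop API once it returns True.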
The "Localhost" Experience: We handle the region fallback, GPU selection, and tunnels. It feels like the hardware is under your desk, but it lives in the cloud.
Smart Sync: Your local code and remote environment stay in lockstep automatically.
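At its core, a sync like this reduces to diffing content digests between the two trees. A hedged sketch of that step (`plan_sync` and the digest-map shape are illustrative, not GPU-CLI's actual protocol):

```python
def plan_sync(local, remote):
    """Given {path: content_digest} maps for the local and remote trees,
    return (to_upload, to_delete) so the remote ends up matching local.
    Illustrative only: a real sync would also watch the filesystem for
    changes and batch the actual transfers."""
    to_upload = sorted(p for p, d in local.items() if remote.get(p) != d)
    to_delete = sorted(p for p in remote if p not in local)
    return to_upload, to_delete
```

Run continuously against fresh digest snapshots, this is enough to keep two trees in lockstep; tools like rsync apply the same compare-then-transfer idea.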
One Tool, Many Camps.
We support many workflows, but we are focused on these three for our Beta launch:
1. The ComfyUI Creators
The Command: gpu comfyui
The Upgrade: Stops you from juggling IP addresses. We boot the GPU and tunnel the UI securely to your localhost. No public internet exposure, just instant creation.
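Conceptually, the tunnel is just a relay: a listener on your localhost forwards bytes to the remote UI port over an encrypted connection, so nothing is exposed to the public internet. A toy sketch of that relay in plain TCP (the function names and single-connection limit are illustrative; a real tunnel would run inside an SSH session):

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until EOF, then close both ends."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.close()
            except OSError:
                pass

def tunnel_once(local_port, remote_host, remote_port, ready=None):
    """Accept one connection on localhost:local_port and relay it to
    remote_host:remote_port. Illustrative single-connection sketch."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", local_port))
    srv.listen(1)
    if ready is not None:
        ready.set()
    client, _ = srv.accept()
    srv.close()
    upstream = socket.create_connection((remote_host, remote_port))
    threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()
    _pipe(client, upstream)
```

The real thing multiplexes many connections and rides the encrypted SSH channel, but the data flow is the same: local socket in, remote socket out.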
2. The Open-Source Explorers (Hugging Face / LLMs)
The Command: gpu llm
The Upgrade: Want to test GLM 4.7 Flash or DeepSeek? We run a guided setup (Ollama/vLLM) on a high-end remote GPU so you can test quickly without melting your laptop.
3. The Custom Builders
The Command: gpu run
The Upgrade: For when you need to go your own way. Skip the manual setup and run any Python script remotely. No environment headaches - just raw compute for your custom jobs.