## It understands your project first

- Auto-detection: infer workload and GPU requirements from repo context.
- Reliable fallback: optional LLM intelligence plus a heuristic fallback.
- Repeatable envs: snapshot and resume without redoing setup.
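A minimal sketch of what heuristic workload detection could look like, assuming a simple package-to-workload mapping — the names and mapping here are illustrative assumptions, not gpu-lol's actual implementation:

```python
# Hypothetical heuristic fallback: scan a requirements.txt body for
# known ML packages and map them to a coarse workload class.
ML_HINTS = {
    "torch": "training",
    "transformers": "llm",
    "vllm": "llm-inference",
    "diffusers": "image-generation",
}

def detect_workload(requirements_text: str) -> str:
    """Return a coarse workload label from a requirements.txt body."""
    found = set()
    for line in requirements_text.splitlines():
        # Strip version pins like "torch==2.3" or "transformers>=4.40".
        pkg = line.split("==")[0].split(">=")[0].strip().lower()
        if pkg in ML_HINTS:
            found.add(ML_HINTS[pkg])
    # Prefer the most specific hint; default to a generic GPU workload.
    for label in ("llm-inference", "llm", "image-generation", "training"):
        if label in found:
            return label
    return "generic-gpu"

print(detect_workload("numpy\ntorch==2.3\ntransformers>=4.40\n"))  # → llm
```

An LLM-backed analyzer can refine these signals; the point of the heuristic path is that detection still works when it is unavailable.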
## For AI builders who need rented compute now

`gpu-lol` is your AI compute-rental launch layer. It handles detection, GPU targeting, provider routing, and setup so your team can ship instead of provisioning.
## Why AI teams use gpu-lol
### Flow

1. Workload signals, packages, model hints, and existing environment specs are evaluated first.
2. A GPU class, provider route, and launch profile are selected based on fit and cost.
3. The cluster comes up, health checks run, and your environment is ready for immediate work.
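The three steps above can be sketched as a small pipeline. Everything here is a hypothetical skeleton with stubbed return values — none of these function names or thresholds come from gpu-lol itself:

```python
# Illustrative three-step launch pipeline; all names are assumptions.
def analyze(repo_path: str) -> dict:
    # Step 1: gather workload signals, packages, model hints, env specs.
    # Stubbed result standing in for real repo analysis.
    return {"workload": "llm", "min_vram_gb": 48}

def select_target(signals: dict) -> dict:
    # Step 2: pick a GPU class, provider route, and launch profile
    # that fit the signals at the lowest cost (toy VRAM cutoff).
    gpu = "A40" if signals["min_vram_gb"] <= 48 else "A100-SXM4"
    return {"gpu": gpu, "provider": "runpod", "profile": "default"}

def launch(target: dict) -> str:
    # Step 3: bring the cluster up, run checks, hand over a ready env.
    return f"ready: {target['gpu']} via {target['provider']}"

print(launch(select_target(analyze("."))))  # → ready: A40 via runpod
```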
## Command-first

```shell
gpu-lol up
gpu-lol up --dry-run
gpu-lol up --yes
gpu-lol up --gpu A40
gpu-lol up --gpus 4
gpu-lol up --detach
gpu-lol up --detach --watch
gpu-lol up --stop-after 4
gpu-lol up --template porpoise
gpu-lol up --assets /workspace/models:/root/.cache/huggingface/hub
```
## Multi-provider compute rental

Run launches across multiple providers without rewriting your workflow each time capacity shifts.

```
any_of: runpod | vast | lambda
```
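One plausible reading of `any_of` routing is try-in-order with fallthrough on missing capacity. This sketch is an assumption about the behavior, with a stubbed availability check in place of real provider API calls:

```python
# Hypothetical "any_of" routing: try providers in order, fall through
# when one has no capacity for the requested GPU class.
PROVIDERS = ["runpod", "vast", "lambda"]

def has_capacity(provider: str, gpu: str) -> bool:
    # Stub: a real router would query each provider's API here.
    return provider == "vast"  # pretend only vast has this GPU right now

def route(gpu: str, providers=PROVIDERS) -> str:
    for provider in providers:
        if has_capacity(provider, gpu):
            return provider
    raise RuntimeError(f"no provider has capacity for {gpu}")

print(route("A40"))  # capacity shifted away from runpod → "vast"
```

The workflow stays the same either way: the provider list absorbs the capacity shift instead of your launch commands.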
Estimated hourly pricing by GPU class for fast budget-aware decisions.
| GPU | VRAM | Approx $/hr |
|---|---|---|
| RTX3090 | 24GB | $0.22 |
| RTX4090 | 24GB | $0.34 |
| A40 | 48GB | $0.40 |
| A6000 | 48GB | $0.50 |
| A100-SXM4 | 80GB | $1.19 |
| H100-SXM | 80GB | $2.49 |
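The table above makes budget math straightforward. A small helper (illustrative, not part of gpu-lol) turns the approximate rates into a run estimate:

```python
# Approximate hourly rates copied from the pricing table above.
HOURLY_USD = {
    "RTX3090": 0.22, "RTX4090": 0.34, "A40": 0.40,
    "A6000": 0.50, "A100-SXM4": 1.19, "H100-SXM": 2.49,
}

def estimate_cost(gpu: str, count: int, hours: float) -> float:
    """Approximate total spend for a multi-GPU run."""
    return round(HOURLY_USD[gpu] * count * hours, 2)

# e.g. a 4x A40 run capped at 4 hours (as with `--gpus 4 --stop-after 4`)
print(estimate_cost("A40", 4, 4))  # 0.40 * 4 * 4 = 6.4
```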
## Claude Code Native

`gpu-lol` includes a skill-oriented structure and a scriptable CLI, so your analyze → launch → validate → snapshot loops stay consistent across sessions.

Built for repeatable workflows, not one-off terminal archaeology.
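Because the CLI is scriptable, repeated launches can be composed programmatically. This helper is a sketch that only uses flags shown in the command list above; the function name and defaults are assumptions:

```python
import shlex

def up_command(gpu=None, gpus=None, stop_after=None,
               dry_run=False, yes=False):
    """Compose a `gpu-lol up` argv so scripted launches stay consistent."""
    argv = ["gpu-lol", "up"]
    if dry_run:
        argv.append("--dry-run")
    if yes:
        argv.append("--yes")
    if gpu:
        argv += ["--gpu", gpu]
    if gpus:
        argv += ["--gpus", str(gpus)]
    if stop_after:
        argv += ["--stop-after", str(stop_after)]
    return argv

# Preview first, then launch the same profile non-interactively:
print(shlex.join(up_command(gpu="A40", gpus=4, dry_run=True)))
print(shlex.join(up_command(gpu="A40", gpus=4, stop_after=4, yes=True)))
```

Pass the resulting argv to `subprocess.run` to execute; keeping composition separate from execution makes the same launch profile easy to replay across sessions.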
## Start in under 2 minutes

If you can run three commands, you can launch production-grade AI compute.

```shell
pip install git+https://github.com/miike-lol/gpu-lol
gpu-lol secrets init
gpu-lol up
```