gpu-lol

For AI builders who need rented compute now

Rent AI compute. Launch in minutes. Keep moving.

`gpu-lol` is your AI compute-rental launch layer. It handles detection, GPU targeting, provider routing, and setup so your team can ship instead of provisioning.

No manual compute shopping · Cost-aware routing · Claude Code ready

Why AI teams use gpu-lol

Less setup. Faster experiments. Cleaner operations.

It understands your project first

  • Auto-detection: infer workload and GPU requirements from repo context.
  • Reliable fallback: optional LLM intelligence plus heuristic fallback.
  • Repeatable envs: snapshot and resume without redoing setup.

It keeps speed and cost balanced

  • Smart routing: choose viable capacity across RunPod, Vast.ai, Lambda.
  • Ops controls: validate, inspect, and manage lifecycle quickly.
  • Safer defaults: dry-runs, confirmations, encrypted secrets setup.

Flow

How launch orchestration behaves in practice

Step 1

Read repo context

Workload signals, packages, model hints, and existing env specs are evaluated first.

Step 2

Target the right compute

GPU class, provider route, and launch profile are selected based on fit and cost.

Step 3

Launch and validate

The cluster comes up, checks run, and your environment is ready for immediate work.
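Step 1's repo-context read can be pictured with a toy heuristic. This is a hedged sketch, not gpu-lol's actual detection logic; `detect_workload` is a hypothetical stand-in (the real flow, per this page, also uses LLM intelligence with heuristic fallback):

```shell
# Hypothetical stand-in for Step 1: infer a workload hint from repo files.
detect_workload() {
  dir="$1"
  # assumption: a torch requirement suggests a PyTorch training workload
  if [ -f "$dir/requirements.txt" ] && grep -qi '^torch' "$dir/requirements.txt"; then
    echo "pytorch-training"
  else
    echo "unknown"
  fi
}

# Demo against a throwaway directory
tmp=$(mktemp -d)
echo "torch==2.3.0" > "$tmp/requirements.txt"
detect_workload "$tmp"   # prints "pytorch-training"
```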

Command-first

Everything important is one command away

gpu-lol up                        # detect, route, and launch with defaults
gpu-lol up --dry-run              # preview the plan without renting anything
gpu-lol up --yes                  # skip the confirmation prompt
gpu-lol up --gpu A40              # target a specific GPU class
gpu-lol up --gpus 4               # request four GPUs
gpu-lol up --detach               # launch in the background
gpu-lol up --detach --watch       # detach, then stream status
gpu-lol up --stop-after 4        # auto-stop after a set limit
gpu-lol up --template porpoise    # launch from a named template
gpu-lol up --assets /workspace/models:/root/.cache/huggingface/hub   # map assets into the instance

Multi-provider compute rental

AI-ready compute, sourced where it makes sense

No lock-in to one GPU vendor

Run launches across multiple providers without rewriting your workflow each time capacity shifts.

any_of: runpod | vast | lambda
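The `any_of` routing above can be sketched as a simple ordered fallback. A minimal sketch, not gpu-lol's real router; `has_capacity` is a hypothetical stand-in for an actual provider capacity check:

```shell
# Hypothetical sketch of any_of routing: try each provider in order and
# take the first one with capacity.
has_capacity() {
  # assumption for this demo: only lambda currently has capacity
  [ "$1" = "lambda" ]
}

route_provider() {
  for p in runpod vast lambda; do
    if has_capacity "$p"; then
      echo "$p"
      return 0
    fi
  done
  return 1   # no provider had capacity
}

route_provider   # prints "lambda" under this demo's assumption
```

In practice the capacity check would hit each provider's API, but the shape of the loop is the point: the workflow stays the same no matter which provider wins.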

Live cost profile

Estimated hourly pricing by GPU class for fast budget-aware decisions.

GPU        VRAM   Approx $/hr
RTX 3090   24GB   $0.22
RTX 4090   24GB   $0.34
A40        48GB   $0.40
A6000      48GB   $0.50
A100-SXM4  80GB   $1.19
H100-SXM   80GB   $2.49
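As a quick worked example of budget-aware math, assuming the table's estimates hold: four A40s at roughly $0.40/hr for an 8-hour run come to about $12.80.

```shell
# Back-of-the-envelope cost: gpus * rate * hours, using the A40 estimate above
awk 'BEGIN { printf "%.2f\n", 4 * 0.40 * 8 }'   # prints 12.80
```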

Claude Code Native

Built to feel natural inside Claude Code

`gpu-lol` includes skill-oriented structure and a scriptable CLI, so your analyze -> launch -> validate -> snapshot loops stay consistent across sessions.

Built for repeatable workflows, not one-off terminal archaeology.

Start in under 2 minutes

Install, connect keys, rent compute

If you can run three commands, you can launch production-grade AI compute.

pip install git+https://github.com/miike-lol/gpu-lol
gpu-lol secrets init
gpu-lol up