Is my PC powerful enough?
THE TRUTH ABOUT MODERN GPUs AS NODES
A node must be able to:
- Run 2+ simultaneous agents or customers
- Without choking on VRAM limitations
- Without starving the Grid of compute power
Otherwise, it’s a liability.
WHICH GPUS ARE “APPROVED”?
These GPUs are STRONG enough to be REAL grid contributors.
Tier 1 — Fully Qualified (Excellent Nodes)
These give 2–10+ customers per node depending on model sizes.
- NVIDIA
- RTX 3090 / 3090 Ti (24GB)
- RTX 4090 (24GB)
- RTX 5000 Ada (32GB)
- Tesla V100 16/32GB
- Tesla A100 40/80GB
- L40S (48GB)
- RTX 6000 Ada (48GB)
Why approved:
- High VRAM
- Fast FP16/FP32
- Handle LLMs easily
- Can host multiple agents/customers
- Strong for training, inference, SD
- Stable drivers
Tier 2 — Conditionally Approved (Good Nodes with Limits)
These handle 1–3 customers, depending on workload:
- RTX 3080 / 3080 Ti (10–12GB)
- RTX 4080 / 4080 SUPER (16GB)
- RTX 4070 Ti (12GB)
- RTX 3070 Ti (8GB)
- RTX 3060 12GB
- Tesla P100 16GB
- CMP 100-210 (16GB HBM2)
Why conditionally approved:
- Enough VRAM for 7B–13B inference
- Solid performance
- Great for distributed grid work
- Limited for many parallel customers
- Some (like the 10GB 3080) can’t hold even a 7B model at FP16 without quantization
These are common among PC gamers — great marketing funnel.
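The VRAM cutoffs in this tier come down to simple arithmetic: inference needs roughly parameter-count × bytes-per-parameter for the weights, plus overhead for the KV cache and activations. A minimal sketch (the 20% overhead factor is an assumed ballpark, not a measured figure):

```python
def min_vram_gb(params_billions: float, bytes_per_param: float = 2.0,
                overhead: float = 1.2) -> float:
    """Rough VRAM floor for inference: weights at FP16 (2 bytes/param)
    plus ~20% assumed overhead for KV cache and activations."""
    return params_billions * bytes_per_param * overhead

# 13B at FP16: 13 * 2 * 1.2 ~= 31 GB -- far beyond a 10GB RTX 3080.
print(round(min_vram_gb(13), 1))
# 13B at 4-bit (0.5 bytes/param): ~7.8 GB -- fits on 8-12GB cards.
print(round(min_vram_gb(13, 0.5), 1))
```

This is why a 24GB card (3090/4090) comfortably serves 7B–13B models while a 10GB card needs quantization even for 7B.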
Tier 3 — Limited Contribution (Entry-Level, One-Customer Nodes)
These can help but won’t add grid strength.
Still acceptable so we don’t exclude too many people.
- RTX 2060 (6GB)
- RTX 2070 / 2070 Super (8GB)
- RTX 2080 (8GB)
- GTX 1080 Ti (11GB)
- RTX 3050 (8GB)
Why limited:
- Small VRAM
- Weak or no tensor-core acceleration (the GTX 1080 Ti has none; Turing’s are first-gen)
- Can only support one lightweight agent or one customer
These should be allowed but labeled as “ENTRY NODES.”
Tier 4 — NOT APPROVED (They drain the Grid)
These GPUs are NOT allowed because they consume more than they provide.
NOT ALLOWED:
- GTX 1050 / 1050 Ti
- GTX 1650
- GTX 1060 / 1070
- AMD GPUs (all models) — unstable for LLM workloads
- Intel ARC — drivers are not mature
- GPUs with 4GB or less VRAM
- Mobile GPUs (laptops)
- MacBooks (Metal backend unusable for now)
Why:
- Too little VRAM
- Too slow for inference
- Poor driver support
- No tensor cores
- No consistent performance
These users become consumers, not contributors.
Baseline Requirement for The Grid
Minimum specs to become a “Contributor Node”:
- 8GB VRAM minimum
- Tensor Core support required (Turing or newer)
- PCIe x8/x16 preferred
- Stable Nvidia drivers
This ensures:
- Every contributor actually contributes
- No weak nodes choke Ryker
- Consistent performance for officers, agents, customers
- Node credit system remains fair
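The baseline above maps directly to a small admission check. A hedged sketch, assuming a simple `GpuSpec` record and treating "Tensor Core support (Turing or newer)" as an architecture whitelist; the names here are illustrative, not a real Grid API:

```python
from dataclasses import dataclass

# Architectures with tensor cores (Turing or newer), per the baseline.
TENSOR_CORE_ARCHES = {"turing", "ampere", "ada", "hopper"}

@dataclass
class GpuSpec:
    vendor: str        # "nvidia", "amd", "intel"
    arch: str          # e.g. "ampere", "pascal"
    vram_gb: int
    is_mobile: bool = False

def is_contributor_node(gpu: GpuSpec) -> bool:
    """Baseline check: NVIDIA only, 8GB+ VRAM, tensor cores
    (Turing or newer), no laptop GPUs."""
    return (gpu.vendor == "nvidia"
            and not gpu.is_mobile
            and gpu.vram_gb >= 8
            and gpu.arch in TENSOR_CORE_ARCHES)

print(is_contributor_node(GpuSpec("nvidia", "ampere", 24)))  # RTX 3090
print(is_contributor_node(GpuSpec("nvidia", "pascal", 11)))  # GTX 1080 Ti
```

Note that a strict reading of the baseline excludes the Pascal-era GTX 1080 Ti, which is consistent with its entry-node-only status in Tier 3.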
HOW MANY CUSTOMERS CAN EACH GPU HANDLE?
- 3090 / 4090 / V100 / L40S
(4–10 customers per node)
- 3080, 4080, 4070 Ti
(2–4 customers per node)
- 3060 12GB, P100, CMP 100-210
(1–2 customers per node)
- Entry GPUs (2060 / 3050 / 2070)
(1 customer per node max)
- Not approved
(0 customers; drains resources)
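These capacity bands could be encoded as a lookup table for scheduling. A minimal sketch using the figures above; the GPU-to-band mapping is illustrative and not exhaustive:

```python
# Customer-capacity bands from the table above: name -> (min, max).
CAPACITY = {
    "RTX 3090": (4, 10),
    "RTX 4090": (4, 10),
    "L40S": (4, 10),
    "RTX 3080": (2, 4),
    "RTX 4070 Ti": (2, 4),
    "RTX 3060 12GB": (1, 2),
    "P100": (1, 2),
    "RTX 2060": (1, 1),
    "RTX 3050": (1, 1),
}

def max_customers(gpu_name: str) -> int:
    """Upper bound of the capacity band; 0 for unapproved GPUs."""
    _lo, hi = CAPACITY.get(gpu_name, (0, 0))
    return hi

print(max_customers("RTX 4090"))   # 10
print(max_customers("GTX 1060"))   # 0 -- not approved, drains resources
```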