Choose the right models to maximize your earnings.

1. Check Active Models: see which models are currently available on the network.
2. Identify High-Demand Models: find out which of those models are in high demand.
3. Test Your Hardware: run `dkn-compute-launcher measure` to test them on your device.
4. Check Performance: compare the measured throughput against the benchmarks below.
5. Start Running the Model: use `dkn-compute-launcher settings` to run high-demand models that your hardware can support.

Use the `dkn-compute-launcher measure` command to ensure the models you choose meet the performance requirements (a usage example follows the list below). The model names for use with the launcher are:
- `gemma3:4b`
- `gemma3:12b`
- `gemma3:27b`
- `llama3.3:70b-instruct`
- `llama3.1:8b-instruct`
- `llama3.2:1b-instruct`
- `mistral-nemo:12b`
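
For example, a minimal run of the benchmark step, assuming the launcher is already installed on your PATH and the models you want to test are served by a local Ollama instance:

```sh
# Benchmark the locally hosted Ollama models on this machine.
# The launcher reports an estimated tokens-per-second (TPS) figure
# that you can compare against the reference tables below.
dkn-compute-launcher measure
```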
API-based models do not need the `measure` command, as their performance depends on the API provider, not your local hardware. You only need to measure locally hosted Ollama models.

Gemma3 4B

| Configuration | Estimated TPS |
|---|---|
| A10 (24GB) | ~65 TPS |
| A5000 | ~60 TPS |
| A40 | ~60 TPS |
| A40 (48GB) | ~60 TPS |
| A5000 (24GB) | ~60 TPS |

Gemma3 12B

| Configuration | Estimated TPS |
|---|---|
| A100 SXM (40GB) | ~55 TPS |
| A100 SXM (80GB) | ~55 TPS |
| A100 PCIe (80GB) | ~50 TPS |
| A100 SXM4 | ~50 TPS |
| A100 80GB | ~50 TPS |

Gemma3 27B

| Configuration | Estimated TPS |
|---|---|
| 2× H100 SXM | ~55-60 TPS |
| 2× H100 NVLink | ~55 TPS |

Llama 3.1 8B Instruct

| Configuration | Estimated TPS |
|---|---|
| A100 SXM (80GB) | ~50 TPS |
| A100 SXM (40GB) | ~50 TPS |
| A100 SXM4 | ~50 TPS |
| A100 80GB | ~50 TPS |
| A100 PCIe (80GB) | ~45 TPS |

Llama 3.2 1B Instruct

| Configuration | Estimated TPS |
|---|---|
| A10 (24GB) | ~60 TPS |
| A40 (48GB) | ~55 TPS |
| A40 | ~55 TPS |

Llama 3.3 70B Instruct

| Configuration | Estimated TPS |
|---|---|
| 8× H100 SXM | ~110 TPS |
| 4× H100 SXM | ~80 TPS |
| 4× H100 NVLink | ~80 TPS |
| 4× H100 SXM (80GB) | ~80 TPS |

Mistral Nemo 12B Instruct

| Configuration | Estimated TPS |
|---|---|
| A100 SXM4 | ~130 TPS |
| A100 SXM (80GB) | ~130 TPS |
| A100 SXM (40GB) | ~125 TPS |
| A100 80GB | ~125 TPS |

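Once your measured TPS is close to the figures above for a model, configure the node to serve it. A minimal sketch, assuming models are selected through the launcher's interactive `settings` menu and the node is then started with a `start` subcommand (confirm the exact subcommands in the launcher's help output):

```sh
# Select the model(s) to serve, e.g. one of the launcher model names
# listed above such as gemma3:4b or mistral-nemo:12b.
dkn-compute-launcher settings

# Start (or restart) the node so it serves the newly selected model(s).
# The `start` subcommand is assumed here; check the launcher's help output.
dkn-compute-launcher start
```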