How to Earn More Points?
Points are updated in real time on the Dria Edge AI Dashboard.

1. Check Active Models

Visit dria.co/edge-ai and check the models listed under “Tasks Completed by Models (7 Days)” and the live “Data Generation Logs”. The currently supported models are also listed below.

2. Identify High-Demand Models

Observe which models have been receiving the most tasks recently in these two sections.

3. Test Your Hardware

For locally hosted models, run `dkn-compute-launcher measure` to benchmark them on your device.

4. Check Performance

If a model achieves an Eval TPS score higher than 15, your device can likely run it effectively for the network.

5. Start Running the Model

Configure your settings with `dkn-compute-launcher settings` to run high-demand models that your hardware can support.

Supported Models
Here is the list of currently supported models you can run. All models are served locally via Ollama and must be tested on your hardware with the `dkn-compute-launcher measure` command to ensure they meet the performance requirements.
The model names for use with the launcher are:
- gemma3:4b
- gemma3:12b
- gemma3:27b
- llama3.3:70b-instruct
- llama3.1:8b-instruct
- llama3.2:1b-instruct
- mistral-nemo:12b
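If you script your node setup, it can help to validate a chosen model name against this list before launching. A small sketch — the set simply mirrors the identifiers above:

```python
# Launcher identifiers for the currently supported local models.
SUPPORTED_MODELS = {
    "gemma3:4b", "gemma3:12b", "gemma3:27b",
    "llama3.3:70b-instruct", "llama3.1:8b-instruct",
    "llama3.2:1b-instruct", "mistral-nemo:12b",
}

def is_supported(model: str) -> bool:
    """Return True if `model` is a currently supported launcher name."""
    return model in SUPPORTED_MODELS

print(is_supported("gemma3:12b"))  # True
print(is_supported("gemma2:9b"))   # False: not in the supported list
```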
Changing Models
Run `dkn-compute-launcher settings` and select models from the menu.
What is TPS?
TPS stands for Tokens Per Second. It’s a measure of how fast the AI model can process text. A higher TPS generally means better performance. For Dria, the Eval TPS measured by the launcher is the key metric for local models.

Hardware Performance Benchmarks

Below are performance benchmarks for running the supported Ollama models on various cloud GPU configurations.

Gemma3 4B
| Configuration | Estimated TPS |
|---|---|
| A10 (24GB) | ~65 TPS |
| A5000 (24GB) | ~60 TPS |
| A40 (48GB) | ~60 TPS |
Gemma3 12B
| Configuration | Estimated TPS |
|---|---|
| A100 SXM (40GB) | ~55 TPS |
| A100 SXM (80GB) | ~55 TPS |
| A100 PCIe (80GB) | ~50 TPS |
| A100 SXM4 | ~50 TPS |
| A100 80GB | ~50 TPS |
Gemma3 27B
| Configuration | Estimated TPS |
|---|---|
| 2× H100 SXM | ~55-60 TPS |
| 2× H100 NVLink | ~55 TPS |
Llama 3.1 8B Instruct
| Configuration | Estimated TPS |
|---|---|
| A100 SXM (80GB) | ~50 TPS |
| A100 SXM (40GB) | ~50 TPS |
| A100 SXM4 | ~50 TPS |
| A100 80GB | ~50 TPS |
| A100 PCIe (80GB) | ~45 TPS |
Llama 3.2 1B Instruct
| Configuration | Estimated TPS |
|---|---|
| A10 (24GB) | ~60 TPS |
| A40 (48GB) | ~55 TPS |
Llama 3.3 70B Instruct
| Configuration | Estimated TPS |
|---|---|
| 8× H100 SXM | ~110 TPS |
| 4× H100 SXM (80GB) | ~80 TPS |
| 4× H100 NVLink | ~80 TPS |
Mistral Nemo 12B Instruct
| Configuration | Estimated TPS |
|---|---|
| A100 SXM4 | ~130 TPS |
| A100 SXM (80GB) | ~130 TPS |
| A100 SXM (40GB) | ~125 TPS |
| A100 80GB | ~125 TPS |
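To compare models on a given GPU programmatically, the table estimates can be collected into a small mapping. A sketch using the A100 SXM (80GB) rows from the tables above (this is not part of the launcher, just a way to read the data):

```python
# Approximate Eval TPS on an A100 SXM (80GB), copied from the
# benchmark tables above.
benchmarks_a100_80gb = {
    "gemma3:12b": 55,
    "llama3.1:8b-instruct": 50,
    "mistral-nemo:12b": 130,
}

# Highest-throughput model on this configuration:
best = max(benchmarks_a100_80gb, key=benchmarks_a100_80gb.get)
print(best)  # mistral-nemo:12b
```

All of these estimates are well above the 15 Eval TPS threshold, so on this class of hardware the choice is driven by which models are in high demand rather than by raw capability.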
