Selecting Models
Choose the right models to maximize your earnings
How to Earn More Points?
Points are updated in real time on the Dria Edge AI Dashboard.
Check Active Models
Visit dria.co/edge-ai and check all the models listed under “Tasks Completed by Models (7 Days)” and the live “Data Generation Logs”.
Identify High-Demand Models
From these two sections, note which models have received the most tasks recently.
Test Your Hardware
Run dkn-compute-launcher measure to test Ollama models on your device.
Check Performance
If a model achieves an Eval TPS score higher than 15, your device can likely run that model effectively for the network.
Start Running the Model
Configure your settings with dkn-compute-launcher settings to run high-demand models that your hardware can support.
API-based models (such as those from Gemini, OpenAI, or OpenRouter) do not require local measurement with the measure command, since their performance depends on the API provider, not your local hardware. You only need to measure locally hosted Ollama models.
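The rule above can be sketched as a small check; the helper name is hypothetical, and the provider list is the one mentioned in this guide:

```python
# Sketch of the rule above: only locally hosted Ollama models need a local
# `measure` run; API-based models depend on the provider, not your hardware.
# The function name is illustrative, not part of the launcher.

API_PROVIDERS = {"gemini", "openai", "openrouter"}

def needs_local_measurement(provider: str) -> bool:
    """True when the model runs on your own hardware via Ollama."""
    return provider.lower() not in API_PROVIDERS

print(needs_local_measurement("ollama"))      # True
print(needs_local_measurement("OpenRouter"))  # False
```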
Changing Models
Run dkn-compute-launcher settings and select models from the menu.
What is TPS?
TPS stands for Tokens Per Second. It measures how fast an AI model generates text: a higher TPS means better performance. For Dria, the Eval TPS measured by the launcher is the key metric for local models.
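The arithmetic behind a TPS figure is simple; here is a minimal sketch with hypothetical numbers, using the 15-TPS threshold this guide recommends:

```python
# Eval TPS = tokens generated during evaluation / seconds elapsed.
# The token count and timing below are hypothetical example values.

def eval_tps(tokens_generated: int, seconds_elapsed: float) -> float:
    """Tokens-per-second for a single generation run."""
    return tokens_generated / seconds_elapsed

def can_serve(tps: float, threshold: float = 15.0) -> bool:
    """Rule of thumb from this guide: Eval TPS above 15 is viable."""
    return tps > threshold

# Example: 480 tokens generated in 24 seconds -> 20.0 TPS, above the threshold.
print(eval_tps(480, 24.0))             # 20.0
print(can_serve(eval_tps(480, 24.0)))  # True
```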
Hardware Performance Benchmarks
Below are model benchmarks for various hardware configurations. We’ve listed the Ollama models that achieve an Eval TPS higher than 15 on each configuration.
In addition to these locally hosted models, you can run models from the API providers (Gemini, OpenRouter, OpenAI) regardless of your local specs, though you will need valid API keys.
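The benchmark listings apply the same cutoff described above; a minimal sketch of that filtering, with model names and TPS values as hypothetical placeholders:

```python
# Filter hypothetical measurement results down to viable models,
# using the Eval TPS > 15 cutoff from this guide.

benchmarks = {
    "model-a": 22.4,  # placeholder name and value
    "model-b": 9.1,
    "model-c": 16.8,
}

viable = [name for name, tps in benchmarks.items() if tps > 15]
print(viable)  # ['model-a', 'model-c']
```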