Introduction
The Dria Compute Launcher is a simple and efficient way to set up and run the Dria Compute Node. The launcher automatically handles environment setup, model selection, and binary management, making it easy to start the node with minimal configuration.
Features
- Settings Menu: Change various settings such as your wallet, ports, and API keys, all without leaving the launcher.
- Model Selection: Choose your models with a nice menu.
- Model Benchmarking: Measure TPS for Ollama models to see if your machine can handle them.
- Automatic Updates: The launcher automatically updates a running compute node when an update is available, and restarts it.
- Version Control: You can select & run a specific compute node release.
- Auto-detect Ollama: The launcher will check for Ollama if you are using one of its models, and start its server if required.
Installation
- Linux / macOS
- Windows
- Build from Source
Open a terminal and run the following command; it will ask for your user password:
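A minimal sketch of that step; the script URL below is an assumption, so check Dria's docs for the canonical install address:

```sh
# download and run the launcher install script
curl -fsSL https://dria.co/launcher | bash
```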
Some Apple devices need you to bypass macOS’s security warning. If you see “macOS cannot verify that this app is free from malware” when using the launcher, use the following command:
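This is the standard macOS quarantine-flag removal; the binary path below is an assumption, so adjust it to wherever your launcher is located:

```sh
# remove the quarantine attribute so macOS will run the binary
xattr -d com.apple.quarantine ./dkn-compute-launcher
```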
Usage
Double-click the executable or run it via the command line. Use the `help` command to see available options:
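For example, assuming the binary is named `dkn-compute-launcher` as in Dria's releases:

```sh
dkn-compute-launcher help
```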
Each command has its own help message as well; you can view it with:
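A sketch, assuming subcommands follow the usual `--help` convention:

```sh
dkn-compute-launcher <command> --help
```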
Model Providers
The purpose of running a Dria Compute Node is to serve LLMs to the network. These models can either be locally-hosted models via Ollama, or API-based models such as Gemini and OpenAI.
API-Based Models
To serve API-based models (OpenAI, Gemini, OpenRouter), you will need to get their API keys.
Local Models with Ollama
To serve a locally-hosted model with Ollama, you naturally need Ollama installed, and you must make sure that your machine can handle your chosen models. See the “Measuring Local Models” section below for the command-line tools that help you measure TPS.
Starting a Node
Start your node with the `start` command:
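A typical invocation (binary name assumed as above):

```sh
dkn-compute-launcher start
```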
When you run it for the first time, the launcher will prompt you to fill in node information, such as your private key, chosen models, and their respective provider information.
Referrals Program
You can earn $DRIA points if you refer other users! When you refer a user, you earn a portion of each point they earn as well. To get a referral code, enter someone else’s referral code, and so on, use the command shown below.

Note that each referral code only has 10 uses! Once you have referred 10 users, your code will no longer work.
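The subcommand name below is an assumption based on the feature's name; check `help` for the exact spelling:

```sh
dkn-compute-launcher referrals
```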
Changing Settings
You can use the `settings` command (see the example after this list) to change anything about your node:
- Wallet: change your secret key
- Port: edit your listen-address port, defaults to `4001`
- Models: view all models & edit the models that you want to serve
- Ollama: edit host & port of the Ollama server
- API Keys: change API keys for providers
- Log Levels: change log levels for modules within the compute node & launcher
You can always exit the process (ungracefully) with CTRL+C on Linux / Windows or CMD+C on macOS, or with ESC on all platforms.
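A typical invocation (binary name assumed as above):

```sh
dkn-compute-launcher settings
```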
Choosing Models
When you select the Models option in the Settings menu, you will be greeted with a list of model providers.
Measuring Local Models
You can test your machine’s performance on locally served Ollama models using the `measure` command:
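For example (binary name assumed as above):

```sh
dkn-compute-launcher measure
```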
Displaying $DRIA Points
Use the `points` command to display how much you have earned!
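For example (binary name assumed as above):

```sh
dkn-compute-launcher points
```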
Update Manually
Using the `update` command, you can check for updates & automatically update your compute node and launcher:
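For example (binary name assumed as above):

```sh
dkn-compute-launcher update
```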
Updates are also applied automatically while the node is running via the `start` command.
Environment Editor
For more advanced users who would like to view the environment file in more detail & as plain text, we provide the `env-editor` command:
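For example (binary name assumed as above):

```sh
dkn-compute-launcher env-editor
```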
If a key appears multiple times in the environment file, the `settings` command will edit the last uncommented key on Save.
Running a Specific Release
Using the `specific` command, you can choose to run a specific release:
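For example; the exact options for choosing a version are not shown here, so consult `specific --help`:

```sh
dkn-compute-launcher specific
```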
The Dria Knowledge Network always considers the latest minor version as the active version; therefore, if the latest is `0.3.x` and you decide to run an older version like `0.2.x`, you will most likely be kept out of the network due to a protocol mismatch.