Documentation Index

Fetch the complete documentation index at: https://node-guide.dria.co/llms.txt

Use this file to discover all available pages before exploring further.

Node Setup

The new dria-node binary runs models natively using llama.cpp, so Ollama is no longer required. Simply run dria-node setup to download and configure your model.
Minimum requirements:
  • RAM: at least ~1 GB for the smallest model (qwen3.5:0.8b), up to ~27 GB for the largest (nemotron:30b-a3b)
  • Disk: 0.5 GB to 24.5 GB for model files depending on model choice
  • Network: Outbound UDP port 4001 (QUIC)
  • OS: macOS (Intel + Apple Silicon), Linux (x86_64, arm64), Windows (x86_64)
See Selecting Models for the full model table with RAM requirements.
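As a rough illustration of matching a model to available RAM, here is a sketch using only the two figures quoted above. The helper function and the table are hypothetical; consult the Selecting Models page for the real, complete numbers.

```python
# Hypothetical RAM table built from the two figures mentioned above;
# the full table lives on the Selecting Models page.
MODEL_RAM_GB = {
    "qwen3.5:0.8b": 1.0,      # smallest model, ~1 GB
    "nemotron:30b-a3b": 27.0,  # largest model, ~27 GB
}

def largest_fitting_model(available_gb: float):
    """Return the biggest model that fits in the given RAM, or None."""
    fitting = [(ram, name) for name, ram in MODEL_RAM_GB.items()
               if ram <= available_gb]
    return max(fitting)[1] if fitting else None
```

For example, a machine with 8 GB free would be steered toward qwen3.5:0.8b, while 32 GB comfortably fits nemotron:30b-a3b.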
The fastest way to install:
  • macOS/Linux: brew install firstbatchxyz/dkn/dria-node or curl -fsSL https://raw.githubusercontent.com/firstbatchxyz/dkn-compute-node/master/install.sh | sh
  • Windows: irm https://raw.githubusercontent.com/firstbatchxyz/dkn-compute-node/master/install.ps1 | iex
See Running a Node for all installation methods.

Running a Node

To check on your node, visit the Dria Edge AI Dashboard and log in with the wallet associated with your node. The dashboard shows your node’s status, recent activity, and earned points.
You can run multiple nodes on the same machine or network, but each node must use a unique private key (wallet). Reusing the same key across nodes causes conflicts with task assignment and rewards.
Your node needs outbound access on UDP port 4001 to connect to the Dria router via QUIC. No inbound ports need to be opened.
dria-node supports GPU acceleration:
  • Apple Metal (macOS) — works automatically on Apple Silicon
  • NVIDIA CUDA — use the CUDA build
  • AMD ROCm 6.x — use the ROCm install script (Linux x86_64 only)
Use --gpu-layers -1 to offload all model layers to GPU.
dria-node checks for new versions on GitHub Releases at startup. Patch bumps (e.g. 0.7.3 → 0.7.4) show a warning. Minor or major version bumps trigger an automatic update to keep your node compatible with the network. You can skip this with --skip-update.
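The update policy above splits on which component of the version changed. A minimal sketch of that decision, assuming standard semantic versioning (the function name is illustrative, not dria-node's actual code):

```python
def update_action(current: str, latest: str) -> str:
    """Return 'none', 'warn' (patch bump), or 'auto-update' (minor/major bump),
    mirroring the policy described in the docs."""
    cur = tuple(int(p) for p in current.split("."))
    new = tuple(int(p) for p in latest.split("."))
    if new <= cur:
        return "none"           # already up to date (or ahead)
    if new[:2] == cur[:2]:
        return "warn"           # only the patch component changed, e.g. 0.7.3 -> 0.7.4
    return "auto-update"        # minor or major bump, e.g. 0.7.3 -> 0.8.0
```

So 0.7.3 → 0.7.4 merely warns, while 0.7.3 → 0.8.0 triggers the automatic update (unless --skip-update is set).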

Earning Points

If your node isn’t earning points, common reasons include:
  1. No tasks — There may be low demand for the model(s) you’re running. Check the dashboard and consider switching to higher-demand models.
  2. Node offline — Make sure dria-node is running and your network allows outbound UDP on port 4001.
  3. Low reputation — New nodes start with a neutral reputation score. Complete tasks consistently to build it up. Failing or timing out on tasks reduces your score.
Refer to the Rewards page for details on the earning mechanism.
Points on the Dria Edge AI Dashboard are updated in near real-time as your node completes tasks.

Using the Network (CLI)

Install the Dria CLI:
npm install -g @dria/cli
dria init
Then run inference:
dria generate -m qwen3.5:9b "explain quantum computing"
See the CLI Guide for full documentation.
The CLI uses USDC credits on the Base network. Top up with:
dria topup --amount 10
This uses the x402 payment protocol with gasless EIP-712 signed transfers. See CLI Guide for details.
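A "gasless EIP-712 signed transfer" means the wallet signs typed data off-chain and a facilitator submits the USDC transfer on-chain, so the payer spends no gas. Below is a sketch of the payload shape following EIP-3009's TransferWithAuthorization schema: the field names and Base's chain id (8453) are standard, but the verifying-contract address is a placeholder and Dria's exact x402 payload may differ.

```python
def transfer_authorization(frm: str, to: str, value: int, nonce: str,
                           valid_after: int = 0, valid_before: int = 2**48):
    """Build EIP-712 typed data for an EIP-3009-style gasless USDC transfer.
    Addresses/values here are illustrative placeholders."""
    return {
        "types": {
            "EIP712Domain": [
                {"name": "name", "type": "string"},
                {"name": "version", "type": "string"},
                {"name": "chainId", "type": "uint256"},
                {"name": "verifyingContract", "type": "address"},
            ],
            "TransferWithAuthorization": [
                {"name": "from", "type": "address"},
                {"name": "to", "type": "address"},
                {"name": "value", "type": "uint256"},
                {"name": "validAfter", "type": "uint256"},
                {"name": "validBefore", "type": "uint256"},
                {"name": "nonce", "type": "bytes32"},
            ],
        },
        "primaryType": "TransferWithAuthorization",
        "domain": {
            "name": "USD Coin", "version": "2",
            "chainId": 8453,  # Base mainnet
            "verifyingContract": "0x0000000000000000000000000000000000000000",  # placeholder
        },
        "message": {"from": frm, "to": to, "value": value,
                    "validAfter": valid_after, "validBefore": valid_before,
                    "nonce": nonce},
    }
```

The wallet signs this structure; the x402 facilitator then relays the transfer on Base.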

Troubleshooting

If the connection drops, the node automatically reconnects with exponential backoff (1s, 2s, 4s, 8s, 16s). If it repeatedly fails:
  1. Check your internet connection
  2. Ensure outbound UDP port 4001 is not blocked by a firewall
  3. Try setting RUST_LOG=debug for more detailed logs
  4. Make sure you’re running the latest version (dria-node auto-updates by default)
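The reconnect schedule above doubles the wait after each failed attempt. A minimal sketch of that pattern (illustrative, not dria-node's actual implementation):

```python
import time

def reconnect_with_backoff(connect, max_attempts: int = 5) -> bool:
    """Retry connect() with exponentially growing delays: 1s, 2s, 4s, 8s, 16s.
    Returns True on success, False once all attempts are exhausted."""
    for attempt in range(max_attempts):
        if connect():
            return True
        time.sleep(2 ** attempt)  # 1, 2, 4, 8, 16 seconds
    return False
```

Here connect is any callable returning True on a successful connection attempt.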
Models are downloaded from Hugging Face. If the download fails:
  1. Check your internet connection
  2. Ensure you have enough disk space in ~/.dria/models/
  3. Try running dria-node setup again — it will resume where it left off
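Resuming "where it left off" is typically done with an HTTP Range header that skips the bytes already on disk. A sketch of that mechanism (illustrative; dria-node's actual downloader may work differently):

```python
import os
import urllib.request

def resume_request(url: str, dest: str) -> urllib.request.Request:
    """Build a request that resumes a partial download: if a partial file
    exists at dest, ask the server for only the remaining bytes."""
    req = urllib.request.Request(url)
    if os.path.exists(dest):
        # Request everything from the current file size onward.
        req.add_header("Range", f"bytes={os.path.getsize(dest)}-")
    return req
```

A server that supports range requests replies with 206 Partial Content, and the client appends the body to the existing file instead of starting over.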
We recommend using native Windows (PowerShell) instead of WSL for the best experience. If you encounter issues on WSL, switch to the native Windows installation method.