Documentation Index
Fetch the complete documentation index at: https://node-guide.dria.co/llms.txt
Use this file to discover all available pages before exploring further.
Supported Models
dria-node includes a built-in registry of 12 models. All models are served locally using llama.cpp — no Ollama required. Each model is downloaded as a GGUF file from HuggingFace during setup.
| Model | Type | Default Quant | GGUF Size | Min RAM |
|---|---|---|---|---|
| qwen3.5:0.8b | Vision | Q4_K_M | 0.5 GB | ~1 GB |
| lfm2.5:1.2b | Text | Q4_K_M | 0.8 GB | ~1 GB |
| lfm2.5-audio:1.5b | Audio | Q4_0 | 1.0 GB | ~1.5 GB |
| lfm2.5-vl:1.6b | Vision | Q4_0 | 1.2 GB | ~1.5 GB |
| qwen3.5:2b | Vision | Q4_K_M | 1.2 GB | ~2 GB |
| nanbeige:3b | Text | Q4_K_M | 2.0 GB | ~2.5 GB |
| locooperator:4b | Text | Q4_K_M | 2.5 GB | ~3 GB |
| qwen3.5:9b | Vision | Q4_K_M | 6.0 GB | ~7 GB |
| lfm2:24b-a2b | Text (MoE) | Q4_K_M | 14 GB | ~16 GB |
| qwen3.5:27b | Vision | Q4_K_M | 16 GB | ~18 GB |
| qwen3.5:35b-a3b | Vision (MoE) | Q4_K_M | 20 GB | ~22 GB |
| nemotron:30b-a3b | Text (MoE) | Q4_K_M | 24.5 GB | ~27 GB |
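To make the relationship between a model's GGUF size and its RAM requirement concrete, here is a minimal Python sketch of the registry table above as data, with a helper that filters it by a RAM budget. The list values are copied from the table; the data layout and the `models_that_fit` helper are illustrative only, not dria-node's actual API.

```python
# Registry entries as (name, type, min_ram_gb), taken from the table above.
REGISTRY = [
    ("qwen3.5:0.8b", "vision", 1.0),
    ("lfm2.5:1.2b", "text", 1.0),
    ("lfm2.5-audio:1.5b", "audio", 1.5),
    ("lfm2.5-vl:1.6b", "vision", 1.5),
    ("qwen3.5:2b", "vision", 2.0),
    ("nanbeige:3b", "text", 2.5),
    ("locooperator:4b", "text", 3.0),
    ("qwen3.5:9b", "vision", 7.0),
    ("lfm2:24b-a2b", "text-moe", 16.0),
    ("qwen3.5:27b", "vision", 18.0),
    ("qwen3.5:35b-a3b", "vision-moe", 22.0),
    ("nemotron:30b-a3b", "text-moe", 27.0),
]

def models_that_fit(available_ram_gb: float) -> list[str]:
    """Return the names of models whose minimum RAM fits the budget."""
    return [name for name, _type, min_ram in REGISTRY
            if min_ram <= available_ram_gb]

# On an 8 GB machine, everything up to qwen3.5:9b fits.
print(models_that_fit(8))
```

This mirrors what `dria-node setup` does automatically when it filters the list for your machine.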
Model Types
- Text — Standard text generation and instruction following.
- Vision — Multimodal models that can process both text and images.
- Audio — Multimodal models that can process text and audio inputs.
- MoE — Mixture-of-Experts models that activate only a subset of parameters per token, enabling larger models to run efficiently. The suffix in names like qwen3.5:35b-a3b indicates the active count: 35B total parameters, roughly 3B active per token.
How to Choose a Model
Check Your RAM
Run dria-node setup: it will automatically detect your available RAM and filter the list to models that fit your system.
Consider Demand
Visit dria.co/edge-ai to see which models are getting the most tasks. Running high-demand models means more task assignments and more earnings.
Match Your Hardware
Larger models produce higher-quality output but need more RAM and compute. Pick the largest model your hardware can comfortably run. GPU acceleration significantly improves performance for larger models.
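As a rough illustration of the RAM check described above, here is a POSIX-only Python sketch. It uses the standard os.sysconf values available on Linux and macOS; the headroom figure is an assumption for illustration, and this is not dria-node's actual detection logic.

```python
import os

def total_ram_gb() -> float:
    """Estimate total physical RAM in GB via POSIX sysconf."""
    page_size = os.sysconf("SC_PAGE_SIZE")   # bytes per memory page
    num_pages = os.sysconf("SC_PHYS_PAGES")  # total physical pages
    return page_size * num_pages / 1e9

ram = total_ram_gb()
# Leave some headroom (assumed 2 GB here) for the OS and other
# processes before comparing against a model's Min RAM column.
budget = ram - 2
print(f"Detected ~{ram:.1f} GB RAM; model budget ~{budget:.1f} GB")
```

Compare the resulting budget against the Min RAM column in the table above, then pick the largest model that still fits.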
