# Configuration
BlockZero miners are configured via a YAML file. This page walks through every field.
## Example Configuration

```yaml
# Model
model_path: Qwen/Qwen3-VL-30B-Instruct  # or local path
expert_group: 0                         # your assigned expert group (0-indexed)

# Bittensor
chain_endpoint: wss://entrypoint-finney.opentensor.ai:443
netuid: 42  # BlockZero's subnet UID
wallet:
  name: my-wallet
  hotkey: my-hotkey

# Storage
checkpoint_dir: ./checkpoints
data_path: /data/expert-group-0/train.jsonl

# Training
batch_size: 4                   # per-GPU batch size
gradient_accumulation_steps: 4  # effective batch = 4 × 4 = 16
learning_rate: 3e-5
warmup_steps: 100
max_steps: 500
fp16: true

# Monitoring
wandb: false
wandb_project: blockzero-miner

# Checkpoint management
checkpoint_keep_top_k: 3

# DataLoader
num_workers: 4
seed: 42
```
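A minimal sketch of loading this file with PyYAML and failing fast on missing required fields (`load_config` and the `REQUIRED` tuple are illustrative helpers, not part of the miner; the field names come from the tables below):

```python
import yaml  # PyYAML

# Fields documented below as "required" (no default)
REQUIRED = ("model_path", "expert_group", "netuid", "data_path")

def load_config(path):
    """Load a miner YAML config and raise if any required field is absent."""
    with open(path) as f:
        cfg = yaml.safe_load(f)
    missing = [k for k in REQUIRED if k not in cfg]
    if missing:
        raise ValueError(f"missing required config fields: {missing}")
    return cfg
```

Validating up front is cheaper than discovering a missing `netuid` after the model has already loaded.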
## Field Reference

### Model Settings

| Field | Type | Default | Description |
|---|---|---|---|
| `model_path` | string | required | Path to a local base model, or a Hugging Face model ID |
| `expert_group` | integer | required | Your assigned expert group ID (0-indexed). Get this from the validator. |
### Bittensor Settings

| Field | Type | Default | Description |
|---|---|---|---|
| `chain_endpoint` | string | `wss://entrypoint-finney.opentensor.ai:443` | Chain WebSocket endpoint |
| `netuid` | integer | required | BlockZero subnet UID |
| `wallet.name` | string | `default` | Bittensor coldkey name |
| `wallet.hotkey` | string | `default` | Bittensor hotkey name |
### Storage Settings

| Field | Type | Default | Description |
|---|---|---|---|
| `checkpoint_dir` | string | `./checkpoints` | Where to save checkpoints (local directory) |
| `data_path` | string | required | Training dataset path (JSONL format, one example per line) |
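Because `data_path` is JSONL (one JSON object per line), the dataset can be streamed without loading it all into memory. A sketch (`iter_examples` is an illustrative helper, and the per-line schema depends on your expert group's data):

```python
import json

def iter_examples(path):
    """Yield one parsed example per non-empty line of a JSONL file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                yield json.loads(line)
```

Each yielded item is a plain `dict`, ready to hand to a tokenizer or `Dataset` wrapper.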
### Training Settings

| Field | Type | Default | Description |
|---|---|---|---|
| `batch_size` | integer | 4 | Per-GPU batch size. Reduce if you hit OOM. |
| `gradient_accumulation_steps` | integer | 4 | Steps to accumulate before each optimizer step. Effective batch = `batch_size` × `gradient_accumulation_steps`. |
| `learning_rate` | float | 3e-5 | AdamW learning rate. Start here; tune down if gradients are unstable. |
| `warmup_steps` | integer | 100 | LR warmup steps at the start of each training cycle |
| `max_steps` | integer | 500 | Max training steps per cycle before submission |
| `fp16` | boolean | true | Enable FP16 mixed precision. Nearly always better. |
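The interaction between `batch_size` and `gradient_accumulation_steps` can be sketched with a toy simulation (not the miner's actual training loop): gradients from several micro-batches are summed, and the optimizer steps only once per `gradient_accumulation_steps` micro-batches, giving the larger effective batch.

```python
def optimizer_updates(num_micro_batches, accum_steps):
    """Toy model of gradient accumulation: count how many optimizer
    updates occur, one per `accum_steps` accumulated micro-batches."""
    grad_buffer = 0.0
    updates = 0
    for i in range(1, num_micro_batches + 1):
        grad_buffer += 1.0          # stand-in for loss.backward()
        if i % accum_steps == 0:
            updates += 1            # stand-in for optimizer.step()
            grad_buffer = 0.0       # stand-in for optimizer.zero_grad()
    return updates
```

With the defaults above (`batch_size: 4`, `gradient_accumulation_steps: 4`), each optimizer update sees an effective batch of 16 examples while only 4 ever sit in GPU memory at once.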
### Monitoring

| Field | Type | Default | Description |
|---|---|---|---|
| `wandb` | boolean | false | Enable Weights & Biases logging |
| `wandb_project` | string | `blockzero-miner` | W&B project name |
### Checkpoint Management

| Field | Type | Default | Description |
|---|---|---|---|
| `checkpoint_keep_top_k` | integer | 3 | Number of checkpoints to retain, ranked by lowest validation loss. Checkpoints outside the top k are pruned. |

### DataLoader

| Field | Type | Default | Description |
|---|---|---|---|
| `num_workers` | integer | 4 | DataLoader worker processes |
| `seed` | integer | 42 | Random seed for reproducibility |
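The retention policy behind `checkpoint_keep_top_k` can be sketched as follows (`prune_checkpoints` is an illustrative helper, not the miner's actual implementation):

```python
def prune_checkpoints(checkpoints, keep_top_k=3):
    """Given (path, val_loss) pairs, keep the k checkpoints with the
    lowest validation loss and return (kept, pruned) path lists."""
    ranked = sorted(checkpoints, key=lambda c: c[1])  # best loss first
    kept = [path for path, _ in ranked[:keep_top_k]]
    pruned = [path for path, _ in ranked[keep_top_k:]]
    return kept, pruned
```

Ranking by validation loss (rather than age) means a strong early checkpoint survives even after many later, worse ones are written.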
## Multiple Expert Groups

If you are running multiple miners on the same machine (one per expert group), create a separate config file for each:

```shell
# Create separate config directories per expert group, then run each pair of processes:

# Group 0:
python mycelia/miner/train.py --path /path/to/checkpoints/miner/<hotkey>/group-0/
python mycelia/miner/model_io.py --path /path/to/checkpoints/miner/<hotkey>/group-0/

# Group 1:
python mycelia/miner/train.py --path /path/to/checkpoints/miner/<hotkey>/group-1/
python mycelia/miner/model_io.py --path /path/to/checkpoints/miner/<hotkey>/group-1/
```

Give each miner its own `checkpoint_dir` to avoid conflicts.
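For example, a second miner's config might differ from the first only in the per-group fields (the exact paths below are illustrative):

```yaml
# config-group-1.yaml — per-group overrides for a second miner
expert_group: 1
checkpoint_dir: ./checkpoints/group-1
data_path: /data/expert-group-1/train.jsonl
wandb_project: blockzero-miner-group-1
```

All other fields (wallet, chain endpoint, training hyperparameters) can stay identical across the two configs.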