
Configuration

BlockZero miners are configured via a YAML file. This page walks through every field.

Example Configuration

# Model
model_path: Qwen/Qwen3-VL-30B-Instruct # or local path
expert_group: 0 # your assigned expert group (0-indexed)

# Bittensor
chain_endpoint: wss://entrypoint-finney.opentensor.ai:443
netuid: 42 # BlockZero's subnet UID
wallet:
  name: my-wallet
  hotkey: my-hotkey

# Storage
checkpoint_dir: ./checkpoints
data_path: /data/expert-group-0/train.jsonl

# Training
batch_size: 4 # per-GPU batch size
gradient_accumulation_steps: 4 # effective batch = 4 × 4 = 16
learning_rate: 3e-5
warmup_steps: 100
max_steps: 500
fp16: true

# Monitoring
wandb: false
wandb_project: blockzero-miner

# Checkpoint management
checkpoint_keep_top_k: 3

# DataLoader
num_workers: 4
seed: 42
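Four of the fields above are required and have no default. A minimal sketch of a startup check for a parsed config (e.g. the dict returned by `yaml.safe_load`); the `missing_fields` helper is hypothetical, not part of the miner:

```python
# Required fields from the reference tables; everything else has a default.
REQUIRED = ("model_path", "expert_group", "netuid", "data_path")

def missing_fields(cfg: dict) -> list:
    """Return required keys that are absent or set to None."""
    return [k for k in REQUIRED if cfg.get(k) is None]

cfg = {
    "model_path": "Qwen/Qwen3-VL-30B-Instruct",
    "expert_group": 0,
    "netuid": 42,
    # data_path intentionally omitted
}
print(missing_fields(cfg))  # → ['data_path']
```

Failing fast on missing fields is cheaper than discovering them mid-training.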

Field Reference

Model Settings

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| model_path | string | required | Path to base model or HuggingFace model ID |
| expert_group | integer | required | Your assigned expert group ID (0-indexed). Get this from the validator. |

Bittensor Settings

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| chain_endpoint | string | wss://entrypoint-finney.opentensor.ai:443 | Chain WebSocket endpoint |
| netuid | integer | required | BlockZero subnet UID |
| wallet.name | string | default | Bittensor coldkey name |
| wallet.hotkey | string | default | Bittensor hotkey name |

Storage Settings

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| checkpoint_dir | string | ./checkpoints | Where to save checkpoints (local directory) |
| data_path | string | required | Training dataset path (JSONL format, one example per line) |
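The JSONL format means one JSON object per line. A minimal sketch of streaming such a file (the function name is illustrative, not the miner's actual loader):

```python
import json

def iter_examples(path):
    """Yield one parsed JSON object per non-blank line of a JSONL file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                yield json.loads(line)
```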

Training Settings

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| batch_size | integer | 4 | Per-GPU batch size. Reduce if you hit OOM. |
| gradient_accumulation_steps | integer | 4 | Steps to accumulate before each optimizer step. Effective batch = batch_size × steps. |
| learning_rate | float | 3e-5 | AdamW learning rate. Start here; tune down if gradients are unstable. |
| warmup_steps | integer | 100 | LR warmup steps at the start of each training cycle |
| max_steps | integer | 500 | Max training steps per cycle before submission |
| fp16 | boolean | true | Enable FP16 mixed precision. Nearly always better. |
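To make the batch_size / gradient_accumulation_steps interaction concrete, here is a sketch of the accumulation schedule (plain Python, no framework; the actual training loop lives in the miner):

```python
batch_size = 4
gradient_accumulation_steps = 4

# Each optimizer step consumes this many examples:
effective_batch = batch_size * gradient_accumulation_steps
print(effective_batch)  # → 16

optimizer_steps = 0
for micro_step in range(1, 13):  # 12 micro-batches
    # loss.backward() would run here on every micro-batch
    if micro_step % gradient_accumulation_steps == 0:
        optimizer_steps += 1     # optimizer.step(); optimizer.zero_grad()
print(optimizer_steps)  # → 3
```

Raising gradient_accumulation_steps lets you keep the same effective batch after lowering batch_size to fit memory.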

Monitoring

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| wandb | boolean | false | Enable Weights & Biases logging |
| wandb_project | string | blockzero-miner | W&B project name |

Checkpoint Management

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| checkpoint_keep_top_k | integer | 3 | Number of checkpoints to retain (by lowest validation loss). Older ones are pruned. |
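A sketch of the retention policy checkpoint_keep_top_k describes: keep the k checkpoints with the lowest validation loss, prune the rest. The names, losses, and `prune` helper are illustrative:

```python
def prune(checkpoints, keep_top_k=3):
    """Split (name, val_loss) pairs into the top-k to keep and the rest to drop."""
    ranked = sorted(checkpoints, key=lambda c: c[1])  # ascending loss
    return ranked[:keep_top_k], ranked[keep_top_k:]

ckpts = [("step-100", 2.41), ("step-200", 2.07), ("step-300", 1.98),
         ("step-400", 2.15), ("step-500", 1.91)]
keep, drop = prune(ckpts, keep_top_k=3)
print([n for n, _ in keep])  # → ['step-500', 'step-300', 'step-200']
```

Note the ranking is by validation loss, not recency, so an older checkpoint can outlive a newer, worse one.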
DataLoader

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| num_workers | integer | 4 | DataLoader worker processes |
| seed | integer | 42 | Random seed for reproducibility |

Multiple Expert Groups

If you are running multiple miners on the same machine (one per expert group), create a separate config file for each and point each pair of processes at its own checkpoint directory:

# Create separate config directories per expert group, then run each pair of processes:
# Group 0:
python mycelia/miner/train.py --path /path/to/checkpoints/miner/<hotkey>/group-0/
python mycelia/miner/model_io.py --path /path/to/checkpoints/miner/<hotkey>/group-0/

# Group 1:
python mycelia/miner/train.py --path /path/to/checkpoints/miner/<hotkey>/group-1/
python mycelia/miner/model_io.py --path /path/to/checkpoints/miner/<hotkey>/group-1/

Each miner should have a different checkpoint_dir to avoid conflicts.
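For example, the per-group config files might differ only in the group-specific fields (the exact paths are illustrative):

```yaml
# config-group-0.yaml
expert_group: 0
data_path: /data/expert-group-0/train.jsonl
checkpoint_dir: ./checkpoints/group-0

# config-group-1.yaml
expert_group: 1
data_path: /data/expert-group-1/train.jsonl
checkpoint_dir: ./checkpoints/group-1
```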