# Configuration Schema

BlockZero uses YAML configuration files for both miners and validators. This page documents every field of `MinerConfig` and `ValidatorConfig`.

## MinerConfig

### Example

```yaml
model_path: Qwen/Qwen3-VL-30B-Instruct
expert_group: 0
chain_endpoint: wss://entrypoint-finney.opentensor.ai:443
netuid: 42
wallet:
  name: my-wallet
  hotkey: my-hotkey
checkpoint_dir: ./checkpoints
data_path: /data/expert-group-0/train.jsonl
batch_size: 4
gradient_accumulation_steps: 4
learning_rate: 3e-5
warmup_steps: 100
max_steps: 500
fp16: true
wandb: false
wandb_project: blockzero-miner
checkpoint_keep_top_k: 3
num_workers: 4
seed: 42
```

### Field Reference

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `model_path` | string | required | Path to the base model directory or a HuggingFace model ID (e.g., `Qwen/Qwen3-VL-30B-Instruct`) |
| `expert_group` | integer | required | Expert group ID assigned to this miner (0-indexed). Each miner trains only the experts belonging to this group. |
| `chain_endpoint` | string | `wss://entrypoint-finney.opentensor.ai:443` | Bittensor chain WebSocket endpoint |
| `netuid` | integer | required | BlockZero subnet UID |
| `wallet.name` | string | `default` | Bittensor coldkey wallet name |
| `wallet.hotkey` | string | `default` | Bittensor hotkey name |
| `checkpoint_dir` | string | `./checkpoints` | Checkpoint storage path. Supports local paths, `s3://` URIs, and `ipfs://` URIs. |
| `data_path` | string | required | Path to the training dataset for this expert group (JSONL format) |
| `batch_size` | integer | `4` | Per-GPU batch size during training |
| `gradient_accumulation_steps` | integer | `4` | Number of gradient accumulation steps. Effective batch size = `batch_size` × `gradient_accumulation_steps`. |
| `learning_rate` | float | `3e-5` | AdamW optimizer learning rate |
| `warmup_steps` | integer | `100` | Number of linear LR warmup steps at the start of each training cycle |
| `max_steps` | integer | `500` | Maximum number of training steps per cycle before the miner submits its checkpoint |
| `fp16` | boolean | `true` | Enable FP16 mixed-precision training |
| `wandb` | boolean | `false` | Enable Weights & Biases run logging |
| `wandb_project` | string | `blockzero-miner` | W&B project name. Only used when `wandb: true`. |
| `checkpoint_keep_top_k` | integer | `3` | Number of local checkpoints to retain, ranked by lowest validation loss |
| `num_workers` | integer | `4` | Number of DataLoader worker processes |
| `seed` | integer | `42` | Global random seed for reproducibility |
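To illustrate how the required fields and defaults above fit together, here is a minimal sketch of validating a parsed `MinerConfig` dict (e.g., the result of `yaml.safe_load` on the example file). The `finalize_miner_config` helper is hypothetical, not part of BlockZero's actual API:

```python
# Hypothetical helper: validate a parsed MinerConfig dict and apply
# the defaults documented in the field reference above.

REQUIRED_FIELDS = ("model_path", "expert_group", "netuid", "data_path")

def finalize_miner_config(cfg: dict) -> dict:
    """Check required fields and fill in documented defaults."""
    missing = [k for k in REQUIRED_FIELDS if k not in cfg]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    cfg.setdefault("batch_size", 4)
    cfg.setdefault("gradient_accumulation_steps", 4)
    # Effective batch size = batch_size × gradient_accumulation_steps
    cfg["effective_batch_size"] = (
        cfg["batch_size"] * cfg["gradient_accumulation_steps"]
    )
    return cfg
```

With the defaults of `batch_size: 4` and `gradient_accumulation_steps: 4`, the effective batch size works out to 16.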

## ValidatorConfig

### Example

```yaml
model_path: ./models/Qwen3-VL-30B-Instruct
chain_endpoint: wss://entrypoint-finney.opentensor.ai:443
netuid: 42
wallet:
  name: my-wallet
  hotkey: my-validator-hotkey
checkpoint_cache_dir: ./validator-cache
server_port: 8080
server_host: 0.0.0.0
# auth_token: set via BZ_AUTH_TOKEN environment variable
eval_dataset_path: /data/validation/held-out.jsonl
eval_batch_size: 8
score_ema_alpha: 0.9
consensus_timeout: 60
max_checkpoints_cached: 50
```

### Field Reference

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `model_path` | string | required | Path to the base model directory used for serving to miners and for evaluation |
| `chain_endpoint` | string | `wss://entrypoint-finney.opentensor.ai:443` | Bittensor chain WebSocket endpoint |
| `netuid` | integer | required | BlockZero subnet UID |
| `wallet.name` | string | `default` | Bittensor coldkey wallet name |
| `wallet.hotkey` | string | `default` | Bittensor hotkey name |
| `checkpoint_cache_dir` | string | `./validator-cache` | Local directory where downloaded miner checkpoints are stored for evaluation |
| `server_port` | integer | `8080` | Port the FastAPI model server binds to |
| `server_host` | string | `0.0.0.0` | Network interface the FastAPI server listens on |
| `auth_token` | string | env: `BZ_AUTH_TOKEN` | Bearer token for API auth. Set via environment variable, not the config file. |
| `eval_dataset_path` | string | required | Path to the held-out validation dataset for Proof-of-Loss scoring |
| `eval_batch_size` | integer | `8` | Batch size used during miner checkpoint evaluation |
| `score_ema_alpha` | float | `0.9` | EMA smoothing factor for MinerScoreAggregator. Higher values give more weight to recent scores. |
| `consensus_timeout` | integer | `60` | Seconds to wait for inter-validator merging before timing out |
| `max_checkpoints_cached` | integer | `50` | Maximum number of miner checkpoints to retain in `checkpoint_cache_dir` |
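For intuition about `score_ema_alpha`, the semantics described above ("higher = more weight on recent scores") correspond to an update of roughly the following form. This is a sketch only; the actual MinerScoreAggregator implementation may differ:

```python
def ema_update(prev_score, new_score, alpha=0.9):
    """Exponential moving average where alpha weights the most recent score.

    With alpha = 0.9, a new score contributes 90% of the updated value
    and the running average contributes the remaining 10%.
    """
    if prev_score is None:
        # First observation seeds the running average.
        return new_score
    return alpha * new_score + (1 - alpha) * prev_score
```

For example, starting from a running score of `0.0`, a new score of `1.0` with the default `alpha=0.9` moves the average to `0.9`.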
> **Auth token security:** Never set `auth_token` directly in the config file. Use the `BZ_AUTH_TOKEN` environment variable to avoid storing secrets in version-controlled configuration files.
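For example, a validator process might read the token from the environment at startup. This is a sketch, not BlockZero's actual startup code:

```python
import os

def get_auth_token():
    """Read the bearer token from the environment, never from the config file."""
    token = os.environ.get("BZ_AUTH_TOKEN")
    if not token:
        raise RuntimeError(
            "BZ_AUTH_TOKEN is not set; refusing to start the API server"
        )
    return token
```

Failing fast when the variable is unset avoids silently starting an unauthenticated server.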