Connito AI
Validator Guide

Prerequisites

Important Note on Validator Whitelisting

To allow for close monitoring and rapid updates while the subnet stabilizes, we are temporarily using a strict validator whitelist.

Currently, we are not accepting requests to join the validator whitelist.

If you hold TAO and wish to participate in consensus to earn validator emissions, set a childkey to the Subnet Owner at UID 0 instead of running your own validator node.

Running a Connito validator has higher requirements than running a miner. Validators must be online continuously, hold the full model, and score every miner submission in each cycle.

Stake Requirement

Validators must have sufficient TAO staked to be considered active on the subnet. Check the current minimum stake:

btcli subnets metagraph --netuid 102
# Check the STAKE column for the current stake requirement

Validators with more stake have more influence in inter-validator merging and earn a higher share of validator emissions.

Server Requirements

Unlike miners, validators should run on a dedicated server (not a consumer desktop). Validators need:

  • 24/7 uptime — missing evaluation windows reduces your influence in consensus and can result in deregistration
  • Reliable network — validators serve model files to miners; high uptime and bandwidth are essential
  • Open port for miner connections (default: 8000)
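Before going live, it is worth confirming that the miner-facing port actually accepts connections from outside your network. A minimal stdlib check (the function name and defaults are our own, not part of the Connito codebase):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this from a machine outside your network (pointing at your public IP) to verify that port forwarding and firewall rules are correct.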

Hardware Requirements

Component     Minimum
-----------   -------------
GPU VRAM      48GB (A6000)
System RAM    64GB
Storage       500GB NVMe
Network       1Gbps

GPU compatibility: NVIDIA A6000 (48GB) is the reference hardware for validators. Other GPUs with 48GB+ VRAM will also work.

Storage requirement: the validator keeps the full model on disk plus a cached checkpoint for each miner it scores, which adds up quickly. Size your storage accordingly.
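As a rough illustration of how the storage adds up — all sizes below are hypothetical, not official figures for the Connito model:

```python
# Back-of-envelope storage estimate. model_gb, checkpoint_gb, and the miner
# count are illustrative assumptions, not Connito specifications.
def storage_needed_gb(model_gb: float, checkpoint_gb: float,
                      num_miners: int, overhead: float = 1.2) -> float:
    """One model copy plus one cached checkpoint per miner, with 20% slack."""
    return (model_gb + checkpoint_gb * num_miners) * overhead

# e.g. a 30GB model and 64 miners each submitting ~5GB checkpoints:
print(round(storage_needed_gb(30, 5, 64)))  # 420
```

Even modest per-miner checkpoints dominate the total once the miner count grows, which is why the table above calls for 500GB rather than just room for the model itself.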

Software Requirements

Same as miners:

  • Python 3.10+
  • CUDA 12.x
  • Git and pip
  • Bittensor installed

Additionally, for the model distribution server:

  • FastAPI
  • Uvicorn

Networking

The validator runs an HTTP server that miners connect to for model downloads and checkpoint submissions.
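Conceptually, this is a file server in front of the model directory. The sketch below uses only the standard library to show the shape of it; the real validator is built on FastAPI/Uvicorn, and the directory layout and filenames here are placeholders:

```python
# Conceptual sketch of a model-distribution endpoint using the standard
# library only. The actual validator server uses FastAPI/Uvicorn; the
# bind address, port, and file names are placeholders.
import functools
import http.server
import threading

def serve_directory(directory: str, port: int = 8000):
    """Serve `directory` over HTTP in a background thread; returns the server."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory
    )
    server = http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Miners would then fetch e.g. http://<validator-ip>:8000/<model-file>
```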

  • Open port 8000 (or your configured server_port) to inbound connections
  • If behind NAT, configure port forwarding
  • TLS termination is recommended for production deployments (use a reverse proxy like nginx)
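One common way to terminate TLS, as suggested above, is an nginx reverse proxy in front of the validator's HTTP server. The domain and certificate paths below are placeholders for your own values:

```nginx
server {
    listen 443 ssl;
    server_name validator.example.com;  # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/validator.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/validator.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;  # the validator's server_port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, only nginx needs to be exposed publicly; the validator process itself can bind to localhost.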

TAO Registration

You must register your hotkey on the Connito subnet to participate. Registration requires a small amount of TAO (the Bittensor network token).

btcli subnet register --wallet.name my-wallet --wallet.hotkey my-hotkey --netuid 102

Check the current registration cost (look for "Registration cost (recycled)"):

btcli subnets metagraph --netuid 102

For validator-level stake influence, you may also need to call become_delegate and have TAO delegated to your hotkey — see Bittensor documentation.

Hugging Face Setup

You must have a Hugging Face account and grant access to the required dataset.

  1. Log in at Hugging Face.
  2. Accept the NVIDIA Open Data License Agreement for the nvidia/Nemotron-CC-Math-v1 dataset.
  3. Install the Hugging Face CLI:
    pip install -U huggingface_hub
  4. Generate a read-only token in your Hugging Face settings.
  5. Create a .env file in the project root (if it doesn't exist) and add your token:
    HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
  6. Log in using the CLI:
    huggingface-cli login

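If you prefer not to add a dependency, the `.env` file from step 5 can be loaded with a few lines of standard-library Python (python-dotenv is a common alternative). The key and value below are placeholders, not a real token:

```python
# Minimal .env loader using only the standard library. Lines are KEY=VALUE;
# blank lines and `#` comments are skipped. Existing environment variables win.
import os

def load_env(path: str = ".env") -> None:
    """Export KEY=VALUE pairs from `path` into os.environ."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, libraries such as huggingface_hub pick up `HF_TOKEN` from the environment.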
Checklist

  • Sufficient TAO stake for validator status
  • Dedicated server with 24/7 uptime
  • GPU with ≥48GB VRAM (A6000 or equivalent)
  • 64GB+ system RAM
  • 500GB+ storage for model + checkpoint cache
  • Port 8000 open inbound
  • Python 3.10+, CUDA 12.x installed
  • Hugging Face account set up with dataset access and token in .env
  • Validator hotkey registered on Connito subnet