Installation
Clone the Repository
git clone https://github.com/Connito-AI/Connito
cd Connito

Create a Virtual Environment
Create and activate a virtual environment before installing any dependencies:
python -m venv .venv
source .venv/bin/activate  # Linux/Mac

Install Dependencies
Install build tools and project dependencies:
sudo apt-get update
sudo apt-get install -y build-essential python3-dev
pip install -r requirements.txt
pip install -e .

This installs all required packages (including Bittensor and PyTorch) and registers the project entry points. There is no need to install Bittensor or the base model separately.
Generate a Config File
Create a template config file for your miner:
python connito/shared/config.py \
create_config \
--role miner \
--coldkey_name <your coldkey name> \
--hotkey_name <your hotkey name> \
--run_name <a name identifying this run>

Then edit the generated YAML config to adjust any specifics. See Configuration for field details.
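The generated file is YAML; as a rough illustration, it should contain fields mirroring the flags you passed above. The exact key names and values below are assumptions — check your generated file for the authoritative schema:

```yaml
# Illustrative fields only — the exact keys come from your generated config.
role: miner
coldkey_name: my_coldkey
hotkey_name: my_hotkey
run_name: first_run
```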
Observability & Telemetry
Your standalone miner automatically exports rich runtime metrics via a passive local HTTP server. This allows you to track memory usage, training loss, and chain state safely without interrupting PyTorch execution.
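To make the "passive local HTTP server" idea concrete, here is a minimal sketch of the pattern: a background thread serves a read-only snapshot of metrics over HTTP, so scraping it never blocks the training loop. This is an illustration of the general technique, not the miner's actual exporter — the metric names and port choice are assumptions:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Snapshot dict the training loop would update in place; keys are illustrative.
METRICS = {"step": 0, "loss": None, "gpu_mem_bytes": 0}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current snapshot as JSON; reads never touch the training loop.
        body = json.dumps(METRICS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):
        pass  # stay quiet so training console output is not polluted

# Port 0 lets the OS pick a free port; a real exporter would use a fixed one.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics") as resp:
    snapshot = json.loads(resp.read().decode())
server.shutdown()
print(snapshot)
```

Because the handler only reads a shared dict and runs on a daemon thread, Prometheus-style scrapers can poll it at any interval without pausing PyTorch execution.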
To view these metrics locally, you can use the pre-configured Prometheus and Grafana stack in the observability directory at the repository root.
Getting Started
Telemetry is enabled by default. To disable it, run your miner with ENABLE_TELEMETRY=false.
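A toggle like this is typically parsed from the environment with "enabled unless explicitly false" semantics. The helper below is a sketch of that convention, assuming only the documented `ENABLE_TELEMETRY=false` behavior — it is not the miner's actual code:

```python
import os

def telemetry_enabled(env=None):
    # Enabled by default; only an explicit "false" (case-insensitive) disables it.
    env = os.environ if env is None else env
    return env.get("ENABLE_TELEMETRY", "true").lower() != "false"

print(telemetry_enabled({}))                               # unset -> enabled
print(telemetry_enabled({"ENABLE_TELEMETRY": "false"}))    # disabled
print(telemetry_enabled({"ENABLE_TELEMETRY": "FALSE"}))    # disabled
```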
In a new terminal, navigate to the observability directory in the repository root and launch the stack:
cd Connito/observability
docker compose up -d

Navigate to http://localhost:3000 and log in with admin/admin.
Verify Everything
# Check GPU is visible
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}, GPUs: {torch.cuda.device_count()}')"
# Check bittensor
python -c "import bittensor; print(bittensor.__version__)"

If both pass, proceed to Configuration.