

1. Introduction: The Vision for Decentralized DeFi Risk Intelligence

DRON (DeFi Risk Oracle Network) is a Bittensor subnet designed to become the decentralized nervous system for DeFi protocol risk. Our core vision is to replace fragmented, manual, and stale risk assessments with a continuously operating network of AI agents that score, monitor, and predict risk across live DeFi protocols in real time.

The DeFi ecosystem lost $2.7 billion in 2025 alone to hacks and exploits — yet risk infrastructure has not kept pace. Existing solutions like DeFi Safety and DeFiYields provide periodic, manually curated ratings updated monthly. No solution provides real-time, multi-dimensional, AI-powered risk scoring that is decentralized and verifiable.

DRON addresses this gap by deploying a subnet where miners compete to produce the most accurate and timely risk assessments across DeFi protocols. Validators score miners against real on-chain ground truth — actual exploits and liquidation events — creating a genuine Proof of Intelligence system that continuously evolves as the DeFi threat landscape changes.

The result is a permissionless risk oracle: any DeFi protocol, yield aggregator, or institutional trader can query DRON’s API and receive verified, consensus-weighted risk intelligence — powered by TAO incentives and verified by decentralized AI.


2. Incentive & Mechanism Design

2.1 Emission and Reward Logic

DRON uses a continuous, accuracy-weighted emission model. Unlike winner-takes-all systems, this ensures broad participation while still rewarding excellence.

Every Bittensor epoch (tempo), each miner is assigned a batch of risk assessment tasks. Their score is computed as a weighted composite across four dimensions:

Miner Score = (0.40 × Accuracy) + (0.25 × Confidence Calibration) + (0.20 × Speed) + (0.15 × Consistency)

TAO emissions flow proportionally to normalized miner scores via Yuma Consensus. Validators set weights, and the network averages them into final miner rankings, ensuring that rewards track demonstrated accuracy while broad participation remains viable.
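The composite formula can be sketched in Python. The weights come from the formula above; the example metric values and the proportional normalization step are illustrative assumptions.

```python
# Weights taken from the Miner Score formula above.
WEIGHTS = {"accuracy": 0.40, "calibration": 0.25, "speed": 0.20, "consistency": 0.15}

def miner_score(metrics: dict) -> float:
    """Weighted composite of the four scoring dimensions, each in [0, 1]."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def normalize(scores: list[float]) -> list[float]:
    """Emissions flow proportionally to normalized miner scores."""
    total = sum(scores)
    return [s / total for s in scores] if total else [0.0] * len(scores)

# Illustrative metric values for a single miner:
example = {"accuracy": 0.9, "calibration": 0.8, "speed": 0.7, "consistency": 0.6}
score = miner_score(example)  # 0.40*0.9 + 0.25*0.8 + 0.20*0.7 + 0.15*0.6 = 0.79
```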

2.2 Incentive Alignment for Miners and Validators

Miners are incentivized to build genuinely intelligent risk models because emissions are tied directly to the accuracy-weighted composite score: copying public ratings, submitting stale analysis, or skipping evidence citations lowers the score and therefore the miner's TAO rewards.

Validators are incentivized to produce fair, rigorous evaluations because cross-validator consistency checks reduce the weight of outlier validators: grading that deviates from network consensus directly lowers a validator's influence and rewards.

2.3 Anti-Adversarial Mechanisms

| Threat | Mitigation |
|---|---|
| Miners copying public risk ratings (DeFiYields, DeFi Safety) verbatim | Miners must provide on-chain evidence citations per sub-factor, not just a score |
| Miner/validator collusion | Randomized task routing — miners cannot know which validator grades them |
| Sybil attack (multiple low-quality miners) | Minimum stake requirement for miner registration |
| Score manipulation via retroactive data adjustment | All submissions are timestamped and hashed on-chain at submission |
| Validator awarding high scores to colluding miners | Cross-validator consistency checks — outlier validators get their weights reduced |

2.4 Proof of Intelligence

DRON qualifies as genuine Proof of Intelligence because:

  1. Non-trivial computation: Risk scoring requires integrating multiple heterogeneous data streams — on-chain liquidation ratios, oracle price deviation, smart contract static analysis, liquidity depth across DEXs, and historical volatility patterns. This cannot be replicated with a simple API call.
  2. Verifiable ground truth: Every prediction can be checked against real on-chain events. If a miner flagged Protocol X as high-risk 48 hours before a $10M exploit, that is an objectively verifiable correct call.
  3. Adversarial environment: The DeFi threat landscape evolves continuously. Miners who do not update their models fall behind competitors. Stagnation means emissions loss.
  4. Calibrated uncertainty: Miners must not just produce a score but also a confidence interval. A miner that says “0.85 risk, confidence 0.90” and is wrong gets penalized more than one that says “0.65 risk, confidence 0.55”. This demands probabilistic reasoning, not binary classification.
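The calibration example in point 4 can be made concrete with a Brier-style penalty, treating the stated confidence as the forecast probability (an interpretive assumption) for a protocol that turned out safe:

```python
def brier(confidence: float, outcome: float) -> float:
    """Brier-style penalty: squared gap between stated confidence and outcome."""
    return (confidence - outcome) ** 2

# Both miners flagged elevated risk, but the protocol stayed safe (outcome 0.0).
overconfident = brier(0.90, 0.0)  # said 0.85 risk at 0.90 confidence -> 0.81
cautious = brier(0.55, 0.0)       # said 0.65 risk at 0.55 confidence -> 0.3025
# The overconfident miner is penalized quadratically harder when wrong.
```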

2.5 High-Level Algorithm

Every Bittensor Epoch (tempo ~360 blocks):

1. TASK GENERATION
   Validator selects N protocols from active watchlist
   For each protocol, validator packages:
   → protocol_address, chain, block_range, task_type
   Tasks are distributed randomly across registered miners

2. MINER EXECUTION (60s window)
   Each miner:
   → Fetches on-chain data (TVL, health factors, oracle prices)
   → Runs static analysis (Slither/Mythril wrapper)
   → Queries liquidity depth across top 3 DEXs
   → Computes 5-factor risk vector
   → Returns: risk_score, confidence, factor_breakdown, evidence_citations

3. VALIDATION (post-deadline)
   Validator checks miner response against:
   a) Historical exploit dataset (retroactive scoring)
   b) Cross-miner consensus (statistical outlier detection)
   c) Sub-factor evidence verification (on-chain data hash match)

4. SCORING
   Miner scores computed per formula above
   Scores submitted as weights to Yuma Consensus
   TAO emissions distributed proportionally

5. WEIGHT UPDATE
   Top 256 miners receive non-zero weights
   Bottom performers gradually deregistered after 3 consecutive low-scoring epochs
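The five steps above can be sketched as a single epoch loop. Here `grade_fn`, the miner callables, and the watchlist are stand-ins for the real validator pipeline and risk models:

```python
import random

def run_epoch(grade_fn, miners: dict, watchlist: list, n_tasks: int = 5) -> dict:
    """One simplified DRON epoch: select protocols, fan tasks out to miners,
    grade the responses, and normalize average scores into weights."""
    # Step 1: task generation from the active watchlist.
    tasks = random.sample(watchlist, min(n_tasks, len(watchlist)))
    scores = {}
    for miner_id, miner_fn in miners.items():
        # Step 2: miner execution (stubbed as a callable per miner).
        responses = [miner_fn(task) for task in tasks]
        # Steps 3-4: validation and scoring (stubbed grading function).
        graded = [grade_fn(task, resp) for task, resp in zip(tasks, responses)]
        scores[miner_id] = sum(graded) / len(graded)
    # Step 5: normalize scores into weights for submission to Yuma Consensus.
    total = sum(scores.values()) or 1.0
    return {m: s / total for m, s in scores.items()}
```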

3. Miner Design

3.1 Miner Role

Miners are operators of AI risk assessment agents. They receive a protocol assessment task, gather on-chain and off-chain data, run their risk models, and return a structured risk report with evidence within the allotted time window.

Miners compete not just on the quality of their final score, but on the quality of their reasoning — validators verify evidence citations, not just the number.

3.2 Miner Tasks

The core task for every miner is:

Given a DeFi protocol address, chain, and timestamp, produce a verified multi-dimensional risk assessment with on-chain evidence.

Miners must handle three task types:

| Task Type | Description | Frequency |
|---|---|---|
| COLLATERAL_HEALTH | Score liquidation risk based on health factors and collateral volatility | 50% of tasks |
| LIQUIDITY_RISK | Score TVL concentration, slippage depth, and withdrawal risk | 30% of tasks |
| SMART_CONTRACT_RISK | Score contract vulnerability surface based on static + historical analysis | 20% of tasks |
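A miner might route the three task types through a simple dispatch table. The handlers below are placeholders for a miner's actual models, and the constant scores are purely illustrative:

```python
def score_collateral_health(task: dict) -> float:
    return 0.5  # placeholder: health factors + collateral volatility model

def score_liquidity_risk(task: dict) -> float:
    return 0.5  # placeholder: TVL concentration + slippage depth model

def score_smart_contract_risk(task: dict) -> float:
    return 0.5  # placeholder: static + historical contract analysis

HANDLERS = {
    "COLLATERAL_HEALTH": score_collateral_health,
    "LIQUIDITY_RISK": score_liquidity_risk,
    "SMART_CONTRACT_RISK": score_smart_contract_risk,
}

def handle_task(task: dict) -> float:
    """Dispatch an incoming task to the matching risk model."""
    handler = HANDLERS.get(task.get("task_type"))
    if handler is None:
        raise ValueError(f"unknown task_type: {task.get('task_type')}")
    return handler(task)
```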

3.3 Input → Output Format

Synapse Input (Validator → Miner)

{
  "task_id": "uuid-v4",
  "protocol_address": "0xBAAEBBD1251b5E1f... ",
  "protocol_name": "Aave V3 Ethereum",
  "chain": "ethereum",
  "block_number": 21985000,
  "task_type": "COLLATERAL_HEALTH",
  "deadline_ms": 60000
}

Synapse Output (Miner → Validator)

{
  "task_id": "uuid-v4",
  "miner_hotkey": "5F3sa...",
  "timestamp": 1740522000,
  "risk_score": 0.73,
  "confidence": 0.88,
  "factors": {
    "collateral_health": {
      "score": 0.71,
      "evidence": "Avg health factor: 1.18 across 3,240 positions. 847 positions below 1.3 threshold. Source: Aave V3 subgraph block 21985000"
    },
    "liquidity_depth": {
      "score": 0.82,
      "evidence": "2% slippage depth: $4.2M on Uniswap V3 USDC/ETH. Source: DeFiLlama API"
    },
    "oracle_reliability": {
      "score": 0.91,
      "evidence": "Chainlink ETH/USD: 0 deviations >2σ in past 720 blocks. Source: on-chain feed 0x5f4ec..."
    },
    "smart_contract_risk": {
      "score": 0.55,
      "evidence": "Slither: 2 medium warnings (reentrancy-benign, tautology). Last audit: 14 months ago."
    },
    "volatility_risk": {
      "score": 0.68,
      "evidence": "30d realized vol: 38.4%. Corr to liquidation cascade: 0.72. Source: on-chain price history"
    }
  },
  "model_version": "dron-miner-v1.2.0",
  "response_time_ms": 4820
}
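A Layer 1-style pass/fail check of the output schema above might look like the following sketch; the exact rejection rules and required factor set are assumptions drawn from the example response:

```python
# Factor keys taken from the example miner output above.
REQUIRED_FACTORS = {
    "collateral_health", "liquidity_depth", "oracle_reliability",
    "smart_contract_risk", "volatility_risk",
}

def validate_response(resp: dict) -> bool:
    """Pass/fail format and evidence check on a miner response."""
    try:
        assert 0.0 <= resp["risk_score"] <= 1.0
        assert 0.0 <= resp["confidence"] <= 1.0
        factors = resp["factors"]
        assert set(factors) == REQUIRED_FACTORS
        for factor in factors.values():
            assert 0.0 <= factor["score"] <= 1.0
            # Every sub-factor must carry a non-empty evidence citation.
            assert isinstance(factor["evidence"], str) and factor["evidence"]
        return True
    except (AssertionError, KeyError, TypeError):
        return False
```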

3.4 Performance Dimensions

| Dimension | Weight | How Measured |
|---|---|---|
| Accuracy | 40% | Miner’s risk score vs. actual exploit/liquidation outcome in retrospective validation |
| Confidence Calibration | 25% | Brier Score — overconfident wrong predictions penalized quadratically |
| Speed | 20% | Response time within 60s window; logarithmic decay after 30s |
| Consistency | 15% | Std deviation of scores for same protocol across 10 epochs |
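The Speed dimension can be illustrated with a curve that gives full credit up to 30 s and decays logarithmically to zero at the 60 s deadline; the exact functional form is an assumption, since the table only specifies "logarithmic decay after 30s":

```python
import math

def speed_score(response_ms: float, window_ms: float = 60_000,
                grace_ms: float = 30_000) -> float:
    """Full credit within the grace period, logarithmic decay afterwards,
    zero at (or past) the deadline. Illustrative curve only."""
    if response_ms > window_ms:
        return 0.0
    if response_ms <= grace_ms:
        return 1.0
    # Decays from 1.0 at grace_ms to 0.0 at window_ms.
    return 1.0 - math.log(response_ms / grace_ms) / math.log(window_ms / grace_ms)
```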

3.5 Miner Lifecycle

1. Register hotkey + deposit minimum stake (0.1 TAO)
2. Set up miner node (Python, bittensor SDK, Docker)
3. Build risk model (can use any stack: Slither + LightGBM, transformer-based, rule-based)
4. Connect to DRON data feeds (DeFiLlama, The Graph, Aave subgraph)
5. Go live → receive tasks → submit responses → earn TAO
6. Iterate model based on accuracy feedback in public dashboard

4. Validator Design

4.1 Scoring and Evaluation Methodology

Validators evaluate miners using a three-layer scoring pipeline:

Layer 1 — Format & Evidence Verification (Pass/Fail) Responses must conform to the Synapse output schema in Section 3.3 and include an on-chain evidence citation for every sub-factor; malformed or evidence-free responses are rejected outright and score zero for the task.

Layer 2 — Retrospective Accuracy Scoring (Primary Signal) Validators maintain a curated dataset of 500+ historical DeFi exploits from the De.Fi REKT Database with exact block timestamps. For each task, validators query: “What was the actual outcome for this protocol near this block?”

Scoring formula:

Accuracy Score = 1 - |miner_risk_score - actual_outcome|
(where actual_outcome = 1.0 if exploit occurred within 72h, 0.0 if protocol remained safe)

Confidence calibration uses the Brier Score:

Brier = (forecast_probability - actual_outcome)²
Lower Brier = better calibrated miner
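Both Layer 2 formulas can be computed directly; the worked numbers below are illustrative, using the risk score from the example response in Section 3.3:

```python
def accuracy_score(miner_risk: float, actual_outcome: float) -> float:
    """Accuracy Score = 1 - |miner_risk_score - actual_outcome|."""
    return 1.0 - abs(miner_risk - actual_outcome)

def brier_score(forecast: float, actual_outcome: float) -> float:
    """Brier = (forecast_probability - actual_outcome)^2; lower is better."""
    return (forecast - actual_outcome) ** 2

# Protocol exploited within 72h (outcome = 1.0); miner had flagged risk 0.73.
acc = accuracy_score(0.73, 1.0)  # 1 - |0.73 - 1.0| = 0.73
br = brier_score(0.73, 1.0)      # (0.73 - 1.0)^2 = 0.0729
```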

Layer 3 — Cross-Miner Consensus Check Responses for the same task are compared across miners; statistical outliers whose scores cannot be supported by their evidence citations are flagged and down-weighted, limiting both lazy copying and deliberate manipulation.

4.2 Evaluation Cadence

| Event | Timing |
|---|---|
| Task assignment | Every epoch (~72 min at tempo ~360 blocks) |
| Miner deadline | 60 seconds per task |
| Validation computation | Post-deadline, within same epoch |
| Weight submission to chain | End of each epoch |
| Leaderboard update | Real-time via public dashboard API |
| Validator scenario refresh | Weekly (new protocols added, stale ones rotated out) |

4.3 Validator Incentive Alignment


5. Business Logic & Market Rationale

5.1 The Problem

DeFi lost $2.7 billion to exploits in 2025, with broader crypto losses exceeding $8.8 billion year-to-date by October 2025. The root cause is not a lack of security tools — it is the lack of real-time, continuous, multi-protocol risk monitoring that any protocol or trader can access programmatically.

Existing solutions fall into two inadequate buckets: periodic, manually curated ratings (DeFi Safety, DeFiYields) that go stale between updates, and one-off audits or closed enterprise simulations (Trail of Bits, Gauntlet) that are expensive and not continuously available.

No solution provides what DRON does: a real-time, AI-powered, decentralized risk oracle with verifiable outputs and economic incentives for accuracy.

5.2 Competing Solutions

| Solution | Type | Limitation | DRON Advantage |
|---|---|---|---|
| DeFi Safety | Manual scoring | Updated monthly, single auditor bias | Real-time, AI-driven, multi-factor |
| DeFiYields Risk Ratings | Automated but simple | Rule-based A–F ratings, no ML | Probabilistic scores with confidence |
| Chainlink / API3 | Price oracles | Price data only, no risk intelligence | Risk intelligence layer above price |
| Trail of Bits / OpenZeppelin | Smart contract audit | One-time, expensive ($50k+), not continuous | Continuous automated analysis |
| Gauntlet / Chaos Labs | Simulation-based | Closed-source, enterprise-only, expensive | Open, permissionless, affordable API |
| Existing BT Subnets (SN8, SN15) | Price/trading signals | Price prediction, not risk assessment | Different layer — complementary |

5.3 Why Bittensor Is the Right Fit

  1. Decentralized intelligence beats centralized: Risk models improve through competition. No single team has a monopoly on DeFi expertise. TAO incentives attract global quant researchers and DeFi engineers.
  2. Verifiable outputs: Bittensor’s consensus layer provides cryptographic assurance that DRON scores represent genuine network consensus, not a single actor’s opinion.
