Decentralized Manufacturing Intelligence Subnet

Defektr is a Bittensor subnet that produces a single commodity: production-grade, edge-deployable AI models for manufacturing visual quality control.

Miners compete to build the best defect detection models, scored on accuracy, speed, and robustness. Validators maintain curated benchmark datasets and objectively evaluate miner submissions through Yuma Consensus. Factory customers purchase top-ranked models and deploy them locally on edge hardware (Jetson, Coral TPU, industrial PCs) at production-line speed.

The result is a decentralized R&D engine for industrial computer vision, where global competition between miners continuously produces improving models, and market demand from real factories steers development toward what industry actually needs.

Why it matters: Manufacturing quality control costs companies 15–20% of sales revenue. The AI visual inspection market is growing at 25% CAGR toward $40B+. Yet incumbent solutions (Cognex, Keyence) cost $10K–$100K+ per station, locking out mid-market manufacturers. Defektr crowdsources model development across the miner network, delivering equivalent capability at a fraction of the cost.


What's in this proposal:

  1. Incentive & Mechanism Design: Emission logic, composite scoring, two-signal reward system (benchmark + market), and anti-gaming mechanisms.
  2. Miner Design: Task definition, input/output protocol, performance dimensions, and competitive strategies available to miners.
  3. Validator Design: Benchmark dataset management, evaluation loop, speed verification, and scoring methodology.
  4. Business Logic & Market Rationale: Problem sizing, competitive landscape (inside and outside Bittensor), monetization model, and path to sustainability.
  5. Go-To-Market Strategy: Early adopter targeting, bootstrapping incentives, and distribution channels.
  6. Extension: Federated Learning Layer: Phase 3+ vision where factories contribute private production data via federated learning, creating a cross-factory data flywheel that no single company can replicate.

Incentive & Mechanism Design

In Defektr, the benchmark is the subnet’s ground truth. What the subnet optimizes is not abstract “model quality,” but measurable performance against curated, rotating defect datasets that reflect real industrial conditions. Benchmark design, dataset composition, and evaluation methodology therefore define what intelligence means within the subnet and are treated as first-class protocol components.

Emission and reward logic

Defektr receives TAO emissions from the Bittensor network via Dynamic TAO. Stakers vote with their TAO by staking into the subnet's alpha token pool; more inflow means higher emissions. Once emissions arrive, Yuma Consensus distributes them: approximately 41% to miners, 41% to validators (via bonds), and 18% to the subnet owner.
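The split above can be sketched in a few lines. This is purely illustrative: the 41/41/18 percentages come from this proposal, while the function name and structure are hypothetical (the actual split is performed on-chain by Yuma Consensus, not by subnet code).

```python
# Share constants from the proposal; everything else is illustrative.
MINER_SHARE, VALIDATOR_SHARE, OWNER_SHARE = 0.41, 0.41, 0.18

def split_emission(epoch_tao: float) -> dict[str, float]:
    """Divide one epoch's TAO emission among the three participant classes."""
    return {
        "miners": epoch_tao * MINER_SHARE,
        "validators": epoch_tao * VALIDATOR_SHARE,
        "owner": epoch_tao * OWNER_SHARE,
    }
```

For a 100 TAO epoch this yields roughly 41 TAO to miners, 41 TAO to validators, and 18 TAO to the subnet owner.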

Miner Reward Calculation: The subnet uses a winner-takes-all reward structure: emissions are concentrated toward the top-performing miner(s) rather than distributed proportionally across all participants. This reflects the real market dynamic: factories want the single best model for their use case, not a distribution of mediocre ones. The exact reward curve (strict winner-takes-all vs. steep top-n distribution) will be calibrated during testing based on observed miner behavior.
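The two candidate reward curves can be contrasted with a minimal sketch. The function names and the geometric decay parameter are assumptions for illustration, not part of the protocol; the proposal only commits to calibrating between these two shapes during testing.

```python
def winner_takes_all(scores: list[float]) -> list[float]:
    """Strict variant: all emission weight goes to the single top-scoring miner."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return [1.0 if i == best else 0.0 for i in range(len(scores))]

def steep_top_n(scores: list[float], n: int = 3, decay: float = 0.5) -> list[float]:
    """Softer variant: geometric decay over the top-n ranks, zero for the rest."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:n]
    raw = {idx: decay ** rank for rank, idx in enumerate(ranked)}
    total = sum(raw.values())
    return [raw.get(i, 0.0) / total for i in range(len(scores))]
```

With `decay = 0.5` and `n = 3`, the second-ranked miner earns half the weight of the first, which preserves the "chase the top" incentive while leaving runners-up enough emission to keep competing.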

Each evaluation epoch, the validator scores each miner using the composite formula: Score = 0.50 × accuracy + 0.30 × speed + 0.20 × robustness. Scores are normalized into a weight vector and submitted on-chain via set_weights(). Yuma Consensus distributes emissions according to these weights.
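The scoring step can be sketched as follows. The weights come straight from the formula above; the sum-to-one normalization is an assumption about how the weight vector is prepared before being passed to set_weights(), and the function names are illustrative.

```python
def composite_score(accuracy: float, speed: float, robustness: float) -> float:
    """Composite formula from the proposal: 0.50*acc + 0.30*speed + 0.20*robust.
    All three inputs are assumed to be pre-normalized to [0, 1]."""
    return 0.50 * accuracy + 0.30 * speed + 0.20 * robustness

def to_weight_vector(scores: list[float]) -> list[float]:
    """Normalize per-miner scores into a weight vector summing to 1
    (assumed convention for the on-chain set_weights() submission)."""
    total = sum(scores)
    return [s / total for s in scores] if total > 0 else [0.0] * len(scores)
```

A miner with accuracy 0.9, speed 0.8, and robustness 0.7 would score 0.45 + 0.24 + 0.14 = 0.83 before normalization against the rest of the cohort.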

Validator Reward Calculation: Defektr launches with a centralized validator operated by the subnet team. This is a deliberate choice and the approach recommended by established subnets like Chutes. The evaluation process is fully transparent and auditable: benchmark samples are selected using deterministic public randomness, all scores and weights are published on-chain, and anyone can independently verify that scoring is consistent and fair. What matters is not who runs the validator, but that the evaluation process is open to scrutiny.
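The auditability claim rests on deterministic public randomness for benchmark sample selection. A minimal sketch of that property, assuming a recent block hash serves as the public seed (the seed source, function name, and sample size are hypothetical):

```python
import hashlib
import random

def select_benchmark_samples(block_hash: str, dataset_size: int, k: int) -> list[int]:
    """Pick k benchmark sample indices deterministically from a public seed.
    Anyone with the same block hash and dataset index can reproduce the
    exact subset, so the validator's selection is independently verifiable."""
    seed = int(hashlib.sha256(block_hash.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return sorted(rng.sample(range(dataset_size), k))
```

Because the seed is derived from data the validator cannot control or predict far in advance, this also limits the validator's ability to cherry-pick samples that favor a particular miner.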