<aside> 💛

Welcome, hackers! This is your single source of truth for everything Weights & Biases at the Mistral Worldwide Hackathon — February 28 – March 1, 2026. Whether you're fine-tuning Mistral models, building agents, or pushing on-device AI, W&B has you covered. Jump to Quickstart Guide to see how best to get started.

</aside>


🏆 W&B Fine-Tuning Track

Push the Limits — Fine-Tune Mistral Models to Master Any Task

W&B is the Global Track Sponsor and we're hosting the Fine-Tuning Track. This track is for technically strong builders who want to fine-tune Ministral, Mistral Small, Mistral Medium, Codestral, or other Mistral models on a specific task of their choosing.

You can fine-tune however you like.

Judging Criteria — Fine-Tuning Track

| Criteria | Description | Weight |
| --- | --- | --- |
| Technical Quality | Task-fit: does it make sense to fine-tune (show superior performance to prompt engineering alone)? Quality and complexity: how good is the fine-tuning? Includes workflow completeness and complexity (data preparation, model selection, evals, optimization), benchmark maturity, optimization improvements, and viability of the final model for the task. | ⭐⭐⭐⭐ |
| E2E Points (Models + Weave) | Extra points if you use W&B Models and Weave together — e.g. fine-tune a model and then trace & evaluate it as part of an agent pipeline. | ⭐⭐ |
| Experiment Tracking and Artifacts Logging (W&B Models) | Loss plots and key metrics are tracked in W&B Models. We want to see your training runs! We also expect you to save your model as an Artifact and log it to Hugging Face for us to validate (e.g. the LoRA adapters). | ⭐ |
| Tracing and Evaluation (W&B Weave) | You'll likely integrate the model into an agent that needs to be evaluated (and maybe traced). Show us the traces and best evaluation in W&B Weave! | ⭐ |
| W&B Report | A W&B Report summarizing your key findings, training curves, and results. Tip: use the W&B MCP to help generate your report! 📊 Plots will be shipped as an MCP tool update soon — factor this into the report criteria. | ⭐⭐ |
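To make the tracking and Artifact criteria concrete, here is a minimal sketch using the `wandb` SDK. The project name, model id, and adapter path are placeholders — swap in your own. The import is guarded so the sketch reads even without the package installed.

```python
# Minimal sketch: log training loss to W&B Models and save LoRA adapters
# as an Artifact. All names below are placeholders, not requirements.
try:
    import wandb  # pip install wandb
except ImportError:  # keep the sketch importable without the package
    wandb = None

config = {
    "base_model": "mistralai/Ministral-8B-Instruct-2410",  # placeholder id
    "lora_rank": 16,
    "learning_rate": 2e-4,
    "epochs": 3,
}

def log_finetune_run(losses, adapter_dir="adapters/"):
    """Log per-step training loss, then save the adapter dir as an Artifact."""
    if wandb is None:
        return None
    run = wandb.init(project="mistral-hackathon-finetune", config=config)
    for step, loss in enumerate(losses):
        run.log({"train/loss": loss}, step=step)
    artifact = wandb.Artifact("lora-adapters", type="model")
    artifact.add_dir(adapter_dir)       # e.g. your saved LoRA adapters
    run.log_artifact(artifact)          # versioned, visible to judges
    run.finish()
    return run
```

Logging the adapters as an Artifact (in addition to pushing to Hugging Face) is what lets judges trace the exact model version back to the run that produced it.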


🤖 Supporting the Agents (Mistral) & On-Device (NVIDIA) Tracks

W&B also supports the other two tracks at the hackathon:

Mistral Agents Track

Build agents powered by Mistral models. W&B Weave is your best friend here — trace every LLM call, tool use, and agent decision. Our new audio evals and monitors are especially relevant if you're building with Voxtral (Mistral's voice model). Use Weave to evaluate and monitor your agent's audio interactions in production.
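As a rough sketch of what tracing looks like: decorate each agent step with `weave.op` and nested calls show up as child spans. The project name and the toy routing logic below are placeholders, and the import is guarded so the functions still run without the package.

```python
# Sketch: trace agent steps with W&B Weave ops. Placeholder routing logic
# stands in for real Mistral LLM and tool calls.
try:
    import weave  # pip install weave
except ImportError:
    weave = None

def traced(fn):
    """Apply weave.op when available, otherwise leave the function as-is."""
    return weave.op()(fn) if weave else fn

@traced
def pick_tool(query: str) -> str:
    # toy routing standing in for a real Mistral tool-selection call
    return "search" if "?" in query else "summarize"

@traced
def run_agent(query: str) -> dict:
    # each nested op call appears as a child span in the Weave trace
    return {"query": query, "tool": pick_tool(query)}

# Before a real run, start tracing with:
#   weave.init("mistral-agents-track")   # placeholder project name
```

Once `weave.init` is called, every `run_agent` invocation is captured with inputs, outputs, and latency — no extra logging code needed.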

NVIDIA On-Device Track

Deploy Mistral models on-device. Use W&B Models to track your quantization experiments and optimizations, and W&B Weave to trace and evaluate on-device inference quality.
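One pattern that works well here: run each quantization setting as its own W&B run so the workspace compares them side by side. The project name and the toy perplexity numbers below are placeholders — plug in your real on-device eval harness.

```python
# Sketch: track quantization experiments as separate W&B runs.
# Metric values below are made up for illustration only.
try:
    import wandb  # pip install wandb
except ImportError:
    wandb = None

def measure_quality(bits: int) -> float:
    # stand-in metric; replace with your real on-device evaluation
    toy_perplexity = {16: 4.9, 8: 5.1, 4: 5.6}
    return toy_perplexity.get(bits, float("nan"))

def log_quant_experiment(bits: int) -> dict:
    """One run per quantization setting, keyed by bit width in the config."""
    result = {"bits": bits, "perplexity": measure_quality(bits)}
    if wandb is not None:
        run = wandb.init(project="mistral-on-device", config={"bits": bits})
        run.log(result)
        run.finish()
    return result
```

Keeping `bits` in the run config (not just the metrics) lets you group and filter runs by quantization level in the W&B UI.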

For both tracks: Log your experiments in W&B and use Weave for tracing — this will help you stand out in judging!