Sponsored by https://www.spatialreal.ai/

Step outside the conventional text box. We invite you to reimagine how humans naturally communicate with intelligence in the real world.


SpatialReal <> BETA Hackathon Guidance(2).pdf

https://bodhiagent.live/

This track challenges you to build multi-modal digital entities that see, hear, and react with human-level fluidity. By combining real-time conversational infrastructure with dynamic spatial rendering, we want you to uncover native use cases that solve real human problems. We are looking for applications where a high-fidelity visual presence and a full-duplex, non-blocking voice interface are fundamentally required to get the job done.

Judges will look for:

Native Application & Real-World Impact: Originality of the use case. Does this solve a genuine problem? We are looking for out-of-the-box thinking where the combination of a face, a voice, and spatial awareness creates a solution that a standard text interface never could.

Interaction Fluidity: Real-world execution of a continuous voice and visual loop. We will evaluate system stability and intelligence in handling natural human pacing, spontaneous user interruptions, and ultra-low latency across both the spoken responses and the real-time rendering of the entity's expressions and body language.
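To make the interruption-handling criterion concrete, here is a minimal sketch of barge-in logic in a full-duplex loop: while the entity is speaking, incoming user speech immediately cancels the unspoken remainder of the utterance instead of blocking until the response finishes. All names here are illustrative, not part of any SpatialReal SDK.

```python
from dataclasses import dataclass, field

@dataclass
class DuplexVoiceLoop:
    """Toy model of a full-duplex voice loop that supports barge-in."""
    speaking: bool = False
    spoken: list = field(default_factory=list)   # audio chunks already played
    _pending: list = field(default_factory=list) # chunks queued but not yet played

    def start_response(self, chunks):
        """Queue an agent utterance as a sequence of audio chunks."""
        self.speaking = True
        self._pending = list(chunks)

    def tick(self, user_speech_detected: bool) -> str:
        """Advance one frame of the loop.

        If the user starts talking mid-utterance, drop the rest of the
        queued audio right away (barge-in) rather than finishing the turn.
        """
        if user_speech_detected and self.speaking:
            self._pending.clear()  # cancel unspoken audio immediately
            self.speaking = False
            return "interrupted"
        if self.speaking and self._pending:
            self.spoken.append(self._pending.pop(0))
            if not self._pending:
                self.speaking = False
            return "speaking"
        return "listening"
```

A judge-facing demo would drive `tick` from a real voice-activity detector; the point of the sketch is that cancellation happens per frame, not per turn.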

Architectural Depth: How intelligently the system bridges the real-time audio/visual layer with external actions. We want to see quality engineering that translates live spoken intent and visual context into immediate, observable systemic changes (e.g., Speech-to-Action).
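As a rough illustration of the Speech-to-Action idea, the sketch below maps a live transcript to an immediate, observable system change. The handler names and regex-based intent matching are assumptions for brevity; a production system would more likely route spoken intent through an LLM function-calling layer.

```python
import re

# Hypothetical action handlers -- illustrative only, not a real device API.
def dim_lights(level: int) -> str:
    return f"lights set to {level}%"

def set_timer(minutes: int) -> str:
    return f"timer started for {minutes} min"

# Simple transcript patterns mapped to callables.
INTENTS = [
    (re.compile(r"dim the lights to (\d+)"),
     lambda m: dim_lights(int(m.group(1)))),
    (re.compile(r"set a timer for (\d+) minutes?"),
     lambda m: set_timer(int(m.group(1)))),
]

def speech_to_action(transcript: str) -> str:
    """Translate a spoken utterance into a systemic change, if one matches."""
    for pattern, handler in INTENTS:
        match = pattern.search(transcript.lower())
        if match:
            return handler(match)
    return "no action matched"
```

For example, `speech_to_action("Please dim the lights to 40")` returns `"lights set to 40%"`, turning live spoken intent directly into an observable effect.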