**By Rafael Kaufmann and the Digital Gaia Team**

This whitepaper describes the architecture for **Gaia,** a cyberphysical hybrid AI-human system for planetary-scale decision support and automation. Gaia is composed of:

- Fangorn, a decentralized network of AI agents (“Ents”);
- GaiaHub, a community of human contributors and users.

Gaia’s architecture simultaneously meets several design goals that today’s popular large deep learning AI systems fail to achieve, such as reliability, safety, alignment, explainability/interpretability, robustness/antifragility, and context-awareness. This is thanks to an adaptive, bottom-up, minimalist design stance, and an explicit emphasis on whole-system (sociotechnical) design that enforces model-alignment and control constraints. The key tradeoff is one of intention:

- While deep learning models are black boxes fitted “mindlessly” towards a loss function, Gaia is the product of mindful human effort to model the world in terms of semantics, probabilities, and causal structure; the technology helps humans discover, encode, validate, compose, deploy and evolve these models at greater scale and speed, while preserving an intrinsic “glass-box” constraint that allows for root cause analysis, failsafes, rollback and patching (Leventov 2023).
- While deep learning training protocols assume a fixed, explicit, top-level goal as the loss function, and hence offload instrumental subgoal discovery to the same black box, Gaia’s modular and context-bound architecture allows it to discover global goal states adaptively while remaining bound to explicit instrumental goals and satisficing constraints, bypassing AI alignment risk as usually conceived (e.g., Yudkowsky 2022).

{Diagram}

An Ent implements an online compositional active inference loop, wired to a universe of sensors/data sources (including other Ents), actuators/resources (again, including other Ents), and “skills” (generative model components sourced from GaiaHub contributions, assumed to encode causal/scientific theories as well as instrumental theories). Active inference is performed over a “model of models” that simultaneously parameterizes model space and state space: the Ent updates its belief assignments over which skills are relevant and how they are connected into a “world model” (dynamic Bayesian model composition), and infers belief assignments over each skill’s state space to best explain observations and guide actions. An explicit, versioned ontology/schema serves as shared semantic scaffolding for state space and constrains model-level structure discovery, while leaving room for further higher-level concept discovery.
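To make the “model of models” idea concrete, the following stdlib-only Python sketch shows its simplest form: Bayesian model averaging over two candidate skills. The skill names, parameters, and observations are hypothetical stand-ins for richer causal model components; a real Ent would also infer how skills connect, not just which ones apply.

```python
import math

# Two hypothetical "skills": Gaussian observation models with fixed
# parameters, standing in for richer causal/scientific components.
SKILLS = {
    "steady_growth": {"mean": 2.0, "sd": 1.0},
    "drought_stress": {"mean": -1.0, "sd": 1.0},
}

def log_likelihood(skill, observations):
    """Log p(observations | skill) for a fixed-parameter Gaussian."""
    mean, sd = skill["mean"], skill["sd"]
    return sum(
        -0.5 * math.log(2 * math.pi * sd ** 2) - (y - mean) ** 2 / (2 * sd ** 2)
        for y in observations
    )

def update_skill_beliefs(prior, observations):
    """Dynamic Bayesian model composition in miniature: apply Bayes'
    rule over model space to re-weight which skill is in play."""
    log_post = {
        name: math.log(prior[name]) + log_likelihood(SKILLS[name], observations)
        for name in SKILLS
    }
    shift = max(log_post.values())  # for numerical stability
    unnorm = {k: math.exp(v - shift) for k, v in log_post.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

posterior = update_skill_beliefs(
    {"steady_growth": 0.5, "drought_stress": 0.5}, [1.8, 2.4, 1.9]
)
```

With observations clustered near 2, essentially all posterior mass shifts onto the “steady_growth” skill; the same update would run continuously as new observations arrive in the inference loop.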

This inference loop is enabled by a probabilistic programming engine based on NumPyro (itself based on JAX). A skill is an arbitrary (differentiable) function containing probabilistic primitives that specify “beliefs” as random variables and optionally variational parameters. A skill’s compute graph thus explicitly determines a posterior in terms of state (including preference) priors and structural priors; the engine performs the heavy lifting required for inference (choosing among exact inference, Markov chain Monte Carlo, and stochastic variational inference), prediction, and policy selection (conceptualized in terms of an expected free energy functional over policies, which allows for arbitrary decision architectures to be declared). In this style, skills can be arbitrarily nested, recursively invoked, hyper-parameterized, abstracted into generic libraries, etc. These primitives can represent observables, latent variables or coefficients in an explicitly semantic computational statistics style, or weights in a (Bayesian) neural network — there is no hard boundary. The engine serves as the execution context in a general sense: it performs the wiring that connects agent and environment (enforcing a common set of dynamical and conceptual frames), imports skills, and performs inference over the “model of models”, as described above.
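The skills-as-probabilistic-programs style can be sketched with a toy stdlib-only tracer (the real engine builds on NumPyro/JAX; all names here are hypothetical). A `sample` primitive records beliefs into a trace, and skills are ordinary functions that nest and compose:

```python
import random

_TRACE = {}  # name -> sampled value, populated during a run

def sample(name, mean, sd, rng):
    """Probabilistic primitive: draw a Gaussian belief and record it.
    A real engine would instead dispatch this site to exact inference,
    MCMC, or SVI rather than forward sampling."""
    value = rng.gauss(mean, sd)
    _TRACE[name] = value
    return value

def soil_moisture_skill(rng):
    """Leaf skill: a single latent state belief."""
    return sample("soil_moisture", 0.3, 0.05, rng)

def crop_yield_skill(rng):
    """Composite skill: invokes another skill, illustrating that skills
    are arbitrary functions that nest and recursively compose."""
    moisture = soil_moisture_skill(rng)
    return sample("crop_yield", 10.0 * moisture, 0.5, rng)

def run(skill, seed=0):
    """Minimal execution context: clears the trace, runs the skill,
    and returns the recorded beliefs."""
    _TRACE.clear()
    rng = random.Random(seed)
    skill(rng)
    return dict(_TRACE)

trace = run(crop_yield_skill)
```

Because the trace is just named random variables produced by running the skill’s compute graph, the same function can be handed to different inference backends — the sense in which the engine, not the skill author, resolves how inference is performed.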

After each inference loop, the engine persists a content-addressed representation of the computation — i.e., the model, execution context, and output set of belief updates — on an append-only database with field-level cryptographic signatures. This allows inference verification protocols to be built (whether through sampling methods such as randomized audits, or formal methods such as zero-knowledge proofs when feasible), allowing Ents’ “claims” — i.e., predictions and decisions — to be interpreted and reproduced by both humans and other agents.
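A minimal sketch of the content-addressing idea, using only the Python standard library (an HMAC stands in for the field-level signatures, and all names and values are hypothetical): each record is serialized canonically, hashed to obtain its address, signed, and appended; verification simply recomputes both.

```python
import hashlib
import hmac
import json

LEDGER = []  # append-only list in this sketch; a real deployment uses a database

def content_address(record):
    """Hash of the canonical JSON serialization, so identical
    computations always map to the same address."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def persist(model_id, context, belief_updates, signing_key):
    """Persist one inference-loop result with a signature over its address."""
    record = {"model": model_id, "context": context, "beliefs": belief_updates}
    address = content_address(record)
    signature = hmac.new(signing_key, address.encode(), "sha256").hexdigest()
    LEDGER.append({"address": address, "record": record, "sig": signature})
    return address

def verify(entry, signing_key):
    """Reproduce address and signature; any tampering is detected."""
    address_ok = content_address(entry["record"]) == entry["address"]
    expected = hmac.new(signing_key, entry["address"].encode(), "sha256").hexdigest()
    return address_ok and hmac.compare_digest(expected, entry["sig"])

key = b"ent-42-signing-key"
addr = persist("watershed-v3", {"ontology": "v1.2"}, {"flow_rate": 0.87}, key)
```

Content-addressing gives reproducibility for free: replaying the same model in the same context must yield the same address, so a human or another agent can check an Ent’s claim without trusting the Ent itself.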

The same verifiability guarantee allows Ents to reuse each other’s beliefs in a zero-trust, private and robust manner. This is the foundation for a federated inference scheme, where the entire network jointly estimates a hierarchical model posterior through partial pooling (Markovic, Champion and Kaufmann 2023, forthcoming), forming a substrate both for decentralized scientific analysis and for spatially- or domain-explicit, bottom-up modeling of phenomena at higher scales (a style related to Bayesian agent-based models).
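The partial-pooling scheme can be illustrated with exact conjugate Normal–Normal updates in stdlib Python (all numbers and names are illustrative, and the forthcoming scheme is richer than this sketch): each Ent shares only summary statistics, and the pooled estimate shrinks each local mean toward the network-level mean in proportion to its uncertainty.

```python
def partial_pool(local_means, local_vars, global_var):
    """Normal-Normal partial pooling: shrink each Ent's local posterior
    mean toward the grand mean, weighting by relative precision. Only
    summary statistics cross the network, never raw observations."""
    grand_mean = sum(local_means) / len(local_means)
    pooled = []
    for m, v in zip(local_means, local_vars):
        # Shrinkage factor: how much to trust the shared network-level
        # estimate relative to this Ent's local evidence.
        b = v / (v + global_var)
        pooled.append(b * grand_mean + (1 - b) * m)
    return pooled

# Three Ents report local posterior means and variances for the same
# shared parameter (say, a regional evapotranspiration rate).
estimates = partial_pool([0.9, 1.1, 2.0], [0.04, 0.04, 0.36], 0.09)
```

The third Ent, being the noisiest, is pulled hardest toward the consensus — the hierarchical-model behavior that makes the joint posterior robust to any single low-quality node.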

An Ent’s estimates of data accuracy parameters and model weights can be interpreted to calculate contribution scores, for instance through Shapley values (Molnar 2023). This supports positive-sum knowledge value chains in which Ents act as claim factories and GaiaHub’s model contributors and data providers act as material suppliers. Notably, these value chains carry built-in incentives for quality control.
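For small coalitions, Shapley values can be computed exactly by enumerating orderings; the sketch below does so with stdlib Python over a toy pair of data providers, where the value function is a made-up stand-in for an Ent’s estimated model accuracy on each subset of inputs.

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings in which the coalition can assemble."""
    scores = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            scores[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: s / len(orderings) for p, s in scores.items()}

# Hypothetical value function: predictive accuracy an Ent attains when
# given each subset of data providers (empty set = prior-only baseline).
ACCURACY = {
    frozenset(): 0.50,
    frozenset({"soil"}): 0.60,
    frozenset({"weather"}): 0.70,
    frozenset({"soil", "weather"}): 0.85,
}

scores = shapley(["soil", "weather"], lambda c: ACCURACY[frozenset(c)])
```

By the efficiency property, the scores sum exactly to the accuracy gain over the baseline (0.85 − 0.50), so the full value of a claim is attributed back to its suppliers — the accounting needed for the positive-sum value chain.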

We have implemented a first generation of the above architecture where Ents act as “assistants” or “copilots”, providing decision support guidance to humans and traditional organizations while learning from each other in an accelerated scientific discovery cycle. As they evolve, we may see Ents become economic actors in their own right, autonomously managing resources and organizations. In this latter, fully agentic incarnation, Gaia may play a crucial role in a global transition towards intentional, positive-sum “Gaianomics” (Kaufmann 2020).

This whitepaper is a living document, intended to provide an overview of the current state of our technical design and our key theses about how this technology might address the myriad challenges involved. As such, we assume the reader is at least broadly familiar with computer science, in particular with statistical modeling/machine learning methods and with distributed computing.

- We first introduce the motivation for Gaia, referring both to the primary application — endowing ecosystems with agency — and to the design stance of alignment-by-design.
- The majority of the paper describes the technical architecture of Gaia, focusing mainly on the inner workings of its nodes, called Ents, and particularly on aspects that diverge from standard agent modeling techniques and/or leverage techniques and frameworks that may be unfamiliar to most readers.
- We also provide two appendices: the first walks through our choice of technical frameworks (Bayesian probability and active inference), for readers who may be less familiar with them. The second briefly motivates our design stance and provides key references.