<aside>
This document describes the cognitive architecture underlying Cognis, a neuro-symbolic infrastructure designed for general-purpose intelligence across agentic and physical domains. The architecture is intentionally domain-agnostic: the same structural principles apply whether the system is orchestrating software agents, controlling a robotic arm, or coordinating a multi-step workflow.
</aside>
Cognis is infrastructure: the foundational layer on which harnesses, agents, and intelligent systems are built, and part of our Substrate. It does not wrap around a single model. It provides the substrate that determines how any model sees context, executes through tools, gets evaluated, and retains what it learns over time. These four functions (context, execution, control, and accumulation) are the infrastructure beneath model intelligence, not a wrapper above it.
In mid-2025, we began building toward context graphs as the substrate for this architecture. The core observation was that most agent frameworks treat orchestration as a routing problem: a query arrives, gets classified, and gets dispatched to a tool or model. That pattern handles single-turn tasks. It fails when a request crosses domains, depends on learned preferences, or requires adaptation mid-execution. Orchestration coordinates. It does not learn. It does not decompose.
Cognis operates below the level of the process. It works with primitives we call sparks, and with the learned connections between them. A spark is a unit of capability at any level of abstraction: it can be an atomic function, a composed workflow, or a full agent. The system evaluates spark combinations through lightweight critics and learns execution patterns over time through weighted connections we call K-lines. The result is a system that does not reason from scratch on every request.
The architecture draws from Marvin Minsky's Society of Mind, specifically the insight that intelligence is not a single process but an emergent property of many simple components interacting. We extend Minsky's framework in several ways. K-lines carry outcome-based weights rather than binary activations. Critics both excite and inhibit rather than only suppressing. The abstraction boundary between node and connection is intentionally fluid. A K-line that fires reliably as a unit is, to the layer above it, indistinguishable from a spark. An agent is a node with its own internal K-lines. The hierarchy is recursive.
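The fluid node/connection boundary can be pictured in a few lines of Python. This is our own illustrative sketch, not Cognis's actual API: a spark is modeled as any single-input, single-output callable, and a K-line that chains sparks satisfies the same interface, so the layer above cannot tell a reliable K-line from an atomic spark.

```python
from typing import Callable

# Illustrative definition: a spark is any single-input, single-output callable.
Spark = Callable[[object], object]

class KLine:
    """A learned chain of sparks with an outcome-based weight (hypothetical sketch)."""
    def __init__(self, sparks, weight=0.5):
        self.sparks = list(sparks)
        self.weight = weight  # strengthened or weakened by execution outcomes

    def __call__(self, x):
        # Firing the K-line runs its sparks in sequence.
        for s in self.sparks:
            x = s(x)
        return x

# Two atomic sparks.
double: Spark = lambda x: x * 2
increment: Spark = lambda x: x + 1

# A K-line composed of sparks is itself usable wherever a spark is expected:
double_then_increment = KLine([double, increment])
nested = KLine([double_then_increment, double])  # recursion: a K-line inside a K-line

print(nested(3))  # (3*2 + 1) * 2 = 14
```

Because `KLine.__call__` has the same shape as a spark, nesting is free: the hierarchy is recursive by construction rather than by special-casing.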
Consider a request: "I want spicy pasta and book a cab."
In a typical multi-agent setup, three orchestrations exist, each refined over time. Orchestration A has cooked pasta correctly 50 times. It knows User A likes it bland. Orchestration B has cooked chicken correctly 50 times. It knows User B likes it extra spicy. Orchestration C books cabs. It has done it 50 times.
The new user wants spicy pasta, a combination no single orchestration has ever produced (A cooks pasta, but bland; B knows spicy, but cooks chicken), and a cab booked alongside it.
No individual orchestration fails here. The problem is structural: agents follow processes, and processes are rigid. They cannot be decomposed mid-execution and recombined with the internals of another process. Agents connect queries to processes and processes to users. They coordinate. They do not decompose. They do not learn from the recombination.
Cognis operates below the process layer. It works with the primitives that constitute processes (sparks and K-lines) and recombines them dynamically based on context, goals, and accumulated execution history.
In building it, we are learning more than we initially thought possible about how intelligence can be organized, transferred, and expressed. A real example:
Sparks are the atomic units. Each spark does one thing: takes an input, produces an output. No internal state, no goals, no awareness of why it's being called. A spark that detects an object works the same whether it's being used to grasp a mug or identify a tool on a shelf. Sparks are context-free by design. They can also compose into structured processes — what Minsky called frames: stereotyped situations with default assumptions and fillable slots that the system adapts as context changes.
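The frame idea can be sketched as follows (all names here are invented for illustration): a stereotyped situation carries default slot values, and context fills or overrides slots without changing the structure.

```python
class Frame:
    """Minsky-style frame: fixed slots, default fillers, context overrides (illustrative)."""
    def __init__(self, name, defaults):
        self.name = name
        self.defaults = dict(defaults)

    def instantiate(self, **context):
        slots = dict(self.defaults)
        slots.update(context)  # context overrides the default assumptions
        return slots

# A hypothetical "cook pasta" frame with default assumptions.
cook_pasta = Frame("cook_pasta", {"spice_level": "mild", "portion": "single"})

print(cook_pasta.instantiate(spice_level="extra spicy"))
# {'spice_level': 'extra spicy', 'portion': 'single'}
```

The unfilled slot keeps its default, which is what makes the structure reusable across contexts.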
K-lines are the connections between sparks. They represent learned pathways — which sparks tend to fire together, in what order, and with what strength. Multiple K-lines compose into a process. Over time, K-lines strengthen or weaken based on outcomes. This is how the system learns without retraining any model.
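The post does not specify the update rule, but the outcome-driven strengthening could look like an exponential moving average (our own stand-in): each execution nudges the weight toward 1.0 on success and toward 0.0 on failure.

```python
def update_weight(weight, outcome, lr=0.2):
    """Nudge a K-line weight toward the observed outcome (illustrative rule).

    outcome: 1.0 for a successful execution, 0.0 for a failure.
    lr: how quickly recent outcomes dominate older ones.
    """
    return weight + lr * (outcome - weight)

w = 0.5  # a fresh K-line starts at a neutral weight
for result in [1.0, 1.0, 1.0, 0.0, 1.0]:  # mostly successful firings
    w = update_weight(w, result)
print(round(w, 3))  # 0.676 — strengthened, but dented by the one failure
```

The appeal of a rule in this family is that learning happens in the connection weights, with no gradient steps and no model retraining.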
Critics are lightweight evaluators. Each critic asks exactly one question: "Is this going well?" They don't plan. They don't command. They activate only when their specific conditions are met — a particular goal is active, a particular spark is running, a particular sensor reading crosses a threshold. Their output adjusts K-line weights in real time, exciting or inhibiting sparks mid-execution.
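A critic can be sketched as a guarded evaluator: when its trigger condition holds, it returns an excitation or inhibition that is applied to a K-line weight; otherwise it contributes nothing. The names and the sensor field below are invented for illustration.

```python
class Critic:
    """Fires only when its condition holds; its output nudges a K-line weight."""
    def __init__(self, condition, adjustment):
        self.condition = condition    # predicate over the current execution state
        self.adjustment = adjustment  # positive = excite, negative = inhibit

    def evaluate(self, state):
        return self.adjustment if self.condition(state) else 0.0

# Inhibit fast motion when a spill risk is detected (hypothetical sensor field).
spill_risk = Critic(lambda s: s.get("liquid_level", 0) > 0.8, adjustment=-0.3)

state = {"liquid_level": 0.95}
weight = 0.7
weight += spill_risk.evaluate(state)  # inhibition fires: weight drops mid-execution
print(round(weight, 2))  # 0.4
```

Because each critic is a single predicate plus a single adjustment, many of them can run cheaply alongside execution rather than as a planning step before it.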
Goals set context. A goal doesn't command actions — it biases which critics become relevant. When the goal is "reach lip safely," spill-risk critics wake up. When the goal is "move fast," they stay quiet. Goals shape the evaluation landscape without dictating the execution path.
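Goals, in this picture, act as a filter over which critics are consulted at all. A minimal sketch under the same invented names, using the two goals from the paragraph above:

```python
# Each critic registers the goals under which it is relevant (illustrative mapping).
CRITIC_GOALS = {
    "spill_risk": {"reach_lip_safely"},
    "time_budget": {"move_fast"},
}

def active_critics(goal):
    """A goal doesn't command actions; it selects which evaluators wake up."""
    return sorted(name for name, goals in CRITIC_GOALS.items() if goal in goals)

print(active_critics("reach_lip_safely"))  # ['spill_risk']
print(active_critics("move_fast"))         # ['time_budget']
```

Swapping the goal changes the evaluation landscape without touching the sparks or the K-lines themselves.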
Sparks are our favorite part of the architecture, but we'll discuss them in more detail another day.
Over time, intelligence emerges from the interaction between these layers. There is no central planner.
Robotics is a useful test case for this architecture because it operates in unstructured, physics-driven environments: wet floors, shifting loads, variable lighting. Pre-programmed scripts fail the moment reality deviates from expectations. The same structural problem appears in agent orchestration when user intent crosses domains or conditions change mid-task.