Thesis: Human intelligence is a layered system in which each level constrains (and is constrained by) the others.
Layer 1 — Statistics (priors & uncertainty)
- Cognition relies on structured priors and learned regularities: what tends to co-occur, what follows what, what’s typical or surprising.
- Priors aren’t just frequencies; they can encode structural assumptions (e.g., object permanence, causal locality).
- Evidence updates priors (Bayes/predictive processing); habits and policies are learned action strategies, distinct from beliefs, though we can maintain uncertainty over which policy to use (a minimal update sketch follows the example below).
Example: You expect dropped objects to fall because your priors encode everyday physics (gravity, friction), not merely patterns of words.
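To make the update concrete, here is a minimal sketch in Python. The hypothesis space, priors, likelihoods, and policy weights are all invented for illustration; the point is only that evidence reweights structured priors, while the choice of policy is a separate question held under its own uncertainty.

```python
# Illustrative Bayes update over a tiny, made-up hypothesis space.
# Hypotheses about a released object; the priors stand in for everyday physics,
# not word statistics. All numbers are invented for illustration.

priors = {"falls": 0.98, "floats": 0.01, "flies_up": 0.01}

# Likelihood of observing "the object moved downward" under each hypothesis.
likelihood = {"falls": 0.99, "floats": 0.10, "flies_up": 0.01}

def bayes_update(priors, likelihood):
    """Posterior P(h | e), proportional to P(e | h) * P(h)."""
    unnormalized = {h: priors[h] * likelihood[h] for h in priors}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

posterior = bayes_update(priors, likelihood)
print(posterior)  # "falls" dominates even more strongly after the evidence

# Policies are distinct from beliefs: a separate distribution over which
# action strategy to deploy, held alongside the belief state.
policy_weights = {"reach_to_catch": 0.7, "step_back": 0.3}
```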
Layer 2 — Structure (frames, invariances, constraints)
- “Structure” covers the representational geometry (how concepts are arranged so that meaning survives transformation) and the invariances/symmetries that a frame imposes. By a frame, I mean the representational choices (state, variables, relations, and allowed transformations) that make some distinctions audible and others silent.
- Each domain has allowed transformations and forbidden ones (constraints).
- Physics: Approximate invariance under translation and rotation (scale only within narrow limits); forbidden: energy non-conservation or faster-than-light signaling (a toy invariance check is sketched after this list).
- Law: Paraphrases that preserve legal force are allowed; forbidden: edits that change intent or liability (e.g., shall → may; removing indemnification; swapping representations for warranties).
- Art: Perspective shifts can preserve a work’s identity in surrealism; realism imposes tighter constraints.
This is the grammar of worlds—what can vary and what must stay fixed.
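As a toy illustration of allowed versus forbidden transformations, the sketch below defines an invented pairwise “energy” over a point configuration and checks, property-test style, that translation and rotation leave it unchanged while rescaling does not. Everything here (the function, the points, the tolerances) is assumed for illustration, not a physics engine.

```python
import math
import random

def pairwise_energy(points):
    """Toy interaction energy (invented): sum of inverse pairwise distances."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            total += 1.0 / math.hypot(x1 - x2, y1 - y2)
    return total

def translate(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

def rotate(points, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]
e0 = pairwise_energy(pts)

# Allowed transformations: the quantity is preserved (up to float error).
assert math.isclose(e0, pairwise_energy(translate(pts, 3.0, -2.0)), rel_tol=1e-9)
assert math.isclose(e0, pairwise_energy(rotate(pts, 1.234)), rel_tol=1e-9)

# A transformation this frame does not allow: rescaling changes the quantity.
assert not math.isclose(e0, pairwise_energy([(2 * x, 2 * y) for x, y in pts]), rel_tol=1e-6)
```

The same pattern would carry over to the legal and artistic frames above with different checkers (e.g., flagging a shall → may swap), but the structure is identical: name the allowed transformations, then test that the invariant actually holds.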
Layer 3 — Inference (logic, planning, explanation)
- Inference operates within representational structure and also pressures it to be checkable and compositional (types, ontologies, proofs).
- Valid reasoning steps must respect the frame’s constraints; in turn, reasoning should yield verifiable artifacts: property tests, type checks, proofs, executable traces.
Example: Orbital reasoning works because dynamics and conservation laws bound the allowable inferences.
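A minimal sketch of “verifiable artifacts,” using a drag-free projectile rather than a full orbit (the function names, parameters, and numbers are invented for illustration): the proposed trajectory is an executable trace, and conservation of mechanical energy is the constraint the checker enforces.

```python
import math

G = 9.81  # m/s^2, standard gravitational acceleration near Earth's surface

def projectile_trace(v0, angle_deg, steps=50, dt=0.02):
    """Analytic trace of a drag-free projectile: (height, speed_squared) samples."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy0 = v0 * math.sin(math.radians(angle_deg))
    trace = []
    for k in range(steps):
        t = k * dt
        y = vy0 * t - 0.5 * G * t * t        # height at time t
        vy = vy0 - G * t                     # vertical speed at time t
        trace.append((y, vx * vx + vy * vy)) # (height, speed^2)
    return trace

def energy_conserved(trace, mass=1.0, tol=1e-9):
    """Verifiable artifact: total mechanical energy must stay constant along the trace."""
    energies = [0.5 * mass * v2 + mass * G * y for y, v2 in trace]
    return max(energies) - min(energies) < tol

trace = projectile_trace(v0=20.0, angle_deg=45.0)
assert energy_conserved(trace), "proposed trajectory violates the frame's conservation constraint"
```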
Layer 4 — Objectives (values, costs, interfaces)
- Real-world intelligence is goal-directed: preferences, risk/cost trade-offs, and interfaces for feedback shape what “good” reasoning is.
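A minimal sketch of how objectives reshape what counts as “good” reasoning, with invented actions, probabilities, and payoffs: under a risk-neutral objective one plan wins; add a variance penalty and the same beliefs recommend the other.

```python
# Invented actions with outcome distributions: lists of (probability, payoff) pairs.
actions = {
    "cautious_plan": [(0.9, 5.0), (0.1, 2.0)],
    "aggressive_plan": [(0.6, 15.0), (0.4, -8.0)],
}

def score(outcomes, risk_aversion):
    """Risk-adjusted objective: expected payoff minus a penalty on outcome variance."""
    mean = sum(p * x for p, x in outcomes)
    variance = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean - risk_aversion * variance

# The same beliefs (outcome probabilities) support different choices
# once the objective changes: risk-neutral vs. risk-averse.
for lam in (0.0, 0.5):
    best = max(actions, key=lambda a: score(actions[a], risk_aversion=lam))
    print(f"risk_aversion={lam}: choose {best}")
```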