One day, around two years ago, I had an intuition: that geometry is more fundamental than logic.

As I researched this interest and posted about what others had written, I ran into Professor McCarty (linked at the end of this post), who had been doing theoretical research in this area. Understanding his foundational approach is essential to building AI models that can sustain a reliable and coherent world model, which in turn is essential for an AI that would be far more powerful and dramatically more reliable than what we have so far.

The proposed model architecture consists of 3 primary layers, plus a 4th one I added for high-level, goal-driven reasoning.

It’s not all theory. Prof. McCarty’s 3 layers have been realized and demonstrated in code, albeit on simple cognitive tasks, showing that learning is possible within this architecture.

The work I’ve been doing is largely experimental, and my time on it has been constrained by competing work. I wanted to share my thoughts on this approach and point people to Prof. McCarty’s work, in the hope of inspiring others to follow this path.

The 4 layers of the proposed model architecture

Layer 1 — Statistics (priors & uncertainty)

Example: You expect dropped objects to fall because your priors encode everyday physics (gravity, friction), not merely patterns of words.
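To make the "priors & uncertainty" idea concrete, here is a minimal sketch using a Beta-Bernoulli model. This is my own toy illustration, not Prof. McCarty's formulation: a strong prior encodes the everyday-physics expectation that dropped objects fall, and new observations barely move the belief while still shrinking its uncertainty.

```python
# Toy sketch: Layer 1 as a Bayesian prior with explicit uncertainty.
# The Beta-Bernoulli model is an illustrative assumption, not the
# architecture's actual mechanism.

def update_beta(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior with binary outcomes (1 = fell)."""
    for obs in observations:
        if obs:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

def mean_and_uncertainty(alpha, beta):
    """Posterior mean and variance of the Beta distribution."""
    total = alpha + beta
    mean = alpha / total
    var = (alpha * beta) / (total ** 2 * (total + 1))
    return mean, var

# Strong prior: dropped objects almost always fall.
alpha, beta = 99.0, 1.0
# Ten more observations of falling objects: the belief barely moves,
# but the uncertainty (variance) shrinks.
alpha, beta = update_beta(alpha, beta, [1] * 10)
mean, var = mean_and_uncertainty(alpha, beta)
```

The point of the sketch is that the layer carries not just an expectation but a calibrated degree of confidence in it, which is what distinguishes a prior from a mere pattern.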


Layer 2 — Structure (frames, invariances, constraints)

This is the grammar of worlds—what can vary and what must stay fixed.
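The "what can vary and what must stay fixed" distinction can be sketched as explicit constraints over world states. The frame and invariant names below are my own illustration, chosen for clarity rather than taken from Prof. McCarty's work: object positions are free to vary between states, while the object count is an invariant that every admissible transition must preserve.

```python
# Toy sketch: Layer 2 as constraints over world states. The specific
# invariant (object conservation) is an illustrative assumption.

from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    object_count: int   # invariant: must stay fixed across transitions
    positions: tuple    # free to vary from state to state

def respects_invariants(before: WorldState, after: WorldState) -> bool:
    """A transition is admissible only if it preserves the invariants:
    objects may move, but they may not appear or vanish."""
    return before.object_count == after.object_count

s0 = WorldState(object_count=2, positions=((0, 0), (1, 1)))
s1 = WorldState(object_count=2, positions=((0, 1), (1, 0)))          # moved: allowed
s2 = WorldState(object_count=3, positions=((0, 0), (1, 1), (2, 2)))  # created: rejected
```

Under this reading, the layer acts as a filter on the hypotheses the statistical layer proposes: candidate world states that violate an invariant are ruled out regardless of how probable they look.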