This is a living document, updated as questions come in from real use and from feedback on recent posts. These are the most common ones so far.


1. Is Neuron just a wrapper around GPT?

No. Neuron isn’t built to dress up model calls. It’s an architecture for building reasoning systems: agents with memory, behavior, internal state, and the ability to collaborate or recover from failure. You can use LLMs inside it, but the system doesn’t depend on any specific one.


2. How is this different from LangChain or other prompt chaining tools?

Those tools are focused on chaining prompts. Neuron is about composing cognition. Each agent can reason, remember, respond to error, or hand off to others. You’re not stitching API calls — you’re assembling behaviorally distinct modules into resilient pipelines.
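As a rough illustration of the difference (the class and method names below are hypothetical, not Neuron’s actual API), composing agents looks less like chaining prompt templates and more like wiring stateful modules into a pipeline:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each "agent" is a behaviorally distinct module
# with its own internal state, composed into a pipeline rather than a
# chain of prompt templates.

@dataclass
class Agent:
    name: str
    state: dict = field(default_factory=dict)

    def handle(self, message: dict) -> dict:
        # A real agent would reason, consult memory, or delegate here;
        # this one just records that it processed the message.
        message.setdefault("trace", []).append(self.name)
        return message

def pipeline(agents, message):
    # Each agent transforms the message in turn and may update its
    # own state along the way.
    for agent in agents:
        message = agent.handle(message)
    return message

result = pipeline([Agent("planner"), Agent("executor")], {"task": "summarize"})
print(result["trace"])  # ['planner', 'executor']
```

The point of the sketch: the unit of composition is an agent with behavior and state, not a prompt string.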


3. Does the memory system go beyond prompt tokens?

Yes. Memory here is its own system — layered, contextual, and queryable. You get working memory, episodic recall, semantic knowledge, procedural steps, even emotional tagging. These aren’t just prompt embeddings — they persist, decay, and adapt structurally.
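To make “layered, queryable, and decaying” concrete, here is a minimal sketch of the idea — layer tags plus time-based decay. The names and the exponential-decay rule are illustrative assumptions, not Neuron’s real interface:

```python
import time

# Hypothetical sketch of layered memory: entries carry a layer tag
# (working, episodic, semantic, ...) and fade over time according to
# a half-life, so recall naturally favors recent or reinforced items.

class MemoryStore:
    def __init__(self, half_life=60.0):
        self.half_life = half_life       # seconds until strength halves
        self.entries = []                # (timestamp, layer, content)

    def remember(self, layer, content):
        self.entries.append((time.time(), layer, content))

    def recall(self, layer, now=None):
        # Return entries in a layer whose decayed strength is still
        # above a usability threshold.
        now = now or time.time()
        return [
            content for ts, l, content in self.entries
            if l == layer and 0.5 ** ((now - ts) / self.half_life) > 0.1
        ]

mem = MemoryStore()
mem.remember("episodic", "user asked about pricing")
mem.remember("semantic", "pricing tiers: free, pro")
print(mem.recall("episodic"))  # ['user asked about pricing']
```

A persistent version would back this with a database and add structural adaptation, but the layer/decay split is the core idea.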


4. Can I run this in production?

Yes — with some work. The tutorials aren’t a hosted product. They’re a reference architecture. You’ll want to add your own persistence, API boundary, auth, CI/CD, etc. But the core is modular, testable, and designed to support scale and adaptation.


5. What happens when things break?

Tutorial 13 goes into recovery. Agents can recognize error types, log context, switch to degraded modes, retry, or trigger circuit breakers. Failures don’t crash the system — they get processed like any other event.
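The retry-plus-circuit-breaker pattern can be sketched in a few lines. This is a generic illustration of the technique, not Tutorial 13’s actual code; all names are made up:

```python
# Hypothetical sketch of failure handling: a circuit breaker counts
# consecutive failures and, past a threshold, trips open so calls go
# straight to a degraded fallback instead of hammering a broken path.

class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn, fallback):
        if self.open:
            return fallback()        # degraded mode: skip the failing path
        try:
            result = fn()
            self.failures = 0        # success resets the breaker
            return result
        except Exception:
            self.failures += 1       # failure is logged as an event, not a crash
            return fallback()

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise RuntimeError("upstream model unavailable")

def degraded():
    return "cached answer"

for _ in range(3):
    print(breaker.call(flaky, degraded))  # falls back each time, then trips open
```

After two failures the breaker is open, so the third call never touches `flaky` — the failure has been absorbed and routed, exactly the “processed like any other event” behavior described above.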


6. Is agent behavior fixed, or does it change over time?

It changes. In Tutorials 14 and 15, agents shift traits, update preferences, and even reassign roles in team settings based on outcomes or urgency. This includes both internal and interpersonal adaptation.
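One way to picture trait adaptation — again a hypothetical sketch, not the code from Tutorials 14–15 — is an agent nudging a behavioral parameter based on outcomes:

```python
# Hypothetical sketch of adaptive behavior: an agent shifts a trait
# (here, "caution") in response to outcomes. Failures push it toward
# more careful behavior; successes relax it slightly.

class AdaptiveAgent:
    def __init__(self):
        self.caution = 0.5  # 0 = reckless, 1 = maximally careful

    def record_outcome(self, success: bool):
        delta = -0.05 if success else 0.2
        # Clamp the trait to [0, 1] so adaptation stays bounded.
        self.caution = min(1.0, max(0.0, self.caution + delta))

agent = AdaptiveAgent()
agent.record_outcome(False)
agent.record_outcome(False)
print(round(agent.caution, 2))  # 0.9
```

Team-level adaptation works the same way at a higher level: instead of one agent adjusting a trait, the group reassigns roles based on the same kind of outcome signal.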