This is a living document, updated as questions come in from real-world use and from feedback on today's posts. These are the most common ones so far.
No — that’s not what this is. Neuron isn’t built to dress up model calls. It’s an architecture for building reasoning systems: agents with memory, behavior, internal state, and the ability to collaborate or recover from failure. You can use LLMs inside it, but the system doesn’t depend on any specific one.
Those tools are focused on chaining prompts. Neuron is about composing cognition. Each agent can reason, remember, respond to errors, or hand off to others. You're not stitching together API calls; you're assembling behaviorally distinct modules into resilient pipelines.
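As a rough illustration of that composition model, here is a minimal sketch in Python. None of these names (`Agent`, `process`, `pipeline`, `fallback`) are Neuron's actual API; they are hypothetical stand-ins for the idea of behaviorally distinct modules that can hand off to one another.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch, NOT the real Neuron API: an agent is a named
# behavior plus an optional hand-off target for when it fails.
@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]        # the agent's own reasoning step
    fallback: Optional["Agent"] = None  # hand off here if handle() raises

    def process(self, message: str) -> str:
        try:
            return self.handle(message)
        except Exception:
            if self.fallback is not None:
                return self.fallback.process(message)
            raise

def pipeline(agents: list[Agent], message: str) -> str:
    # Compose agents: each agent's output is the next agent's input.
    for agent in agents:
        message = agent.process(message)
    return message

# Toy agents standing in for real reasoning modules.
summarizer = Agent("summarizer", lambda m: m[:40])
formatter = Agent("formatter", lambda m: m.upper())
print(pipeline([summarizer, formatter], "neuron composes agents into pipelines"))
```

The point of the sketch is the shape, not the lambdas: each stage is a module with its own behavior and failure path, and the pipeline is just their composition.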
Yes. Memory here is its own system — layered, contextual, and queryable. You get working memory, episodic recall, semantic knowledge, procedural steps, even emotional tagging. These aren’t just prompt embeddings — they persist, decay, and adapt structurally.
Yes — with some work. The tutorials aren’t a hosted product. They’re a reference architecture. You’ll want to add your own persistence, API boundary, auth, CI/CD, etc. But the core is modular, testable, and designed to support scale and adaptation.
Tutorial 13 goes into recovery. Agents can recognize error types, log context, switch to degraded modes, retry, or trigger circuit breakers. Failures don’t crash the system — they get processed like any other event.
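The retry/degrade/circuit-breaker flow can be sketched as below. This is a generic circuit-breaker pattern under assumed names (`CircuitBreaker`, `call_with_recovery`), not the code from Tutorial 13.

```python
# Generic sketch of the recovery flow described above, with hypothetical
# names: retry the primary path, fall back to a degraded mode, and open
# a circuit breaker after repeated failures.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3) -> None:
        self.failures = 0
        self.max_failures = max_failures

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def record_failure(self) -> None:
        self.failures += 1

    def reset(self) -> None:
        self.failures = 0

def call_with_recovery(primary, degraded, breaker: CircuitBreaker, retries: int = 2):
    if breaker.open:
        return degraded()          # breaker open: skip the failing path entirely
    for _ in range(retries + 1):
        try:
            result = primary()
            breaker.reset()
            return result
        except Exception:
            breaker.record_failure()  # failure is logged as an event, not a crash
            if breaker.open:
                break
    return degraded()              # degraded mode instead of propagating the error
```

The key property matches the claim in the text: the failure is absorbed and processed (counted, then routed to a degraded mode) rather than taking the system down.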
It changes. In Tutorials 14 and 15, agents shift traits, update preferences, and even reassign roles in team settings based on outcomes or urgency. This includes both internal and interpersonal adaptation.
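One way to picture outcome-driven trait shifts is the toy sketch below. The trait names and the simple nudge rule are invented for illustration; Tutorials 14 and 15 cover the actual mechanism.

```python
# Hypothetical sketch: an agent nudges its own traits based on outcomes.
# Trait names and update rule are assumptions, not Neuron's design.
class AdaptiveAgent:
    def __init__(self) -> None:
        # Two opposed traits kept on a 0..1 scale.
        self.traits = {"speed": 0.5, "caution": 0.5}

    def record_outcome(self, success: bool, rate: float = 0.1) -> None:
        # Successes push toward speed; failures push toward caution.
        delta = rate if success else -rate
        speed = min(1.0, max(0.0, self.traits["speed"] + delta))
        self.traits["speed"] = speed
        self.traits["caution"] = 1.0 - speed
```

In a team setting, the same idea extends outward: the weights could gate which role an agent volunteers for, so adaptation is interpersonal as well as internal.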