A neuron's connections and behaviour determine its type, not its location.
Layers of redundancy in the brain: dozens of synapses support an input pattern for a neuron to fire; many neurons in one column are responsible for a location or a feature in a reference frame; many columns map the same reference frames (hence 'A Thousand Brains').
The brain arrives at a conclusion about the reference frame and a location by voting between columns. A few neurons with long-range connections carry the votes.
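The voting idea can be sketched in a few lines. This is a toy illustration of plurality voting between columns, not Hawkins's actual neural mechanism; the object names and the `vote` helper are made up for the example.

```python
# Toy sketch of column voting (illustrative only, not Hawkins's model).
# Each cortical column forms its own noisy guess about which object it
# is sensing; "voting" combines the local guesses into one conclusion.
from collections import Counter

def vote(column_guesses):
    """Each column names its best-guess object; the plurality wins."""
    tally = Counter(column_guesses)
    return tally.most_common(1)[0][0]

# Three columns touch different parts of the same object; two recognise a mug.
columns = ["mug", "mug", "bowl"]
print(vote(columns))  # -> mug
```

The point of the sketch: no single column needs to be right, because agreement across many redundant columns settles the perception.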
Attention plays an essential role in how the brain learns models. All objects in attention are constantly added to models, whether temporary or permanent.
Differences between deep learning and human intelligence
Knowledge is embodied in models.
Most intelligent machines will be universal (dubious).
Hawkins: there are moral obligations to intelligences with fears and emotions; therefore, unplugging an AI without a built-in fear of death would not be equivalent to murder. Q: what if an AGI discovers emotions (or emotion-like states humans don't even recognise) on its own? How do we know if it did? Ref. ‣
Hawkins is critical of a hard takeoff: we can endow a machine with the capacity to learn a model of the world, but the model still has to be learnt, and that takes time: e.g. observing natural processes, or building experimental setups or observatories.
"The list of things humans can do is so large that no machine can surpass human performance in every field." — but this is not required: if an AGI can learn much faster than a human in general, it is still AGI even if it doesn't learn exotic languages or a lot of trivial skills.
"Superhuman intelligence is impossible because what we know about the world is constantly changing and expanding." — also nonsense. Complexity theory suggests that no machine can "look objectively at the system", e.g. understand an ecosystem of intelligences as complex as itself. But this doesn't mean superhuman intelligence is impossible; humans will be far, far behind by that time.
Hawkins: self-replication is a far greater threat to humanity than machine intelligence. To put it another way: bioweapon existential risk is higher than the risk from AGI.
Recognise the distinction between replication, motivation, and intelligence.