Why Can We Always Ask "Why?"?

In this world, we can coherently ask "why?" about any given phenomenon, knowing that, if we dug deep enough, we would find a satisfactory explanation that places the phenomenon clearly as the effect of some causes. Why the world is explainable in this manner is not yet clear to me, but that it is seems indisputable. We can say that everything in the world is subject to a doctrine of explainability$^1$.

This doctrine of explainability applies to humans and other animals as well. Anything any human does, be it a movement, a statement, a cognition, or something else, can be shown to be the effect of some causes. But why can we always ask "why"? Where does the explanatory structure of the world, by which we can trace any given phenomenon back to other phenomena that cause it via some knowable mechanism, come from?

Causal vs Teleological Reasons

We use the word "reason" in two distinct ways:

  1. the causal sense, in which the reason for something is the phenomenon that caused it and/or the mechanism of causality. The reason that I waved hello to you is that my brain received the sense of your presence, did some cortical processing, and as a result sent some motor signals down to my arm in this particular pattern.
  2. the teleological sense, in which the reason for something is the purpose that it serves. The reason that I waved hello to you is that I wished to demonstrate my acknowledgement of your presence in a friendly manner (i.e., to greet you).

While a full account of why something happens (the word "why" can point to either of these two meanings as well!) will obviously depend on the complex interplay between causality and teleology, we tend to systematically conflate the two meanings when talking about reasons for things, why things happen, explanations for things, and so on.

The phrase "everything has a reason" is a cliche with religious overtones, but the word "reason" there is invariably used in its teleological sense — God made this particular phenomenon happen in order to achieve some end, though we may not know what that end is. The doctrine of explainability, on the other hand, says that "everything has a reason", using the word "reason" strictly in its causal sense — we can identify some set of entities and relations that caused this particular phenomenon$^2$.

There Are Differential Determinants of All Behavior

(Synonyms for "differential" here: "on the margin", "ceteris paribus").

The doctrine of explainability, when taken seriously, gives us a new perspective on behavior.

Why is any given weightlifter a weightlifter? Understanding that any full account of this must exclude all other possibilities (so that, for instance, we can't stop at something like "I've always been a very physical person, and I like seeing myself make progress", as this doesn't tell us why the person chose weightlifting over jiu-jitsu or marathon running), we must consider the person's propensity towards weightlifting in particular. There are several differential determinants of weightlifting propensity.

Most of these differential dispositions are extremely subtle, such that you wouldn't even notice their existence unless they were pointed out to you, something weird happened to you that relied critically on them, or you were sufficiently mindful to notice these dispositions as they play out in everyday life. This kind of mindfulness of differential dispositions is rarely seen, but it seems to be a critical part of the Alexander Technique (which I don't know much about), and is one of the implicit frameworks behind some rationalist self-improvement techniques (see e.g. the Bug Hunt sequence).

This is all to say that, unless we're incredibly good at introspection and continuously prompted to introspect, we will be blind to the reasons underlying almost all of our behavior. We don't even have an internal view, as the mental processes generating this behavior are at best waves underlying and gently shaking our conscious experience, rather than contained within our conscious experience; instead, we have an external view of the internal view. This consists of the usual concepts we use to structure our explanations of behavior: motivation, personality, emotion, and so on. This is a map which we take to be the territory, even though it's at best a crude effigy constructed in the image of something we've never even seen.

Footnotes

  1. I'm not convinced by the argument that things are explainable because our minds are constituted so as to prevent us from experiencing any effect without first experiencing its cause. There are clearly effects to which we don't know how to attribute causes, and often don't even care to attribute causes; why are we capable of experiencing these, when so often we figure out a cause only afterwards? The argument is also totally implausible from the evolutionary point of view.

    Furthermore, randomized quantum effects ("why did measurement cause the superposition to take this eigenvalue rather than that eigenvalue? — it can't be explained, it's random") seem irrelevant here. Not only do they throw a wrench into explainability only in specific interpretations of quantum mechanics (e.g. Copenhagen yes, Bohm no), but it's extremely rare to see them propagate to observable experience except when specifically guided to do so by certain structures, the designs of which provide explainability. This does away with potential violations of the explainability doctrine like Orch-OR, though not necessarily with some electronic mysteries, like this famous glitch which may well have been caused by quantum effects. So I'm willing to accept that there might be holes in the doctrine of explainability, but they're irrelevant to the macro-scale phenomena I'm discussing.

  2. On the usefulness of explicating obvious concepts:

    While it seems that the doctrine of explainability as I have stated it is "obvious", it's very useful to state this "obvious" thing explicitly, as careful consideration of it leads to some bizarre consequences and viewpoints.

    It's extremely common that we have some concept which seems obvious to us and yet which is left unexplicated. I can see five possible fates for such concepts:

    1. The concept remains unexplicated, and we are unable to examine its implications.

    2. An attempt is made at explicating the concept, and analysis of the explication reveals only further obvious implications.

      This is the "best case" scenario, and indicates that there is a large part of our intuition which includes the concept and fits into a unified frame.

    3. An attempt is made at explicating the concept, and analysis of the explication reveals non-obvious implications, which we accept.

      Category theory, for instance, started off as an attempt to formalize some intuitive patterns that appeared over and over again in algebraic topology — look how that went!

    4. An attempt is made at explicating the concept, and analysis of the explication reveals non-obvious implications, which lead us to reject or modify the attempt at explication.

      This is basically the history of moral philosophy.

    5. An attempt is made at explicating the concept, and analysis of the explication reveals non-obvious implications, which lead us to reject the obviousness (/coherence) of the concept itself.

      This is also basically the history of moral philosophy. Attempts at explicating an incoherent concept will obviously also be incoherent, unless they either add onto or remove from the concept; in either case, the concept will suggest implications that are non-obvious with respect to the attempt itself, which leads to a rejection of the attempted explication just as easily as it leads to a rejection of the concept itself.

    Of course, we never know which path a concept will take as it is explicated, improved as a result, explicated again, and so on, unless we've already gone through this dialectical process — so, if we haven't, we may as well try.