Part of ‣

Deutsch is a scientific realist: he thinks that *true* explanations of phenomena in science (as well as in art, philosophy, and history) do exist, even though we may never actually attain them, only get ever closer to the truth.

Deutsch labels instrumentalism (the view that science cannot come up with explanations of anything, only predictions) and behaviorism (instrumentalism applied to psychology: the view that behaving as if one can think is the same as thinking) as misconceptions.

I don't remember whether Deutsch tried to argue for this position rigorously in the book, so I cannot point out where exactly I disagree with him.

Personally, I believe in instrumentalism and behaviorism because I also believe in the Principle of Computational Equivalence, a.k.a. the Computable Universe Hypothesis, a.k.a. *digital philosophy*. If reality is equivalent to computation, saying that there are true scientific explanations, i.e., exact models of phenomena, would be equivalent to saying that we can "outcompute" the universe. The basic structure of the universe's computation might indeed be such that this is theoretically possible. But it is not *guaranteed* to be possible for every possible structure of the universe's computation, because this problem is equivalent to the halting problem.
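Wolfram's Rule 30 cellular automaton is the standard small-scale illustration of this point: the update rule is completely known, yet no shortcut is known for predicting, say, the center column of cells without actually running every step. A minimal sketch (the helper names here are mine, not Wolfram's):

```python
# Rule 30 cellular automaton: a conjectured example of computational
# irreducibility. We know the rule exactly, but the only known way to
# learn the center column at step n is to compute all n steps.

def rule30_step(cells):
    """Apply one synchronous Rule 30 update to a tuple of 0/1 cells."""
    n = len(cells)
    return tuple(
        # Rule 30: new cell = left XOR (center OR right)
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    )

def center_column(steps, width=None):
    """Run Rule 30 from a single live cell; record the center cell at each step."""
    width = width or (2 * steps + 3)  # wide enough that wraparound can't interfere
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

print(center_column(15))  # looks statistically random despite the trivial rule
```

The analogy to physics is direct: having the "true rule" in hand is not the same as being able to outcompute the system that runs it.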

To quote Stephen Wolfram:

> And actually even before that, we need to ask: if we had the right rule, would we even know it? As I mentioned earlier, there’s potentially a big problem here with computational irreducibility. Because whatever the underlying rule is, our actual universe has applied it perhaps $10^{500}$ times. And if there’s computational irreducibility—as there inevitably will be—then there won’t be a way to fundamentally reduce the amount of computational effort that’s needed to determine the outcome of all these rule applications.

However, Wolfram immediately goes on to say:

> But what we have to hope is that somehow—even though the complete evolution of the universe is computationally irreducible—there are still enough “tunnels of computational reducibility” that we’ll be able to figure out at least what’s needed to be able to compare with what we know in physics, without having to do all that computational work. And I have to say that our recent success in getting conclusions just from the general structure of our models makes me much more optimistic about this possibility.

But Wolfram's optimism here is not even a philosophical position; it's just his attitude. The philosophical premise remains that the laws of physics, even if somehow deducible from the basic structure of the universe's computation, will remain *models* of what's happening, not absolute truths. Models naturally have bounds of applicability: for example, general relativity and quantum mechanics apply on different scales. Models may also cease to apply under certain conditions, and not just in the poster examples of the Big Bang and black holes. John Barrow writes:

> There exist equilibria characterised by special solutions of mathematical equations whose stability is undecidable. In order for this undecidability to have an impact on problems of real interest in mathematical physics the equilibria have to involve the interplay of very large numbers of different forces. While such equilibria cannot be ruled out, they have not arisen yet in real physical problems. Da Costa and Doria went on to identify similar problems where the answer to a simple question, like ‘will the orbit of a particle become chaotic’, is Gödel undecidable. They can be viewed as physically grounded examples of the theorems of Rice and Richardson which show, in a precise sense, that only trivial properties of computer programs are algorithmically decidable.
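The Rice-style argument Barrow alludes to can be sketched as a reduction: if a decider existed for any nontrivial semantic property of programs (such as "this procedure returns 0"), it could be used to solve the halting problem. The sketch below is mine; the oracle `decides_returns_zero` is hypothetical and cannot exist in general, but the reduction wrapped around it is the real argument:

```python
# Rice's theorem, reduction sketch: a decider for the nontrivial semantic
# property "this thunk returns 0" would also decide halting.
# `decides_returns_zero` is a HYPOTHETICAL oracle, passed in as a parameter.

def reduce_halting_to_property(prog, decides_returns_zero):
    """Decide whether prog() halts, given a claimed decider for 'returns 0'."""
    def gadget():
        prog()        # diverges exactly when prog diverges
        return 0      # reached exactly when prog halts
    # gadget has the property "returns 0" iff prog halts, so a correct
    # decider for that property would solve the (undecidable) halting problem.
    return decides_returns_zero(gadget)

# Demo with a fake oracle that simply runs the thunk -- safe only because
# this particular prog obviously halts:
print(reduce_halting_to_property(lambda: None, lambda g: g() == 0))
```

Since halting is undecidable, no such oracle exists, which is why questions like "will this orbit become chaotic" can inherit Gödel-style undecidability.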

Whether GPT-3 really has common-sense knowledge or is just "deepfaking" understanding is an instance of the instrumentalism vs. realism debate.

Related:

- ‣

Contra: