Section 4.3
Section 4.3 introduces reality theory and discusses the deficiencies of the two main classes of models currently used in science and in reality theory. Reality theory is a term I could not find much information on; I doubt Langan made it up, so my guess is that it has mainly been used in certain niche circles. Reality theory seems very close to metaphysics, or like a bridge between metaphysics and the empirical sciences. The two main classes of models are the older continuum model and the newer discrete model. The continuum model views reality in terms of unified, infinitely divisible dimensions, while the discrete model views the world in terms of finite, or countably infinite, parameters.
Science has made tremendous progress in the last few hundred years[citation needed]; I doubt anyone would argue otherwise. So when Langan describes science as being in a state of crisis, he does not mean that science hasn't made progress or developed startlingly good models and localized explanations. Instead, it is becoming more apparent that science cannot accomplish what many view as its main goal: offering a full explanation of reality, and in particular of the physical world. This problem, that science may never be able to offer a full explanation of the world, has spawned reality theory. Reality theory aims to properly interpret quantum mechanics; reconcile quantum physics with classical physics; reconcile science, mathematics, philosophy, and religion; and provide a full explanation of the world. This leaves reality theory subject to a little-recognized requirement: it must explain itself as well. Later on, Langan defines the Reality Principle: “Reality contains all and only that which is real”. If something is real enough to affect reality, i.e. relevant enough, then it is real and contained in reality by definition. Because an explanation of reality would be real enough to affect reality, it must be inside reality, and therefore be something that it itself explains. Literally, it must be self-explanatory. It would also seem that any paradigm that cannot explain itself cannot serve as the basis for a reality theory. Both of the classes of models we use, discrete and continuous, are non-self-explanatory.
The continuum model has fallen out of favor with the advent of quantum mechanics and computer simulation. As science has increasingly used computer simulation as a tool, discrete models have gained favor. Not only does the continuum model seem inadequate as an explanation of quantum mechanics, it also seems inadequate as a self-explanation: Where did the continuum come from? Where did infinitely divisible spacetime come from? Furthermore, rejecting the continuum is one of John Wheeler’s “No’s”. In “Information, Physics, Quantum: The Search for Links”, Wheeler argues against the existence of the continuum and against spacetime existing as a continuum. Similar arguments can be found in “Beyond The Black Hole”.
The discrete models are more in line with quantum mechanics but fail on the cosmic scale. Discrete models are based on bits, quanta, quantum events, and computational operations. Looking at the Quantum Meta-Mechanics paper, quanta and quantum events don’t necessarily have to do with the elementary particles of physics. You can take some property X and quantize it in terms of its smallest meaningful units. Space, time, and energy in physics are all quantized. There’s no reason we can’t imagine quantizing other things inside or outside physics. Langan says the discrete models exhibit scaling and non-locality problems, problems we see directly in science. Scaling is likely a reference to quantum mechanics being very hard to reconcile with general relativity. The non-locality problem seems to be a reference (partly) to Nick Herbert’s book “Quantum Reality”, which overviews Bell’s Inequality and why any reality of the type we live in must be non-local. I’m not sure yet, but I suspect Langan attempts to solve non-locality with more than one layer of topology.
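The idea of quantizing an arbitrary property can be sketched in a few lines. This is my own toy illustration, not anything from Langan's paper; `UNIT` here is a hypothetical "smallest meaningful unit", not a physical constant:

```python
# Toy illustration: quantizing an arbitrary continuous property X.
# UNIT is a hypothetical smallest meaningful unit of X (an assumption,
# not a physical constant).
UNIT = 0.25

def quantize(x, unit=UNIT):
    """Snap a continuous value to the nearest whole multiple of `unit`."""
    return round(x / unit) * unit

# A continuous range of values collapses onto a discrete lattice:
print(quantize(1.1))   # -> 1.0
print(quantize(1.2))   # -> 1.25
```

The point is only that "quantize" is a generic operation: nothing in it cares whether X is energy, time, or something outside physics entirely.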
There’s a bigger issue with the discrete models as well, one shared by the continuum models. They are fundamentally classical and based on information and computation, which are well-defined, non-self-explanatory, non-self-generative concepts. They cannot account for themselves, and they cannot account for the hardware or medium the computations are operating on. No simulation has yet managed to account for the computer it’s running on, nor for its own starting conditions (a ridiculously tall order, but still).
Langan appears to be suggesting that the only paradigm available that could account for the existence of information, computation, the appearance of the continuum, and itself would be language. As argued in the previous section, language is the most general paradigm we have at our disposal. However, language as we generally use it is also insufficient. Language needs an external processor, such as a computer or a person, to handle its operations. It will also need to be able to handle self-reference without destroying itself with contradiction. So we need a special kind of language with the ability to handle its own functionality and self-reference, hence: Self-Configuring Self-Processing Language (SCSPL). The self-processing aspect seems like one we can just declare and imagine. The self-configuring aspect, which I’m guessing refers more to self-modification, seems a bit trickier. In order to handle this, SCSPL will have to render itself immune to Russell’s paradox.
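To see why unrestricted self-reference is dangerous, here is a toy rendering of Russell's paradox (my own illustration, not from the paper): treat a predicate as a "set" and function application as membership. The "set" containing exactly the sets that don't contain themselves blows up when asked about itself:

```python
import sys
sys.setrecursionlimit(100)  # keep the inevitable blow-up small

# "Sets" are predicates; s(t) means "t is a member of s".
# russell is the set of all sets that do not contain themselves.
russell = lambda s: not s(s)

try:
    russell(russell)  # does russell contain itself?
except RecursionError:
    print("Russell's paradox: neither answer is consistent")
```

Any language that lets you write `russell` unrestricted has this problem baked in, which is presumably why SCSPL needs some structural immunity to it.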
Since I don’t know SCSPL at this point, it’s pretty difficult to see how this can be done; it’s a hard problem. I’ve tried making a computerized version of SCSPL before, and the double recursion becomes very confusing very fast. Yes, I realize that putting it in a computer means it’s being processed by something other than itself, but I have to imagine it needs to be formalized somehow, even if that ends up being “just pretend this is running itself for now”.
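For a flavor of where the confusion starts, here is the simplest self-application sketch I can write (my own toy, under the "pretend this is running itself" assumption): a processor that runs programs can be handed itself as the program to run, and every added layer of that doubles the bookkeeping:

```python
def run(job):
    """A trivial 'processor': unpack a (program, data) pair and apply."""
    program, data = job
    return program(data)

inc = lambda x: x + 1

# One level: the processor runs a program.
print(run((inc, 41)))         # -> 42

# Two levels: the processor runs *itself* running the program.
print(run((run, (inc, 41))))  # -> 42, but the nesting already
                              #    obscures who is processing whom
```

Even this trivial version shows the regress: each layer of self-processing is still being executed by Python underneath, which is exactly the external-processor problem the sketch is pretending away.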