Section 4.2, Again!
In section 4.2 Langan gives a brief “Introduction” that seems a little out of place given the depth of the rest of the paper, but I suppose it’s good setup. In his opinion the most exciting fields at the time of writing the CTMU were Complexity Theory and Intelligent Design Theory. Complexity Theory, which he defines as the theory of self-organizing systems, offers a switch from physical or material reductionism to informational reductionism. This new reduction, Langan argues, is just as problematic as the old one, and it directly mirrors mind-matter dualism. Previously, mind and matter were the two sides of the dualism: thinkers chose one and relegated the other to secondary status. Informational reductionism bases reality in abstract information representing the physical world and demotes physical material itself to secondary status. Since a theory of reality needs to explain both, keeping the two separated by a chasm is unacceptable. We cannot have one be primary and leave the other unexplained; there must be an equivalence.
Intelligent Design Theory is a pseudoscientific theory that was never able to separate itself from religious creationism. Not necessarily a good move, but refer to our assumptions. ID seems to center on two basic mathematical, or semi-mathematical, notions: irreducible complexity and specified complexity. Irreducible complexity is the idea that a functioning system can only be made so simple and still function. The classic creationist example is to say the eye could not have evolved, because if you change or remove any part of it, it ceases to function and would therefore not be selected for. This specific claim has since been debunked, but it illustrates the point. Specified complexity is the more important of the two for our purposes. In short, specified complexity asks: “given the set of all possible outcomes W, the set of all pre-specified utile outcomes T, and the number of tries you get R, is the amount of utility produced sufficient to doubt dumb luck?” As an example, say you have a device that produces English characters. The vast majority of its possible outputs are gibberish, but if the device consistently produces intelligible English sentences then one can guess that there is some kind of intelligence in the box. Or say you’re playing poker and the guy to your left conveniently gets a royal flush every single hand; then it’s likely there was some kind of intelligence involved in the selection of his cards (i.e. he was cheating). From Wikipedia, the specific formula is:
Specified complexity = -log2[R x phi(T) x P(T)]
T is “the pattern”, which, as far as I can gather, is the subset of outcomes from W (the set of all possible outcomes) that satisfy some pre-specified criteria. R is the “replicational resources”, roughly the number of attempts or trials granted. Phi(T) is the number of other patterns that have Kolmogorov Complexity equal to or less than that of T. The Kolmogorov Complexity of T is the length of the shortest computer program, in a predetermined language, that produces T as output. The reason we want lower Kolmogorov Complexity is that it indicates greater specificity (within our pre-specified criteria). The whole thing reads like an information measure, suggesting the term inside the log should be a probability, so this approach seems a little off to me… (If you roll a die 6 times, the odds of getting at least one 6 aren’t 6 x ⅙ = 1; they’re 1 - (5/6)**6 ≈ 0.665, so I have to wonder if there isn’t more to it.) ChatGPT to the rescue: the term is an upper bound on the probability, and can be treated like one, by way of the Union Bound (Boole’s Inequality). So in this case, if you roll a die 6 times, the specified complexity of getting a 6 is -log2(6 x 1 x ⅙) = 0, and no “intelligence” is detected.
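To make the bookkeeping concrete, here is a minimal Python sketch of the die-roll example, treating the R x phi(T) x P(T) term as a union-bound cap on the probability of hitting the pattern at least once. The function name and the clamping to 1 are my own conveniences, not anything prescribed by Langan or Dembski:

```python
import math

def specified_complexity(R, phi_T, p_T):
    """Dembski-style specified complexity: -log2[R * phi(T) * P(T)].

    R      -- replicational resources (number of tries)
    phi_T  -- number of patterns at least as simple as T
    p_T    -- probability of hitting T on a single try

    R * phi(T) * P(T) is a union-bound cap on the probability of hitting T
    at least once, so it is clamped at 1 before taking the log.
    """
    bound = min(1.0, R * phi_T * p_T)
    return math.log2(1.0 / bound)  # same as -log2(bound), written to avoid -0.0

# Die example from the text: 6 rolls, phi(T) = 1, P(T) = 1/6.
print(specified_complexity(R=6, phi_T=1, p_T=1/6))        # 0.0 -> no "intelligence"

# A rarer pre-specified target: hitting one chosen face of a fair
# 2**20-sided die in a single roll gives 20 bits of specified complexity.
print(specified_complexity(R=1, phi_T=1, p_T=1/2**20))    # 20.0
```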
These numbers, phi(T) and R, are generally incalculable at any realistic scale. We can compute them for simple examples like our die roll, but for larger systems only vague estimates are available. For what it’s worth, proponents of ID generally take R = 10**120 and phi(T) = 10**20.
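Plugging those conventional values back into the formula shows what they amount to (my arithmetic, not Langan’s or Dembski’s): specified complexity is positive only when R x phi(T) x P(T) < 1, which with R = 10**120 and phi(T) = 10**20 works out to

P(T) < 1 / (10**120 x 10**20) = 10**-140

In other words, the criterion effectively asks whether the pre-specified pattern had less than a one-in-10**140 chance of turning up on a single try.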
So if we’re tracking, according to Langan, we need some kind of mathematical object that can be taken as a model or paradigm for a self-generating, self-organizing system capable of intelligent self-design. This paradigm must also resolve the discrepancy between mind and matter, information and material. Language is the paradigm proposed, not as a tool to study Reality but as a model of Reality, a paradigm in and of itself. Langan claims language is the most general, powerful, and necessary such paradigm. As far as I can tell he does seem to have a point. In Logic and Model Theory, mathematical theories are treated as languages with axioms attached, and Langan has argued elsewhere that, if one wants, one could treat the axioms as grammatical rules. Joscha Bach, on the Lex Fridman podcast, also once described mathematics as “the domain of all languages” (I believe it was the first interview with him).
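For a concrete instance of the model-theoretic point (my illustration, not Langan’s), the first-order theory of groups really is presented as a language plus axioms: the language is the symbol set {·, ⁻¹, e}, and the axioms are the sentences

∀x ∀y ∀z: (x·y)·z = x·(y·z)
∀x: x·e = e·x = x
∀x: x·x⁻¹ = x⁻¹·x = e

Any structure that interprets those three symbols so that the sentences come out true is a group; the theory is nothing over and above the language and the axioms written in it.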
The rest of the section is a little mystical, though not unreasonably so, claiming cognition and perception are languages based on what Immanuel Kant might call phenomenal syntax.