Up To Isomorphism - What Even Is Language Anyway?
The beginning of section 4.5 contains the first main-text use of the phrase “up to isomorphism”, giving us an excuse to cover some important mathematical background. We see at the beginning of section 4.5 that one of the novel requirements of any theory of reality, in light of John Wheeler’s insights, is matter-information equivalence. Matter-information equivalence demands that concrete matter and abstract information be identical “up to isomorphism”. This is also mentioned in footnote 1 of section 4.3: “In fact, they are identical up to isomorphism beyond which the mental side, being the more comprehensive of the two, outstrips the material.” Interestingly, I think this footnote may be an error, though not a serious one, but we’ll get to that. To understand what it means for matter and information to be equivalent up to isomorphism, it helps to review what the phrase “up to isomorphism” means. To understand that phrase, it helps to understand isomorphism in the mathematical context in which Langan is using it: model theory. And to understand isomorphism in that context, we will want to understand both languages and models in that context. Here, we start with language.
The model theory sources I used for this are Chang and Keisler’s “Model Theory”, Wilfrid Hodges’ “A Shorter Model Theory”, and David Marker’s “Model Theory: An Introduction”. Chang and Keisler’s book is the authoritative source on model theory but not so approachable for the beginner, so I leaned more on David Marker’s. For logic in general I also used Stephen Cole Kleene’s “Mathematical Logic” and “Introduction to Metamathematics”, as well as Robert Rogers’ “Mathematical Logic and Formalized Theories”. Links to all of these are in the resources section. For our purposes, all of the information we need for now is in the first couple of chapters of any of the model theory resources.
Generally, texts develop models and languages together. Marker, for example, starts by showing how to specify a language, then defines L-structures, then L-embeddings, then L-terms, then formulas and sentences, at which point I consider the definition of a language complete. My aim here is to give an overview of what a language is in the context the CTMU starts from. Marker treats the logical symbols as implicit in every language, which I am personally not fond of, but I’m coming at this from outside academia and it may well be the convention; based on Kleene’s treatment in “Mathematical Logic”, it seems completely acceptable.
So what is a language? A language is built from a set of special symbols of three different kinds, called the signature of that language. The signature of a language contains relation symbols (Pₙ), function symbols (Fₘ), and individual constant symbols (cₚ):
L = {P₀, …, Pₙ, F₀, …, Fₘ, c₀, …, cₚ}
Each symbol Pᵢ is a relation symbol relating some number of elements ⩾ 1 (its arity). Each symbol Fⱼ is a function symbol accepting some number of arguments ⩾ 1. (Note: disallowing function and relation symbols that accept 0 arguments follows Chang and Keisler; I do not yet know whether allowing them would be a mere complication or fundamentally challenging.)
A language also contains the logical constants:
∧ (AND)
∨ (OR)
¬ (NOT)
→ (If/Then)
∀ (Universal quantifier, “for all”)
∃ (Existential quantifier, “there exists”)
The equals sign:
=
An infinite list of individual variables:
x1, x2, x3, x4, …
And punctuation symbols:
( , )
(including the ‘,’)
A language can be set up with different symbols. For example, all the logical connectives can be reduced to the single NAND operator ↑ (the Sheffer stroke), and the existential quantifier can be dropped in favor of the universal quantifier ∀, since ∃x(𝜙) can be defined as ¬∀x(¬𝜙). Punctuation symbols can in principle be eliminated, but from a human readability standpoint it’s easier to have them in place (even though inductive proofs can be simpler without them). The important thing is that a language contains relation, function, and constant symbols, as well as logical symbols and access to endless variable symbols.
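If it helps to see a signature concretely, here is a minimal sketch in Python (my own illustration, not from any of the texts above; the class and field names are mine), using the language of ordered rings as an example:

```python
from dataclasses import dataclass

# A signature as plain data: relation and function symbols each carry
# an arity of at least 1, and constants are just a set of symbol names.
@dataclass
class Signature:
    relations: dict[str, int]    # relation symbol -> number of elements related
    functions: dict[str, int]    # function symbol -> number of arguments
    constants: frozenset[str]    # individual constant symbols

# Example: the language of ordered rings, {+, -, *, <, 0, 1}
ordered_ring = Signature(
    relations={"<": 2},
    functions={"+": 2, "-": 2, "*": 2},
    constants=frozenset({"0", "1"}),
)
```

The logical constants, variables, and punctuation are shared by every language, so only the signature needs specifying; this is why model theory texts can identify a language L with its signature.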
Now we define the terms of a language as follows:
Any individual constant c or variable x is a term.
If t₁, …, tₙ are terms and f is an n-place function symbol, then f(t₁, …, tₙ) is a term. (Or ft₁…tₙ if we are omitting punctuation.)
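This inductive definition translates directly into code. Continuing my sketch (Python 3.10+ for the `|` type union; again, the names are my own, not from the texts):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str                    # an individual variable x1, x2, ...

@dataclass(frozen=True)
class Const:
    name: str                    # an individual constant symbol c

@dataclass(frozen=True)
class Func:
    name: str                    # an n-place function symbol f
    args: tuple                  # exactly n terms t1, ..., tn

# A term is a constant, a variable, or a function symbol applied to terms.
Term = Var | Const | Func

# Example: the term (x1 + 1) * x2, built from 2-place function symbols
t = Func("*", (Func("+", (Var("x1"), Const("1"))), Var("x2")))
```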
Next, atomic formulas. Let t₁, …, tₙ be terms and let R be a relation symbol accepting n arguments; then atomic formulas are any of the form:
tⱼ = tₖ
R(t₁, …, tₙ)
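In the same sketch, atomic formulas are just equalities between terms and relation symbols applied to terms:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Eq:
    left: "Term"                 # the term on the left of =
    right: "Term"                # the term on the right of =

@dataclass(frozen=True)
class Rel:
    name: str                    # a relation symbol R accepting n arguments
    args: tuple                  # exactly n terms

# Example: the atomic formula x1 < x2
lt = Rel("<", (Var("x1"), Var("x2")))
```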
Next, formulas in general. Say 𝜙 and 𝜓 are formulas and x is a variable; then the following are formulas:
¬𝜙
(𝜙 ∧ 𝜓)
(𝜙 ∨ 𝜓)
(𝜙 → 𝜓)
∀x(𝜙)
∃x(𝜙)
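The compound formulas get one class per construction, closing out the grammar (still my own sketch, continuing from above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Not:
    sub: "Formula"               # ¬𝜙

@dataclass(frozen=True)
class BinOp:
    op: str                      # "and" (∧), "or" (∨), or "implies" (→)
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Quant:
    kind: str                    # "forall" (∀) or "exists" (∃)
    var: str                     # the quantified variable x
    body: "Formula"              # the 𝜙 in ∀x(𝜙)

# A formula is atomic or built from formulas by ¬, ∧, ∨, →, ∀, ∃.
Formula = Eq | Rel | Not | BinOp | Quant

# Example: ∀x1(∃x2(x1 < x2)), "above every element sits another"
phi = Quant("forall", "x1",
            Quant("exists", "x2", Rel("<", (Var("x1"), Var("x2")))))
```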
Next, sentences. A sentence (of L) is any formula of L with no free variables. A free variable is a variable x that is not within the scope of a quantifier on x (∀x or ∃x). The scope of a quantifier is the formula said quantifier is applied to: the 𝜙 in ∀x(𝜙).
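The free-variable condition is also easy to make computational: walk the formula collecting variables, and remove a variable wherever a quantifier binds it. A sketch (Python 3.10+ for `match`), using the classes defined above:

```python
def free_vars(f) -> set[str]:
    """Variables occurring free in a term or formula."""
    match f:
        case Var(name):
            return {name}
        case Const(_):
            return set()
        case Func(_, args) | Rel(_, args):
            return set().union(*(free_vars(a) for a in args))
        case Eq(left, right) | BinOp(_, left, right):
            return free_vars(left) | free_vars(right)
        case Not(sub):
            return free_vars(sub)
        case Quant(_, var, body):
            return free_vars(body) - {var}   # the quantifier binds its variable

def is_sentence(f) -> bool:
    return not free_vars(f)                  # a sentence has no free variables

# x1 < x2 alone has free variables; the phi from above quantifies them away.
assert free_vars(Rel("<", (Var("x1"), Var("x2")))) == {"x1", "x2"}
assert is_sentence(phi)
```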
There you have it, that’s the vocabulary and grammar of a language L. The vocabulary is all the symbols used; the grammar is the set of rules producing valid formulas and sentences. For a much more precise treatment of languages in logic and model theory, see the sources listed above (also found in the resources section). Again, those sources tend to build languages and models together, so if you want an understanding of language alone you’ll have to extract it. (Building the two up together makes perfect sense in the context of those textbooks; nothing wrong with that.) There is one minor complication: there is also language in the sense of generative grammar, but we will cover that when it becomes relevant later.