A cellular automaton offers a universe that is easy to reason about and yet rich enough to exhibit all the complexity of our own. Indeed, I argue that the details are essentially irrelevant at high enough levels of complexity: no particular material and no specific fine-scale mechanics are necessary for consciousness, and we should expect quantum mechanics to inform our understanding of pop music or hedge fund management about as well as it informs our understanding of the brain. The cellular automaton world is thus rigid enough to grant the most stringent determinism, yet rich enough to birth arbitrarily deep levels of complexity and to house any imaginable intelligence. I also want to distinguish my position from those who insist that free will (or consciousness) is an illusion. At best this is misleading; more likely it is just plain wrong.
Long before we need to worry about free will and consciousness, we need to worry about what objects are. These will be the nouns of any truth claim we make. Before there is *choice*, there is *person*, and how to deal with the distinction of being a *person* object is not as easy as it may seem. Even in a low-resolution cellular automaton (i.e., one with few cells) the problem remains. Is a glider an object? Is every configuration of cells an object? We might suppose that every configuration of cells theoretically has a name, or could be given one, and even that names in our language must be shorthand for collections of cells (that meaning in language must essentially be built from these building blocks). But then something so simple as ‘glider’ is necessarily shorthand for a list of trillions of configurations. This seems like a faulty way of looking at things. If ‘glider’ includes not only the handful of 5-cell phases of the standard glider, up to translation, but also larger things which exhibit a gliding property, the problem is harder. Certainly if an object appeared to maintain its shape more or less as it translated itself through space, perhaps even fizzling out at some point, we’d be tempted to call it a glider. This is especially true if our instruments of detection cannot resolve individual cells, so that we cannot discern a glider’s states at that finest level. No one denies that natural language is fuzzy, as it unarguably is, but then how do we interpret a fuzzy truth claim in reductionist terms?
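To make the glider concrete, here is a minimal sketch using Conway's Game of Life as the illustrative rule (the essay's automaton need not be Life specifically). The classic 5-cell glider reappears, translated one cell diagonally, every four steps: the "same" object, realized by entirely different cells.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.
    `live` is a set of (row, col) coordinates of live cells."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider, oriented to travel down and to the right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 generations: the same shape, shifted one cell diagonally.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

Note what the code makes vivid: no cell in `state` need coincide with a cell of the original `glider`, yet we unhesitatingly call both "the glider."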
We’ve built in our imaginations a cellular world with trillions of cells, and in this world a creature has formed. That creature is constantly bombarded with gliders of various sizes, and from these collisions (and internal happenings) the creature processes thought and outputs gliders as statements. This creature I’m thinking of is essentially a human, or a near approximation. Now, let’s say we agree on an interpretation of its language (i.e., the waves of gliders it sends out of its mouth, each wave differing in shape enough that a discrete language can be understood, as English is). What can it say in this language? One thing it can say is “the universe is a cellular automaton with the following rule of evolution…” What can it not say? It cannot say “I will now give names to the 10^(10^10) cellular configurations possible in this universe, beginning with ‘aardvark,’…” Indeed, since each utterance takes up space (for the gliders to carry the waves of speech), the utterances are quite limited in the amount of information they can carry: necessarily less than the total number of configurations possible in that same small region of space, let alone the universe. Now, theoretically we can offer the utterer all the time he wants to long-windedly describe each fine detail (indeed each cell) of some object, and terminate after finite time. But who, or what, is his audience, able to reassemble that information into a model containing as much information as the original object? This is one reason computers cannot calculate the evolution of the universe: you don’t even get to specify the initial condition without falling into an infinite regress. It almost seems absurd to expect more than fuzziness from meaning in language, but of course our language itself is not fuzzy, and perhaps this is where some of the confusion lies. Language is ridiculously precise. Unlike facial expressions or performed music, it is exact and codifiable.
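The counting argument above can be sketched with toy numbers (the sizes below are hypothetical, chosen only to make the inequality vivid): an utterance confined to a region of k binary cells can distinguish at most 2^k messages, strictly fewer than the 2^N configurations of an N-cell universe whenever k < N.

```python
# Toy version of the counting argument. An utterance occupying a region
# of k binary cells can encode at most 2**k distinct messages, so it
# cannot assign a distinct name to each of the 2**N configurations of
# an N-cell universe once k < N. The sizes here are hypothetical.

def max_messages(region_cells: int) -> int:
    """Number of distinct states of a region of binary cells."""
    return 2 ** region_cells

universe_cells = 1_000_000   # hypothetical universe size, in cells
utterance_cells = 1_000      # hypothetical region an utterance occupies

total_configurations = 2 ** universe_cells
assert max_messages(utterance_cells) < total_configurations
```

The gap is not close: the ratio of configurations to nameable messages is itself astronomically large, which is why no finite convention of naming can exhaust the universe's states.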
Yet still we discern subtlety and nuance in our favorite authors, after reading hundreds of thousands of their words (or even a good cadence in a paragraph or sentence).
I want to draw attention to the fact that fuzzy terms (fuzzy in the sense that the referent is fuzzy, as with natural language) are not merely ‘fuzzy’ as opposed to precise, as a sort of deficit. Rather, there is meaning in a fuzzy term that is essentially lost in the attempt to make it precise. I am thinking of the cellular human, and imagining her, let’s call her Frida, holding a ball and commenting “it is round.” This roundness property, which seems so elementary, is in reality a reflection of the ball’s resemblance to other objects previously perceived by Frida. So the process goes: an object is in front of Frida; some waves of gliders emanate from it (or rather “bounce” off it), carrying information about the object into Frida’s sensory apparatus; then Frida’s brain momentarily gets hold of that information. Within a few seconds most of the information is gone, but some faint ghost remains, a ghost which somehow holds information about the object that is general and connects to yet other things Frida has seen. The object would not be classified under the blanket abstraction ‘round’ were it not for the ‘intrinsic’ cellular makeup of the ball, yet the abstraction cannot be said to be an intrinsic quality of the ball, supported by the physical state of that ball alone. Some would find this distinction too subtle, but it is an absolutely crucial one for understanding how meaning works and where the reductionist goes wrong.
It is only too easy to imagine ’roundness’ is a concrete quality either enjoyed intrinsically by an object or not. However, terms which are overtly contextual as opposed to physical are readily available and make up the majority of the words we use. Take ‘majority,’ for example, and define it in terms of cells in such a way that nearly all usages found in English literature can be said to reference it. It can’t be done. We can say what a majority of cells being on in a given region means, more or less, but that is not directly referenced by my usage above, nor when we stumble upon it in literature, say in the phrase “tyranny of the majority.”
Notice, also, that terms which don’t lend themselves easily to reduction are frequently not more complex, given a context. This is exactly the point, for if it were necessary to give a description of X with complexity proportional to the extent X resists reduction, then we would be back in a reductionist framework, with X just really complex. But the human mind doesn’t work this way. It is frequently possible to communicate great generalities to children, who would have no way of understanding the reduction to finer physical parts. How can one insist that “being on one’s best behavior in a restaurant” is really a property or action of physical particles, or even a deep sociological action, when neither of these can be comprehended by the child, while the statement itself is easily understood? Part of the answer is that in building a vast framework of complexity, certain terms become contextually simple while remaining intractably complex from a ground-up perspective. You don’t build “one’s best behavior” from scratch. The other part of the answer is that a child begins with such a framework. Knowledge does not stick to an empty slate (not even a blackboard!); children have a robust way of making sense of generalities from the beginning. My main point here is that meaning is emergent, held together by a framework harder to imagine than a strict partial order. There are more lateral connections in the network of meaning.