1.3(i-iii) Cell World

The cellular automata universe offers a world that’s easy to reason about and yet rich enough to exhibit all the complexity of our own. Indeed, I argue that the details are essentially irrelevant at high enough levels of complexity: no particular material, and no specific fine-scale mechanics, is necessary for consciousness, and we should expect quantum mechanics to inform our understanding of the brain about as well as it informs our understanding of pop music or hedge fund management. Thus the cellular automata world is rigid enough to grant the most stringent determinism, yet rich enough to birth arbitrarily deep levels of complexity and house any imaginable intelligence. I also want to distinguish my position from those who insist free will (or consciousness) is an illusion. At best this is misleading; more likely it is just plain wrong.

i. Objectness

Long before we need to worry about free will and consciousness, we need to worry about what objects are. These will be the nouns of any truth claim we make. Before there is *choice*, there is *person*, and how to deal with the distinction of being a *person* object is not as easy as it may seem. Even in a low-resolution cellular automaton (i.e., one with few cells) the problem remains. Is a glider an object? Is every configuration of cells an object? We might suppose that every configuration of cells theoretically has a name, or could be given a name, and even that names in our language must be shorthand for collections of cells (that meaning in language must essentially be built from these building blocks). But then something as simple as ‘glider’ is necessarily shorthand for a list of trillions of configurations. This seems like a faulty way of looking at things. If ‘glider’ includes not only the same two 5-cell configurations, up to translation, but also larger things which exhibit a gliding property, the problem is harder. Certainly if an object appeared to maintain its shape, more or less, as it translated itself through space, perhaps even fizzling out at some time, we’d be tempted to call it a glider. This is especially true if our instruments of detection cannot resolve individual cells, so that we cannot discern a glider’s states at that finest level. No one denies that natural language is fuzzy; it unarguably is. But then how do we interpret a fuzzy truth claim in reductionist terms?
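The glider’s object-hood at the cell level is at least checkable. A minimal sketch (assuming Conway’s Game of Life rules; the coordinates and helper names here are my own choices, not anything from the text): a glider’s five live cells cycle through four configurations and return to the original shape, shifted one cell diagonally, every four generations, so “the same configuration up to translation” is a precise property of the fine-scale state.

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of live (row, col) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def normalize(live):
    """Translate a pattern so its bounding box starts at the origin."""
    r0 = min(r for r, _ in live)
    c0 = min(c for _, c in live)
    return frozenset((r - r0, c - c0) for r, c in live)

# The standard 5-cell glider.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

g = set(glider)
for _ in range(4):
    g = step(g)

# Four generations later: the same shape, translated one cell down-right.
assert normalize(g) == normalize(glider)
assert g == {(r + 1, c + 1) for (r, c) in glider}
```

The trouble the text points to begins exactly where this check ends: nothing in the rule itself distinguishes this pattern, or larger things that merely glide, as an object.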

ii. Diagonalization

We’ve built in our imaginations a cellular world with trillions of cells, and in this world a creature has been formed. That creature is constantly bombarded with gliders of various sizes, and from these collisions (and internal happenings) the creature processes thought and outputs gliders, as statements. This creature I’m thinking of is essentially a human, or a near approximation. Now, let’s say we agree on an interpretation of its language (i.e., the waves of gliders it sends out of its mouth, each wave differing in shape enough that a discrete language can be understood, as English is). What can it say in this language? One thing it can say is “the universe is a cellular automaton with the following rule of evolution…” What can it not say? It cannot say “I will now give names to the 10^(10^10) cellular configurations possible in this universe, beginning with ‘aardvark,’…” Indeed, since each utterance takes up space (for the gliders to carry the waves of speech), the utterances are quite limited in the amount of information they can carry: necessarily less than the total number of configurations possible in the same tiny amount of space, let alone the universe. Now, theoretically we can offer the utterer all the time he wants to long-windedly describe each fine detail (indeed each cell) of some object, terminating after finite time. But who, or what, is his audience that can reassemble the information into a model containing as much information as the original object? This is one reason computers cannot calculate the evolution of the universe: you don’t even get to specify the initial condition without generating an infinite regress! It almost seems absurd to expect more than fuzziness from meaning in language, but of course our language is not fuzzy, and perhaps this is where some of the confusion lies. Language is ridiculously precise. Unlike facial expressions or performed music, it is exact and codifiable. Yet still we discern subtlety and nuance in our favorite authors, after reading hundreds of thousands of their words (or even a good cadence in a paragraph or sentence).
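The counting behind the limit on utterances can be made explicit (a toy calculation with made-up numbers, not the author’s): an utterance carried by k two-state cells can distinguish at most 2^k messages, while a universe of N cells has 2^N configurations, so for any k smaller than N the namable things fall short by a factor of 2^(N−k).

```python
# Toy numbers; the point is only the shape of the inequality.
N = 1_000_000   # cells in the whole toy universe
k = 1_000       # cells available to carry a single utterance

configurations = 2 ** N   # possible states of the universe
utterances = 2 ** k       # distinct messages one utterance can carry

# No naming scheme fits: there are vastly more configurations than names,
# and the shortfall is itself astronomically large.
assert utterances < configurations
assert configurations // utterances == 2 ** (N - k)
```

Any scheme that tries to name every configuration must therefore spend more space on names than the universe has, which is the regress the paragraph above gestures at.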

iii. Emergence

I want to draw attention to the fact that fuzzy terms (terms whose referents are fuzzy, as in natural language) aren’t just ‘fuzzy,’ as opposed to precise, as a sort of deficit. Rather, there is meaning in a fuzzy term that is essentially lost in the attempt to make it precise. I am thinking of the cellular human, and imagining her, let’s call her Frida, holding a ball and commenting “it is round.” This roundness property, which seems so elementary, is in reality a reflection of the ball’s resemblance to other objects previously perceived by Frida. So the process goes: an object is in front of Frida; some waves of gliders emanate from it (or rather “bounce” off it), carrying information about the object into Frida’s sensory apparatus; then Frida’s brain momentarily gets hold of that information. Within a few seconds most of the information is gone, but some faint ghost remains, a ghost which somehow holds information about the object that is general, and connects to yet other things Frida has seen. This object would not be classified under the blanket abstraction ‘round’ were it not for the ‘intrinsic’ cellular makeup of the ball, but the abstraction cannot be said to be an intrinsic quality of the ball, supported only by the physical state of that ball. I think some would find this distinction too subtle, but it is an absolutely crucial difference for understanding how meaning works and how the reductionist is wrong.

It is only too easy to imagine ‘roundness’ is a concrete quality either enjoyed intrinsically by an object or not. However, terms which are overtly contextual, as opposed to physical, are readily available, and make up the majority of the words we use. Take ‘majority,’ for example, and try to define it in terms of cells in such a way that nearly all usages found in English literature can be said to reference that definition. It can’t be done. We can say, more or less, what it means for a majority of cells in a given region to be on, but that is not what my usage above refers to, nor what we stumble upon in literature, say in the phrase “tyranny of the majority.”

Observe, also, that terms which don’t lend themselves easily to reduction are frequently not more complex, given a context. This is exactly the point, for if it were necessary to give a description of X with complexity proportional to the extent to which X resists reduction, then we would be in a reductionist framework, and X would just be really complex. But the human mind doesn’t work this way. It is frequently possible to communicate great generalities to children, who would have no way of understanding the reduction to finer physical parts. How can one insist “being on one’s best behavior in a restaurant” is really a property or action of physical particles, or even a deep sociological action, when neither of these can be comprehended by the child, while the statement itself is easily understood? Part of the answer is that in building a vast framework of complexity, certain terms become contextually simple while remaining intractably complex from a ground-up perspective. You don’t build “one’s best behavior” from scratch. The other part of the answer is that a child begins with such a framework. Knowledge does not stick to an empty slate (not even a blackboard!); children have a robust way of making sense of generalities from the beginning. My main point here is that meaning is emergent, held together by a framework harder to imagine than a strict partial order. There are more lateral connections in the network of meaning.


5 Responses to 1.3(i-iii) Cell World

  1. snake oil!

    Don’t you find him embarrassing, Dominic?

  2. bunnylover says:

    I think it would help clarify your point if you specified clearly what you mean by “emergence”. (I realize the irony here.) I’ve heard the word used, even by professional philosophers, in a lot of different senses.

    Meaning is surely extrinsic, in the sense that a single utterance can never by itself mean anything to anyone. For example “be on your best behavior” would not make sense if there were no English language, as indeed there wouldn’t be if there were no other examples whatsoever of English sentences, the context of whose utterances could be compared with this instance. Less radically, if the parent just had never said anything like it before, the child would also not be able to grasp any meaning (though the parent still might, in this case).

    Beyond the necessity of having a complicated web of interconnected examples to make sense of meaning, I don’t see anything *intractably* complex going on here. The relevant notion of complexity for the child surely is not “ground-up” complexity in terms of finer physical parts (or factorized wave equations), but rather complexity in terms of the apparent actions the child can take and the perceptions the child can experience; these are the natural primitives. You and I, as curious observers, may in turn explain the child’s conceptualization of the world in terms of its being built of finer physical parts, but so what?

  3. What is intractably complex is what you get when you insist that meaning has a single preferred set of ultimate primitives. That’s how I understand reductionism. I wouldn’t argue that explanations in terms of primitives (those a priori preferred or otherwise) aren’t valid; many are. Rather, meaning isn’t solely subordinate to them.

    To my mind (it has been a long time since I thought about this), emergence is exactly what I’ve outlined above with the child example. It is a phenomenon which arises from some level of causality, say the rules of a cellular automaton, and which is sufficiently complex (say, critters made from cells) to be (a) intractable to explain in terms of that causality level and (b) easy to explain in terms of some other, less intrinsic (local) primitives. To confess, I don’t have a clear, extensive understanding of emergence, but only a handful of examples, and am using these examples to contend against reductionism.

  4. bunnylover says:

    I don’t think reductionism needs a single preferred set of ultimate primitives. I don’t know what the professional philosophers would say here, but to me reductionism is rather about an aversion to taking concepts as primitive any more than strictly necessary. An explanation of a phenomenon which has fewer primitives is to be much favored. That’s why a reductionist would eventually want to break down (in principle, not in practice!) the meaning of “be on your best behavior” in terms of fundamental physics: not because fundamental physics is inherently a good set of primitives, but because this way we’re explaining the sociological phenomenon in the same terms as we’re explaining the phenomena of, say, tables and chairs. If we were to invoke human behavioral concepts as primitives, then those would not explain the tables and chairs, which act absolutely nothing like humans, and our whole explanation would be more mysterious because it would have twice as many primitives, and it’s not even clear how their interrelationships could be defined. So this just wouldn’t be a satisfactory explanation of the total experience of being in a restaurant.

    The above is probably too argumentative; you’re making a lot of good points, and my main objection to what you’re saying is that you’re hellbent on attacking some idea of “reductionism” which to us is rather ill-defined. Neither of us can really claim to know what philosophers think it means, and we don’t seem to agree about what it means, so it becomes meaningless (ha) for us to argue about whether it’s right or wrong.

    Your definition of emergence suggests that it’s actually going to be a ubiquitous phenomenon, because most things worth mentioning that are made out of smaller parts are easier to talk about on their own terms than on the terms of the smaller parts. Cars, for example. Try to talk about what they do without using “car” or a synonym, and it gets extremely annoying: very complex to explain, though not intractably so. It wouldn’t be true if we highlighted really random chunks of atoms and gave them names; there wouldn’t be any local concepts to assign words to that would make them any easier to talk about. I think what’s going on here is that we only give a name to a complicated phenomenon if in fact it economizes on meaning and makes it easier to talk about things we care about.

    It’s just a matter of scale though. I still don’t see any reason to think that something like a brain is fundamentally different from a car apart from being more complex by many orders of magnitude.
