Responding to my post on the Game of Life and Emergence, John Doyle, over at Ktismatics, speculates about the ontological status of the patterns that emerge in the game. Doyle first outlines five helpful criteria for emergence drawn from Jaegwon Kim:

1. Systems with a higher level of complexity emerge from the coming together of lower-level entities in new structural configurations.
2. Higher-level systems exhibit higher-level emergent properties arising from the lower-level properties and relations of their constituent parts.
3. Emergent properties are not predictable from information about lower-level conditions.
4. Emergent properties are not explainable or reducible to the lower-level conditions.
5. Emergent properties have novel causal powers of their own.

I am largely in agreement with these five criteria of emergent phenomena, so long as number four isn't taken to entail anything spooky, like a sudden magical leap, but is instead read as a thesis about scale-dependent properties that could not have been strictly predicted from the lower-level rules. I'll have more to say about this in a moment. For those interested in actually playing the Game of Life, Ian Bogost has been kind enough to provide a link here.


Raising the question of the ontological status of the critters that emerge in the Game of Life– gliders, guns, rakes, etc. –John writes,

I detect the emergent properties when I watch the game go through its iterations. To what extent are they properties of the game itself? Certainly at the lower level the individual cells do light up or go dark. Certainly in the aggregate the lights form patterns. But what about those higher-level clusters of cells that appear to move across the screen over time: do they really move? They seem to eat other objects or fire weapons or propel themselves across the screen: are they really doing so?

The seemingly mobile and purposive objects that emerge from running the game aren’t physical objects being tracked by a camera or a computerized eye. I’d say that they’re optical illusions, imposed by our perceptual systems on the higher-level emergent optical outputs generated by the program. The illusion takes advantage of the human perceptual system’s ability to impose higher-order structure on sensory input so as to extract meaningful information from a visual array. So: at time t I see an illuminated rectangle of dimensionality L*H located at position XY on the grid; at time t+1 I see an illuminated L*H rectangle located at position X(Y+1). My visual perception system interprets this information as evidence that the original rectangle moved a little bit to the right. Inside the game’s algorithm, though, what happened is that the leftmost cell on the illuminated rectangle switched from on to off, while the cell just to the right of the rectangle switched from off to on. This isn’t the same rectangle moving to the right; it’s two separate rectangles displayed sequentially.
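
John's point can be made concrete with a toy example (mine, not his): compare two successive frames cell by cell and nothing has moved; one cell has switched off and another has switched on.

```python
# Two successive "frames" of a toy display: a two-cell illuminated bar
# that appears to shift one cell to the right. These frames are
# illustrative, not output from the actual Game of Life program.
frame_t  = [0, 1, 1, 0, 0]   # the bar occupies cells 1 and 2
frame_t1 = [0, 0, 1, 1, 0]   # the bar appears at cells 2 and 3

# Which cells actually changed state between the two frames?
changed = [i for i, (a, b) in enumerate(zip(frame_t, frame_t1)) if a != b]
print(changed)  # [1, 3] — cell 1 switched off, cell 3 switched on
```

Nothing in the data picks out "the same bar" persisting across frames; the identity of the moving object is supplied by the perceiver, exactly as John describes.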

The rest of the post is well worth the read. Clearly this conclusion is not going to be acceptable from the standpoint of onticology, but not for the reasons that one might think. The two key principles of onticology are the ontic principle and the principle of translation. Throughout his post John draws a distinction between reality or the “really real” and illusion. But it is precisely this distinction that is undermined by the ontic principle. As I argue in my post on Flat Ontology, there are not two worlds– one consisting of the really real or “mind-independent objects” and another consisting of mind and the social –but rather only one world, the real, of which mind is counted as a member. Consequently, the first point to make is that the phenomena that take place in the mind regarding the game are themselves real. They are not less real than the game itself, nor “other” than the real.

What is taking place in the mind regarding the game is an instance of what I call translation. The principle of translation states that there is no transportation of a difference without a transformation or translation of that difference. In other words, in the interaction between the game and mind a difference is conveyed from one domain (the game) to another (the mind). In being received by the mind– or, preferably for me, the brain –that difference is reorganized or transformed in a variety of system-specific ways precisely as John describes. However, the important caveat made by the object-oriented ontologist is that this process of translation is true not simply of mind-object interactions, but of all object-object interactions regardless of whether or not minds are involved. In other words, translation is every bit as much a phenomenon characterizing the interaction of rocks with sunlight as it is of frogs tracking “flies” and humans regarding the Game of Life. There is no object that receives differences from other objects like a glassy reflection in a mirror– including mirrors themselves! Rather, for every interaction between objects there is a translation and a transformation. As such, translation is not an epistemological limitation that prevents us from ever getting at the “true things in themselves”, but is rather a general ontological feature of all inter-ontic relations among objects. Translation is an ontological process.

However, while I think John is right to claim that our minds “interpret” what they perceive on the screen, attributing intentional characteristics to the patterns, I think it is a mistake to suggest that there is not real emergence and pattern taking place within the computer program. It is not simply an illusion of our cognition that makes the patterns on the screen behave as they do. Rather, these are real properties of the system itself. A while back I wrote a post on the structure of possibility, distinguishing between logical possibility, physical possibility, biological possibility, and historical possibility. If structured possibility is conceptually important, then this is because the constraints belonging to each of these levels of possibility (and we could add additional levels) play a role in defining fitness-landscapes of what can and cannot take place within a particular field of interactions.

The Game of Life is interesting because we see all of these structures of possibility at work, generating the emergent phenomena we witness on the screen. Thus, of course, at the least interesting level of possibility and constraint, logical possibility, the programming must be internally consistent in order to run. At the next, more interesting level, we get the physical structure of possibility governing the game. Conway presents four physical laws governing the unfolding of the game:

1. Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
2. Any live cell with more than three live neighbours dies, as if by overcrowding.
3. Any live cell with two or three live neighbours lives on to the next generation.
4. Any dead cell with exactly three live neighbours becomes a live cell.
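
Read as a specification, the four rules above translate almost line for line into code. Here is a minimal sketch in Python, assuming a sparse representation in which the board is just a set of live (x, y) cells; the function name and representation are mine, not anything from Conway:

```python
from collections import Counter

def step(live):
    """Apply Conway's four rules once to a set of live (x, y) cells.

    Rules 1 and 2 (death by underpopulation and overcrowding) fall out
    of simply not carrying a cell forward; rule 3 keeps a live cell
    with two or three live neighbours; rule 4 births a dead cell with
    exactly three.
    """
    # For every cell adjacent to some live cell, count its live neighbours.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }
```

A horizontal row of three live cells (the familiar “blinker”) flips to a vertical row and back again, which makes an easy sanity check: `step(step(b)) == b`.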

These “physical laws” place constraints on what can take place in the Game of Life, but do not tell us what will take place in the Game of Life. In order to get that, we need two other structures of possibility or constraint. On the one hand, we need historical possibility. Historical possibility will consist of the initial conditions with which the Game of Life begins. Note that the physical structure of the Game of Life tells us nothing about these initial conditions, but only the constraints on whatever conditions happen to obtain. Initial conditions, in contrast to the structure of physical possibility, are instead freely chosen by the player at the beginning of the game (I strongly recommend fiddling with the game at the link I gave above to get a sense of this). Historical conditions are thus constrained by the structure of physical possibility, but are not determined by this structure. The manner in which the game evolves will depend on these initial conditions. You will get entirely different patterns depending on these initial conditions. In evolutionary biology it is often said that were we to rewind and replay the emergence of life on Earth, it would turn out entirely differently. This is because, under this scenario, historical conditions would differ. For example, had the asteroid not hit the Earth, the world might still very well be dominated by dinosaurs.
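
The same point can be run in miniature: under identical physical laws, different seeds yield entirely different histories. A quick sketch (the seed patterns are standard Life folklore; the helper function is my own):

```python
from collections import Counter

def step(live):
    """One generation of Conway's rules over a set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# Three initial conditions under one and the same set of physical laws.
block   = {(0, 0), (0, 1), (1, 0), (1, 1)}  # still life: a fixed point
blinker = {(0, 0), (1, 0), (2, 0)}          # oscillator with period 2
lone    = {(0, 0)}                          # dies of underpopulation

print(step(block) == block)            # True: this history is frozen forever
print(step(step(blinker)) == blinker)  # True: this history cycles
print(step(lone))                      # set(): this history ends at once
```

The rules never change; only the historical conditions do, and that difference alone separates permanence, oscillation, and extinction.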

Finally, fourth, we have biological constraints. The Game of Life develops through the instantiations of the rules constraining physical possibility from moment to moment. The arrangement of ON and OFF cells at any particular moment defines what is “biologically possible” for the patterns at any given point in time. You can’t simply leap from one form of organization to another within the game, but must pass through a series of intermediaries defined by the physical constraints of the system. Similarly, while I no doubt share plenty of genes with the bonobo, it is biologically impossible for me or my offspring to leap to bonobo phenotypes in a single bound, as a whole series of transitional states would have to first occur in order for this to take place.
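
The glider makes this “no leaps” constraint vivid: after four generations it reappears as the very same shape shifted one cell diagonally, but only by passing through intermediate configurations that are not translates of the original. A sketch (the glider coordinates are the standard ones; the helper functions are mine):

```python
from collections import Counter

def step(live):
    """One generation of Conway's rules over a set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def normalise(cells):
    """Shift a pattern so its bounding box starts at the origin."""
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return {(x - mx, y - my) for x, y in cells}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

g = glider
for _ in range(4):
    g = step(g)

# Four generations later: the same pattern, one cell down and to the right...
print(g == {(x + 1, y + 1) for x, y in glider})  # True

# ...but the very next generation is a different shape, not a translate of
# the glider: the pattern has to pass through such intermediaries to get there.
print(normalise(step(glider)) == normalise(glider))  # False
```

No rule ever says “move the glider”; the displacement is only the net result of stepping through every intermediate configuration the physical constraints permit.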

The point I am trying to make– perhaps clumsily –is that while human beings certainly “project” all sorts of things on to the Game of Life when viewing it, the patterns there on the screen are not a result of how we interpret these patterns, but are a result of both the physical constraints governing the system and the emergent patterns that arise from the initial conditions. It is perhaps metaphorical to suggest that one “glider” is eating another in the Game of Life, but nonetheless it is the case that the gliders glide across the screen according to very strict regularities within the game itself, not as a result of our minds. What is interesting here is that we get all this complexity from very “stupid” basic rules, requiring no further design on our part. Perhaps the more disquieting possibility is that our minds are themselves like this– the result of a lot of stupid, unintelligent algorithms at small levels of scale that give rise to emergent and patterned order at higher levels of scale.