Free Will

By Paul Hsieh

Date: 25 Sep 1994 21:16:01 -0700

From: Paul Hsieh

Newsgroups: alt.philosophy.objectivism

Subject: Free Will Essay (warning: long!)

(Dan Hankins) writes:

>For instance, I have logical problems with the Objectivist arguments about

>free will. In particular, I find flawed the argument that physical

>determinism and conscious free will necessarily contradict each other.

>This is something I'll be addressing in posts in the future. In the

>meantime, the works of Daniel Dennett (specifically, _Elbow Room: The

>Varieties of Free Will Worth Wanting_) will provide the same arguments in

>a more detailed form.

Dan has chosen to open an interesting can of worms.

Although there hasn't been a recent shortage of active topics on a.p.o., I'd like to go ahead and start yet another potentially controversial thread.

The following is a (slightly modified) copy of a letter that I sent to some friends a few months ago. In this letter, I argue that even in a completely deterministic universe, it is possible for entities to develop/evolve which display most (?all) of the traits which we associate with free will.

The immediate implications for Objectivism are still unclear to me. I look forward to hearing what others have to say.

Let the opinions fly!

(For the record, I am *not* trying to claim or prove that we live in a universe such as I discuss below. My essay is instead intended to spur some further discussion on the subject of free will vs. determinism.)

Copyright (C) Paul Hsieh, September 1994 Again, permission is granted to distribute this via Usenet, MDOP, Vixie's Objectivism list, or private e-mail. I do *not* grant permission for this to be distributed via the OSG mailing list. Thank you.


Dear [Insert Name],

This is the essay on free will and determinism that I promised to inflict upon you. Most of what I'll propose has been liberally plagiarized,^H^H^H^H^H^H I mean adapted from Dennett, Dawkins, Conway, Poundstone, and others whose names I can't remember. I've listed the references at the end. If you don't understand anything, it's probably due to a flaw in one of my arguments, not Dennett's or those other writers'.

With all these caveats, here goes! (Warning: This is pretty long.)


Question: Can one meaningfully speak of "free will" in a deterministic universe?

Answer: I think so. Let me try to demonstrate this by showing how an organism can evolve in a deterministic universe, yet exhibit all the characteristics we mean when we talk about free will.

Introduction: The Game of Life

I'd like to propose a thought experiment using a simple deterministic universe. Let's consider John Horton Conway's game of Life. I assume that you're already familiar with his game, but just in case you aren't, let me summarize it briefly for you.

Conway's Life is one of the simpler cellular automata, played on a (preferably) infinite square lattice of cells. Each cell can be in one of two possible states: "on" or "off". All cells start in one of those two states at time=0. The universe then evolves according to certain simple, well-defined transition rules.

For each cell C, consider the 8 adjacent neighbors (including the cells connected via a single diagonal to C).

If at time n, cell C is "on":

. . . . If either 2 or 3 of C's neighbors are "on",

. . . . . . . . then cell C stays "on" at time (n+1);

. . . . . . . . else cell C becomes "off" at time (n+1).

Else (i.e., C is "off" at time n):

. . . . If exactly 3 of C's neighbors are "on",

. . . . . . . . then cell C turns "on" at time (n+1);

. . . . . . . . else cell C stays "off" at time (n+1).

Time is a discrete, quantized variable that only takes integer values. At time=0, each cell is in its initial state. The transition algorithm is then applied to all cells in the lattice. The values for each cell for the next time period (time=1) are calculated. Then, each cell is set "on" or "off" simultaneously to its new value as specified by the transition rules. This generates the new state of the lattice for time=1. This process is repeated to give a new state at time=2, then time=3, etc.
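As a concrete illustration, the transition algorithm can be written in a few lines of Python. (This is a minimal sketch of my own -- the set-of-coordinates representation and the name `step` are illustrative choices, not anything from the essay.)

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) "on" cells."""
    # Count live neighbors for every cell adjacent to some live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A horizontal "blinker" flips to vertical and back every two generations.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # → True
```

Representing the lattice as just the set of "on" coordinates conveniently approximates the infinite lattice: anything not in the set is "off".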

It is easy to see that the state of the lattice at any time=n is completely determined by its initial state at time=0. It may not be a trivial task to calculate the state at an arbitrary time=n for any given initial condition short of actually executing the algorithm. (I think this means that it is a Wolfram Class 4 cellular automaton, but it's been a while...) However, the evolution of the lattice is deterministic -- there is no randomness in the transition rules.

Some Properties of Life

Computer people have noticed that Conway's choice of transition rules leads to interesting results. All sorts of small stable and semi-stable structures are seen in many different Life games using various initial conditions.

There are static clusters of cells that don't change with time, and some of the configurations have names like the "block" and the "beehive". There are structures that oscillate in a fixed sequence between several different patterns and some have been given fanciful names like "blinkers", "tumblers", "traffic lights", "ferris wheels", etc.

There are some structures that move -- for instance, the structure known as the "glider" goes through various contortions, and after 4 turns it reappears, displaced diagonally by one cell from its original position. Other, larger structures, named "spaceships", behave similarly.

Other structures will proliferate wildly. Others will die out either slowly or quickly. Some will shoot off all sorts of moving debris. Others are able to absorb gliders and other moving debris without suffering any permanent change in their configuration -- these are called "glider eaters". Some structures can grow arbitrarily large -- i.e., the number of "on" cells will increase without bound. Others can even reproduce themselves -- i.e., they evolve in such a way that after a certain number of turns, there are two copies of the original structure. These copies can then of course make additional copies, etc. (The future generations may not necessarily all survive, but some will.)
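The glider's four-turn diagonal displacement can be checked directly. The snippet below is my own illustrative sketch (it implements the transition rules described earlier so it stands alone; coordinates are x = column, y = row, with y increasing downward):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life over a set of (x, y) "on" cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# One orientation of the glider:
#   . O .
#   . . O
#   O O O
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the same shape reappears, shifted one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # → True
```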

It is often difficult to predict the final behavior of any given structure. Sometimes changing the state of one of its initial cells can lead to a dramatic difference in final behavior.

(It's even been shown that, given appropriately clever structures, one can implement the equivalent of memory, registers, logic gates, and bit streams in Life -- all the components necessary to make a Turing machine!

However, that fact is not essential for the argument I am about to make. If you want references to this fact or to other aspects of Life, I can dig them up for you.)

How Might "Living" Creatures Develop in a Life Game?

Now, let's consider an infinite lattice with a random distribution of cells turned "on" during the initial state. We would probably want the density of "on" cells to be fairly low, just for the purposes of this experiment. The reasons why will become clear later.

The lattice will therefore consist mostly of empty space ("off" cells) with a few scattered "on" cells. All of the isolated single "on" cells will die out in one turn. A few might (by random chance) be distributed in a cluster that will be stable and won't die off. A very few might even start off as a cluster or set of clusters that will move and/or grow. And a minuscule fraction might start off in a cluster (or evolve into a cluster) which is one of those reproducing structures!

So what will happen as we track the lattice through a long period of time? This is my (and others') guess: First, there will be some reproducing structures that are far apart from each other, and separated by mostly empty space (with a sparse scattering of static blocks, oscillators, and gliders/spaceships randomly located in the otherwise perfect vacuum).

Not all of the reproducing structures will be equally efficient at reproducing. Some will be faster than others.

Some will be more robust than others -- they might create copies that are spaced farther apart from each other, and therefore less likely to interfere with each other during successive iterations of the reproductive process.

Some will be very sensitive if they run into the occasional stray block or glider, and will stop reproducing when they strike debris like that. Others might be less sensitive to debris -- for instance, they might include an outer shell consisting of glider eaters or other equivalents that act to protect them from the random debris. Others might even include components that shoot out gliders that interact and destroy any potentially harmful bits of debris out there before they can harm the main structure.

Basically, some reproducing structures will simply be better at it than others. Hence, over the long term, we will see many more of the better reproducers than the poor reproducers. There's no teleology involved -- it's just a description of these deterministic events unfolding.

There will be hazards that face these reproducers. Interactions with debris can be potentially catastrophic, resulting in loss of the integrity of the reproducers. Hence, those reproducers that are able to either harmlessly absorb the debris or pre-emptively destroy the debris will have a reproductive advantage over the others.

In other words, the presence of these hazards creates a form of natural selection pressure.

There will even be mutations (of a sort). Occasionally a stray glider or random bit of debris will slip through the reproducer's defenses. Most of the time, this will throw a monkey wrench into its works and have a severe deleterious effect. However, occasionally, it might result in a beneficial change -- i.e., in something that permits the organism to reproduce more effectively.

There may even be the equivalent of food. Some forms of debris might be in a configuration that is readily absorbed into the structure of these organisms in such a way as to allow them to increase their size or replace damaged components. (I'm not sure about this particular point, but it's not essential to the argument.)

What Sorts of "Living" Creatures Would Develop in a Life Game?

Now we get to some of the more speculative ideas. I can't prove all of the following ideas, but I hope I can at least make them sound plausible.

As the system evolves, there will be a slow reproduction of widely-spaced reproducers, as described above.

They will proliferate in proportion to their reproductive fitness. Because we have an infinite (or arbitrarily large) universe, pretty much every different possible variety of reproducer will exist *somewhere* in the universe.

Some will have (or may develop) fairly sophisticated capacities. One would be the ability to "sense" characteristics of its immediate neighborhood and react accordingly. For instance, there are probably ways for an organism to send forth (via gliders or some other mechanism) some part of itself which can interact with the neighborhood and send back a signal which conveys relevant information -- e.g., "this part of space is good; it has no appreciable harmful debris" or "this part of space is bad; there is lots of potentially harmful debris; stay away" or "this part of space is very good; there is static debris in a benign configuration that we can use as food".

If the organism also includes a mechanism which can take this "sensory" information and use it to appropriately drive some crude "motor" mechanisms which can steer the organism towards a good area of space or away from a bad one, then we will have the beginnings of purposeful behavior. I'm not claiming that these reproducers have any conscious sense of purpose or intention. They are more like amoebas, which have crude stimulus-response behavior patterns hardwired into their cells. These sophisticated reproducers would be similar, and those which are able to more quickly and accurately "sense" their environments and respond accordingly would have a survival advantage and would proliferate preferentially to those which were less adept.

This is the crudest level at which "perception" can be meaningfully said to occur. (In this case, we have a crude analog to the sense of smell, which is considered by many to be the most fundamental of senses.)

One major leap occurs when these creatures are able to take the information gained and maintain some crude internal representation or model of their environment within their structure. The exact details aren't important -- the information acquired would presumably be stored in some pattern of on/off states of certain elements within the organism. As long as there is some way in which this information from the sensors is stored, and later referred to by the mechanisms which control actions, this can be thought of as a very crude form of "knowledge". Organisms that were able to quickly acquire "knowledge" that accurately reflects the state of the outside world and were able to act on it would have a definite edge in survival over slower and less accurate competitors.

The next major leap is the ability to predict future events. If an organism had a crude information processing mechanism that was able to keep track of past events and was also able to recognize patterns and correlations in this knowledge base, it would have another strong survival advantage. For instance, if certain stimuli always correlated with the eventual appearance of food in a certain direction, it could use that information to move in that direction before the actual food debris was directly encountered by the sensory stimuli. Similarly, an accurate predictive mechanism could help an organism detect and avoid dangers more quickly than it otherwise would have, by detecting the presence of relevant warning signs.

At this stage, one can speak of these creatures as acting *as if* they had intentions. One does not actually have to prove that there is conscious contemplation of intentions going on -- it is enough to say that these creatures use senses to grab and filter information from the environment, and use it to act in seemingly purposeful ways to seek good things and avoid bad things. (Good and bad, of course, being defined in terms of survival.) Let us call these creatures "intentional beings", meaning they act *as if* they had intentions.

Eventually these organisms will encounter other organisms. That's when things get interesting. They might compete for a limited amount of food. Or one might make good food for the other! Or one might pose a direct danger to the survival of the other in other ways. In any case, it would be advantageous for an organism to be able to sense the presence of other organisms and treat them like other natural hazards (or natural benefits).

Again, those that are more proficient at this will have a survival and reproductive advantage.

Even more sophisticated creatures will include in their knowledge model the existence of other creatures not as mere natural phenomena but as other *intentional beings*. Such a creature will be able to take into account the apparent intentions of others. If an organism's knowledge model includes a crude representation of other creatures with *their* knowledge models, this will let it engage in behaviors that look like what we call cooperation, deception, evasion, etc. It can perform actions that are designed to elicit a desired effect in another's knowledge model that works toward the first creature's benefit. Those creatures that are able to more accurately assess and predict the knowledge and behavior of others will have a powerful survival edge. This is the stage at which it becomes meaningful to speak of communication.

Yet more sophisticated creatures will include in their knowledge model a representation of themselves! If a creature is able to "know" its own strengths, weaknesses, knowledge base and likely patterns of behavior in the same way that it knows others', it will be able to use that knowledge to its benefit. It won't attempt tasks that are beyond its physical or computational capabilities. It may detect weaknesses in its physical or mental assets and attempt to strengthen them or work around them. And, if its internal representation of its own knowledge base and computation mechanism is sufficiently detailed, it can apply that computational mechanism onto itself and examine its chains of calculation, performing high-level error checking -- a crude form of "self-awareness" and "rationality". (Again, I'm not claiming that such a creature necessarily has the same subjective sensation that we call self-awareness. All that I am saying is that it will act in a very similar fashion to one who does.)

And finally, we get to the point where organisms include in their own knowledge models representations of other organisms with their own self-representations. We can obviously extend this to even higher levels of representations-within-representations of self and others, but there probably isn't too much utility beyond this point.

How Would These Sophisticated Organisms Behave?

Depending on the complexity of the sensory, motor, and (most importantly) the computational apparatus, these creatures would probably behave similarly to animals in our real world.

These organisms would be able to extract information from the environment and act upon it accordingly. They would seek out good environments and avoid dangerous ones. They could even communicate and cooperate with other organisms, if appropriate. Presumably language would develop in those cases where there is mutual benefit to all concerned (most likely within reproducers of the same species, but perhaps also between closely related species). Because of the richness of their mental maps, these languages would include fairly sophisticated concepts like "food", "danger", "good", "bad", "I", "you", "cooperate", "deception", "past", "future", "maybe", "what if", "almost", etc.

The more sophisticated ones would be capable of memory, learning, and even some reasoning using the information contained within their representations of the external environment. Some might even have the equivalent of elaborate internal conversations with themselves, via mechanisms that are the same as or similar to the ones they use to communicate with others. For instance, if a hungry organism with a highly advanced brain/CPU receives some apparent "food" stimuli, the following "mental traffic" might pass between different subportions of its CPU:

Subprocessor 1: "Hey! This stimulus means food! Let's send a signal to our motor mechanism to move in this direction!"

Subprocessor 2: "Belay that order. After reviewing some recent events, we have found that this particular stimulus is a trap. It is a form of deception put forth by another predatory organism that is subtly different from the real 'food' stimulus. The last couple of times we've pursued it, we only narrowly escaped being eaten ourselves. Steer clear of this false 'food' stimulus."

Subprocessor 3: "Correction. This stimulus does come from a predator, but not from a living predator. Other information shows that the predator is not moving and its physical integrity is disrupted. In addition, some scavenger organisms (approximately 10) are around the predator picking at its internal structure and absorbing food value from the predator's body. The false 'food' stimulus the body is giving off is coming from one of its internal organs. This is probably the source of such a stimulus in a living predator. It is therefore safe to move in its direction. Furthermore, if we wish to partake of this rich food source also, we had better move towards the body swiftly."

Subprocessor 4: "Revision. The scavengers are too numerous. Although we can drive off one or two, we cannot fight all ten without risking serious harm to ourselves. Another solution is necessary."
At this point, the processor might invoke a problem-solving subroutine to analyze the available data. Additional "mental traffic" might consist of the following:

Subprocessor 5: "Goal: To get to the food source. Problem: The presence of 10 scavengers poses an unacceptable threat to us.

"Solution 1 -- approach the food source in such a way that we only contend with one or two scavengers at a time. Objection -- only a small amount of food can be obtained that way before the others are alerted to our presence.

"Solution 2 -- find a way to reduce the number of scavengers. Objection -- we don't understand enough about how the scavengers perceive the world to manipulate their perceptions in a way to drive them off.

"Solution 3 -- find a way to reduce our vulnerability to attack from the scavengers so that even 10 of them do not pose a threat to us. Protective shielding may help. Let us look for raw materials necessary to make such shielding."

Then, the organism initiates the motor commands to enable it to look around for materials with which to make a shield. Because of the tight time constraint (i.e., the food will be all gone soon if it doesn't find some shielding material), the organism devotes most of its "attention" (most of the computational resources at its highest level) to finding some shielding material, even if it means allocating less than normal to other forms of sensory input that pertain to other aspects of the environment. Unfortunately, there is not a sufficient quantity of shielding material available to allow the organism to carry out its plan. The organism is aware of that, and responds by searching a wider area in a more rapid and hasty fashion. The search is still not going well. The organism is on the verge of concluding that the search will not succeed and the problem is unsolvable -- i.e., that the next best step is to abandon the search and look for another food source.

But while searching and moving, the organism "accidentally" interacts with some static debris and makes the equivalent of a loud noise (i.e., it runs into a locus of debris that it would have normally "noticed" and avoided, but failed to do so in this case because of its altered focus of attention). The interaction between the organism and the debris results in the production of some striking stimuli that catches the attention of both the organism and the scavengers.

The scavengers cease eating the carcass for a moment, then look around somewhat nervously. They don't perceive the organism that caused the "sound", and slowly resume eating the carcass.

Our organism *does* notice the scavengers' behavior, however. The following mental traffic ensues: "This 'noise' appeared to startle the scavengers. Re-examining our earlier options, we may have found a way to implement solution (2) above. Perhaps if we reproduce the noise louder and more frequently, we will drive them away. Performing solution (2) would be preferable to solution (3) because it will take less time to implement." The organism then deliberately repeats what it had "accidentally" done last time, but in a slightly different way in order to create a much louder 'noise' than the last time. (Presumably it uses its knowledge about 'physics' in this universe to enable it to do this.) This time, the scavengers are all spooked and they all run away.

Our organism proceeds to move towards the now-unguarded carcass of the predator and triumphantly enjoys a much needed meal.

As the organism eats, various changes take place in its CPU/brain. At some low level of brain function, some strong reinforcing stimuli are released that serve to 'lock' this new piece of information into memory -- i.e., a certain type of 'noise' will scare this species of scavenger away. Other levels of reinforcement also occur -- reinforcement is also given to the problem-solving subroutine to let it know that 'thinking' in a certain way led to productive solutions. This reinforcement is designed to help the problem-solving subroutine repeat that same mode of analysis for future cases. Other reinforcing stimuli are stored about not letting one's attention wander too much while performing an urgent task, lest an unfortunate accident occur. (It is tempered by the realization that this time things worked out well, but next time they might not.) All sorts of high- and low-level lessons are learned from this episode and incorporated into the memory of our organism, at different levels.

A Few Observations About This Computationally Sophisticated Organism

By nearly any standard, this organism displays fairly intelligent behavior, close to if not on par with that performed by humans. It integrates information, it learns, it modifies its subgoals in pursuit of a main goal, and it learns how to modify its own and other entities' behaviour. I think that it is in principle possible to achieve this level of sophistication in both the Game of _Life_ universe as well as in AI labs in our universe. (Maybe not today, but someday.) Furthermore, it wouldn't take much more before we have a creature in the _Life_ universe that would be as intelligent as humans and as capable of using language as we are.

(Since we can program a computer to run the Game of _Life_, once we learn the language of intelligent creatures that evolve within our program, we can interact with them. We could for instance introduce our own bit streams into portions of the universe which to the organisms would seem like voices from nowhere. And since we can control every aspect of their universe, we could be literally like gods to them, if we so chose.)

These organisms would reproduce, pursue goals, communicate and transmit information, and otherwise act very much like living creatures in our world. Some of their behaviour would be hard-wired in, based on traits that developed during the species' evolution, much as our own biology causes us to have some hard-wired behaviors.

Other forms of their behaviour would be subject to modification based on facts that are learned either by direct experience or communicated by others.

Again, I think that the natural course of evolution would lead to the development of truly intelligent species with their own languages and cultures, all implemented on this simple cellular automaton!

Yes, But Do They Have Free Will?

That depends on what exactly you mean by free will. On the surface, the answer would seem obvious: "Of course they don't have free will. Their behaviour is completely determined by the state of the cellular automaton at time=0, before this species even existed!" This is true. But on the other hand, the following facts are also true:

(1) Although the creature is influenced by external stimuli (and by past events), in a very real sense, it controls itself. It would be similar to NASA engineers trying to design a space probe that could land on other planets. If the planet were close by in our solar system, it could be reliably controlled from Earth via radio transmissions.

However, if the planet were too far away (where the light-speed delay becomes too long to be practical), then it might be a better strategy to give it more advanced programming and allow it to make its own decisions on the planet surface. That way, the probe would have a better chance of avoiding dangers and accomplishing its goals than if it had to wait several hours for instructions and information to go back and forth between Pluto and Mission Control in Texas. In this case, the NASA engineers would have relinquished control of the probe from themselves to *the probe itself*. (Or as Dennett says, the probe ceases to be a puppet and becomes a robot.) Yes, the probe responds deterministically to the stimuli it receives on the planet Pluto. But *it* decides what to do -- certainly no one else is doing the deciding!

The same would be true of our organisms in the Game of Life -- they would be in control of themselves. No one else decides for them (although others can attempt to persuade them, deceive them, coerce them, etc., just as we humans can do to each other.)

(2) But, you say, its decisions are predetermined, given a particular set of stimuli. It can't *really* choose between two alternatives. On the other hand, *I* can!

Here I disagree. In the example I gave earlier, our organism had to decide between three different strategies for getting food. It looked at them, rejected #1 and #2, and decided upon #3. Then, when that didn't work out, it came across some new information which led it to re-evaluate its decision and choose #2.

Sometimes the organism will be confronted with a decision in which there is only one obvious rational choice. In that case, its decision-making algorithm will probably quickly settle on that choice while barely giving the alternatives any consideration at all.

That's pretty similar to what you do in these circumstances. Suppose you are at a cross-walk, waiting to cross a busy street. The red "Don't Walk" sign is on. You basically have two choices -- [1] either wait until the light changes or [2] go ahead, take your chances, and cross against the light. But as the heavy stream of cars whizzes by at 45 mph, you decide to choose number [1]. In fact, the choice is so obvious, no one in their right mind would choose number [2]. And in fact, you probably don't even give [2] any conscious consideration -- your subconscious prunes the decision tree for you before it gets to your conscious level. So, you don't really choose between two alternatives here any more than our Life organism does.

Sometimes the organism is faced with a decision between two viable alternatives. In that case, the choice is not so obvious at first glance. The organism will therefore have to spend a longer time deciding. Part of the decision process may involve weighing a complicated set of factors, some of which favor one alternative and some of which favor the other. In that case, the organism may include as part of its decision-making process an algorithm to attempt to extrapolate the future based on all the available relevant data, to see what the future would look like if it takes option [1] vs. if it takes option [2]. Depending on how sophisticated its internal modelling process is, it may simulate within its CPU all sorts of details of the two potential outcomes. It may go back and forth trying to extrapolate all sorts of variants of the two futures -- "What's the best case if I take [1] vs. the best case if I take [2]?" "What's the worst case if I take [1] vs. the worst case if I take [2]?" "How easy is it to undo the effects of a wrong decision?" "What other information might make the decision easier? How can I obtain that information?" "How urgent is it that I decide now -- can I delay the decision until later?" Eventually, it reaches a decision. Somewhere in its decision-making algorithm, the scales tip just ever so slightly in one direction over the other, and that becomes its choice. (Perhaps it may even require the functional equivalent of a random number generator if the two options are perfectly balanced, but *a* choice needs to be made quickly.) In this case, all of the mental traffic of the organism would reflect the equivalents of mental writhings and contortions that it went through.
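A toy sketch of this kind of weighing might look like the following. (Everything here is my own illustration -- the scoring scheme, the function names, and the random tie-break are illustrative choices, not anything from Dennett or the Life literature.)

```python
import random

def choose(options, simulate, rng=random.Random(0)):
    """Pick the option whose imagined futures score best.

    `simulate(option)` returns a list of numeric scores, one per
    extrapolated future for that option.
    """
    scored = []
    for opt in options:
        futures = simulate(opt)
        # Weigh best case, worst case, and average outcome together.
        score = (max(futures) + min(futures) + sum(futures) / len(futures)) / 3
        scored.append((score, opt))
    best = max(s for s, _ in scored)
    # The functional equivalent of a random number generator breaks
    # perfectly balanced ties, so *a* choice is always made.
    return rng.choice([opt for s, opt in scored if s == best])

# Waiting at the crosswalk clearly beats crossing against traffic:
print(choose(["wait", "cross"],
             simulate=lambda o: [10] if o == "wait" else [-100, 5]))  # → wait
```

The point of the sketch is only that "agonizing over alternatives" can be an ordinary, deterministic computation over imagined futures.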

How different is that from what you or I do? Whenever I've been faced with a difficult decision (which medical school to go to, which job to take, should I break up with this girlfriend, etc.) I've always agonized over it, sometimes for a long time. At some point however, after a lot of deliberation, something seems to percolate up from deep within my innards, and it almost feels as if the decision were made for me! It's difficult to describe, but I assume that you've had similar experiences.

The reason that you feel like you could choose either way is because part of your mental analysis includes imagining yourself in both possible futures. In fact, the very process of imagining both alternatives (sometimes in excruciating detail) is a crucial part of the decision making process!

So, yes, you *do* choose. You analyze and integrate the available information and arrive at a decision. But then, so does our game of Life organism!

(3) Ahhh, but you say, I am *responsible* for my own decisions. The deterministic organism bears no responsibility for what it does -- things can't turn out any other way for it than the way they actually do.

Dennett discusses this at great length in his book on free will. I'll just attempt to summarize his main point, as I understand it. (The examples below are mine, however.)

The concept of responsibility is important for us as rational creatures. If we act as if we had responsibility and freedom to choose, we will have more options than if we sit by with a fatalistic attitude and ignore opportunities that come by. Furthermore, from a moral point of view, if we accept responsibility for ourselves and hold others responsible for their actions, this helps lead to greater morality in ourselves and others.

Can a computationally sophisticated Life organism understand and accept responsibility? The answer is yes.

If it performs an action that leads to a bad outcome (either for itself or another organism that it values), it will critique that prior action, much as in the example of the food seeker I gave above. Depending on the situation, it might say to itself (i.e., its mental traffic might consist of symbols which translate to the following sentences) something like this:

Example 1: "That was a foolish move I just made. I should have paid more attention to the warning signs. I almost walked into a predator's trap, thinking that I was chasing food! I'm smarter than that. Next time, I'll just have to be more careful!" (If the brain were structured properly, this higher-level mental traffic would also induce changes in lower levels of cognitive functioning that would cause the brain to "remember" this lesson and alert the higher levels to it the next time a similar situation is encountered.)

Example 2: "My trustworthy colleague, whom I've known for many time cycles, is dead! And I was the direct cause! I overestimated my degree of control over this lethal weapon in my 'hand', and it slipped and 'killed' my partner!

This disrupts my expectations of the future (in which I pictured my partner being with me for a long time, engaging in mutual cooperation). This is very detrimental to nearly all of my short- and long-range goals. I must never allow something like this to happen again!"

Similarly, it might hold another organism responsible for its actions:

Example 3: "A certain subroutine in your decision-making process -- the one which kicks in when you need to perform rapid, hasty actions under severe time constraints -- was invoked inappropriately. You almost caused me great harm! I wish you to think about what could have happened, and I expect you to take steps to control your 'temper' so that it does not happen again!"

If the Life organism has a sufficiently complex mental life -- if it has symbols for "I", "you", its own thought processes, and others' thought processes, and it has the ability to perform "what if" calculations, the ability to critique its own and others' behavior in light of its goals, the ability to remember the results of its critiques at some level of cognitive function, and the recognition that critiques are valuable *because* it can remember the lessons learned from them -- then all of the above conversations are possible (and even desirable).

In other words, in a very real sense, the organism can understand that it is responsible for the outcomes of its actions. It can also meaningfully hold other organisms responsible for the outcomes of their actions.
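A minimal sketch of this critique-and-remember loop, in Python, using names of my own invention (not Dennett's): a bad outcome triggers a critique, the critique stores a lesson keyed by the kind of situation, and the stored lesson alters behavior the next time around.

```python
class Organism:
    """Toy model: responsibility as self-critique plus memory."""

    def __init__(self):
        self.lessons = {}  # kind of situation -> action learned from critique

    def act(self, situation, default_action):
        # Lower cognitive levels alert the higher levels to past lessons.
        return self.lessons.get(situation, default_action)

    def critique(self, situation, outcome_was_bad, better_action):
        # "That was a foolish move. Next time I'll have to be more careful."
        if outcome_was_bad:
            self.lessons[situation] = better_action
```

Critiquing itself is worthwhile for such an organism precisely because the lesson persists and changes its future actions -- which is the point of the paragraph above.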

Some Concluding Remarks

If you were to encounter in real life a sophisticated android with all of the above cognitive and information-processing capabilities, but one that was completely deterministic, I don't think you could distinguish it from a person who had "free will". Even its internal mental conversations would be nearly identical to yours or mine as it tried to make a decision or attempted to bear responsibility for its actions.

There are a lot more subtleties to Dennett's book that I haven't touched on in this essay. I'll have to re-read it to make sure I understand all of his arguments, since a lot of them went over my head the first time. But after my first pass, here are the major ideas that I took home from it:

1) Most (if not all) of what we mean when we talk about free will is not necessarily incompatible with a deterministic universe.

2) We may in fact be highly sophisticated "organic robots" following *very* complicated programs. This does not necessarily invalidate all our concepts of self-control, volition, responsibility, etc.

3) Rather than thinking -- "You mean we might be nothing more than organic robots! How terrible!", we should think, "Wow! I never realized that organic robots are in principle capable of such astoundingly rich and intelligent behavior!"

A Few Marginally Related Topics

1) It's still unclear to me if these Life creatures would experience qualia, like we do. What would it be like to be one of them? Would it be like anything? Is there a consciousness "at home" in these creatures?

2) What sorts of communications could we humans in our world have with these Life creatures implemented on one of our supercomputers? I would imagine that we could discuss mathematics with them, as well as the physics of their universe. But would they have a sense of humor, for instance, and find jokes funny? (I suspect so, at least for the right type of jokes.)

3) In a very real sense, we would be like gods to them, since we can arbitrarily control and alter any aspect of their universe. They, on the other hand, would have no way to even begin to understand what our universe is like.

Is it possible that we are intelligent beings implemented in someone else's cellular automata system?
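For concreteness, the Life universe I've been referring to runs on Conway's standard rules: a live cell survives with 2 or 3 live neighbors, and a dead cell becomes live with exactly 3. The whole deterministic "physics" fits in a few lines:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life.

    `live` is a set of (x, y) coordinates of live cells; the returned set
    is the next generation, computed deterministically from this one.
    """
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

Everything I've attributed to Life organisms in this essay -- extrapolation, critique, "choice" -- would ultimately be implemented in patterns evolving under nothing but this rule, which is what makes the thought experiment interesting.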

References

1) Daniel Dennett; Elbow Room: The Varieties of Free Will Worth Wanting; Bradford Books/MIT Press; 1984.

(Best discussion of free will I've ever read.)

2) Elwyn Berlekamp, John H. Conway, Richard Guy; Winning Ways for Your Mathematical Plays; Vol. 2, Chapter 25, "What is Life?"; Academic Press; 1982. (The entire two-volume book is about the mathematical analysis of games. Chapter 25 is devoted to Life. Of course, one would expect Conway to be pretty knowledgeable about the subject.)

3) William Poundstone; The Recursive Universe; Contemporary Books; 1985. (Also discusses Life and how it might relate to cosmology. A little speculative at times in the cosmology sections, but the chapters on Life are good.)

4 and 5) Richard Dawkins; The Selfish Gene and The Blind Watchmaker. (I don't have them with me, so I don't remember the publishers/dates. Excellent discussions of the evolutionary process and how amazing order can arise out of mere "blind chance".)



