  Reflections on Stephen Wolfram's "A New Kind of Science"
David Lucifer
Reflections on Stephen Wolfram's "A New Kind of Science"
« on: 2002-05-15 13:05:14 »

Reflections on Stephen Wolfram's "A New Kind of Science"

Source: KurzweilAI.net
Authors: Ray Kurzweil
Dated: 2002-05-14

In his remarkable new book, Stephen Wolfram asserts that cellular automata operations underlie much of the real world. He even asserts that the entire Universe itself is a big cellular-automaton computer. But Ray Kurzweil challenges the ability of these ideas to fully explain the complexities of life, intelligence, and physical phenomena.



Stephen Wolfram's A New Kind of Science is an unusually wide-ranging book covering issues basic to biology, physics, perception, computation, and philosophy. It is also a remarkably narrow book in that its 1,200 pages discuss a singular subject, that of cellular automata. Actually, the book is even narrower than that. It is principally about cellular automata rule 110 (and three other rules which are equivalent to rule 110), and its implications.

It's hard to know where to begin in reviewing Wolfram's treatise, so I'll start with Wolfram's apparent hubris, evidenced in the title itself. A new science would be bold enough, but Wolfram is presenting a new kind of science, one that should change our thinking about the whole enterprise of science. As Wolfram states in chapter 1, "I have come to view [my discovery] as one of the more important single discoveries in the whole history of theoretical science."1

This is not the modesty that we have come to expect from scientists, and I suspect that it may earn him resistance in some quarters. Personally, I find Wolfram's enthusiasm for his own ideas refreshing. I am reminded of a comment made by the Buddhist teacher Guru Amrit Desai, when he looked out of his car window and saw that he was in the midst of a gang of Hell's Angels. After studying them in great detail for a long while, he finally exclaimed, "They really love their motorcycles." There was no disdain in this observation. Guru Desai was truly moved by the purity of their love for the beauty and power of something that was outside themselves.

Well, Wolfram really loves his cellular automata. So much so, that he has immersed himself for over ten years in the subject and produced what can only be regarded as a tour de force on their mathematical properties and potential links to a broad array of other endeavors. In the end notes, which are as extensive as the book itself, Wolfram explains his approach: "There is a common style of understated scientific writing to which I was once a devoted subscriber. But at some point I discovered that more significant results are usually incomprehensible if presented in this style…. And so in writing this book I have chosen to explain straightforwardly the importance I believe my various results have."2 Perhaps Wolfram's successful technology business career may also have had its influence here, as entrepreneurs are rarely shy about articulating the benefits of their discoveries.

So what is the discovery that has so excited Wolfram? As I noted above, it is cellular automata rule 110, and its behavior. There are some other interesting automata rules, but rule 110 makes the point well enough. A cellular automaton is a simple computational mechanism that, for example, changes the color of each cell on a grid based on the color of adjacent (or nearby) cells according to a transformation rule. Most of Wolfram's analyses deal with the simplest possible cellular automata, specifically those that involve just a one-dimensional line of cells, two possible colors (black and white), and rules based only on the two immediately adjacent cells. For each transformation, the color of a cell depends only on its own previous color and that of the cell on the left and the cell on the right. Thus there are eight possible input situations (i.e., 2^3 combinations of three cells, each with two possible colors). Each rule maps all combinations of these eight input situations to an output (black or white). So there are 2^8 = 256 possible rules for such a one-dimensional, two-color, adjacent-cell automaton. Half of the 256 possible rules map onto the other half because of left-right symmetry. We can map half of them again because of black-white equivalence, so we are left with 64 rule types. Wolfram illustrates the action of these automata with two-dimensional patterns in which each line (along the Y axis) represents a subsequent generation of applying the rule to each cell in that line.
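
To make the counting concrete, here is a minimal Python sketch of how such an elementary rule can be encoded and applied; the function names and the bit-numbering convention are my own illustrative assumptions, not Wolfram's notation.

# Minimal sketch of a one-dimensional, two-color, nearest-neighbor
# ("elementary") cellular automaton. Names here are illustrative.

def rule_table(rule_number):
    # The 8 neighborhoods (left, center, right) are numbered 0..7;
    # bit i of the rule number gives the output color for neighborhood i.
    return {
        (l, c, r): (rule_number >> (l * 4 + c * 2 + r)) & 1
        for l in (0, 1) for c in (0, 1) for r in (0, 1)
    }

def step(cells, table):
    # Apply the rule to every cell; cells off the edge are treated as white (0).
    padded = [0] + list(cells) + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]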

Most of the rules are degenerate, meaning they create repetitive patterns of no interest, such as cells of a single color, or a checkerboard pattern. Wolfram calls these rules Class 1 automata. Some rules produce arbitrarily spaced streaks that remain stable, and Wolfram classifies these as belonging to Class 2. Class 3 rules are a bit more interesting in that recognizable features (e.g., triangles) appear in the resulting pattern in an essentially random order. However, it was the Class 4 automata that created the "ah ha" experience that resulted in Wolfram's decade of devotion to the topic. The Class 4 automata, of which Rule 110 is the quintessential example, produce surprisingly complex patterns that do not repeat themselves. We see artifacts such as lines at various angles, aggregations of triangles, and other interesting configurations. The resulting pattern is neither regular nor completely random. It appears to have some order, but is never predictable.

Why is this important or interesting? Keep in mind that we started with the simplest possible starting point: a single black cell. The process involves repetitive application of a very simple rule3. From such a repetitive and deterministic process, one would expect repetitive and predictable behavior. There are two surprising results here. One is that the results produce apparent randomness. Applying every statistical test for randomness that Wolfram could muster, the results are completely unpredictable, and remain (through any number of iterations) effectively random. However, the results are more interesting than pure randomness, which itself would become boring very quickly. There are discernible and interesting features in the designs produced, so the pattern has some order and apparent intelligence. Wolfram shows us many examples of these images, many of which are rather lovely to look at.
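
The experiment is easy to repeat. The following self-contained sketch (my own illustration; the grid width, symbols, and number of generations are arbitrary choices) runs rule 110 from a single black cell and prints each generation; the wedge of interlocking triangles it produces is neither periodic nor uniformly random.

# Rule 110 run from a single black cell. '#' = black, '.' = white.
RULE = 110
WIDTH, GENERATIONS = 64, 30

row = [0] * WIDTH
row[-2] = 1  # single black cell near the right edge (rule 110 grows leftward)

for _ in range(GENERATIONS):
    print("".join("#" if c else "." for c in row))
    padded = [0] + row + [0]
    row = [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
           for i in range(1, WIDTH + 1)]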

Wolfram makes the following point repeatedly: "Whenever a phenomenon is encountered that seems complex it is taken almost for granted that the phenomenon must be the result of some underlying mechanism that is itself complex. But my discovery that simple programs can produce great complexity makes it clear that this is not in fact correct."4

I do find the behavior of Rule 110 rather delightful. However, I am not entirely surprised by the idea that simple mechanisms can produce results more complicated than their starting conditions. We've seen this phenomenon in fractals (i.e., repetitive application of a simple transformation rule on an image), chaos and complexity theory (i.e., the complex behavior derived from a large number of agents, each of which follows simple rules, an area of study that Wolfram himself has made major contributions to), and self-organizing systems (e.g., neural nets, Markov models), which start with simple networks but organize themselves to produce apparently intelligent behavior. At a different level, we see it in the human brain itself, which starts with only 12 million bytes of specification in the genome, yet ends up with a complexity that is millions of times greater than its initial specification5.

It is also not surprising that a deterministic process can produce apparently random results. We have had random number generators (e.g., the "randomize" function in Wolfram's program "Mathematica") that use deterministic processes to produce sequences that pass statistical tests for randomness. These programs go back to the earliest days of computer software, e.g., early versions of Fortran. However, Wolfram does provide a thorough theoretical foundation for this observation.
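
The point that a fully deterministic procedure can produce statistically random-looking output is easy to illustrate. The sketch below shows a classic linear congruential generator of the kind early language runtimes used; the specific constants are the well-known "minimal standard" Lehmer values, chosen here for illustration, and nothing in it is taken from Mathematica.

# A deterministic pseudo-random number generator (Lehmer linear congruential
# generator). The same seed always yields the same stream, yet the output
# passes common statistical tests of randomness.

def lcg(seed, a=16807, m=2**31 - 1):
    state = seed
    while True:
        state = (a * state) % m
        yield state / m  # uniform-looking value in (0, 1)

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
print(sample)  # identical every run: determinism producing apparent randomness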

Wolfram goes on to describe how simple computational mechanisms can exist in nature at different levels, and that these simple and deterministic mechanisms can produce all of the complexity that we see and experience. He provides a myriad of examples, such as the pleasing designs of pigmentation on animals, the shape and markings of shells, and the patterns of turbulence (e.g., smoke in the air). He makes the point that computation is essentially simple and ubiquitous. Since the repetitive application of simple computational transformations can cause very complex phenomena, as we see with the application of Rule 110, this, according to Wolfram, is the true source of complexity in the world.

My own view is that this is only partly correct. I agree with Wolfram that computation is all around us, and that some of the patterns we see are created by the equivalent of cellular automata. But a key question to ask is this: Just how complex are the results of Class 4 automata?

Wolfram effectively sidesteps the issue of degrees of complexity. There is no debate that a degenerate pattern such as a chessboard has no effective complexity. Wolfram also acknowledges that mere randomness does not represent complexity either, because pure randomness also becomes predictable in its pure lack of predictability. It is true that the interesting features of a Class 4 automaton are neither repetitive nor purely random, so I would agree that they are more complex than the results produced by other classes of automata. However, there is nonetheless a distinct limit to the complexity produced by these Class 4 automata. The many images of Class 4 automata in the book all have a similar look to them, and although they are non-repeating, they are interesting (and intelligent) only to a degree. Moreover, they do not continue to evolve into anything more complex, nor do they develop new types of features. One could run these automata for trillions or even trillions of trillions of iterations, and the image would remain at the same limited level of complexity. They do not evolve into, say, insects, or humans, or Chopin preludes, or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles that we see in these images.

Complexity is a continuum. In the past, I've used the word "order" as a synonym for complexity, which I have attempted to define as "information that fits a purpose."6 A completely predictable process has zero order. A high level of information alone does not necessarily imply a high level of order either. A phone book has a lot of information, but the level of order of that information is quite low. A random sequence is essentially pure information (since it is not predictable), but has no order. The output of Class 4 automata does possess a certain level of order, and it does survive like other persisting patterns. But the pattern represented by a human being has a far higher level of order or complexity. Human beings fulfill a highly demanding purpose in that they survive in a challenging ecological niche. Human beings represent an extremely intricate and elaborate hierarchy of other patterns. Wolfram regards all patterns that combine recognizable features with unpredictable elements as effectively equivalent to one another, but he does not show how a Class 4 automaton can ever increase its complexity, let alone become a pattern as complex as a human being.

There is a missing link here in how one gets from the interesting, but ultimately routine, patterns of a cellular automaton to the complexity of persisting structures that demonstrate higher levels of intelligence. For example, these Class 4 patterns are not capable of solving interesting problems, and no amount of iteration moves them closer to doing so. Wolfram would counter that a rule 110 automaton could be used as a "universal computer."7 However, by itself a universal computer is not capable of solving intelligent problems without what I would call "software." It is the complexity of the software that runs on a universal computer that is precisely the issue.

One might point out that the Class 4 patterns I'm referring to result from the simplest possible cellular automata (i.e., one-dimensional, two-color, two-neighbor rules). What happens if we make the automata more complex, e.g., increase the dimensionality, allow multiple colors, or even generalize these discrete cellular automata to continuous functions? Wolfram addresses all of this quite thoroughly. The results produced from more complex automata are essentially the same as those of the very simple ones. We obtain the same sorts of interesting but ultimately quite limited patterns. Wolfram makes the interesting point that we do not need to use more complex rules to get the complexity (of Class 4 automata) in the end result. But I would make the converse point that we are unable to increase the complexity of the end result through either more complex rules or through further iteration. So cellular automata only get us so far.

So how do we get from these interesting but limited patterns of Class 4 automata to those of insects, or humans, or Chopin preludes? One concept we need to add is conflict, i.e., evolution. If we add another simple concept to Wolfram's simple cellular automata, i.e., an evolutionary algorithm, we start to get far more interesting, and more intelligent, results. Wolfram would say that the Class 4 automata and an evolutionary algorithm are "computationally equivalent." But that is only true on what I would regard as the "hardware" level. On the software level, the order of the patterns produced is clearly different, and of a different order of complexity.

An evolutionary algorithm can start with randomly generated potential solutions to a problem. The solutions are encoded in a digital genetic code. We then have the solutions compete with each other in a simulated evolutionary battle. The better solutions survive and procreate in a simulated sexual reproduction in which offspring solutions are created, drawing their genetic code (i.e., encoded solutions) from two parents. We can also introduce a rate of genetic mutation. Various high-level parameters of this process, such as the rate of mutation, the rate of offspring, etc., are appropriately called "God parameters" and it is the job of the engineer designing the evolutionary algorithm to set them to reasonably optimal values. The process is run for many thousands of generations of simulated evolution, and at the end of the process, one is likely to find solutions that are of a distinctly higher order than the starting conditions. The results of these evolutionary (sometimes called genetic) algorithms can be elegant, beautiful, and intelligent solutions to complex problems. They have been used, for example, to create artistic designs, designs for artificial life forms in artificial life experiments, as well as for a wide range of practical assignments such as designing jet engines. Genetic algorithms are one approach to "narrow" artificial intelligence, that is, creating systems that can perform specific functions that used to require the application of human intelligence.
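
A toy version of the process described above can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than a reference implementation: the bit-string encoding, the trivial fitness function (count the 1 bits), and the particular parameter values standing in for the "God parameters."

import random

# Toy genetic algorithm: evolve bit strings toward all-ones.
# Population size, mutation rate, etc. are the designer-chosen "God parameters".
GENOME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 32, 50, 0.01, 200

def fitness(genome):          # problem-specific scoring; here, number of 1 bits
    return sum(genome)

def crossover(a, b):          # offspring draws its genetic code from two parents
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]          # better solutions survive
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(fitness(max(population, key=fitness)), "of", GENOME_LEN)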

But something is still missing. Although genetic algorithms are a useful tool in solving specific problems, they have never achieved anything resembling "strong AI," i.e., aptitude resembling the broad, deep, and subtle features of human intelligence, particularly its powers of pattern recognition and command of language. Is the problem that we are not running the evolutionary algorithms long enough? After all, humans evolved through an evolutionary process that took billions of years. Perhaps we cannot recreate that process with just a few days or weeks of computer simulation. However, conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won't help.

A third level (beyond the ability of cellular processes to produce apparent randomness and genetic algorithms to produce focused intelligent solutions) is to perform evolution on multiple levels. Conventional genetic algorithms only allow evolution within the confines of a narrow problem, and with a single means of evolution. The genetic code itself needs to evolve; the rules of evolution need to evolve. Nature did not stay with a single chromosome, for example. There have been many levels of indirection incorporated in the natural evolutionary process. And we require a complex environment in which evolution takes place.

To build strong AI, we will short circuit this process, however, by reverse engineering the human brain, a project well under way, thereby benefiting from the evolutionary process that has already taken place. We will be applying evolutionary algorithms within these solutions just as the human brain does. For example, the fetal wiring is initially random in certain regions, with the majority of connections subsequently being destroyed during the early stages of brain maturation as the brain self-organizes to make sense of its environment and situation.

But back to cellular automata. Wolfram applies his key insight, which he states repeatedly, that we obtain surprisingly complex behavior from the repeated application of simple computational transformations, to biology, physics, perception, computation, mathematics, and philosophy. Let's start with biology.

Wolfram writes, "Biological systems are often cited as supreme examples of complexity in nature, and it is not uncommon for it to be assumed that their complexity must be somehow of a fundamentally higher order than other systems. . . . What I have come to believe is that many of the most obvious examples of complexity in biological systems actually have very little to do with adaptation or natural selection. And instead . . . they are mainly just another consequence of the very basic phenomenon that I have discovered. . . .that in almost any kind of system many choices of underlying rules inevitably lead to behavior of great complexity."8

I agree with Wolfram that some of what passes for complexity in nature is the result of cellular-automata type computational processes. However, I disagree with two fundamental points. First, the behavior of a Class 4 automaton, as the many illustrations in the book depict, does not represent "behavior of great complexity." It is true that these images have a great deal of unpredictability (i.e., randomness). It is also true that they are not just random but have identifiable features. But the complexity is fairly modest. And this complexity never evolves into patterns that are at all more sophisticated.

Wolfram considers the complexity of a human to be equivalent to that of a Class 4 automaton because they are, in his terminology, "computationally equivalent." But Class 4 automata and humans are only computationally equivalent in the sense that any two computer programs are computationally equivalent, i.e., both can be run on a Universal Turing Machine. It is true that computation is a universal concept, and that all software is equivalent on the hardware level (i.e., with regard to the nature of computation), but it is not the case that all software is of the same order of complexity. The order of complexity of a human is greater than the interesting but ultimately repetitive (albeit random) patterns of a Class 4 automaton.

I also disagree with the suggestion that the order of complexity we see in natural organisms is not a primary result of "adaptation or natural selection." The phenomenon of randomness readily produced by cellular automaton processes is a good model for fluid turbulence, but not for the intricate hierarchy of features in higher organisms. The fact that we have phenomena greater than just the interesting but fleeting patterns of fluid turbulence (e.g., smoke in the wind) in the world is precisely the result of the chaotic crucible of conflict over limited resources known as evolution.

To be fair, Wolfram does not negate adaptation or natural selection, but he over-generalizes the limited power of complexity resulting from simple computational processes. When Wolfram writes, "in almost any kind of system many choices of underlying rules inevitably lead to behavior of great complexity," he is mistaking the random placement of simple features that result from cellular processes for the true complexity that has resulted from eons of evolution.

Wolfram makes the valid point that certain (indeed most) computational processes are not predictable. In other words, we cannot predict future states without running the entire process. I agree with Wolfram that we can only know the answer in advance if somehow we can simulate a process at a faster speed. Given that the Universe runs at the fastest speed it can run, there is usually no way to short-circuit the process. However, we have the benefit of billions of years of evolution, which is responsible for the greatly increased order of complexity in the natural world. We can now benefit from it by using our evolved tools to reverse-engineer the products of biological evolution.

Yes, it is true that some phenomena in nature that may appear complex at some level are simply the result of simple underlying computational mechanisms that are essentially cellular automata at work. The interesting pattern of triangles on a "tent olive" shell or the intricate and varied patterns of a snowflake are good examples. I don't think this is a new observation, in that we've always regarded the design of snowflakes as deriving from a simple molecular computation-like building process. However, Wolfram does provide us with a compelling theoretical foundation for expressing these processes and their resulting patterns. But there is more to biology than Class 4 patterns.

I do appreciate Wolfram's strong argument, however, that nature is not as complex as it often appears to be. Some of the key features of the biological paradigm, which differs from much of our contemporary designed technology, are that it is massively parallel and that apparently complex behavior can result from the intermingling of a vast number of simpler systems. One example that comes to mind is Marvin Minsky's theory of intelligence as a "Society of Mind," in which intelligence may result from a hierarchy of simpler intelligences, with simple agents not unlike cellular automata at the base.

However, cellular automata on their own do not evolve sufficiently. They quickly reach a limited asymptote in their order of complexity. An evolutionary process involving conflict and competition is needed.

For me, the most interesting part of the book is Wolfram's thorough treatment of computation as a simple and ubiquitous phenomenon. Of course, we've known for over a century that computation is inherently simple, i.e., we can build any possible level of complexity from a foundation of the simplest possible manipulations of information.

For example, Babbage's computer provided only a handful of operation codes, yet provided (within its memory capacity and speed) the same kinds of transformations as do modern computers. The complexity of Babbage's invention stemmed only from the details of its design, which indeed proved too difficult for Babbage to implement using the 19th century mechanical technology available to him.

The "Turing Machine," Alan Turing's theoretical conception of a universal computer in 1950, provides only 7 very basic commands9, yet can be organized to perform any possible computation. The existence of a "Universal Turing Machine," which can simulate any possible Turing Machine (that is described on its tape memory), is a further demonstration of the universality (and simplicity) of computation. In what is perhaps the most impressive analysis in his book, Wolfram shows how a Turing Machine with only two states and five possible colors can be a Universal Turing Machine. For forty years, we've thought that a Universal Turing Machine had to be more complex than this10. Also impressive is Wolfram's demonstration that Cellular Automaton Rule 110 is capable of universal computation (given the right software).

In my 1990 book, I showed how any computer could be constructed from "a suitable number of [a] very simple device," namely the "nor" gate11. This is not exactly the same demonstration as a universal Turing machine, but it does demonstrate that any computation can be performed by a cascade of this very simple device (which is simpler than Rule 110), given the right software (which would include the connection description of the nor gates).12
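
The universality of the "nor" gate is easy to verify directly. The sketch below builds NOT, OR, and AND out of nothing but NOR and checks their truth tables; the particular compositions are standard textbook identities, offered here as my own illustration rather than anything taken from the book.

# Building other logic gates from NOR alone.

def nor(a, b):
    return int(not (a or b))

def not_(a):        # NOT a   ==  a NOR a
    return nor(a, a)

def or_(a, b):      # a OR b  ==  NOT (a NOR b)
    return nor(nor(a, b), nor(a, b))

def and_(a, b):     # a AND b ==  (NOT a) NOR (NOT b)
    return nor(nor(a, a), nor(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b) and and_(a, b) == (a & b)
        assert not_(a) == (1 - a)
print("NOR reproduces NOT, OR, and AND")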

The most controversial thesis in Wolfram's book is likely to be his treatment of physics, in which he postulates that the Universe is a big cellular-automaton computer. Wolfram is hypothesizing that there is a digital basis to the apparently analog phenomena and formulas in physics, and that we can model our understanding of physics as the simple transformations of a cellular automaton.

Others have postulated this possibility. Richard Feynman wondered about it in considering the relationship of information to matter and energy. Norbert Wiener heralded a fundamental change in focus from energy to information in his 1948 book Cybernetics, and suggested that the transformation of information, not energy, was the fundamental building block for the Universe.

Perhaps the most enthusiastic proponent of an information-based theory of physics was Edward Fredkin, who in the early 1980s proposed what he called a new theory of physics based on the idea that the Universe is ultimately composed of software. We should not think of ultimate reality as particles and forces, according to Fredkin, but rather as bits of data modified according to computation rules.

Fredkin is quoted by Robert Wright in the 1980s as saying "There are three great philosophical questions. What is life? What is consciousness and thinking and memory and all that? And how does the Universe work? The informational viewpoint encompasses all three. . . . What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity, life, DNA - you know, the biochemical functions - are controlled by a digital information process. Then, at another level, our thought processes are basically information processing. . . . I find the supporting evidence for my beliefs in ten thousand different places, and to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, where is this animal? I say, Well he was here, he's about this big, this that, and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there. . . . What I see is so compelling that it can't be a creature of my imagination."13

In commenting on Fredkin's theory of digital physics, Robert Wright writes, "Fredkin . . . is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold. . . There is no way to know the answer to some question any faster than what's going on. . . . Fredkin believes that the Universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news / bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places."14

Fredkin went on to show that although energy is needed for information storage and retrieval, we can arbitrarily reduce the energy required to perform any particular example of information processing, and there is no lower limit to the amount of energy required15. This result made plausible the view that information rather than matter and energy should be regarded as the more fundamental reality.

I discussed Wiener's and Fredkin's view of information as the fundamental building block for physics and other levels of reality in my 1990 book The Age of Intelligent Machines16.

Casting all of physics in terms of computational transformations has proved to be an immensely challenging project, but Fredkin has continued his efforts.17 Wolfram has devoted a considerable portion of his efforts over the past decade to this notion, apparently with only limited communication with some of the others in the physics community who are also pursuing the idea.

Wolfram's stated goal "is not to present a specific ultimate model for physics,"18 but in his "Note for Physicists,"19 which essentially equates to a grand challenge, Wolfram describes the "features that [he] believe[s] such a model will have."

In The Age of Intelligent Machines, I discuss "the question of whether the ultimate nature of reality is analog or digital," and point out that "as we delve deeper and deeper into both natural and artificial processes, we find the nature of the process often alternates between analog and digital representations of information."20 As an illustration, I noted how the phenomenon of sound flips back and forth between digital and analog representations. In our brains, music is represented as the digital firing of neurons in the cochlea, representing different frequency bands. In the air and in the wires leading to loudspeakers, it is an analog phenomenon. The representation of sound on a music compact disk is digital, which is interpreted by digital circuits. But the digital circuits consist of thresholded transistors, which are analog amplifiers. As amplifiers, the transistors manipulate individual electrons, which can be counted and are, therefore, digital, but at a deeper level are subject to analog quantum field equations.21 At a yet deeper level, Fredkin, and now Wolfram, are theorizing a digital (i.e., computational) basis to these continuous equations. It should be further noted that if someone actually does succeed in establishing such a digital theory of physics, we would then be tempted to examine what sorts of deeper mechanisms are actually implementing the computations and links of the cellular automata. Perhaps, underlying the cellular automata that run the Universe are yet more basic analog phenomena, which, like transistors, are subject to thresholds that enable them to perform digital transactions.

Thus establishing a digital basis for physics will not settle the philosophical debate as to whether reality is ultimately digital or analog. Nonetheless, establishing a viable computational model of physics would be a major accomplishment. So how likely is this?

We can easily establish an existence proof that a digital model of physics is feasible, in that continuous equations can always be expressed to any desired level of accuracy in the form of discrete transformations on discrete changes in value. That is, after all, the basis for the fundamental theorem of calculus22. However, expressing continuous formulas in this way is an inherent complication and would violate Einstein's dictum to express things "as simply as possible, but no simpler." So the real question is whether we can express the basic relationships that we are aware of in more elegant terms, using cellular-automata algorithms. One test of a new theory of physics is whether it is capable of making verifiable predictions. In at least one important way that might be a difficult challenge for a cellular automata-based theory because lack of predictability is one of the fundamental features of cellular automata.
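
The existence proof mentioned here is essentially the observation behind numerical integration. As a minimal illustration (my own example, not Kurzweil's or Wolfram's), the sketch below approximates the continuous equation dy/dt = -y by repeated application of a simple discrete update rule and compares the result with the exact solution e^(-t).

import math

# Discrete approximation of the continuous equation dy/dt = -y,
# using repeated application of a simple update step (Euler's method).
dt, t_end = 0.001, 1.0
y = 1.0
for _ in range(int(t_end / dt)):
    y += dt * (-y)          # discrete transformation on a discrete change in value

print(y, "vs exact", math.exp(-t_end))  # agreement improves as dt shrinks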

Wolfram starts by describing the Universe as a large network of nodes. The nodes do not exist in "space," but rather space, as we perceive it, is an illusion created by the smooth transition of phenomena through the network of nodes. One can easily imagine building such a network to represent "naïve" (i.e., Newtonian) physics by simply building a three-dimensional network to any desired degree of granularity. Phenomena such as "particles" and "waves" that appear to move through space would be represented by "cellular gliders," which are patterns that are advanced through the network for each cycle of computation. Fans of the game of "Life" (a popular game based on cellular automata) will recognize the common phenomenon of gliders, and the diversity of patterns that can move smoothly through a cellular automaton network. The speed of light, then, is the result of the clock speed of the celestial computer since gliders can only advance one cell per cycle.
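
The glider phenomenon is easy to reproduce. The sketch below steps a standard Game of Life glider and shows it reappearing displaced across the grid; the grid coordinates and step counts are arbitrary illustrative choices, and this is Conway's Life rather than Wolfram's network model.

from collections import Counter

# A glider in Conway's Game of Life: a small pattern that reappears
# displaced one cell diagonally every four update cycles.

def step(cells):
    # cells is a set of (row, col) coordinates of live cells
    counts = Counter((r + dr, c + dc) for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {pos for pos, n in counts.items()
            if n == 3 or (n == 2 and pos in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for generation in (0, 4, 8):
    print(generation, sorted(glider))   # the same shape, shifted diagonally
    for _ in range(4):
        glider = step(glider)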

Einstein's General Relativity, which describes gravity as perturbations in space itself, as if our three-dimensional world were curved in some unseen fourth dimension, is also straightforward to represent in this scheme. We can imagine a four-dimensional network and represent apparent curvatures in space in the same way that one represents normal curvatures in three-dimensional space. Alternatively, the network can become denser in certain regions to represent the equivalent of such curvature.

A cellular-automata conception proves useful in explaining the apparent increase in entropy (disorder) that is implied by the second law of thermodynamics. We have to assume that the cellular-automata rule underlying the Universe is a Class 4 rule (otherwise the Universe would be a dull place indeed). Wolfram's primary observation that a Class 4 cellular automaton quickly produces apparent randomness (despite its determinate process) is consistent with the tendency towards randomness that we see in Brownian motion, and that is implied by the second law.

Special relativity is more difficult. There is an easy mapping from the Newtonian model to the cellular network. But the Newtonian model breaks down in special relativity. In the Newtonian world, if a train is going 80 miles per hour, and I drive behind it on a nearby road at 60 miles per hour, the train will appear to pull away from me at a speed of 20 miles per hour. But in the world of special relativity, if I leave Earth at a speed of three-quarters of the speed of light, light will still appear to me to move away from me at the full speed of light. In accordance with this apparently paradoxical perspective, both the size of objects and the subjective passage of time for two observers will vary depending on their relative speed. Thus our fixed mapping of space and nodes becomes considerably more complex. Essentially each observer needs his own network. In considering special relativity, we can essentially apply the same conversion to our "Newtonian" network as we do to Newtonian space. But it is not clear that we are achieving greater simplicity in representing special relativity in this way.
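
The contrast drawn here corresponds to two different composition rules for velocities. This small sketch (velocities expressed as fractions of the speed of light; my own illustration) computes both answers for the three-quarters-of-light-speed example: the Galilean rule gives the "wrong" 0.25c, while Einstein's rule returns the full speed of light.

# Galilean vs. relativistic velocity composition (velocities as fractions of c).

def galilean(u, v):
    return u - v                       # plain subtraction of speeds

def relativistic(u, v):
    return (u - v) / (1 - u * v)       # Einstein's composition rule, with c = 1

# An observer moving at 0.75c watching a light beam (u = 1.0):
print(galilean(1.0, 0.75))       # 0.25  -- the Newtonian answer
print(relativistic(1.0, 0.75))   # 1.0   -- light still recedes at full light speed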

A cellular node representation of reality may have its greatest benefit in understanding some aspects of the phenomenon of quantum mechanics. It could provide an explanation for the apparent randomness that we find in quantum phenomena. Consider, for example, the sudden and apparently random creation of particle-antiparticle pairs. The randomness could be the same sort of randomness that we see in Class 4 cellular automata. Although predetermined, the behavior of Class 4 automata cannot be anticipated (other than by running the cellular automata) and is effectively random.

This is not a new view, and is equivalent to the "hidden variables" formulation of quantum mechanics, which states that there are some variables that we cannot otherwise access that control what appears to be random behavior that we can observe. The hidden-variables conception of quantum mechanics is not inconsistent with the formulas of quantum mechanics. It is possible, but it is not popular with quantum physicists, because it requires a large number of assumptions to work out in a very particular way. However, I do not view this as a good argument against it. The existence of our Universe is itself very unlikely and requires many assumptions to all work out in a very precise way. Yet here we are.

A bigger question is how could a hidden-variables theory be tested? If based on cellular automata-like processes, the hidden variables would be inherently unpredictable, even if deterministic. We would have to find some other way to "unhide" the hidden variables.

Wolfram's network conception of the Universe provides a potential perspective on the phenomenon of quantum entanglement and the collapse of the wave function. The collapse of the wave function, which renders apparently ambiguous properties of a particle (e.g., its location) retroactively determined, can be viewed from the cellular network perspective as the interaction of the observed phenomenon with the observer itself. As observers, we are not outside the network, but exist inside it. We know from cellular mechanics that two entities cannot interact without both being changed, which suggests a basis for wave function collapse.

Wolfram writes that "If the Universe is a network, then it can in a sense easily contain threads that continue to connect particles even when the particles get far apart in terms of ordinary space." This could provide an explanation for recent dramatic experiments showing nonlocality of action in which two "quantum entangled" particles appear to continue to act in concert with one another even though separated by large distances. Einstein called this "spooky action at a distance" and rejected it, although recent experiments appear to confirm it.

Some phenomena fit more neatly into this cellular-automata network conception than others. Some of the suggestions appear elegant, but as Wolfram's "Note for Physicists" makes clear, the task of translating all of physics into a consistent cellular automata-based system is daunting indeed.

Extending his discussion to philosophy, Wolfram "explains" the apparent phenomenon of free will as decisions that are determined but unpredictable. Since there is no way to predict the outcome of a cellular process without actually running the process, and since no simulator could possibly run faster than the Universe itself, there is, therefore, no way to reliably predict human decisions. So even though our decisions are determined, there is no way to predetermine what these decisions will be. However, this is not a fully satisfactory examination of the concept. This observation concerning the lack of predictability can be made for the outcome of most physical processes, e.g., where a piece of dust will fall onto the ground. This view thereby equates human free will with the random descent of a piece of dust. Indeed, that appears to be Wolfram's view when he states that the process in the human brain is "computationally equivalent" to those taking place in processes such as fluid turbulence.

Although I will not attempt a full discussion of this issue here, it should be noted that it is difficult to explore concepts such as free will and consciousness in a strictly scientific context because these are inherently first-person subjective phenomena, whereas science is inherently a third person objective enterprise. There is no such thing as the first person in science, so inevitably concepts such as free will and consciousness end up being meaningless. We can either view these first person concepts as mere illusions, as many scientists do, or we can view them as the appropriate province of philosophy, which seeks to expand beyond the objective framework of science.

There is a philosophical perspective to Wolfram's treatise that I do find powerful. My own philosophy is that of a "patternist," which one might consider appropriate for a pattern recognition scientist. In my view, the fundamental reality in the world is not stuff, but patterns.

If I ask the question, 'Who am I?' I could conclude that perhaps I am this stuff here, i.e., the ordered and chaotic collection of molecules that comprise my body and brain.

However, the specific set of particles that comprises my body and brain is completely different from the atoms and molecules that comprised me only a short while (on the order of weeks) ago. We know that most of our cells are turned over in a matter of weeks. Even those that persist longer (e.g., neurons) nonetheless change their component molecules in a matter of weeks.

So I am a completely different set of stuff than I was a month ago. All that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum from my past self. From this perspective I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules (of water) change every millisecond, but the pattern persists for hours or even years.

It is patterns (e.g., people, ideas) that persist, and in my view constitute the foundation of what fundamentally exists. The view of the Universe as a cellular automaton provides the same perspective, i.e., that reality ultimately is a pattern of information. The information is not embedded as properties of some other substrate (as in the case of conventional computer memory) but rather information is the ultimate reality. What we perceive as matter and energy are simply abstractions, i.e., properties of patterns. As a further motivation for this perspective, it is useful to point out that, based on my research, the vast majority of processes underlying human intelligence are based on the recognition of patterns.

However, the intelligence of the patterns we experience in both the natural and human-created world is not primarily the result of Class 4 cellular automata processes, which create essentially random assemblages of lower level features. Some people have commented that they see ghostly faces and other higher order patterns in the many examples of Class 4 images that Wolfram provides, but this is an indication more of the intelligence of the observer than of the pattern being observed. It is our human nature to anthropomorphize the patterns we encounter. This phenomenon has to do with the paradigm our brain uses to perform pattern recognition, which is a method of "hypothesize and test." Our brains hypothesize patterns from the images and sounds we encounter, followed by a testing of these hypotheses, e.g., is that fleeting image in the corner of my eye really a predator about to attack? Sometimes we experience an unverifiable hypothesis that is created by the inevitable accidental association of lower-level features.

Some of the phenomena in nature (e.g., clouds, coastlines) are explained by repetitive simple processes such as cellular automata and fractals, but intelligent patterns (e.g., the human brain) require an evolutionary process (or, alternatively the reverse-engineering of the results of such a process). Intelligence is the inspired product of evolution, and is also, in my view, the most powerful "force" in the world, ultimately transcending the powers of mindless natural forces.

In summary, Wolfram's sweeping and ambitious treatise paints a compelling but ultimately overstated and incomplete picture. Wolfram joins a growing community of voices that believe that patterns of information, rather than matter and energy, represent the more fundamental building blocks of reality. Wolfram has added to our knowledge of how patterns of information create the world we experience and I look forward to a period of collaboration between Wolfram and his colleagues so that we can build a more robust vision of the ubiquitous role of algorithms in the world.

The lack of predictability of Class 4 cellular automata underlies at least some of the apparent complexity of biological systems, and does represent one of the important biological paradigms that we can seek to emulate in our human-created technology. It does not explain all of biology. It remains at least possible, however, that such methods can explain all of physics. If Wolfram, or anyone else for that matter, succeeds in formulating physics in terms of cellular-automata operations and their patterns, then Wolfram's book will have earned its title. In any event, I believe the book to be an important work of ontology.


--------------------------------------------------------------------------------

1 Wolfram, A New Kind of Science, page 2.

2 Ibid, page 849.

3 Rule 110 states that a cell becomes white if its previous color and its two neighbors are all black or all white or if its previous color was white and the two neighbors are black and white respectively; otherwise the cell becomes black.
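
As a cross-check of this description (my own verification, not from the book), encoding the eight outputs it implies, for neighborhoods 111 down to 000, as a binary number does recover the rule number 110.

# Outputs for neighborhoods (left, center, right) from 111 down to 000,
# per the description above: white (0) for 111, 100, and 000; black (1) otherwise.
outputs = [0, 1, 1, 0, 1, 1, 1, 0]
rule_number = int("".join(str(b) for b in outputs), 2)
print(rule_number)  # 110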

4 Wolfram, A New Kind of Science, page 4.

5 The genome has 6 billion bits, which is roughly 800 million bytes, but there is enormous repetition, e.g., the "Alu" sequence, which is repeated 300,000 times. Applying compression to the redundancy, the genome is approximately 23 million bytes compressed, of which about half specifies the brain's starting conditions. The additional complexity (in the mature brain) comes from the use of stochastic (i.e., random within constraints) processes used to initially wire specific areas of the brain, followed by years of self-organization in response to the brain's interaction with its environment.

6 See my book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking, 1999), the sections titled "Disdisorder" and "The Law of Increasing Entropy Versus the Growth of Order" on pages 30 - 33.

7 A computer that can accept as input the definition of any other computer and then simulate that other computer. It does not address the speed of simulation, which might be slow in comparison to the computer being simulated.

8 Wolfram, A New Kind of Science, page 383.

9 The seven commands of a Turing Machine are: (i) Read Tape, (ii) Move Tape Left, (iii) Move Tape Right, (iv) Write 0 on the Tape, (v) Write 1 on the Tape, (vi) Jump to another command, and (vii) Halt.

10 As Wolfram points out, the previous simplest Universal Turing machine, presented in 1962, required 7 states and 4 colors. See Wolfram, A New Kind of Science, pages 706 - 710.

11 The "nor" gate transforms two inputs into one output. The output of "nor" is true if an only if neither A nor B are true.

12 See my book The Age of Intelligent Machines, section titled "A nor B: The Basis of Intelligence?," pages 152 - 157.

13 Edward Fredkin, as quoted in Did the Universe Just Happen by Robert Wright.

14 Ibid.

15 Many of Fredkin's results come from studying his own model of computation, which explicitly reflects a number of fundamental principles of physics. See the classic Edward Fredkin and Tommaso Toffoli, "Conservative Logic," International Journal of Theoretical Physics 21, numbers 3-4 (1982). Also, a set of concerns about the physics of computation analytically similar to those of Fredkin's may be found in Norman Margolus, "Physics and Computation," Ph.D. thesis, MIT.

16 See The Age of Intelligent Machines, section titled "Cybernetics: A new weltanschauung," pages 189 - 198.

17 See the web site: www.digitalphilosophy.org, including Ed Fredkin's essay "Introduction to Digital Philosophy." Also, the National Science Foundation sponsored a workshop during the Summer of 2001 titled "The Digital Perspective," which covered some of the ideas discussed in Wolfram's book. The workshop included Ed Fredkin, Norman Margolus, Tom Toffoli, Charles Bennett, David Finkelstein, Jerry Sussman, Tom Knight, and Physics Nobel Laureate Gerard 't Hooft. The workshop proceedings will be published soon, with Tom Toffoli as editor.

18 Stephen Wolfram, A New Kind of Science, page 1,043.

19 Ibid, pages 1,043 - 1,065.

20 The Age of Intelligent Machines, pages 192 - 198.

21 Ibid.

22 The fundamental theorem of calculus establishes that differentiation and integration are inverse operations.

 
   

« Last Edit: 2002-05-15 13:09:45 by David Lucifer »
rhinoceros
Wolfram: A skeptic critique
« Reply #1 on: 2003-02-07 15:09:59 »

[rhinoceros]
This is from the www.skeptic.com newsletter, February 6, 2003.



SKEPTICS ON STEPHEN WOLFRAM AT CALTECH

Last Saturday, February 1, 2003, Dr. Stephen Wolfram, author of the controversial book A New Kind of Science, spoke at Caltech to a packed audience of over a thousand people, who came to see and hear the subject of so much scientific press, as well as what three world-class scientists had to say about it.

In an upcoming issue of Skeptic, computer scientist David Naiditch will be publishing a full review essay of Wolfram's book, but for now I post his summary of the Caltech event, along with aerospace engineer Michael Gilmore's impressions of the day.



A New Kind of Science?

by David Naiditch

On February 1, physicist and computer scientist Dr. Stephen Wolfram spoke to a full house at Caltech's Beckman Auditorium about his grandiose proposal for a new and improved kind of science. After Wolfram spoke for about an hour, he answered questions from a panel of distinguished scientists, and then responded to questions from the audience.

Stephen Wolfram was a child prodigy. He received his doctoral degree in theoretical physics from Caltech when he was only 20, and was the youngest scientist to receive a MacArthur award for his work in physics and computer science. Wolfram made a fortune developing Mathematica--a powerful software program that has become a standard for technical computing. Then, starting in the early 1980s, he began working on cellular automata.

To understand cellular automata, imagine a grid of squares where each square can either be black or white. From an initial state of a few black squares, a simple rule is applied over and over again. This rule determines whether or not a square changes its color, and is based on the color of the square's nearest neighbors. For instance, a square might change from white to black only if its nearest left neighbor is black and its right neighbor is white. From such simple rules, intricate patterns can be generated, some of which are highly symmetric like snowflakes, others that appear random, and others that are self-similar fractals. Wolfram discovered that even the simplest programs yield patterns of astonishing complexity.

In May 2002, Wolfram published his book, A New Kind of Science, which for the first time revealed to the world the results of his research on cellular automata and related fields. Wolfram's book was an immediate success and caused a great deal of controversy. According to his publicist, the initial print run of 50,000 copies sold out the first day, with over 200,000 copies sold at the time of this writing. The book has been reviewed in most major media venues (New York Times Book Review, New York Review of Books, Science, Nature, etc.) and Wolfram has been featured in such national publications as Time and Newsweek.

Wolfram proposed a new way of doing science. For hundreds of years, scientists have successfully used mathematical equations that show how various entities are connected. For instance, Newton's equation, F=ma, shows us how force (F) is related to mass (m) and acceleration (a). The problem with this approach is that equations fail to describe complex phenomena we see all around us, such as the turbulence of boiling water or the changing weather. To describe such complex phenomena, Wolfram proposes that scientists employ the types of rules used in cellular automata and related areas of computing.

In Wolfram's theory the universe is a giant computer. This computer produces complexity through the repeated execution of simple rules. Instead of using equations to describe the results of nature's computer programs, Wolfram tells us to examine the programs themselves.

At the Caltech event Wolfram's ideas were challenged by a stellar panel of scientists: Steven Koonin, Chris Adami, John Preskill, and David Stevenson. Steven Koonin, the moderator, is a full professor of physics at Caltech and received the Caltech Associated Students Teaching Award, the Humboldt Senior Scientist Award, and the E.O. Lawrence Award in Physics from the Department of Energy. Chris Adami is faculty associate and director of the Digital Life Laboratory at Caltech, principal scientist in the Quantum Technologies Group at the Jet Propulsion Laboratory, and author of the textbook Introduction to Artificial Life. John Preskill is the John D. MacArthur Professor of Theoretical Physics at Caltech and the director of the Institute for Quantum Information. David Stevenson has been a physics professor at Caltech since 1980 and is the recipient of a Fellowship of the Royal Society of London and the Feynman teaching prize.

Although it is clear that Wolfram is no crank, not someone skeptics would label a pseudoscientist, skeptics will notice that, despite his flawless credentials, staggering intelligence, and depth of knowledge, Wolfram possesses many attributes of a pseudoscientist: (1) he makes grandiose claims, (2) works in isolation, (3) did not go through the normal peer-review process, (4) published his own book, (5) does not adequately acknowledge his predecessors, and (6) rejects a well-established theory of at least one famous scientist.

First, throughout his lecture Wolfram made the grandiose claim that his work amounts to a "paradigm shift" of how we do science. Furthermore, Wolfram claims his work will shed light on a broad range of fundamental issues that have stymied scientists for ages, including the randomness found in nature, biological complexity, the nature of space-time, the possibility of a "theory of everything," and the scope and limitations of mathematics. Wolfram even claims his insights can be used to tackle the ancient paradoxes of free will and determinism, and the nature of intelligence.

Second, like so many pseudoscientists on the fringe, Wolfram did his work in isolation for 20 years. Although he was running a company that required him to interact with employees and customers (many of whom are scientists), his work on cellular automata was kept largely to himself.

Third, Wolfram admitted that he had enough material during this time for hundreds of scientific papers, yet he did not bother to publish any of the material or present his ideas at any scientific conferences. Thus, any critical feedback that might have improved his theory before it was cemented in inky stone was eschewed, making change at this point in the development of his theory much more unlikely.

Fourth, in May 2002 Wolfram revealed his work for the first time in his massive self-published tome, A New Kind of Science, coming in at 1,268 pages. This is not because he could not get a publisher, or that no publisher would print such a large book. Readers may recall Stephen Jay Gould's magnum opus, The Structure of Evolutionary Theory, was released about the same time by Harvard University Press, topping out at 1,433 pages. Between the two, bookstores shelves were sagging under the weight of Big Science. Wolfram self-published because he wanted to maintain tight control over the production and distribution of his life's work.

Fifth, not only did Wolfram work alone, during his Caltech lecture not once did he acknowledge the work of other scientists. In addition, throughout the 850 pages of general text, and 350 pages of notes, there are no traditional references to be found in A New Kind of Science: no references to scientific papers, no citations of books related to the topic, and no bibliography. In fact, the notes section consists mostly of further commentary on his own work earlier in the book, with occasional reference to other scientists and scholars without actually providing citations to their work. In actual fact, many of Wolfram's ideas are not new. They can be found, for instance, in James Gleick's popular book, Chaos: Making a New Science, and in Robert Wright's book, Three Scientists and Their Gods, which describes the work of Edward Fredkin. Fredkin, like Wolfram, believes that the universe is a digital computer. What is new in A New Kind of Science is Wolfram's claim that cellular automata, instead of being peripheral to science, should be central to the way science is practiced.

Sixth, Wolfram raised the hackles of the scientific panel as well as the audience when he rejected a well-established theory of a famous scientist: none other than Charles Darwin and his theory of natural selection. Although Wolfram does not claim natural selection is totally without merit, he does claim it is insufficient to fully explain the complexity found in the biological world. For instance, he claims that natural selection can explain phenomena such as the lengthening of bones, but not fundamental changes to an animal's morphology. Wolfram also claims that, contrary to popular belief, evolution is not very important to biologists.

Panel member Chris Adami, who researches how complexity arises from natural selection, took exception to these claims. Adami pointed out that Darwinian evolution in general, and natural selection in particular, is of fundamental importance to biologists; without it, biology does not make sense. Adami also argued that the kind of complexity biologists are most concerned with is different from the kind of complexity presented by Wolfram. Wolfram tries to explain complex patterns such as those found on seashells. According to Adami, such complexity is based on our perception and our inability to perceive the simple rules that can generate such patterns. In contrast, biologists are concerned with functional complexity that arises as organisms adapt to various environments, thereby increasing their chance of survival and reproduction. Adami finds it inconceivable that the functional complexity of, say, a living cell, is due to a simple underlying rule. John Preskill also challenged Wolfram on this point, noting that cellular automata are very fragile. Any "mutation" to cellular automata is disastrous. Biological systems, on the other hand, must be stable even when mutations and other errors are introduced.
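Preskill's fragility point is easy to see for oneself. The following Python sketch (my own illustration, not code from the panel or the book) runs Wolfram's rule 110 from a single seed cell and then runs a "mutant" rule obtained by flipping one bit of the 8-bit rule table; the two printed patterns look nothing alike, whereas living organisms typically tolerate most single mutations.

def step(cells, rule):
    # Advance a row of 0/1 cells one step under an elementary CA rule (0-255).
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        new.append((rule >> neighborhood) & 1)               # look up the output bit
    return new

def run(rule, width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

def show(history):
    for row in history:
        print("".join("#" if c else "." for c in row))

original = 110           # Wolfram's rule 110
mutant = 110 ^ (1 << 3)  # flip one bit of the rule table, giving rule 102

show(run(original))
print()
show(run(mutant))

The single-bit "mutation" changes the global behavior qualitatively, which is the sense in which cellular automata are fragile.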

In addition to these criticisms, other objections were raised to Wolfram's ideas. Steven Koonin pointed out that a paradigm shift cannot arise simply because someone asserts it is a paradigm shift; one must convince the scientific community that the description is warranted. According to David Stevenson, Wolfram fails to satisfy the basic standards of good science. Creating programs that generate images that look like things found in nature is not sufficient; one needs specific predictions. Wolfram does not offer any laboratory experiments or observations that could verify or falsify his grand claims.

Wolfram responded that the requirement of falsifiability does not apply to mathematics or computer science. He argued that his claims have the character of mathematics rather than physics, using calculus as an analogy. Newton showed how calculus provides a new way of doing science. Calculus itself, however, is not tested to determine whether it is true or false; its justification is that it works. The panel rebutted that if the analogy holds, then Wolfram is just proposing a new kind of computational method, not a new kind of science.

Objections were also raised that Wolfram's theory lacks explanatory power; not everything that is useful is explanatory. For example, David Stevenson explained that Feynman diagrams are very useful and can provide answers to problems in quantum mechanics much faster than direct computational methods, yet they do not provide an explanation or deeper understanding of quantum phenomena. Again, it was emphasized that Wolfram seems to be offering a new kind of computational tool, not a new kind of science.

According to Wolfram, by generating patterns on the computer screen that resemble, for instance, snowflakes, he has explained how snowflakes acquire their complex symmetric structures. Panelists countered that such inferences are unwarranted. The resemblance does not, by itself, mean nature uses rules to generate snowflake patterns. Wolfram needs to demonstrate how nature physically instantiates the rules of cellular automata. Evidence is needed to show that the shape of snowflakes was produced by a physical mechanism whose behavior resembles the rules used by a computer.
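To make the disagreement concrete: one simple rule of the general kind Wolfram points to is the well-known hexagonal "snowflake" automaton, in which a cell freezes when exactly one of its six neighbors is already frozen. The Python sketch below (my own paraphrase of that standard model, not Wolfram's exact construction) grows a six-fold symmetric, snowflake-like crystal from a single seed, and it illustrates the panel's objection precisely: nothing in the code shows that real ice obeys any such rule.

# Axial coordinates for a hexagonal grid: each cell (q, r) has six neighbors.
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def grow(steps=20):
    frozen = {(0, 0)}  # single seed cell
    for _ in range(steps):
        # Candidate cells: empty neighbors of the current crystal.
        candidates = {
            (q + dq, r + dr)
            for (q, r) in frozen
            for (dq, dr) in HEX_NEIGHBORS
        } - frozen
        # A candidate freezes iff exactly one of its six neighbors is frozen.
        frozen |= {
            (q, r)
            for (q, r) in candidates
            if sum(((q + dq, r + dr) in frozen) for (dq, dr) in HEX_NEIGHBORS) == 1
        }
    return frozen

crystal = grow()
print(len(crystal), "frozen cells after 20 steps")  # growth is six-fold symmetric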

John Preskill observed that few of the ideas presented in Wolfram's book are concrete enough to be usable by research scientists. Wolfram's answer, that no experts in his field yet exist, does not address the problem. For example, Wolfram's most original ideas--such as the attempt to incorporate quantum theory and gravity using random network models and path independence--are too speculative to be of use to scientists.

At the end of the Caltech program the moderator, Steven Koonin, asked the panelists to predict whether in 20 years Wolfram's A New Kind of Science will be viewed as a paradigm shift. The unanimous answer was "no." One panelist said "it is not an approach that has much promise," while another called Wolfram's ideas the "Emperor's New Clothes." Wolfram tried to get in the last word by stating that this reaction from the panelists is just what one would expect from a paradigm shift. But Steven Koonin rejoined that it is also just what one would expect if Wolfram's ideas did not amount to a paradigm shift. Ultimately, time will tell who is right.

« Last Edit: 2003-02-07 15:18:28 by rhinoceros »
rhinoceros
Archon

Gender: Male
Posts: 1318
Reputation: 8.39

My point is ...

Wolfram: A skeptic critique
« Reply #2 on: 2003-02-07 15:16:44 »

[rhinoceros]
This is also from the www.skeptic.com newsletter. The previous one is much more specific, but this one takes a wider view.



Of Triangles and Bulldogs

Is Stephen Wolfram a Modern Pythagoras?
By Michael Gilmore

"I would rather understand one cause than be King of Persia."
--Democritus of Abdera


It was a warm day in Pasadena and a full house at Caltech's Beckman Auditorium on Saturday, February 1.

In a swift and densely packed hour, Stephen Wolfram presented ideas from his 1,268-page best seller, A New Kind of Science, which deals with the mathematical world of cellular automata. It was a fine lecture, but was it science? Is this the beginning of a new paradigm shift, as Wolfram so repeatedly and confidently claimed?

I first wondered whether this Saturday could be like that hot Oxford day when Huxley, as Darwin's bulldog, debated Bishop Wilberforce about the new Theory of Natural Selection. Perhaps Wolfram was his own bulldog: exceptionally bright, eloquent, and confident, with a British accent to boot. But this was a lonely bulldog, and he had no defenders. It was four to one in this debate, with no soapy bishops among the Caltech panel of stellar scientists who questioned Wolfram. Of course, science isn't done by consensus, but then Wolfram was no Darwin. At least not yet. He had made no new predictions about nature that the scientists sitting quietly in the auditorium could then go forth and check by microscope, cyclotron, or telescope.

As Wolfram talked, I remembered a hot summer day on the island of Samos. On an Ionian trek with my son Tyson, we had sailed to Samos to find the muses of science. Pythagoras was the name most in evidence on the island. His theorem regarded right triangles, you know. But it was the legends of Thales, Aristarchus, and Anaximander that were more to our taste. These guys were the ancient equivalent of modern scientists. They pursued observation and experiment. They got their hands dirty and used their brains.

But Pythagoras wasn't one of them. He professed that nature could be understood by pure thought alone. Wolfram seemed to be pitching something rather close to that idea. He also had an apparent obsession with triangles, manifest throughout his magnum opus.

I think the ancient rift between the Ionian experimentalists and the Pythagorean mystics gives some insight into the Wolfram question.

One modern manifestation of this ancient rift is the traditional separation between experimentalist and theoretician. One extreme is the stereotypical well-manicured, well-dressed, elegant, and usually arrogant theoretician, who never has grease under the nail or an eye to the microscope, yet knows all the answers by thought alone.

Of course there is also the scientist who has gathered reams of observations in the outback but has never had a philosophical thought in his or her life. Good science is, of course, neither of these stereotypes. Those who make useful observations and experiments are usually driven by some variation of what Michael Shermer calls "Darwin's dictum," where, as the sage of Down said, "all observation must be for or against some view if it is to be of any service." Good theoreticians are informed by the latest observations and experimental results. It is no accident that Galileo, Newton, Halley, Faraday, and Darwin were good with their hands and great experimentalists.

Yes, we all know of the exceptions. A famous example is the delightfully arrogant theoretical physicist Wolfgang Pauli, who allegedly could destroy whole laboratories at a distance, just by his presence in their vicinity!

But the most famous theoretical scientist of the 20th century, Einstein, remarked how much he enjoyed the laboratory experience and how bored he was with the lecture hall. Feynman's self-constructed youthful laboratory was his joy, and Enrico Fermi's world-class reputation was grounded in both his theoretical and his laboratory talent.

We should keep these examples in mind when we sit at our computer screens day after day. We must remember to pick ourselves up, roll up our sleeves, tinker in the lab, explore the world, and observe nature.

The theorem about the sum of the squares of the sides of a right triangle may not have been original with Pythagoras, but the method of mathematical deduction for a general proof was his. Today's mathematical argument and scientific practice owe much to Pythagoras. However, there is no shortcut to the secrets of nature by mind alone, as the Pythagoreans believed. At least not yet.

Scientists today depend on Stephen Wolfram's Mathematica, which has become a legendary standard program for technical computing throughout the world. This software allowed Wolfram to explore deeply the mathematical world of cellular automata. Cellular automata have elements of a sort of perfect and mystical world, a world the Pythagoreans really thought existed. It is a beautiful mathematical creation, but it is not nature.

There is a deja vu about Stephen Wolfram; perhaps others have noticed it. Like Wolfram, the Hungarian-American mathematician Johnny von Neumann was a great pioneer in computer science as well as cellular automata. Like Wolfram he was incredibly bright, a child prodigy. (I checked some photos; they even look alike.) Von Neumann's good friend, the British mathematician and polymath Jacob Bronowski, kindly found fault with him, stating that Johnny von Neumann was "in love with the aristocracy of the intellect." This was a sin Bronowski believed could destroy civilization. Like Galileo and Darwin, Wolfram has written a popular book. In doing so he isn't practicing the sin Bronowski had in mind regarding von Neumann. But I can't help thinking of the "aristocracy of the intellect" when I consider the Pythagoreans and their mystical shortcut to knowing the world. The aristocracy of the intellect, the arrogance about not getting your hands dirty, and the notion of having some sort of absolute knowledge with no test in the world are all closely related. And they are a barrier to doing good science.

« Last Edit: 2003-02-07 15:17:54 by rhinoceros »