Topic: Computers vs. Brains (Read 1834 times)
Walter Watts
Computers vs. Brains
« on: 2009-04-01 14:43:15 »

Olivia Judson - A New York Times Blog
March 31, 2009, 10:15 pm

Guest Column: Computers vs. Brains

By Sandra Aamodt and Sam Wang

Inventor Ray Kurzweil, in his 2005 futurist manifesto “The Singularity Is Near,” extrapolates current trends in computer technology to conclude that machines will be able to out-think people within a few decades. In his eagerness to salute our robotic overlords, he neglects some key differences between brains and computers that make his prediction unlikely to come true.

Brains have long been compared to the most advanced existing technology — including, at one point, telephone switchboards. Today people talk about brains as if they were a sort of biological computer, with pink mushy “hardware” and “software” generated by life experiences.

However, any comparison with computers misses a messy truth. Because the brain arose through natural selection, it contains layers of systems that arose for one function and then were adopted for another, even though they don’t work perfectly. An engineer with time to get it right would have started over, but it’s easier for evolution to adapt an old system to a new purpose than to come up with an entirely new structure. Our colleague David Linden has compared the evolutionary history of the brain to the task of building a modern car by adding parts to a 1925 Model T that never stops running. As a result, brains differ from computers in many ways, from their highly efficient use of energy to their tremendous adaptability.

One striking feature of brain tissue is its compactness. Space in the brain's wiring is at a premium: the tissue is packed more densely than even the most condensed computer architecture. One cubic centimeter of human brain tissue, which would fill a thimble, contains 50 million neurons; several hundred miles of axons, the wires over which neurons send signals; and close to a trillion (that's a million million) synapses, the connections between neurons.

The memory capacity in this small volume is potentially immense. Electrical impulses that arrive at a synapse give the recipient neuron a small chemical kick that can vary in size. Variation in synaptic strength is thought to be a means of memory formation. Sam’s lab has shown that synaptic strength flips between extreme high and low states, a flip that is reminiscent of a computer storing a “one” or a “zero” — a single bit of information.

But unlike the wiring in a computer, connections between neurons can also form and break, a process that continues throughout life and can store even more information because of the potential for creating new paths for activity. Although we're forced to guess because the neural basis of memory isn't understood at this level, let's say that one movable synapse could store one byte (8 bits) of memory. That thimble would then contain 1,000 gigabytes (1 terabyte) of information. A thousand thimblefuls make up a whole brain, giving us a million gigabytes — a petabyte — of information. To put this in perspective, the entire archived contents of the Internet fill just three petabytes.
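Spelled out, the arithmetic is simple. Here is a minimal sketch in Python; the one-byte-per-synapse figure is the authors' stated guess, and the synapse count and brain volume are the round numbers quoted above, not precise measurements:

```python
# Back-of-envelope memory estimate from the paragraph above.
# Assumptions (the authors' round numbers): ~1 trillion synapses per
# cubic centimeter, 1 byte per movable synapse, ~1,000 "thimblefuls"
# (cubic centimeters) per whole brain.
SYNAPSES_PER_CM3 = 1e12
BYTES_PER_SYNAPSE = 1
THIMBLES_PER_BRAIN = 1000

thimble_bytes = SYNAPSES_PER_CM3 * BYTES_PER_SYNAPSE   # one thimbleful
brain_bytes = thimble_bytes * THIMBLES_PER_BRAIN       # whole brain

print(f"one thimbleful: {thimble_bytes / 1e9:,.0f} GB")  # 1,000 GB = 1 TB
print(f"whole brain:    {brain_bytes / 1e15:.0f} PB")    # ~1 PB
```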

How soon could hardware match this capacity? Kurzweil invokes Moore's Law, the principle that for the last four decades engineers have managed to double the capacity of chips (and hard drives) every year or two. If that trend continues, a single computer the size of a brain could contain a petabyte by about 2025 to 2030, just 15 or 20 years from now.
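The projection can be reproduced in a few lines. In this sketch, the roughly one-terabyte starting point circa 2010 is our illustrative assumption, not a figure from the column; only the doubling interval and the petabyte target come from the text above:

```python
import math

# Project when a brain-sized device could hold a petabyte, assuming
# Moore's-Law-style growth: capacity doubles every 1.5 to 2 years.
# The ~1 TB baseline in 2010 is an assumption for illustration.
START_YEAR = 2010
START_BYTES = 1e12    # ~1 TB
TARGET_BYTES = 1e15   # ~1 PB, the whole-brain estimate above

doublings = math.log2(TARGET_BYTES / START_BYTES)  # ~10 doublings needed

for years_per_doubling in (1.5, 2.0):
    print(f"doubling every {years_per_doubling:g} yr -> "
          f"~{START_YEAR + doublings * years_per_doubling:.0f}")
# -> ~2025 and ~2030, matching the 15-to-20-year window above
```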

This projection overlooks the dark, hot underbelly of Moore's Law: power consumption per chip, which has also exploded since 1985. By 2025, the memory of an artificial brain would use nearly a gigawatt of power, the amount currently consumed by all of Washington, D.C. So brute-force escalation of current computer technology would give us an artificial brain that is far too costly to operate.

Compare this with your brain, which uses about 12 watts, an amount that supports not only memory but all your thought processes. This is less than the energy consumed by a typical refrigerator light, and half the typical needs of a laptop computer. Cutting power consumption by half while increasing computing power many times over is a pretty challenging design standard. As smart as we are, in this sense we are all dim bulbs.
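Using the two power figures quoted above, the size of that efficiency gap is easy to put a number on:

```python
# Power-efficiency gap implied by the figures above: ~1 gigawatt for the
# projected artificial brain's memory vs. ~12 watts for a human brain.
ARTIFICIAL_BRAIN_WATTS = 1e9   # ~all of Washington, D.C.
HUMAN_BRAIN_WATTS = 12

ratio = ARTIFICIAL_BRAIN_WATTS / HUMAN_BRAIN_WATTS
print(f"the brain is ~{ratio:,.0f}x more power-efficient")
# -> roughly 83 million times
```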

A persistent problem in artificial computing is the system's sensitivity to component failure. Biological synapses, by contrast, are remarkably flaky devices even in normal, healthy conditions: they release neurotransmitter only a small fraction of the time when their parent neuron fires an electrical impulse, yet the brain works anyway. This unreliability may arise because individual synapses are so small that they contain barely enough machinery to function, a trade-off that stuffs the most function into the smallest possible space.

In any case, a brain’s success is not measured by its ability to process information in precisely repeatable ways. Instead, it has evolved to guide behaviors that allow us to survive and reproduce, which often requires fast responses to complex situations. As a result, we constantly make approximations and find “good-enough” solutions. This leads to mistakes and biases. We think that when two events occur at the same time, one must have caused the other. We make inaccurate snap judgments such as racial prejudice. We fail to plan rationally for the future, as explored in the field of neuroeconomics.

Still, engineers could learn a thing or two from brain strategies. For example, even the most advanced computers have difficulty telling a dog from a cat, something that can be done at a glance by a toddler — or a cat. We use emotions, the brain’s steersman, to assign value to our experiences and to future possibilities, often allowing us to evaluate potential outcomes efficiently and rapidly when information is uncertain. In general, we bring an extraordinary amount of background information to bear on seemingly simple tasks, allowing us to make inferences that are difficult for machines.

If engineers can understand how to apply these shortcuts and tricks, computer performance could begin to emulate some of the more impressive feats of human brains. However, this route may lead to computers that share our imperfections. This may not be exactly what we want from robot overlords, but it could lead to better “soft” judgments from our computers.

This gets us to the deepest point: why bother building an artificial brain?

As neuroscientists, we’re excited about the potential of using computational models to test our understanding of how the brain works. On the other hand, although it eventually may be possible to design sophisticated computing devices that imitate what we do, the capability to make such a device is already here. All you need is a fertile man and woman with the resources to nurture their child to adulthood. With luck, by 2030 you’ll have a full-grown, college-educated, walking petabyte. A drawback is that it may be difficult to get this computing device to do what you ask.

We’re grateful to Olivia for the opportunity to write these four columns. Our topics provoked many reactions in the comments section — a fascinating look into the minds of some of The Wild Side’s readers. Thank you all for lending us this seat at the table.

**********

NOTES:

Thanks are due to Charles F. Stevens at the Salk Institute for the thimble idea.

For additional views of the brain’s evolutionary quirks, see “The Accidental Mind” by David Linden, “Kluge” by Gary Marcus and our own book, “Welcome to Your Brain.”

To read about the minimum storage capacity of 1 bit per synapse, see O’Connor, D.H., Wittenberg, G.M. & Wang, S.S.-H. “Graded bidirectional synaptic plasticity is composed of switch-like unitary events.” Proc. Natl. Acad. Sci. USA 102:9679-9684 (2005).

The increase in storage capacity that comes with the ability to change connectivity is described in Chklovskii, D.B., Mel, B.W. & Svoboda, K. “Cortical rewiring and information storage.” Nature 431:782-788 (2004).

Moore's Law extends not only to computing power, but also to memory capacity (in which case it is sometimes called Kryder's Law) and power consumption.

The power consumption of the District of Columbia (and the states) can be found here.

Copyright 2009 The New York Times Company

MoEnzyme
Re:Computers vs. Brains
« Reply #1 on: 2009-04-02 12:06:02 »

[message hidden: author reputation (1.66) beneath display threshold (3)]
