« Reply #30 on: 2009-02-21 22:35:51 »
[Fritz] Interesting to see this pop up in mainstream media
Source: Star Tribune Author: Karen Youso, Date: February 21, 2009 - 9:17 AM
Change: Approaching 'singularity'
The pace of change is accelerating. Get used to it. It's not slowing down or going away, and there's a name for where we're headed.
Consider the telephone. Since its invention in the 1800s, it went from crank-style to push-button to cell by the 1980s. Quickly it became smaller and smaller, smarter and smarter. Now phones take pictures, play music, send text and soon will wrap around your wrist like a bracelet. Or, consider the nanoparticle. The number of products on store shelves using nanotechnology -- manipulating atoms to create new materials -- in 1990 was near zero. In 2004, it was 212. Today, it's more than 800 and increasing by three or four a week.
See a pattern here? Things are moving faster and faster. It feels like you can't keep up. Remember the last months of 2008, and the dramatic series of events, from the presidential campaign and election to the Wall Street collapse, wild stock market swings and the bailouts? It was enough to make your head spin. Indeed, society is changing at a pace that can seem like it's whirling out of control. And you haven't seen anything yet.
"We're in the midst of accelerating change, and changes in technology always bring changes in society, and vice versa," said John Moravec, director of the University of Minnesota's College of Education and Human Development's Leapfrog Institutes.
Like a snowball rolling downhill, change is only going to get faster, and its effects larger, experts predict. There's a word for where it's headed -- and it's not "crash" at the bottom of the hill. It's headed straight for the "singularity." That's the watershed moment when accelerating technology becomes so advanced that it surpasses what the human brain can comprehend. And because it can improve its own programming, change happens instantly, almost without us being aware of it.
« Reply #31 on: 2009-03-07 18:14:35 »
The Singularity comes a step closer
Self-programming chips loom
By Robert Munro @ Friday, February 27, 2009 1:24 PM
Computer circuits that require fewer components than our existing technology employs, combine several functions, and are capable of 'self-learning' have been fabricated for the first time.
Researchers at Hewlett-Packard Laboratories in Palo Alto, California combined memristors with transistors in a hybrid circuit array to demonstrate conditional self-programming and show that just a few such device elements can be configured to act as logic, switching and memory components simultaneously. The use of fewer circuit elements offers the benefits of smaller circuit size and lower power consumption.
The term 'memristor' means 'memory resistor', the fourth type of passive circuit element in addition to the (fixed) resistor, the capacitor and the inductor. Though predicted by theory in 1971, the first memristor device wasn't fabricated until 2008. The memristor is a two-terminal circuit element that changes its resistance in response to the positive or negative polarity of the voltage applied to it or the amount of current flowing through it.
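The defining behaviour, resistance that depends on the charge that has flowed through the device, can be sketched numerically. Below is a toy version of the linear dopant-drift picture; the parameter values are purely illustrative and not taken from HP's device:

```python
def simulate(currents, dt=1e-3, r_on=100.0, r_off=16000.0, mobility=1e-14, d=1e-8):
    """Toy linear dopant-drift memristor (illustrative parameters only).

    The state variable w (width of the doped region, 0..d) moves in
    proportion to the current through the device, so the resistance
    depends on the device's charge history -- hence 'memory resistor'.
    """
    w = 0.1 * d
    history = []
    for i in currents:
        r = r_on * (w / d) + r_off * (1.0 - w / d)  # mix of on/off resistance
        history.append(r)
        w += mobility * r_on / d * i * dt           # dopant boundary drifts with current
        w = min(max(w, 0.0), d)                     # boundary cannot leave the device
    return history

# Positive current drives the resistance down; zero current leaves it frozen,
# which is the non-volatile 'memory' property.
rs = simulate([1e-3] * 100 + [0.0] * 50)
```

The key point the sketch shows is the last 50 samples: with no current flowing, the resistance simply stays where the earlier current left it.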
Moore's Law, which predicts a doubling of transistor density every 18 months, won't give us superhuman intelligence in a reasonable timeframe by itself, says author Vernor Vinge in this new video from the Ideas Project.
But that doesn't mean the Singularity isn't coming — it's just coming from a few different places. Vinge packs a lot of ideas into a short video, including the fact that we're already seeing more embedded networks everywhere, and networks can visualize their own geometry based on the "ID number of the node they're pinging off of, and the round-trip time." And he's confident that cyberspace will be everting, and we'll be living in a consensual reality, sooner than we think.
Researchers have developed a robot capable of learning and interacting with the world using a biological brain.
Kevin Warwick’s new robot behaves like a child. “Sometimes it does what you want it to, and sometimes it doesn’t,” he says. And while it may seem strange for a professor of cybernetics to be concerning himself with such an unreliable machine, Warwick’s creation has something that even today’s most sophisticated robots lack: a living brain.
Life for Warwick’s robot began when his team at the University of Reading spread rat neurons onto an array of electrodes. After about 20 minutes, the neurons began to form connections with one another. “It’s an innate response of the neurons,” says Warwick, “they try to link up and start communicating.”
For the next week the team fed the developing brain a liquid containing nutrients and minerals. And once the neurons established a network sufficiently capable of responding to electrical inputs from the electrode array, they connected the newly formed brain to a simple robot body consisting of two wheels and a sonar sensor.
A relay of signals between the sensor, motors, and brain dictate the robot’s behavior. When it approaches an object, the number of electrical pulses sent from the sonar device to the brain increases. This heightened electrical stimulation causes certain neurons in the robot’s brain to fire. When the electrodes on which the firing neurons rest detect this activity, they signal the robot’s wheels to change direction. The end result is a robot that can avoid obstacles in its path.
At first, the young robot spent a lot of time crashing into things. But after a few weeks of practice, its performance began to improve as the connections between the active neurons in its brain strengthened. “This is a specific type of learning, called Hebbian learning,” says Warwick, “where, by doing something habitually, you get better at doing it.”
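Hebbian learning as Warwick describes it, where repeated co-activation strengthens a connection, reduces to a one-line update rule. A minimal sketch follows; the learning rate, activity values, and loop count are illustrative, not measurements from the robot:

```python
def hebbian_update(w, pre, post, eta=0.1):
    # Hebb's rule: a weight grows in proportion to pre- and post-synaptic
    # activity occurring together ("cells that fire together wire together").
    return w + eta * pre * post

w = 0.1  # hypothetical initial strength between a sonar-driven and a motor neuron
for _ in range(20):  # the robot habitually meets the same obstacle pattern
    w = hebbian_update(w, pre=1.0, post=1.0)
# after repeated co-activation the connection is far stronger than it started
```

This is why the robot's obstacle avoidance improves with practice: the pathways that successfully fire together during an avoidance manoeuvre end up dominating its behaviour.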
The robot now gets around well enough. “But it has a biological brain, and not a computer,” says Warwick, and so it must navigate based solely on the very limited amount of information it receives from a single sensory device. If the number of sensory devices connected to its brain increases, it will gain a better understanding of its surroundings. “I have another student now who has started to work on an audio input, so in some way we can start communicating with it,” he says.
But it would be a bit shortsighted to say that adding sensory input devices to the robot would make it more human, as theoretically there is no limit to how many sensory devices a robot equipped with a biological brain could have. “We are looking to increase the range of sensory input potentially with infrared and other signals,” says Warwick.
A robot that experiences its environment through devices like sonar detectors and infrared sensors would perceive the world quite differently from a person. Imagine having a Geiger counter plugged into your brain — or perhaps better yet, an X-ray detector. For future generations of Warwick’s robot, this isn’t just a thought experiment.
But Warwick isn’t interested only in building a robot with a wide range of sensory inputs. “It’s fun just looking at it as a robot life form, but I think it may also contribute to a better understanding of how our brain works,” he says. Studying the ways in which his robot learns and stores memories in its brain may provide new insights into neurological disorders like Alzheimer’s disease.
Warwick’s robot is dependent upon biological cells, so it won’t live forever. After a few months, the neurons in its brain will grow sluggish and less responsive as learning becomes more difficult and the robot’s mortal coil begins to take hold. A sad thought perhaps — but such is life.
« Reply #35 on: 2009-04-02 23:57:16 »
New robots think like scientists
Source: Reuters UK Credits: Uncredited Dated: 2009-04-02
Researchers say they have created machines that could reason like scientists and discover scientific knowledge on their own, marking a major advance in artificial intelligence.
Such robo-scientists could work on unraveling complex biological systems, designing new drugs, modeling the world's climate or understanding the cosmos.
In Wales, one robot called 'Adam' carried out experiments on yeast metabolism and could reason about the results and plan the next experiment. It is the world's first example of a machine that has independently discovered new scientific knowledge -- in this case, new facts about the genetic make-up of baker's yeast. The team's next robot 'Eve' will have a lot more brain power and will search for new medicines.
SOUNDBITE (ENGLISH): PROFESSOR ROSS KING, DEPARTMENT OF COMPUTER SCIENCE, ABERYSTWYTH UNIVERSITY: "It is autonomous. On its own it can think of the hypotheses and then do the experiments and we've checked that it's got the results correct."
"People have been working on this since the 1960's trying to do this. When we first sent robots to Mars they really dreamt of the robots doing their own experiments on Mars. After 40, 50 years we've now got the capability of actually do that."
"What you do in drug design is, traditionally, that you just test thousands upon thousands of compounds against what is called an assay which is designed to tell you about whether its going to be a compound for the disease or not. And what we've done in Eve is to make that process a bit cleverer, a bit more intelligent, so the computer itself gets to chose which compounds to try next."
In just over a day, a powerful computer program accomplished a feat that took physicists centuries to complete: extrapolating the laws of motion from a pendulum's swings. Developed by Cornell researchers, the program deduced the natural laws without a shred of knowledge about physics or geometry.
The research is being heralded as a potential breakthrough for science in the Petabyte Age, where computers try to find regularities in massive datasets that are too big and complex for the human mind. (See Wired magazine's July 2008 cover story on "The End of Science.")
"One of the biggest problems in science today is moving forward and finding the underlying principles in areas where there is lots and lots of data, but there's a theoretical gap. We don't know how things work," said Hod Lipson, the Cornell University computational researcher who co-wrote the program. "I think this is going to be an important tool."
Condensing rules from raw data has long been considered the province of human intuition, not machine intelligence. It could foreshadow an age in which scientists and programs work as equals to decipher datasets too complex for human analysis.
Lipson's program, co-designed with Cornell computational biologist Michael Schmidt and described in a paper published Thursday in Science, may represent a breakthrough in the old, unfulfilled quest to use artificial intelligence to discover mathematical theorems and scientific laws. Half a century ago, IBM's Herbert Gelernter authored a program that purportedly rediscovered Euclid's geometry theorems, but critics said it relied too much on programmer-supplied rules. In the 1970s, Douglas Lenat's Automated Mathematician automatically generated mathematical theorems, but they proved largely useless. Stanford University's Dendral project was started in 1965 and used for two decades to extrapolate possible structures for organic molecules from chemical measurements gathered by NASA spacecraft, but it was ultimately unable to assess the likelihood of the various answers that it generated.
The $100,000 Leibniz Prize, established in the 1980s, was promised to the first program to discover a theorem that "profoundly effects" math. It was never claimed.
But now artificial intelligence experts say Lipson and Schmidt may have fulfilled the field's elusive promise. Unlike the Automated Mathematician and its heirs, their program is primed only with a set of simple, basic mathematical functions and the data it's asked to analyze. Unlike Dendral and its counterparts, it can winnow possible explanations into a likely few. And it comes at an opportune moment — scientists have vastly more data than theories to describe it.
Lipson and Schmidt designed their program to identify linked factors within a dataset fed to the program, then generate equations to describe their relationship. The dataset described the movements of simple mechanical systems like spring-loaded oscillators, single pendulums and double pendulums — mechanisms used by professors to illustrate physical laws.
The program started with near-random combinations of basic mathematical processes — addition, subtraction, multiplication, division and a few algebraic operators. Initially, the equations generated by the program failed to explain the data, but some failures were slightly less wrong than others. Using a genetic algorithm, the program modified the most promising failures, tested them again, chose the best, and repeated the process until a set of equations evolved to describe the systems. Turns out, some of these equations were very familiar: the law of conservation of momentum, and Newton's second law of motion.
"It's a powerful approach," said University of Michigan computer scientist Martha Pollack, with "the potential to apply to any type of dynamical system." As possible fields of application, Pollack named environmental systems, weather patterns, population genetics, cosmology and oceanography. "Just about any natural science has the type of structure that would be amenable," she said.
Compared to laws likely to govern the brain or genome, the laws of motion discovered by the program are extremely simple. But the principles of Lipson and Schmidt's program should work at higher scales. The researchers have already applied the program to recordings of individuals' physiological states and their levels of metabolites, the cellular proteins that collectively run our bodies but remain, molecule by molecule, largely uncharacterized — a perfect example of data lacking a theory.
Their results are still unpublished, but "we've found some interesting laws already, some laws that are not known," said Lipson. "What we're working on now is the next step — ways in which we can try to explain these equations, correlate them with existing knowledge, try to break these things down into components for which we have clues."
Lipson likened the quest to a "detective story" — a hint of the changing role of researchers in hybridized computer-human science. Programs produce sets of equations — describing the role of rainfall on a desert plateau, or air pollution in triggering asthma, or multitasking on cognitive function. Researchers test the equations, determine whether they're still incomplete or based on flawed data, use them to identify new questions, and apply them to messy reality.
The Human Genome Project, for example, produced a dataset largely impervious to traditional analysis. The function of nearly every gene depends on the function of other genes, which depend on still more genes, which change with time and place. The same level of complexity confronts researchers studying the body's myriad proteins, the human brain and even ecosystems.
"The rules are mathematical formulae that capture regularities in the system," said Pollack, "but the scientist needs to interpret those regularities. They need, for example, to explain" why an animal population is affected by changes in rainfall, and what might be done to protect it.
Michael Atherton, a University of Minnesota cognitive neuroscientist who recently predicted that computer intelligence would not soon supplant human artistic and scientific insight, said that the program "could be a great tool, in the same way visualization software is: It helps to generate perspectives that might not be intuitive." However, said Atherton, "the creativity, expertise, and the recognition of importance is still dependent on human judgment. The main problem remains the same: how to codify a complex frame of reference."
"In the end, we still need a scientist to look at this and say, this is interesting," said Lipson. Humans are, in other words, still important.
Citations: "Distilling Free-Form Natural Laws from Experimental Data." By Michael Schmidt and Hod Lipson. Science, Vol. 324, April 3, 2009.
"Automating Science." By David Waltz and Bruce Buchanan. Science, Vol. 324, April 3, 2009.
« Reply #38 on: 2009-12-03 17:32:14 »
Well this would get silicon technology a big step closer
Intel puts cloud on single megachip .... One die, 48 cores
Source: The Register Author: Rik Myslewski in San Francisco Date: 2nd December 2009 23:08 GMT
Intel's research team has unveiled a 48-core processor that it claims will usher in a new era of "immersive, social, and perceptive" computing by putting datacenter-style integration on a single chip.
And, no, it's not the long-awaited CPU-GPU mashup, Larrabee. This processor, formerly code-named Rock Creek and now known by the more au courant moniker of Single-chip Cloud Computer (SCC), is a research item only.
As Intel CTO Justin Rattner emphasized during his presentation (PDF) on Wednesday to reporters in San Francisco, "This is not a product. It never will be a product." But the SCC does provide an insight into the direction in which Intel is heading - and the path the company is treading is many-cored.
Rattner characterized the many-core future to be "more perceptive," saying that "The machines we build will be capable of understanding the world around them much as we do as humans. They will see, and they will hear, they will probably speak, and do a number of other things that resemble human-like capabilities. And they will demand, as a result, very substantial computing capability."
But the ancestor of those future chips, the SCC, is up and running today - as Rattner proudly pointed out while displaying a multi-die manufacturing wafer. "We're beyond the wafer level. [We have] packaged and running parts. This is not the typical Intel 'flash the wafer and then wait six months'."
The SCC is the second-generation experimental processor in Intel's Tera-scale Computing Research Program, the first being the 80-core Polaris, which it demoed in 2007.
While a move from 80 to 48 cores may seem like a step backwards, the SCC has one massive advantage over Polaris: its cores are fully IA-compliant. Polaris was a specialized beast, purely a proof-of-concept part. The SCC, by contrast, can do actual work - which Rattner and his crew proudly demoed.
One of the demos pointed directly towards the SCC's practical focus: Hadoop's Mahout machine-learning tools running an object-categorization task on the SCC with only minimal tweaking. As Mike Ryan, a software engineer from Intel Research Pittsburgh, explained to The Reg, "I didn't have to change any software. The only thing I had to do was permute some of the memory-configuration options as well as the distributed file-system options."
In other words, the SCC ran off-the-shelf, real-world software thanks to its IA compliance, and functioned in the Hadoop demo as a datacenter-on-a-chip. "The move to Intel Architecture–compatible cores gives us an opportunity to make more ambitious efforts on the programming side," Rattner said.
At 567mm² and 1.3 billion transistors, the SCC is a hefty chip, but Rattner claims that as its performance scales - both frequency and voltage can be tweaked in real time - the SCC dissipates between 25 and 125W.
The SCC's 48 IA-32 cores were described by Rattner as "Pentium-class cores that are simple, in-order designs and not sophisticated out-of-order processors you see in the production-processor families - more on the order of an Atom-like core design as opposed to a Nehalem-class design."
Tech specs for the 45nm CMOS high-k metal gate part include four DDR3 channels in a 6-by-4 2D-mesh network. The cores communicate by means of a software-configurable message-passing scheme using 384KB of on-die shared memory.
The SCC was designed by a 40-person research team of collaborating software and hardware engineers with members in Braunschweig, Germany; Bangalore, India; and Hillsboro, Oregon. As Rattner joked, "Not only did we manage to do somewhat over a billion transistors, but we did it on three continents in time zones that are roughly 10 to 12 hours apart - in one sense, somebody was working on it 24 hours a day."
Perhaps some day in the many-core future, those 40 engineers will be supplemented by seeing, hearing, and speaking computing assistants with "human-like capabilities." ®
« Reply #39 on: 2010-04-09 19:20:07 »
Too bad if it remains an HP monopoly .... still another step closer.
HP's Memristor tech - better than flash?
It will be, says HP
Source: The Register Author: Chris Mellor Date: 8th April 2010
HP will claim today to have pushed Memristor technology to equal the switching speed and endurance shown by current NAND flash cells.
The Memristor or memory resistor is said to be a fundamental electrical circuit element, along with the resistor, capacitor and inductor. Its electrical state remains unaltered between a device being switched on and off - just like flash memory, for which it is a follow-on candidate. In this it competes with Phase-Change Memory (PCM). Once NAND flash runs out of process shrinkage room it stops working reliably and new technology is needed.
HP implemented the first Memristor device in 2008 and said it might have a working prototype in 2009. It has since claimed to have found a way to build three-dimensional Memristor devices, with 2D switch arrays stacked on top of each other like chip towers, to build relatively huge capacity devices.
HP is expected to reveal it has increased Memristor switching speed and endurance to that seen in current NAND flash cells, the New York Times reports. The company thinks it can do even better and scale the technology to far lower process geometries than flash. It is working on 3nm Memristors that switch at one nanosecond. Today's most advanced flash is transitioning to 25nm process geometries. The next level is thought to lie in the 24-20nm area, potentially with a 19-15nm follow-on. Problems are then expected to mount as flash's operational reliability could be compromised.
However, flash density could be increased by upping the cell count in multi-layer cells (MLC). Two-bit flash is common now, three bit is coming and SanDisk has four-bit MLC patents. But flash write performance and endurance slows as more bits are added to cells, and flash controllers have to overcome this obstacle to make 3X and 4X MLC flash usable.
HP thinks that by 2013 it can build a Memristor device with a density of 20GB/sq in, which it reckons will be double what competing flash chips can do by then. HP thinks Memristor technology is better than PCM as well, since PCM involves heating cells to change their physical state - requiring more power - and has a slower switching speed.
If HP is right and it can actually build Memristor chips in large capacities and large numbers and at an acceptable price, then it could blow Numonyx, Samsung, Toshiba and SanDisk's flash businesses and PCM follow-on efforts to blazes, and make a killing in license fees.
HP scientists are also claiming that the human brain uses quasi-Memristor equivalents, so HP could build a Memristor-based brain that could learn and do human stuff. This could well be hot air - the brain's operations have been likened to earlier technology advances many times before, and every such comparison has been shown to be inadequate.
For now the best known way to create a human brain is to start with two humans, one man and one woman, and bring them into conjunction.
« Reply #40 on: 2010-04-17 11:31:24 »
[Fritz] Yes, No, and Should Have !
Chinese go beyond binary with ternary molecule
Memory sandwich molecule munches three bits
Source: The Register Author: Chris Mellor Date: 16th April 2010 09:22 GMT
Chinese scientists have developed an organic molecule which can have three electrically-readable states, making a ternary rather than binary device possible.
Binary devices have two electrically readable states, corresponding to a one or zero. Ternary devices have three: zero, one or two. Consequently they could, if built and programmed, store or process more information than binary RAM or NAND.
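The capacity gain is easy to quantify: a three-state cell carries log2(3) ≈ 1.58 bits of information, so the same number needs fewer ternary cells than binary ones. A quick sketch (the value 1000 is just an arbitrary example):

```python
import math

def to_digits(n, base):
    # Represent a non-negative integer in the given base,
    # least-significant digit first.
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits or [0]

n = 1000
bits = to_digits(n, 2)    # binary cells needed to store n
trits = to_digits(n, 3)   # ternary cells needed to store the same n
# each ternary cell carries log2(3) ~ 1.58 bits of information
```

Storing 1000 takes 10 two-state cells but only 7 three-state cells, which is the whole appeal of a ternary memory element.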
The scientists built a prototype device which functioned as a plastic DRAM cell. It was composed of a new synthesised organic azo compound sandwiched between aluminum and indium tin oxide (ITO) electrodes. Azo compounds may have vivid yellow, orange and red colours, and are used in dyes.
According to an American Chemical Society journal abstract, the prototype cell was "based on a donor-functionalized polyimide (TP6F-PI), which exhibited the ability to write, read, erase, and refresh the electrical states. The device had an ON/OFF current ratio up to 10^5, promising minimal misreading error. Both the on and off states were stable under a constant voltage stress of 1 V and survived up to 10^8 read cycles at 1 V."
The azo compound exhibits a high, medium or low conductivity state depending on the voltage applied to it through the aluminium electrodes and these three states can be read out as 0, 1 or 2. The abstract says: "Two electron pull groups and one electron push group are identified as the species responsible for electron flow."
It's very interesting science, but the entire binary computing infrastructure would have to alter to use it. Don't expect a ternary iPad in your lifetime. Sorry! ®
« Reply #42 on: 2010-05-13 09:51:05 »
Yet another step closer.
Silky circuits Making electronic circuits that will work inside a person’s body
Source: The Economist Author: From The Economist print edition Date: May 6th 2010
OVER the years, electronics have found their way into almost every aspect of human life. They are in homes, offices, cars and just about all gadgets. Some electronic circuits have also made their way into the bodies of people in the form of heart pacemakers and cochlear implants. Now new kinds of bodily electronics are coming.
Most electronics are made in the form of integrated circuits, which are tiny chips that contain transistors and other components etched onto silicon wafers. While fine for computers and other products, they are inflexible and cannot easily be wrapped around curved or pliable surfaces, which makes them hard to use in the body. Researchers have devised ways to make flexible electronics, for such things as electronic paper. Now, John Rogers of the University of Illinois, Urbana-Champaign, who is one of the pioneers of flexible electronics, has devised a new technique to create ultra-thin and flexible circuits suitable for medical use.
Dr Rogers first fabricated a mesh containing a circuit of silicon electronics by thinning silicon until it became flexible. But this causes a problem: being so thin, the mesh soon collapses. To avoid this, Dr Rogers deposited the circuit onto a special silk to provide structural support without sacrificing flexibility. The silk was engineered at Tufts University, near Boston, from a silkworm cocoon that had been boiled to create a silk solution that can be deposited as a thin film. When the film carrying the circuit is placed on biological tissue, the silk dissolves naturally, leaving behind the circuit, attached to the tissue by capillary forces and supported by it.
To apply his technique to medicine, Dr Rogers has teamed up with Brian Litt of the University of Pennsylvania, a neurology expert interested in creating electronic implants to monitor and treat epilepsy. In a paper recently published in Nature Materials the pair explained how they placed one of Dr Rogers’s silk-supported electronic meshes on the exposed brain of an anaesthetised cat. After the silk dissolved, the electrodes in the mesh followed the contours of the cat’s brain and were able to detect neurological activity more accurately than conventional implanted electrodes.
Next, Dr Rogers and Dr Litt hope to test the technique on epileptic dogs to see if the electrodes can detect seizures. Eventually it may be possible to use such circuits to prevent epileptic seizures. Other applications could include electrically stimulated repairs to spinal injuries or controlling drug delivery inside the body. For such applications it may even be possible to engineer circuits and components that dissolve once they have done their work.
Source: Time Author: Lev Grossman Date: 2011.02.10
Technologist Raymond Kurzweil has a radical vision for humanity's immortal future Photo-Illustration by Ryan Schude for TIME
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation. <snip> 5 more pages at the site ....
The Singularity: A problem to solve, to solve more problems
There are a lot of questions facing our world today, and many more that we'll be faced with in the future. What are we going to do about famine, poverty, disease, overpopulation, and our growing energy needs? Will we be able to manage the changing climate of our planet? How does human consciousness work? What is the fundamental nature of the universe? Is there a way we could extend our lifespan? Some of these problems have solutions that are within our grasp. Others seem like they might never be solved. So what happens if we can't? Do we give up and conclude that it's just too difficult - even impossible? Or is there another option?
Suppose we could develop some kind of intelligence that was slightly better than the human intellect, whether through an enhancement to our own minds, or the creation of a fully synthetic system. Could this be done? It seems improbable that humanity represents the pinnacle of all possible minds that could exist. After all, we're descended from much less intelligent ancestors, and other, superior lifeforms may one day descend from us. And there's no reason why our own cognitive architecture must be the only kind that's capable of intelligence. This is just how we happened to evolve in the environments of Earth.
So, suppose we were to focus on creating a better-than-human intelligence. Not vastly greater, but greater nonetheless. What could this new and improved intelligence be directed towards? Might it be able to figure out some of the problems we have to deal with? Could it come up with better answers than we would have? Perhaps, or maybe not. Maybe it still wouldn't be smart enough. So what if we put it to work on developing another intelligent system that's slightly smarter than itself? Being superior to humans, it would be even more proficient at this than we would, making it better and faster at designing its own successor. And the system that results from this would be even better at designing a greater intelligence.
This series of progressively superior intellects could thus continue in this way, each becoming even more skilled at developing the next generation of intelligent systems, until some kind of limit is reached. The accelerating and escalating production of ever greater intelligence by greater intelligence is known as the technological singularity. It means bridging a very large gap in intelligence through incremental steps, in far less time than human effort alone could manage.
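The loop described above can be sketched in a few lines. This is a toy model with invented numbers, not anything from the essay: each generation improves on the last, and because smarter designers make bigger improvements, the improvement factor itself grows each step, until a hypothetical ceiling is reached.

```python
# Toy sketch of recursive self-improvement: each generation designs
# a successor, and being smarter, it makes a larger improvement than
# its predecessor did. All numbers are invented for illustration.
LIMIT = 1000.0      # hypothetical ceiling on achievable intelligence
intelligence = 1.0  # human-level baseline
factor = 1.1        # how much better each successor is than its designer
generations = 0

while intelligence < LIMIT:
    intelligence *= factor  # the current system builds its successor
    factor += 0.1           # smarter systems design better successors
    generations += 1

print(generations, intelligence)
```

The point of the sketch is the shape of the curve, not the numbers: because the improvement factor compounds and grows, a thousandfold gap is crossed in a handful of generations rather than a long, steady climb.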
With a vastly greater amount of intelligence available to us, we would no longer have to settle for the human pace of technological development or the limits of human problem-solving ability. That is what the singularity offers us: solutions, smarter than the ones we could come up with, quicker than we could come up with them. Even with all of the problem-solving ability that could conceivably exist, there may still be some problems to which there are no good answers. But we'll be in a much better position to determine that once we have the best tools available to tackle them rather than settling for the status quo.
There is a tradeoff here: When we create a system to do the things we don't know how to do, we also have no way of knowing just what it is going to do. To know that, we would have to be that smart already - but we're not. Some people have pointed out that this is dangerous: a smarter system could outsmart us, and a very smart system might be able to figure out how to do almost anything it wanted. So the development of human-superior intelligence is often considered too unpredictable and too uncontrollable to attempt. But there may not be another option here. If we choose to refrain from doing this, someone else might decide to do it first - and there's no telling what the results would be.
Intelligent systems like AIs have commonly been portrayed as a bloodthirsty threat to humanity. But the reality is that nothing of the sort is inherent to all AIs. An artificial intelligence is fully specified by how it's initially written, and there are as many different kinds of AI minds as could possibly be programmed. But when AIs are the ones writing the AIs, this is something we have to be especially careful with. When an intelligence can modify itself or decide what the systems it produces will want to do, it's very important that this intelligence has a stable and benevolent system of goals. This is absolutely crucial to ensuring a suitable outcome, and designing a safe singularity may end up being one of the most pressing problems we'll have to face in the coming century.
The future is an endless network of dim and potentially lethal corridors which we must navigate with the utmost caution, and so far, the most we can do is fumble in the dark. Here we have a chance to turn on the light, chart a path ahead, and find the best way through. Is it guaranteed? No. But if we can solve this problem, it might unlock the rest. So don't give up just yet. There are better answers out there. We just have to look a little harder.