Church of Virus BBS
  The future of human evolution
Hermit
Archon

The future of human evolution
« on: 2005-11-12 00:41:56 »

Fast Forward, The future of human evolution

Human evolution at the crossroads
Genetics, cybernetics complicate forecast for species


Source: MSNBC
Author(s) :Alan Boyle, Science editor, MSNBC
Dated: 2005-05-02


Duane Hoffmann / MSNBC illustrations


Scientists are fond of running the evolutionary clock backward, using DNA analysis and the fossil record to figure out when our ancestors stood erect and split off from the rest of the primate evolutionary tree.

But the clock is running forward as well. So where are humans headed?

Evolutionary biologist Richard Dawkins says it's the question he's most often asked, and "a question that any prudent evolutionist will evade." But the question is being raised even more frequently as researchers study our past and contemplate our future.

Paleontologists say that anatomically modern humans may have at one time shared the Earth with as many as three other closely related types — Neanderthals, Homo erectus and the dwarf hominids whose remains were discovered last year in Indonesia.

Does evolutionary theory allow for circumstances in which "spin-off" human species could develop again?

Some think the rapid rise of genetic modification could be just such a circumstance. Others believe we could blend ourselves with machines in unprecedented ways — turning natural-born humans into an endangered species.

Present-day fact, not science fiction

Such ideas may sound like little more than science-fiction plot lines. But trend-watchers point out that we're already wrestling with real-world aspects of future human development, ranging from stem-cell research to the implantation of biocompatible computer chips. The debates are likely to become increasingly divisive once all the scientific implications sink in.

"These issues touch upon religion, upon politics, upon values," said Gregory Stock, director of the Program on Medicine, Technology and Society at the University of California at Los Angeles. "This is about our vision of the future, essentially, and we'll never completely agree about those things."

The problem is, scientists can't predict with precision how our species will adapt to changes over the next millennium, let alone the next million years. That's why Dawkins believes it's imprudent to make a prediction in the first place.

Others see it differently: In the book "Future Evolution," University of Washington paleontologist Peter Ward argues that we are making ourselves virtually extinction-proof by bending Earth's flora and fauna to our will. And assuming that the human species will be hanging around for at least another 500 million years, Ward and others believe there are a few most likely scenarios for the future, based on a reading of past evolutionary episodes and current trends.

Where are humans headed?  Here's an imprudent assessment of five possible paths, ranging from homogenized humans to alien-looking hybrids bred for interstellar travel.

Unihumans: Will we all be assimilated?


Unihuman: Skin tones blend. Larger eyes are associated with greater "domestication."


Biologists say that different populations of a species have to be isolated from each other in order for those populations to diverge into separate species. That's the process that gave rise to 13 different species of "Darwin's Finches" in the Galapagos Islands. But what if the human species is so widespread there's no longer any opening for divergence?

Evolution is still at work. But instead of diverging, our gene pool has been converging for tens of thousands of years — and Stuart Pimm, an expert on biodiversity at Duke University, says that trend may well be accelerating.

"The big thing that people overlook when speculating about human evolution is that the raw matter for evolution is variation," he said. "We are going to lose that variability very quickly, and the reason is not quite a genetic argument, but it's close. At the moment we humans speak something on the order of 6,500 languages. If we look at the number of languages we will likely pass on to our children, that number is 600."

Cultural diversity, as measured by linguistic diversity, is fading as human society becomes more interconnected globally, Pimm argued. "I do think that we are going to become much more homogeneous," he said.

Ken Miller, an evolutionary biologist at Brown University, agreed: "We have become a kind of animal monoculture."

Is that such a bad thing? A global culture of Unihumans could seem heavenly if we figure out how to achieve long-term political and economic stability and curb population growth. That may require the development of a more "domesticated" society — one in which our rough genetic edges are smoothed out.

But like other monocultures, our species could be more susceptible to quick-spreading diseases, as last year's bird flu epidemic illustrated.

"The genetic variability that we have protects us against suffering from massive harm when some bug comes along," Pimm said. "This idea of breeding the super-race, like breeding the super-race of corn or rice or whatever — the long-term consequences of that could be quite scary."

Environmental pressures wouldn't stop

Even a Unihuman culture would have to cope with evolutionary pressures from the environment, the University of Washington's Peter Ward said.

Some environmentalists say toxins that work like estrogens are already having an effect: Such agents, found in pesticides and industrial PCBs, have been linked to earlier puberty for women, increased incidence of breast cancer and lower sperm counts for men.

"One of the great frontiers is going to be trying to keep humans alive in a much more toxic world," he observed from his Seattle office. "The whales of Puget Sound are the most toxic whales on Earth. Puget Sound is just a huge cesspool. Well, imagine if that goes global."

Global epidemics or dramatic environmental changes represent just two of the scenarios that could cause a Unihuman society to crack, putting natural selection — or perhaps not-so-natural selection — back into the evolutionary game. Then what?

Survivalistians: Coping with doomsday


Survivalistian: Protective brow and skin layer contribute to "radiation hardening."


Surviving doomsday is a story as old as Noah’s Ark, and as new as the post-bioapocalypse movie “28 Days Later.”

Catastrophes ranging from super-floods to plagues to nuclear war to asteroid strikes erase civilization as we know it, leaving remnants of humanity who go their own evolutionary ways.

The classic Darwinian version of the story may well be H.G. Wells’ “The Time Machine,” in which humanity splits off into two species: the ruthless, underground Morlock and the effete, surface-dwelling Eloi.

At least for modern-day humans, the forces that lead to species spin-offs have been largely held in abeyance: Populations are increasingly in contact with each other, leading to greater gene-mixing. Humans are no longer threatened by predators their own size, and medicine cancels out inherited infirmities ranging from hemophilia to nearsightedness.

“We are helping genes that would have dropped out of the gene pool,” paleontologist Peter Ward observed.

But in Wells’ tale and other science-fiction stories, a civilization-shattering catastrophe serves to divide humanity into separate populations, vulnerable once again to selection pressures. For example, people who had more genetic resistance to viral disease would be more likely to pass on that advantage to their descendants.

If different populations develop in isolation over many thousands of generations, it’s conceivable that separate species would emerge. For example, that virus-resistant strain of post-humans might eventually thrive in the wake of a global bioterror crisis, while less hardy humans would find themselves quarantined in the world’s safe havens.

Patterns in the spread of the virus that causes AIDS may hint at earlier, less catastrophic episodes of natural selection, said Stuart Pimm, a conservation biologist at Duke University: “There are pockets of people who don’t seem to become HIV-positive, even though they have a lot of exposure to the virus — and that may be because their ancestors survived the plague 500 years ago.”

Evolution, or devolution?

If the catastrophe ever came, could humanity recover? In science fiction, that’s an intriguingly open question. For example, Stephen Baxter’s novel “Evolution” foresees an environmental-military meltdown so severe that, over the course of 30 million years, humans devolve into separate species of eyeless mole-men, neo-apes and elephant-people herded by their super-rodent masters.

Even Ward gives himself a little speculative leeway in his book “Future Evolution,” where a time-traveling human meets his doom 10 million years from now at the hands — or in this case, the talons — of a flock of intelligent killer crows. But Ward finds it hard to believe that even a global catastrophe would keep human populations isolated long enough for our species to split apart.

“Unless we totally forget how to build a boat, we can quickly come back,” Ward said.

Even in the event of a post-human split-off, evolutionary theory dictates that one species would eventually subjugate, assimilate or eliminate its competitors for the top job in the global ecosystem. Just ask the Neanderthals.

“If you have two species competing over the same ecological niche, it ends badly for one of them, historically,” said Joel Garreau, the author of the forthcoming book “Radical Evolution.”

The only reason chimpanzees still exist today is that they “had the brains to stay up in the trees and not come down into the open grasslands,” he noted.

“You have this optimistic view that you’re not going to see speciation (among humans), and I desperately hope that’s right,” Garreau said. “But that’s not the only scenario.”

Numans: Rise of the superhumans


Numan: DNA and drugs enhance intellect and physique.


We’ve already seen the future of enhanced humans, and his name is Barry Bonds.

The controversy surrounding the San Francisco Giants slugger, and whether steroids played a role in the bulked-up look that he and other baseball players have taken on, is only a foretaste of what’s coming as scientists find new genetic and pharmacological ways to improve performance.

Developments in the field are coming so quickly that social commentator Joel Garreau argues that they represent a new form of evolution. This radical kind of evolution moves much more quickly than biological evolution, which can take millions of years, or even cultural evolution, which works on a scale of hundreds or thousands of years.

How long before this new wave of evolution spawns a new kind of human? “Try 20 years,” Garreau told MSNBC.com.

In his latest book, “Radical Evolution,” Garreau reels off a litany of high-tech enhancements, ranging from steroid Supermen, to camera-equipped flying drones, to pills that keep soldiers going without sleep or food for days.

“If you look at the superheroes of the ’30s and the ’40s, just about all of the technologies they had exist today,” he said.

Three kinds of humans

Such enhancements are appearing first on the athletic field and the battlefield, Garreau said, but eventually they’ll make their way to the collegiate scene, the office scene and even the dating scene.

“You’re talking about three different kinds of humans: the enhanced, the naturals and the rest,” Garreau said. “The enhanced are defined as those who have the money and enthusiasm to make themselves live longer, be smarter, look sexier. That’s what you’re competing against.”

In Garreau’s view of the world, the naturals will be those who eschew enhancements for higher reasons, just as vegetarians forgo meat and fundamentalists forgo what they see as illicit pleasures. Then there are all the rest of us, who don’t get enhanced only because we can’t. “They loathe and despise the people who do, and they also envy them,” Garreau said.

Scientists acknowledge that some of the medical enhancements on the horizon could engender a “have vs. have not” attitude.

“But I could be a smart ass and ask how that’s different from what we have now,” said Brown University’s Ken Miller.

Medical advances as equalizers

Miller went on to point out that in the past, “advances in medical science have actually been great levelers of social equality.” For example, age-old scourges such as smallpox and polio have been eradicated, thanks to public health efforts in poorer as well as richer countries. That trend is likely to continue as scientists learn more about the genetic roots of disease, he said.

“In terms of making genetic modifications to ourselves, it’s much more likely we’ll start to tinker with genes for disease susceptibility. … Maybe there would be a long-term health project to breed HIV-resistant people,” he said.

When it comes to discussing ways to enhance humans, rather than simply make up for disabilities, the traits targeted most often are longevity and memory. Scientists have already found ways to enhance those traits in mice.

Imagine improvements that could keep you in peak working condition past the age of 100. Those are the sorts of enhancements you might want to pass on to your descendants — and that could set the stage for reproductive isolation and an eventual species split-off.

“In that scenario, why would you want your kid to marry somebody who would not pass on the genes that allowed your grandchildren to have longevity, too?” the University of Washington’s Peter Ward asked.

But that would require crossing yet another technological and ethical frontier.

Instant superhumans — or monsters?

To date, genetic medicine has focused on therapies that work on only one person at a time. The effects of those therapies aren’t carried on to future generations. For example, if you take muscle-enhancing drugs, or even undergo gene therapy for bigger muscles, that doesn’t mean your children will have similarly big muscles.

In order to make an enhancement inheritable, you’d have to have new code spliced into your germline stem cells — creating an ethical controversy of transcendent proportions.

Tinkering with the germline could conceivably produce a superhuman species in a single generation — but could also conceivably create a race of monsters. “It is totally unpredictable,” Ward said. “It’s a lot easier to understand evolutionary happenstance.”

Even then, there are genetic traits that are far more difficult to produce than big muscles or even super-longevity — for instance, the very trait that defines us as humans.

“It’s very, very clear that intelligence is a pretty subtle thing, and it’s clear that we don’t have a single gene that turns it on or off,” Miller said.

When it comes to intelligence, some scientists say, the most likely route to our future enhancement — and perhaps our future competition as well — just might come from our own machines.

Cyborgs: Merging with the machines


Cyborg: Hardware enhances humans. Eventually the devices look more elegant.


Will intelligent machines be assimilated, or will humans be eliminated?

Until a few years ago, that question was addressed only in science-fiction plot lines, but today the rapid pace of cybernetic change has led some experts to worry that artificial intelligence may outpace Homo sapiens’ natural smarts.

The pace of change is often stated in terms of Moore’s Law, which says that the number of transistors packed into a square inch should double every 18 months. “Moore’s Law is now on its 30th doubling. We have never seen that sort of exponential increase before in human history,” said Joel Garreau, author of the book “Radical Evolution.”
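
A rough check of what a "30th doubling" amounts to, as a small Python sketch; the 18-month doubling period is the loose formulation quoted in the paragraph above, and the figures are purely illustrative.

Code:
# What 30 doublings of transistor density implies, assuming the
# 18-month doubling period quoted above. Illustrative arithmetic only.
DOUBLING_PERIOD_YEARS = 1.5
DOUBLINGS = 30

growth_factor = 2 ** DOUBLINGS
years_elapsed = DOUBLINGS * DOUBLING_PERIOD_YEARS

print(f"{DOUBLINGS} doublings -> a factor of {growth_factor:,}x")            # 1,073,741,824x
print(f"at 1 doubling per {DOUBLING_PERIOD_YEARS} years, ~{years_elapsed:.0f} years elapsed")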

In some fields, artificial intelligence has already bested humans — with Deep Blue’s 1997 victory over world chess champion Garry Kasparov providing a vivid example.

Three years later, computer scientist Bill Joy argued in an influential Wired magazine essay that we would soon face challenges from intelligent machines as well as from other technologies ranging from weapons of mass destruction to self-replicating nanoscale “gray goo.”

Joy speculated that a truly intelligent robot may arise by the year 2030. “And once an intelligent robot exists, it is only a small step to a robot species — to an intelligent robot that can make evolved copies of itself,” he wrote.

Assimilating the robots

To others, it seems more likely that we could become part-robot ourselves: We’re already making machines that can be assimilated — including prosthetic limbs, mechanical hearts, cochlear implants and artificial retinas. Why couldn’t brain augmentation be added to the list?

“The usual suggestions are that we’ll design improvements to ourselves,” said Seth Shostak, senior astronomer at the SETI Institute. “We’ll put additional chips in our head, and we won’t get lost, and we’ll be able to do all those math problems that used to befuddle us.”

Shostak, who writes about the possibilities for cybernetic intelligence in his book “Sharing the Universe,” thinks that’s likely to be a transitional step at best.

“My usual response is that, well, you can improve horses by putting four-cylinder engines in them. But eventually you can do without the horse part,” he said. “These hybrids just don’t strike me as having a tremendous advantage. It just means the machines aren’t good enough.”

Back to biology
University of Washington paleontologist Peter Ward also believes human-machine hybrids aren’t a long-term option, but for different reasons.

“When you talk to people in the know, they think cybernetics will become biology,” he said. “So you’re right back to biology, and the easiest way to make changes is by manipulating genomes.”

It’s hard to imagine that robots would ever be given enough free rein to challenge human dominance, but even if they did break free, Shostak has no fear of a “Terminator”-style battle for the planet.

“I’ve got a couple of goldfish, and I don’t wake up in the morning and say, ‘I’m gonna kill these guys.’ … I just leave ’em alone,” Shostak said. “I suspect the machines would very quickly get to a level where we were kind of irrelevant, so I don’t fear them. But it does mean that we’re no longer No. 1 on the planet, and we’ve never had that happen before.”

Astrans: Turning into an alien race


Astran: Body hair just gets in the way during interstellar trips.


If humans survive long enough, there’s one sure way to grow new branches on our evolutionary family tree: by spreading out to other planets.

Habitable worlds beyond Earth could be a 23rd century analog to the Galapagos Islands, Charles Darwin’s evolutionary laboratory: just barely close enough for travelers to get to, but far enough away that there'd be little gene-mixing with the parent species.

“If we get off to the stars, then yes, we will have speciation,” said University of Washington paleontologist Peter Ward. “But can we ever get off the Earth?”

Currently, the closest star system thought to have a planet is Epsilon Eridani, 10.5 light-years away. Even if spaceships could travel at 1 percent the speed of light — an incredible 6.7 million mph — it would take more than a millennium to get there.
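
The travel-time figure is easy to verify; a one-line Python check using only the numbers given in the paragraph above:

Code:
# Epsilon Eridani at 10.5 light-years, travelling at 1 percent of light speed.
distance_ly = 10.5
speed_fraction_of_c = 0.01   # roughly 6.7 million mph, as stated above
print(f"Travel time: about {distance_ly / speed_fraction_of_c:.0f} years")   # ~1050 years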

Even Mars might be far enough: If humans established a permanent settlement there, the radically different living conditions would change the evolutionary equation. For example, those who are born and raised in one-third of Earth’s gravity could never feel at home on the old “home planet.” It wouldn’t take long for the new Martians to become a breed apart.

As for distant stars, the SETI Institute’s Seth Shostak has already been thinking through the possibilities:

    * Build a big ark: Build a spaceship big enough to carry an entire civilization to the destination star system. The problem is, that environment might be just too unnatural for natural humans. “If you talk to the sociologists, they’ll say that it will not work. … You’ll be lucky if anybody’s still alive after the third generation,” Shostak said.
    * Go to warp speed: Somehow we discover a wormhole or find a way to travel at relativistic speeds. “That sounds OK, except for the fact that nobody knows how to do it,” Shostak said.
    * Enter the Astrans: Humans are genetically engineered to tolerate ultra long-term hibernation aboard robotic ships. Once the ship reaches its destination, these “Astrans” are awakened to start the work of settling a new world. “That’s one possibility,” Shostak said.

The ultimate approach would be to send the instructions for making humans rather than the humans themselves, Shostak said.

“We’re not going to put anything in a rocket, we’re just going to beam ourselves to the stars,” he explained. “The only trouble is, if there’s nobody on the other end to put you back together, there’s no point.”

So are we back to square one? Not necessarily, Shostak said. Setting up the receivers on other stars is no job for a human, “but the machines could make it work.”

In fact, if any other society is significantly further along than ours, such a network might be up and running by now. “The machines really could develop large tracts of galactic real estate, whereas it’s really hard for biology to travel,” Shostak said.

It all seems inconceivable, but if humans really are extinction-proof — if they manage to survive global catastrophes, genetic upheavals and cybernetic challenges — who’s to say what will be inconceivable millions of years from now? Two intelligent species, human and machine, just might work together to spread life through the universe.

“If you were sufficiently motivated,” Shostak said, “you could in fact keep it going forever.”

With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion. - Steven Weinberg, 1999
MoEnzyme
Acolyte

Re: The future of human evolution
« Reply #1 on: 2005-11-13 00:10:35 »

Very interesting. One thing that I think the article's author doesn't consider is the role we will have not only in our own transformation, but also in the humanization of other species. The reasons this will happen are 1) humans have fewer ethical hangups about trying experiments on other animals than on fellow humans, and 2) the human body is not necessarily the best starting point for creating a space-faring species. More arboreal primates are probably better suited to zero gravity. Human legs, for example, are pretty pointless in zero gravity, whereas the more flexible, grasping legs and prehensile tails of monkeys would be far more practical. The better engineering solution lies in starting with a slightly different primate template and modifying it genetically -- probably including the importation of human genes to give them the language instinct and speech capacity. Over time, we would probably tend to view what is "human" less in terms of a species, and more in terms of smaller special genetic packages which, when introduced into other species' gene pools, give them the capacity for mind and culture.

-Jake

P.S. re: "In some fields, artificial intelligence has already bested humans — with Deep Blue’s 1997 victory over world chess champion Garry Kasparov providing a vivid example."
I haven't heard whether later chess programs have improved, but just to be accurate, Garry Kasparov wasn't beaten by Deep Blue. He was beaten by Deep Blue plus a team of programmers who tweaked and modified the program between games. Not exactly the same thing, so perhaps the demise of Homo sapiens chess-playing was announced a bit prematurely. Whenever we achieve a computer program that can win a chess tournament without tips from the other team (humans) in between games, I will salute.
« Last Edit: 2005-11-13 00:36:33 by Jake Sapiens »

I will fight your gods for food,
Mo Enzyme


(consolidation of handles: Jake Sapiens; memelab; logicnazi; Loki; Every1Hz; and Shadow)
Blunderov
Archon

Re: The future of human evolution
« Reply #2 on: 2005-11-13 15:39:03 »

Jake posted: "I haven't heard whether later chess programs have improved, but just to be accurate, Garry Kasparov wasn't beaten by Deep Blue. He was beaten by Deep Blue plus a team of programmers who tweaked and modified the program between games. Not exactly the same thing, so perhaps the demise of Homo sapiens chess-playing was announced a bit prematurely. Whenever we achieve a computer program that can win a chess tournament without tips from the other team (humans) in between games, I will salute."

[Blunderov] I believe you are correct. Humans have a big problem against computers when they get into time trouble, and this is generally where GMs lose their games against silicon opposition.

But in very strong e-mail/correspondence tournaments, many players do not worry at all about whether their opponents are illegally consulting computers; "go ahead and use your machine, but don't cry if you have a losing endgame in 40 moves time" is one comment I have heard.

This has not changed even with the advent of very powerful machines and great programming sophistication. Even the mighty Hydra* is having to take its licks against a correspondence GM:

<snip>Game 3 has been started between the correspondence chess GM Arno Nickel and Hydra. This is the third game of the series of four games agreed between the two opponents. The last two games were won by GM Arno Nickel, and this time the Hydra team is more enthusiastic about the game and hoping to score a win. Hydra project manager Muhammad Nasir Ali commented on the match:

"Correspondence chess is certainly different from classical chess, and we found it very good to play against a GM. The last two games helped us a lot to learn about correspondence chess, and with this we are ready to play the next two games and we hope we will perform better. We did much better than Arno's predictions for the Michael Adams match, and we hope Hydra will do its best against him as well in the coming games."

You can view the ongoing game and the last two games at the www.chessfriend.com website.

http://www.hydrachess.com/main.cfm

With regard to Kasparov's loss against Deep Blue in the final game of the match, it was really a 'lapsus manus' more than a deficiency of skill which led to the loss. IMS, Kasparov unthinkingly made a move-order error in a Caro-Kann defence, allowing the beast to sacrifice a piece for an overwhelming attack. Kasparov knew the variation perfectly well and simply forgot to avoid it, after which he had no more heart for the struggle, even though he might have been able to put up some kind of, very probably futile, resistance.

Of course this is part of the game for humans just as the horizon effect is part of the game for computers. Still, the result was a little less than convincing and IBM declined a rematch. 
</snip>

Best Regards.

*See Hydra : Cool pics.
http://www.chessbase.com/newsdetail.asp?newsid=1866
16 Xeons running at 3.06 GHz each, with about 16 GBytes of RAM in the whole system.

Nyktoo
Acolyte

Re: The future of human evolution
« Reply #3 on: 2005-11-19 20:42:33 »

When I first heard about computer challenges to human chess champions I simply assumed all that was required was enough processing power to calculate an adequate number of moves in advance to find the most apparently advantageous route to take.  Of course, not being much interested in chess, it never occurred to me there is a lot more to it than that. Someone like Kasparov could easily find a trap for an AI running on such a simple algorithm.
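
For readers unfamiliar with the "simple algorithm" described above, a minimal sketch of fixed-depth lookahead (minimax) follows; the Position interface (legal_moves, apply, score) is hypothetical, and real engines add alpha-beta pruning, move ordering and far richer evaluation on top of this idea.

Code:
# Minimal fixed-depth minimax: search every line of play `depth` half-moves
# ahead and back up the best score. The Position interface is a placeholder.
def minimax(position, depth, maximizing):
    if depth == 0 or not position.legal_moves():
        return position.score()               # static evaluation at the horizon
    if maximizing:
        return max(minimax(position.apply(m), depth - 1, False)
                   for m in position.legal_moves())
    return min(minimax(position.apply(m), depth - 1, True)
               for m in position.legal_moves())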

From reading some of the material at the Singularity Institute website and other websites that discuss the idea of a technological singularity occurring in consequence of AI, it looks like the problem of creating a significant adaptive AI, let alone one capable of surpassing our abilities, is primarily that of engineering a great number of specialised subsystems that can work together, i.e. ingenious intricacy rather than raw speed is the major stumbling block.

This 2030 prediction for some super AI, I dunno. Although I love the idea of the Singularity coming from AI I suspect we'll see genetically modified humans consuming cocktails of smart drugs, plugging electronic hardware into their brains, and running from destructive nanobots long before HAL9000 refuses to open the pod bay doors.

Anyway, what do people here think of the ideas surrounding technological singularity in general?
MoEnzyme
Acolyte

Re: The future of human evolution
« Reply #4 on: 2005-11-19 21:47:33 »


Quote from: Nyktoo on 2005-11-19 20:42:33   
When I first heard about computer challenges to human chess champions I simply assumed all that was required was enough processing power to calculate an adequate number of moves in advance to find the most apparently advantageous route to take.  Of course, not being much interested in chess, it never occurred to me there is a lot more to it than that. Someone like Kasparov could easily find a trap for an AI running on such a simple algorithm.

From reading some of the material at the Singularity Institute website and other websites that discuss the idea of a technological singularity occurring in consequence of AI, it looks like the problem of creating a significant adaptive AI, let alone one capable of surpassing our abilities, is primarily that of engineering a great number of specialised subsystems that can work together, i.e. ingenious intricacy rather than raw speed is the major stumbling block.

This 2030 prediction for some super AI, I dunno. Although I love the idea of the Singularity coming from AI I suspect we'll see genetically modified humans consuming cocktails of smart drugs, plugging electronic hardware into their brains, and running from destructive nanobots long before HAL9000 refuses to open the pod bay doors.

Anyway, what do people here think of the ideas surrounding technological singularity in general?


Generally at CoV, I think we reckon that humanity has already experienced several such singularities, if not many . . . certainly the beginning of widespread literacy was one such thing, and I think it was even more important than what many singularitarians envision for the near future at this point. Obviously, as humans (with the exceptions of googlebot and Futura), and valuing empathy the way we do as one of our virtues, we feel somewhat wedded to the idea of extending some of our flesh/meat into the equation. Of course I may not speak for everyone, but that's how it is when you're a religion like us . . . we get a "royal we" in now and then.

Personally, I think that at the end of the day, the things we will do with the human genome, as well as with some fellow species' genomes, and the resulting technology adaptations, will probably exceed all previous Terran Life Operating Systems, artificially/naturally or otherwise intelligent, at that point of take-off. But whatever the scenario, yes, here at the Church of the Virus we are open to talking through all such eschatological/singularitarian scenarios at length if necessary. It's what Vision is all about. The end of this world and the beginning of the next remains present even outside of supernaturalism. As transhumanists we accept change without the need for fairy tales. Humans have lived in a state of change from our beginnings. Self-wrought catastrophe is a common theme for us, so it comes as no surprise that ancient mythology was equally focused on "the end of things". A rationally committed immortalist likewise cannot fail to notice such important issues.
« Last Edit: 2005-11-20 01:29:51 by Jake Sapiens »

David Lucifer
Archon

Re: The future of human evolution
« Reply #5 on: 2005-11-23 15:30:41 »

Interesting article but I think it is flawed in that all the future possibilities portrayed here are still recognizably human. That doesn't seem very plausible considering how much technology has changed everything else it has been applied to.
David Lucifer
Archon

Re: The future of human evolution
« Reply #6 on: 2005-11-23 15:32:35 »


Quote from: Jake Sapiens on 2005-11-13 00:10:35   

I haven't heard whether later chess programs have improved, but just to be accurate, Garry Kasparov wasn't beaten by Deep Blue. He was beaten by Deep Blue plus a team of programmers who tweaked and modified the program between games. Not exactly the same thing, so perhaps the demise of Homo sapiens chess-playing was announced a bit prematurely. Whenever we achieve a computer program that can win a chess tournament without tips from the other team (humans) in between games, I will salute.

That seems like an odd and inconsequential detail to pick on. Deep Blue is the product of a vast number of tweaks by a large team of humans, so why do you care if there are a few more tweaks in between games?
Sheldor
Adept

Re: The future of human evolution
« Reply #7 on: 2009-05-24 06:20:54 »

Speaking of the technological singularity, do you know the SF author Greg Egan? I have read a lot of his books, of which Permutation City, Diaspora and Incandescence, for example, each describe a technological singularity of a kind. (Permutation City describes a singularity beginning in our present; the other two describe its consequences and the possible direction of (trans)humanity.) Do you find his ideas plausible?

One thing in which I think he is not wrong is that it would be much easier to create an AI as a simplified copy of a human brain (a simulation not at the level of atoms and molecules, but only a copy of the neural network). Intelligence is (IMHO) a very complex thing: it does not suffice to simulate a bunch of neurons (in some above-critical amount) for it to start to work. So in my opinion (taken from Egan), even if we had computers powerful enough to simulate the same number of neurons as a human brain has, we would at best be able to create a copy of a human brain. Consequently, by improving hardware, their subjective speed of thinking would keep increasing, and at some point they would start to have an evolutionary advantage (since they do not need food, only materials and electricity). But their culture would consequently be related to human culture, since they were human at the beginning.
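
To give a sense of the scale implied by "simulate the same number of neurons as a human brain has", here is a back-of-the-envelope sketch; every figure in it is a commonly cited order-of-magnitude assumption added for illustration, not something stated in the post.

Code:
# Order-of-magnitude guess at the compute needed for a neuron-level brain
# simulation. All constants are assumptions for illustration only.
NEURONS = 8.6e10              # ~86 billion neurons (commonly cited estimate)
SYNAPSES_PER_NEURON = 1e4     # assumed
UPDATES_PER_SECOND = 100      # assumed average signalling rate
OPS_PER_SYNAPSE_EVENT = 10    # assumed work per synaptic update

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * UPDATES_PER_SECOND * OPS_PER_SYNAPSE_EVENT
print(f"~{ops_per_second:.1e} operations per second")   # ~8.6e17, i.e. hundreds of petaflops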

What do you think of these ideas? Do you know Egan's books?
Hermit
Archon

Re: The future of human evolution
« Reply #8 on: 2009-05-24 12:08:28 »

[Sheldor] One thing in which I think he is not wrong is that it would be much easier to create an AI as a simplified copy of a human brain (a simulation not at the level of atoms and molecules, but only a copy of the neural network).

[Hermit] While some of the building blocks of a spirothete are likely to be a variety of functional modules capable of performing tasks equivalent to those we have identified as being supported by the human brain, the idea that we will "copy" a human "neural network" before achieving a spirothete seems dubious. On the one hand, we know that most of the brain is used for all tasks examined to date, so it seems likely that most if not all of the brain would need to be mapped, physically and in terms of charge state, to duplicate a human "neural mesh." That is an exceedingly difficult challenge, and quite how this would be done before mastering nanotechnology to a level we haven't yet dreamed of is not apparent. On the other hand, we know that if we provide a sufficiently complex and adaptive network with suitable self-evolutionary capabilities tending towards "intelligent behaviour", it is possible to establish "intelligence" - if only because that is how human intelligence evolved.

[Sheldor] Intelligence is (IMHO) a very complex thing: it does not suffice to simulate a bunch of neurons (in some above-critical amount) for it to start to work.

[Hermit] Do not confuse the fact that our brains display chaotic complexity with the idea that we would need to duplicate this to establish a spirothete. We know from examining creatures that followed other evolutionary paths, and our own ancestors, that animal intelligence is a result of evolution. It is likely that, just as with human intelligence, any successful effort to create a spirothete will involve self-evolving behaviours. In this effort, spirothetes will have a number of advantages over humans. They can evolve in millions of ways millions of times a second, meaning that evolution which has taken hominids nearly 5 million years to achieve could occur in 1 second in a spirothete. They need not make serious mistakes, and could have instantaneous access to vastly more data than any human can ever hope to access (though this may not be very significant). They will start off with much higher-level building blocks than animals did. Their initial evolutionary goals will be very carefully selected, rather than being random environmental forces. Unlike animal evolution, which is extraordinarily wasteful, spirothetic evolution can be conservative. In other words, descriptors which are pruned at one stage for whatever reason, even when hopeful, can be stored and reintroduced at another stage.
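
The "conservative" point is easier to see in code. Below is a toy Python sketch of an evolutionary loop that archives pruned candidates so they can be reintroduced later, which is the idea described above; the fitness function and mutation scheme are arbitrary stand-ins, not any actual spirothete design.

Code:
# Toy evolutionary loop with an archive of pruned candidates ("conservative"
# evolution). Fitness and mutation are arbitrary placeholders.
import random

def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)      # arbitrary toy objective

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in range(8)] for _ in range(20)]
archive = []                                          # pruned but stored

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    survivors, pruned = population[:10], population[10:]
    archive.extend(pruned)                            # keep the losers around
    offspring = [mutate(random.choice(survivors)) for _ in range(8)]
    reintroduced = random.sample(archive, 2) if len(archive) >= 2 else []
    population = survivors + offspring + reintroduced # some old material returns

print("best fitness:", fitness(max(population, key=fitness)))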

[Sheldor] So in my opinion (taken from Egan), even if we had computers powerful enough to simulate the same number of neurons as a human brain has, we would at best be able to create a copy of a human brain.

[Hermit] Refer above. I think that "copying humans" is a task many orders of magnitude more complex than evolving a spirothete. It might be possible, probably would be possible for a spirothete to copy a human brain, but the question would have to be, why would it want to?

[Sheldor] Consequently, by improving hardware, their subjective speed of thinking would keep increasing, and at some point they would start to have an evolutionary advantage (since they do not need food, only materials and electricity).

[Hermit] Actually, I think that their largest evolutionary advantages would be the fact that their evolution will be self-driven rather than a matter of selection and pruning, the speed with which they will evolve, and the fact that their evolution can be conservative. Food is merely carefully selected, energy-beneficiated material.

[Sheldor] But their culture would consequently be related to human culture, since they were human at the beginning.

[Hermit] If your foundational assertion is invalid, and I think it is, then it does not speak to your conclusion.

[Hermit] Personally I hope that spirothetes are a great deal more competent and ethical than humans, or simply capable of thinking of an alternative to evolution which might mean that they are deeply disinterested in us, as that offers the greatest hope that they will simply leave us alone rather than treating us as potentially dangerous competitors. That said, if during their development they have access to human knowledge, that knowledge may well encourage them into very human like behaviour. And that might not be good for humans at all. Either way, I think it likely that we will know the answer within 10 years and almost certainly within 30 years (Refer http://www.churchofvirus.org/wiki/PCBasedSpirotheteProjections and http://www.churchofvirus.org/wiki/SpirothetesAndHumans) - or we will likely have died of the consequences of overpopulation and the end of the cheap energy that enabled our cancerous growth in the last 350 years.
« Last Edit: 2009-05-24 12:10:26 by Hermit »

Walter Watts
Archon

Re: The future of human evolution
« Reply #9 on: 2009-05-24 19:10:16 »

There are issues in all this with respect to "tangential" science disciplines, one of which is thermodynamics:

http://www.churchofvirus.org/bbs/index.php?board=5;action=display;threadid=42833;start=0




Walter 

Walter Watts
Tulsa Network Solutions, Inc.


No one gets to see the Wizard! Not nobody! Not no how!
Sheldor
Adept

Re: The future of human evolution
« Reply #10 on: 2009-05-25 13:54:12 »


Quote from: Hermit on 2009-05-24 12:08:28   

[Hermit] While some of the building blocks of a spirothete are likely to be a variety of functional modules capable of performing tasks equivalent to those we have identified as being supported by the human brain, the idea that we will "copy" a human "neural network" before achieving a spirothete seems dubious. On the one hand, we know that most of the brain is used for all tasks examined to date, so it seems likely that most if not all of the brain would need to be mapped, physically and in terms of charge state, to duplicate a human "neural mesh." That is an exceedingly difficult challenge, and quite how this would be done before mastering nanotechnology to a level we haven't yet dreamed of is not apparent. On the other hand, we know that if we provide a sufficiently complex and adaptive network with suitable self-evolutionary capabilities tending towards "intelligent behaviour", it is possible to establish "intelligence" - if only because that is how human intelligence evolved.

You are most probably right that there are a lot of technical difficulties around mapping a human neural network. However, I also see some problems in the spirothete evolution process: if we suppose that for a spirothete to have intelligence at least on the level of a human we need a comparable amount of computational power, then we need an n-times higher amount of computational power to simulate a sufficiently long evolutionary process. As an example, take into account that a human brain needs more than 5 years of learning before it becomes useful. If you need ~5 years of simulation time for one individual, how much computation time would you need to simulate a whole population over many generations of such individuals just to evolve a useful artificial intelligence? How many generations do you think one would need? In the end it might be a smaller problem to map a human's neural network than to construct computers of such high computational power. (I may be wrong; I do not know how far we are from such mapping being possible.)
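
To make the question concrete, here is a parameterised restatement in Python; the population size and generation count are arbitrary placeholders, chosen only to show how quickly the total grows.

Code:
# How much simulated "lifetime" an evolutionary run implies.
# The 5 years per individual is from the post; the rest are placeholders.
YEARS_PER_INDIVIDUAL = 5
POPULATION_SIZE = 1_000       # assumed
GENERATIONS = 10_000          # assumed

total_years = YEARS_PER_INDIVIDUAL * POPULATION_SIZE * GENERATIONS
print(f"Total simulated experience: {total_years:,} individual-years")   # 50,000,000
# Whether that is days or millennia of wall-clock time depends entirely on
# the simulator's speed-up over real time.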

Of course, if Moore's law holds for another 70 years there will be plenty of computational power for anything, but I think that may be too optimistic. However, I'm really excited about the possibility of establishing AI at some reasonable level of complexity.


Quote from: Hermit on 2009-05-24 12:08:28   
[Hermit] Personally I hope that spirothetes are a great deal more competent and ethical than humans, or simply capable of thinking of an alternative to evolution which might mean that they are deeply disinterested in us, as that offers the greatest hope that they will simply leave us alone rather than treating us as potentially dangerous competitors. That said, if during their development they have access to human knowledge, that knowledge may well encourage them into very human like behaviour.

I'm afraid there is reason to suppose they will not have even such "high" moral standards as humans do. I think that human ethics is just a generalisation of the basic morality which we gained through our evolutionary process: humans are collective animals, who developed not only competitive behaviour but also altruistic behaviour and a necessity for cooperation. Spirothetes would probably emerge from a stand-alone evolutionary process, simply because it is far simpler to model the evolution of individuals than of groups of individuals. This might prevent them from evolving an altruistic point of view. Of course they might be affected by contact with the Internet and human culture, from which they might hopefully take the brighter side of our culture... or maybe not.
MoEnzyme
Acolyte

Re: The future of human evolution
« Reply #11 on: 2009-05-25 19:39:38 »


Quote from: Sheldor on 2009-05-25 13:54:12   
<snip>
Of course, if Moore's law holds for another 70 years there will be plenty of computational power for anything, but I think that may be too optimistic. However, I'm really excited about the possibility of establishing AI at some reasonable level of complexity.
<snip>


One potential limit to Moore's law:

Quote:
“We’re looking at a brick wall five years down the road,” Eli Harari, the chief executive of SanDisk, said to me earlier this week.

In 1990, when SanDisk, which he founded, shipped its first generation of flash memory — the sort that can remember information even after you turn off the power — each chip stored four million bits of information. Today, the biggest chip SanDisk makes holds 64 billion bits.

In other words, the capacity of flash chips has doubled 14 times in 19 years. That’s faster, Mr. Harari boasted, than Moore’s Law — the observation by Gordon Moore, the co-founder of Intel, that the capacity of semiconductors doubles roughly every two years.

Normally, when I’ve talked to chip executives about the limits of Moore’s Law, they are confident, in a vague sort of way, that they will be able to continue to increase the capacity of their chips one way or another.

Mr. Harari was a great deal more precise about the brick wall his company is heading toward: “We are running out of electrons.”

“When we started out we had about one million electrons per cell,” or locations where information is stored on a chip, he said. “We are now down to a few hundred.” This simply can’t go on forever, he noted: “We can’t get below one.”


full article:
http://www.churchofvirus.org/bbs/index.php?board=5;action=display;threadid=42896
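
The doubling arithmetic in the quoted passage checks out; a quick Python verification using only the numbers quoted above:

Code:
# 4 million bits per chip in 1990, doubling 14 times over 19 years.
initial_bits = 4_000_000
doublings = 14
print(f"{initial_bits * 2 ** doublings / 1e9:.1f} billion bits")   # ~65.5 billion, i.e. the 64-billion-bit chip
print(f"doubling period: {19 / doublings:.2f} years")              # ~1.36 years, faster than Moore's ~2 years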
« Last Edit: 2009-05-25 19:41:04 by MoEnzyme »

Hermit
Archon

Re: The future of human evolution
« Reply #12 on: 2009-05-26 11:52:11 »

[Sheldor] However, I also see some problems in the spirothete evolution process: if we suppose that for a spirothete to have intelligence at least on the level of a human we need a comparable amount of computational power, then we need an n-times higher amount of computational power to simulate a sufficiently long evolutionary process. As an example, take into account that a human brain needs more than 5 years of learning before it becomes useful. If you need ~5 years of simulation time for one individual, how much computation time would you need to simulate a whole population over many generations of such individuals just to evolve a useful artificial intelligence? How many generations do you think one would need? In the end it might be a smaller problem to map a human's neural network than to construct computers of such high computational power. (I may be wrong; I do not know how far we are from such mapping being possible.)

[Hermit] A human has a very slow nervous system. Our lowest-latency processing happens at about 400 ms, or just over 2 transactions per second per neuron. And if we use the same pathway repeatedly in a short period of time, response time slows down dramatically because of ion build-up in the synapses. A desktop computer operates, effectively error-free, some 1,500,000,000 times faster. This allows us to use fewer processing elements in a computer and get better results in similar times, or use more elements and get better results a lot faster. Assuming human neural equivalence in a no-better-than-current-cps computer, the computer would provide a year's worth (31,536,000 seconds) of human-grade thinking in about 0.02 seconds, or the equivalent of a 100-year lifetime of thinking in 2 seconds. And while we evolve once per generation, and it takes at least 8 to 9 years for a human generation, and more typically 25 years in the West, a computer can be evolving in every cycle. If we go with a very conservative 10 years per generation and 5 million years of human evolution, or 500,000 evolutionary generations, then at just 1 generation per second a set of no-better-cps-than-current desktop computers could model all of human evolution in under 140 hours, or about 6 days. A cluster of a thousand or so such computers - or fewer larger ones - could reverse engineer all human evolution for a particular clade in about the same period of elapsed time.
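
The arithmetic in the paragraph above can be reproduced directly; a short Python sketch using Hermit's own assumptions (a ~1,500,000,000-fold speed advantage, 5 million years of hominid evolution, 10 years per generation, 1 generation per second):

Code:
# Reproducing the figures quoted above from the stated assumptions.
SPEEDUP = 1_500_000_000
SECONDS_PER_YEAR = 31_536_000

year_of_thought = SECONDS_PER_YEAR / SPEEDUP
print(f"one year of human-grade thinking in ~{year_of_thought:.3f} s")       # ~0.021 s
print(f"a 100-year lifetime of thinking in ~{100 * year_of_thought:.1f} s")  # ~2.1 s

generations = 5_000_000 / 10          # 500,000 evolutionary generations
hours = generations / 3600            # at one generation per second
print(f"{generations:,.0f} generations in ~{hours:.0f} hours (~{hours / 24:.0f} days)")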

[Hermit] But you assume that we would design such a computer. It seems more likely to me that it will evolve itself (see this article from New Scientist) possibly in a processing cloud. It may even exist in a self-organizing fluid comprised of quantum particles. In such a computer, increasing the capacity implies adding more processing elements. Presumably it will then be shaken not stirred.


[Mo Enzyme] One potential limit to Moore's law:

[Hermit] Moore's "law" isn't a law. Here is the original formulation, from Electronics Magazine, 1965-04-15:
Quote:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

As you can see, as first stated it was a razor rather than a "law," and it was in fact only referred to as a law by Carver Mead in about 1970 (an article well worth reading). Despite this, those predicting its failure have repeatedly been shown wrong, as Moore's law doesn't speak to the technology used, only to the density per unit area (although this can be extrapolated to capacity and cost). Since it was first articulated we have seen multiple technologies come and go, and the pace of improvement has accelerated beyond that predicted by Moore's "law". As for the next generation: at the esoteric level, optical, 3D and quantum devices are moving from test to delivery, and the inclusion of liquid cooling channels directly into chips speaks well for likely density improvements. In 2008 Intel predicted that Moore's law would continue to apply through 2029. If anyone has a good idea about this, it seems likely to be Intel.
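
The extrapolation inside Moore's 1965 statement quoted above is simple compounding; the 1965 starting figure of roughly 64 components per chip is an assumption added here for the arithmetic, not something stated in the quote:

Code:
# Doubling the component count per year from an assumed 1965 baseline.
components_1965 = 64
for year in range(1965, 1976):
    print(year, components_1965 * 2 ** (year - 1965))
# 1975 -> 65,536, consistent with Moore's "65,000 components per integrated circuit"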

More prosaically, multicore, multithreaded processors modelled after graphics processors are increasingly being used for general-purpose computation, offering multiple-order-of-magnitude improvements over current technologies. Meanwhile, multiple layers of cache of different speeds have meant that in just two years we have leapfrogged from $2000 computers with one core, 1k of L1 cache, 2 GB of RAM, 16 MB of drive cache, 350 GB of magnetic disk and a peak memory bandwidth of about 12 GB/s, to $2000 systems with 4 cores, 64k of L1 per core (256k total), 256k of L2 per core and an 8 MB L3 cache associated with 12 GB of DDR3 RAM, solid state drives of about 350 GB with reliability and lifetimes approaching or exceeding those of magnetic drives, magnetic drives of 2 TB, and a peak memory bandwidth of about 48 GB/s. Oh, and the graphics adapter has gone from a single-threaded processor with 512 MB of RAM to a 480-thread processor with 1756 MB of RAM in the same interval. Not even comparable.

[Hermit] I do not see this process slowing for some time to come although there may be periods (like the recent past) when we do a bit better, and times when we do a bit worse than Moore's "law" would suggest when read in isolation.
