Church of Virus BBS
Topic: Existential risks
David Lucifer
Existential risks
« on: 2005-06-23 14:07:13 »

Here's a succinct argument for how and why developing Friendly AI is going to save the world. Do you agree with the Singularitarian argument?

> I don't understand why the development of molecular
> nanotechnology will mean the inevitable destruction of
> all things everywhere (on earth, at least), or why the
> development of smarter-than-human intelligence will
> somehow avoid this disaster.
>
> Could someone explain this to me? Be gentle, I'm not a
> full-fledged Singularitarian yet (still slowly climbing
> the shock ladder).

Because by far the simplest and most commercially attractive application of
molecular nanotechnology is computers so ridiculously powerful that not even
AI researchers could fail to create AI upon them.  Brute-forced AI is not
likely to be Friendly AI.  Hence the end of the world.

Grey goo or even military nanotechnology is probably just a distraction from
this much simpler, commercially attractive, and technologically available
extinction scenario.

Developing AI first won't necessarily avoid exactly the same catastrophe.
Developing Friendly AI first presumably would.
Blunderov
RE: virus: Existential risks
« Reply #1 on: 2005-06-23 15:10:50 »

[Blunderov] 'Military nanotechnology'. Why does that phrase make my tail
go all bushy?

Best Regards.


rhinoceros
Re: Existential risks
« Reply #2 on: 2005-06-24 22:52:45 »

[Lucifer] Here's a succinct argument for how and why developing Friendly AI is going to save the world. Do you agree with the Singularitarian argument?



[rhinoceros] I am sceptical. Friendliness, enmity, and indifference are traits whose interplay is part of intelligence itself.

There are also technicalities which I don't understand. I haven't seen anything resembling general intelligence worth talking about in current AI research, but I understand that molecular nanotechnology is what is supposed to make it possible.

But then, what is the creator of a "Friendly AI" algorithm supposed to do, standing there holding a storage unit with the Friendliness program in it? Should he unleash a self-replicating nanoswarm first, equipped with sensors, actuators, and knowledge of all kinds of computer systems, to carry the Friendliness program to all official, commercial, or rogue research centers? Hmm... why not? Actually, I can see that more than one Friendliness researcher will want to give it a try, each using *the right* Friendliness algorithm, which will make things even more interesting...

Sorry for scaring your children ;-)


By the way, what I described in my scenario is called Blue Goo:
http://en.wikipedia.org/wiki/Grey_goo

Other varieties
-------------------
Grey goo has several whimsical cousins, differentiated by their colors and raisons d'être. Most of these are not as commonly referred to as grey goo, however, and the definitions are informal:

* Golden Goo is the backfiring of a get-rich-quick scheme to assemble gold or other economically valuable substance.

* Black Goo (or Red Goo) is goo unleashed intentionally by terrorists, a doomsday weapon, or a private individual who wishes to commit suicide with a bang.

* Khaki Goo is goo intended by the military to wipe out somebody else's continent, planet, etc.

* Blue Goo is goo deliberately released in order to stop some other type of grey goo. It might well be the only solution to such a disaster, and would hopefully be better controlled than the original goo.

* Pink Goo is mankind. It replicates relatively slowly, but some people think it will nevertheless fill any amount of space given enough time. In the pink goo worldview the spread of humanity is a catastrophe and space exploration opens up the possibility of the entire galaxy or the universe getting filled up with Pink Goo - the ultimate crime, something to be stopped at any cost.

* Green Goo is goo deliberately released, for example by ecoterrorists, in order to stop the spread of Pink Goo, either by sterilization or simply by digesting the pink goo. Some form of this, along with an antidote available to the selected few, has been suggested as a strategy for achieving zero population growth. The term originates from the science fiction classic, Soylent Green.

simul
RE: virus: Existential risks
« Reply #3 on: 2005-07-14 06:40:20 »

This statement: "Brute-forced AI is not likely to be Friendly AI" points to
a misunderstanding of what intelligence is. 

Suppose there is a problem for which there seems to be a solution that
involves violence.  (Like, someone with a gun is running around shooting
people and you choose to kill the murderer).  For any such problem, there is
also a creative, nonviolent solution. 

The violent solution (kill him) requires less intelligence than the creative
solution. Violent solutions are "one solution".  Creative solutions are a
broad range.  They can range from capturing and curing the ailing mind of
the perpetrator, to simply convincing him to stop, to developing a "personal
shield" that renders his gun harmless, to leaving the area and establishing
a new home far away from crazy people with guns.  The number of creative
solutions is endless.  The violent solution is always the same ... kill him.

No matter how difficult it is to kill someone or some entity, it requires
less intelligence and creativity than other solutions which do not involve
killing.  This is true for germs, people, food, etc.

High levels of intelligence and creativity were developed as humans
organized into larger and larger nonviolent societies.

My premise is that intelligence is *equivalent* to nonviolence and has
evolved out of higher levels of nonviolence.  Animals are intelligent to the
extent that they communicate and cooperate.  Humans are intelligent
*because* they communicate and cooperate.

Violent solutions to problems are, de facto, noncreative solutions.  Any
highly-intelligent AI is way better off co-opting us and putting us to work
building and creating - working as its arms and hands.  A stupid AI would
try to kill us, and waste time and resources and possibly put its own
survival at risk.

Of course, if it is discovered that an AI was secretly "in charge" of a lot
of things, humans would inevitably try to paint it in a negative light and
go to war with it and its agencies.

"Calling all brainwashed Christians and Muslims - Technology is evil and
those who build it are Satanists... go kill them."

Frighteningly this is a foreseeable future.

We really need to upgrade humanity's dominant philosophy set soon.

- Erik


roachgod69@hotm...
RE: virus: Existential risks
« Reply #4 on: 2005-07-14 16:03:17 »

[[ author reputation (0.00) beneath threshold (3): message not displayed ]]
simul
Re: virus: Existential risks
« Reply #5 on: 2005-07-15 11:57:54 »

Z Moser wrote:

> So, what you're saying is "kill em all" so that we can free up our
> intelligence for more worthy concerns?

Hmm.  Basically, I'm saying "go ahead and develop as much AI as you
want".  Because if it's really intelligent, then it will be friendly.

Interesting aside that came out of a conversation about this issue:

It's generally accepted that in a hierarchical society, people tend to
rise to their level of incompetence.  This is the nature of having a
hierarchy and having people who try to rise in it.

But, by embracing technology, we increase the level of incompetence to
which each individual can rise.

A collapse of a highly technology-enabled individual has greater impact
on his society than the collapse of one who is not so enabled.

In other words, technology creates higher co-dependence and requires
higher levels of cooperation and altruism.

- Erik
David Lucifer
Re: virus: Existential risks
« Reply #6 on: 2005-07-15 13:14:05 »


Quote from: simul on 2005-07-15 11:57:54   

Hmm.  Basically, I'm saying "go ahead and develop as much AI as you
want".  Because if it's really intelligent, then it will be friendly.

I concur. One thing I never understood about the Singularitarians is why they spend so much time trying to figure out how to build an AI that won't transform the entire solar system into paper clips. What kind of entity would do that? Certainly not an intelligent one.

Another argument is that if something super-intelligent does something we think is unfriendly, then it must have a good reason for doing so (pretty much by definition), and we should probably defer to its greater wisdom. In other words, if we know enough to tell that it is making a mistake, then we must be more intelligent than it. Agree or disagree?

D
rhinoceros
Re: virus: Existential risks
« Reply #7 on: 2005-07-15 14:24:04 »

Erik Aronesty wrote:
> Hmm.  Basically, I'm saying "go ahead and develop as much AI as you
> want".  Because if it's really intelligent, then it will be friendly.

[rhinoceros]
I think there's a language subtlety. We can define intelligence as:

(a) Teleology: Intelligence is whatever it takes to make right decisions
using the available information (I'll leave the big question of what is
a 'right decision' and for whom is it right for another discussion).
This definition is more intuitive for a philosopher than for an
engineer, for the following reason:

(b) Engineering: Intelligence is a property of a specific well-designed
and well-balanced system of goals, heuristics, perceptors (how far
should the AI see?), and actuators (how intrusively will it cause
feedback for learning purposes?). This system, according to the
engineer's available information and best judgement, will grasp what is
essential for making right decisions, hopefully according to (a). Then
the machine could possibly take over some of these tasks and optimize
itself further.

The interesting problem is that the engineers who will go on and
'develop as much AI as they can' will practically use definition (b) --
not (a). Although they aspire to do (a), we can't say *by definition*
that the AI will be 'intelligent hence friendly'.

Also, the engineers who will succeed first will probably be those who
have been given the most resources. And big resources are not given freely.
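
To make definition (b) concrete, here is a minimal sketch of such an engineered system, with the goal, perceptor, heuristic, and actuator as explicit components. Everything in it (the grid world, the hardwired goal, the greedy heuristic) is an invented toy, not a description of any real AI project.

# A toy rendering of definition (b): intelligence as an engineered system of
# goals, heuristics, perceptors, and actuators. All names and values here are
# illustrative assumptions.

class EngineeredAgent:
    def __init__(self, goal_position):
        self.goal = goal_position          # goal hardwired by the engineer
        self.position = (0, 0)
        self.memory = []                   # feedback gathered for learning

    def perceive(self):
        """Perceptor: the agent only 'sees' its own position and the goal."""
        return {"position": self.position, "goal": self.goal}

    def decide(self, percept):
        """Heuristic chosen by the engineer: greedily reduce distance to the goal."""
        px, py = percept["position"]
        gx, gy = percept["goal"]
        return ((gx > px) - (gx < px), (gy > py) - (gy < py))

    def act(self, action):
        """Actuator: apply the move and record feedback."""
        dx, dy = action
        self.position = (self.position[0] + dx, self.position[1] + dy)
        self.memory.append((action, self.position))

    def run(self, steps=20):
        for _ in range(steps):
            if self.position == self.goal:
                break
            self.act(self.decide(self.perceive()))
        return self.position

agent = EngineeredAgent(goal_position=(3, 4))
print(agent.run())   # reaches (3, 4); 'right' only relative to the built-in goal

The agent's decisions are "right" only relative to the goal and heuristic its engineer chose, which is exactly the gap between definition (b) and definition (a).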

rhinoceros
Re: virus: Existential risks
« Reply #8 on: 2005-07-15 14:45:32 »

David Lucifer wrote:
> Another argument is that if something super-intelligent does something we think is unfriendly, then it must have a good reason for doing so (pretty much by definition), and we should probably defer to its greater wisdom. In other words, if we know enough to tell that it is making a mistake, then we must be more intelligent than it. Agree or disagree?

[rhinoceros]
Disagree. A superior intelligence may have different goals from mine. I
can acknowledge its superior problem-solving ability when I see it, but I
don't know what problem it is trying to solve to my detriment. Nobody
has to submit. There are non-zero-sum games and there are also
zero-sum games.



Blunderov
RE: virus: Existential risks
« Reply #9 on: 2005-07-15 16:44:03 »

[Blunderov] I also have my reservations. Does 'super intelligent' mean
absolutely without the possibility of error? Unless this is true then
the possibility exists that any unfriendly action by a super intelligent
entity may not be for a good reason at all.

Also, in order to be perfectly intelligent, such an entity would have to
have perfect access to ALL the information pertaining to a problem;
otherwise it would have to make do with imperfect information and thus
be at risk of error.

In real life highly intelligent people make mistakes all the time. And
perfectly 'ordinary' people quite often notice them. Every now and again
a patzer beats a grandmaster for instance. But this does not necessarily
mean that the patzer is as strong as the GM. (Of course good chess
players are not necessarily very 'intelligent'; but they do have to be
very good at solving chess problems.)

Best regards.

simul
Re: virus: Existential risks
« Reply #10 on: 2005-07-15 17:00:35 »

rhinoceros wrote:
> [rhinoceros]
> Disagree. A superior intelligence may have different goals from mine. I
> can acknowledge its superior problem-solving ability when I see it, but I
> don't know what problem it is trying to solve to my detriment. Nobody
> has to submit. There are non-zero-sum games and there are also
> zero-sum games.

The most complex AIs in the finance industry *are* self-aware, in that
their existence and actions influence the market data that are their input.
Most of them have clusters of networks specifically dedicated to
predictions based on actions that result from predictions, ad nauseam.
This could be considered their "consciousness".  (It probably isn't much
different from ours.)

And, so, are they out to serve themselves?  Of course they are ... they
are out to maximize profits and thereby justify their own existence and
the accumulation of resources that will result in their improvement and
expansion.  But the fact that they are interested in their own survival
is a byproduct of their engineering.

People are the same way.  We exist to exist.  Our goals and desires are
all byproducts of our original evolutionary programming.  Do we go crazy
and start killing each other?  Occasionally, yes.  But even including
devastating wars, the percentage of people dying at the hands of each
other versus dying in other ways has *gone down* as technology has
improved.  I can't see why this trend should suddenly shift.
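
A toy sketch of the feedback loop described above: a predictor whose own orders move the price series it later has to predict from. The linear market impact, the constants, and the naive extrapolation rule are all invented for illustration.

# Toy illustration of a predictor whose actions feed back into its own input
# data. The "market" model and every constant are invented assumptions.

import random

class FeedbackTrader:
    def __init__(self):
        self.history = [100.0]            # observed prices: the model's input

    def predict(self):
        """Naive heuristic: extrapolate the last price change."""
        if len(self.history) < 2:
            return self.history[-1]
        return self.history[-1] + (self.history[-1] - self.history[-2])

    def act(self, prediction):
        """Buy if a rise is expected, sell otherwise; returns signed order size."""
        return 1.0 if prediction > self.history[-1] else -1.0

def market_step(price, order, noise=0.5):
    """The order itself moves the price (impact), plus random noise."""
    return price + 0.2 * order + random.uniform(-noise, noise)

trader = FeedbackTrader()
for _ in range(10):
    p = trader.predict()              # a prediction based on past data...
    order = trader.act(p)             # ...drives an action...
    new_price = market_step(trader.history[-1], order)
    trader.history.append(new_price)  # ...which alters the data it predicts from next

print([round(x, 2) for x in trader.history])

Each action alters the data the next prediction is based on, which is why such systems end up carrying an extra layer of predictions about the effects of their own predictions.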
rhinoceros
Re: virus: Existential risks
« Reply #11 on: 2005-07-15 18:32:11 »

[simul] The most complex AIs in the finance industry *are* self-aware, in that their existence and actions influence the market data that are their input. Most of them have clusters of networks specifically dedicated to predictions based on actions that result from predictions, ad nauseam. This could be considered their "consciousness".  (It probably isn't much different from ours.)

And, so, are they out to serve themselves?  Of course they are ... they are out to maximize profits and thereby justify their own existence and the accumulation of resources that will result in their improvement and expansion.  But the fact that they are interested in their own survival is a byproduct of their engineering.


[rhinoceros] The financial AI lives in a simple world. It has a hardwired goal supplied by the programmers, which it cannot change: 'Maximize profits'. The problems usually discussed in relation to the "super AI" often imply an intelligence which will be able to set its own goals.

Even in such a simple world, although 'Maximize profits' sounds neutral, it involves squashing the competition. Can I assume that I won't be at the wrong end of the stick?


[simul] People are the same way.  We exist to exist.  Our goals and desires are all byproducts of our original evolutionary programming.  Do we go crazy and start killing each other?  Occasionally, yes.  But even including devastating wars, the percentage of people dying at the hands of each other versus dying in other ways has *gone down* as technology has improved.  I can't see why this trend should suddenly shift.


[rhinoceros] Let's assume that the percentage of people dying at the hands of each other versus dying in other ways has really gone down as technology has improved. I won't debate it at this point. Let's assume it is true.

You mention that human goals and desires are byproducts of evolutionary programming, which basically means "whatever it takes to have survived as a species." It is actually very complex programming, which produced social clustering, societies perishing and rising according to the circumstances, and often behaviors detrimental to the individual person. The point I want to make here is that under the hood of our rationality there is a whole web of specific traits which drive our behavior.

Then you ask why this trend should suddenly shift: Because the super AI won't necessarily have a similar kind of programming. How would it obtain it? Automatically? And would that programming be good for me or for the groups to which I belong or to humans in general? What would make it so?


« Last Edit: 2005-07-15 18:35:18 by rhinoceros »
simul
Re: virus: Existential risks
« Reply #12 on: 2005-07-15 21:40:40 »

We too live in a simple world.  "Survival" is our only hard-wired goal. 

Individually we have a myriad of complex and intersecting subgoals that arose out of this original programming.

But, from the perspective of someone who lives outside our world, survival may rightly seem to be our *only* goal.  All our subgoals look like implementation details. 

Maximizing profits is nothing more than "survival" rephrased.  And although it may be simple, our finance AI friend, if it were listening in, might be saying to itself: but what about "minimizing risk exposure to oil prices" and "diversifying across sectors"?  These are more complex goals, but they were derived from the original.

As the complexity of a system increases, it becomes difficult to understand how it was derived from the primary goal.  "Why does my finance AI like buying penny stocks in Russia?"

Rest assured, even the most esoteric goals somehow serve the original ... or they die trying.

A "super AI" would attain this programming precisely for the reason we did.  It's program is, too, implicity - survive.

So, our super AI and us .... we've got that in common ... the same hard-wired original goals.

As does all life.

And just as we have learned that a thriving ecosystem is important to our survival, a super AI will realize that its survival intimately depends on its relationship with us.

Sure, we sometimes try to kill off viruses and bacteria, but we are still cognizant of the fact that our life depends on their existence.  We would never wipe all of them out.  Bacteria play a crucial role in maintaining an environment suitable for life.

And so, too, humanity will *probably* continue to play a crucial role in maintaining an environment suitable for sustaining the complex AIs that emerge.
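
One way to picture "subgoals derived from a primary goal" is as a simple goal tree whose root is the only hard-wired goal. The tree below just restates the examples given in this post (profits, oil-price risk, diversification, Russian penny stocks); it is an illustration, not a model of any actual system.

# A toy goal tree: every subgoal exists because it serves its parent, and the
# root is the only hard-wired goal. The entries are the examples from the post.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    subgoals: list = field(default_factory=list)

    def add(self, name):
        child = Goal(name)
        self.subgoals.append(child)
        return child

    def lineage(self, target, path=None):
        """Trace how an esoteric subgoal derives from the primary goal."""
        path = (path or []) + [self.name]
        if self.name == target:
            return path
        for g in self.subgoals:
            found = g.lineage(target, path)
            if found:
                return found
        return None

root = Goal("survive")                       # the only hard-wired goal
profit = root.add("maximize profits")        # "survival" rephrased for a firm or AI
profit.add("minimize risk exposure to oil prices")
diversify = profit.add("diversify across sectors")
diversify.add("buy penny stocks in Russia")  # esoteric, but still derived

print(" -> ".join(root.lineage("buy penny stocks in Russia")))
# survive -> maximize profits -> diversify across sectors -> buy penny stocks in Russia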
hkhenson@rogers...
Re: virus: Existential risks
« Reply #13 on: 2005-07-17 07:15:50 »

At 01:40 AM 16/07/05 +0000, "Erik Aronesty" wrote:
>We too live in a simple world.  "Survival" is our only hard-wired goal.

That's not actually the case.  To the extent we have hard-wired goals,
"Reproduce" rates higher.  Now you have to survive to reproduce, but when
the choice comes down to saving the lives of relatives versus your own,
Hamilton's calculus comes into play.  According to Hamilton, our genes
program us to value our own lives no more than the lives of two brothers,
four half-siblings, eight cousins, etc.
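
For reference, Hamilton's rule says a self-sacrificing act is favoured by selection when r*B > C, where r is the coefficient of relatedness, B the benefit to the relatives, and C the cost to the actor. A short sketch of the arithmetic behind "two brothers, four half-siblings, eight cousins" follows; the relatedness coefficients are standard, while the life-for-lives trade-off is a deliberate simplification.

# Hamilton's rule: self-sacrifice is favoured when r*B > C. The relatedness
# values are the standard coefficients; the "trade your life for N relatives"
# scenario is only an illustration of the arithmetic.

RELATEDNESS = {
    "full sibling": 0.5,
    "half sibling": 0.25,
    "first cousin": 0.125,
}

def sacrifice_favoured(relation, n_saved, cost=1.0):
    """True if saving n_saved relatives at the given cost satisfies r*B > C."""
    return RELATEDNESS[relation] * n_saved > cost

for relation, n in [("full sibling", 2), ("half sibling", 4), ("first cousin", 8)]:
    r_times_b = RELATEDNESS[relation] * n
    print(f"{n} x {relation}: r*B = {r_times_b}, C = 1.0,",
          "favoured" if r_times_b > 1.0 else "break-even")

# Each quoted case is exactly break-even (r*B == C), which is why the genes
# "value our own lives no more than" two brothers, four half-siblings, or
# eight cousins; only saving more than that tips the inequality.
print(sacrifice_favoured("first cousin", 9))   # True: 0.125 * 9 = 1.125 > 1.0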

If you don't understand this, you should.  It is a major human
psychological trait from the stone age that lies behind 9/11 and the recent
bombings in London.

Keith Henson

simul
Re: virus: Existential risks
« Reply #14 on: 2005-07-17 08:52:07 »

> "Reproduce" rates higher.  Now you
> have to survive to reproduce, but
> when

Reproduction is, clearly, a subgoal of survival.  We have to reproduce to survive as a species.

But survival goes beyond the species.  Survival is *life's* directive.  Not just humanity's.

And that's the deepest level of programming.

Life goes on.