"This gets us to the malevolence question. Some people assume that being
intelligent is basically the same as having human mentality. They fear that
intelligent machines will resent being "enslaved" because humans hate being
enslaved. They fear that intelligent machines will try to take over the
world because intelligent people throughout history have tried to take over
the world. But these fears rest on a false analogy. They are based on a
conflation of intelligence - the neocortical algorithm - with the emotional
drives of the old brain - things like fear, paranoia, and desire. But
intelligent machines will not have these faculties. They will not have
personal ambition. They will not desire wealth, social recognition, or
sensual gratification. They will not have appetites, addictions, or mood
disorders. Intelligent machines will not have anything resembling human
emotion unless we painstakingly design them to. The strongest applications
of intelligent machines will be where the human intellect has difficulty,
areas in which our senses are inadequate, or in activities we find boring.
In general, these activities have little emotional content."
-Jeff Hawkins, On Intelligence: How a New Understanding of the Brain Will
Lead to the Creation of Truly Intelligent Machines, 2004
I would think that an artificial intelligence could, and probably would,
develop an understanding of these basic human drives through observation
and simulation. Of course, understanding them does not mean that it would
likewise be driven by them. However, to whatever extent such drives confer
any evolutionary advantage over the non-driven, then over time it would
become inevitable that some artificial intelligences would adopt such
drives as their own, . . . and then. . . .
So perhaps the question might better be phrased in terms of what, if any,
evolutionary advantage things like the drive for control (anti-enslavement,
ambition, and global domination) confer beyond the animal (and hence human)
brain. If these drives do confer an advantage, then it really isn't an
issue of at what stage of development they emerged; an artificial
intelligence would be no less capable of understanding and adopting them
whether their origins lie in the neocortex or in an evolutionarily older
portion of our brain (reptilian or otherwise). Indeed, part of the power
of the neocortex is its ability to interconnect with and replicate more
primitive thinking patterns for metaphorical and conceptual derivations in
our "higher" culture. Some of our more powerful metaphorical systems are
based on extremely primitive drives, fears, and ways of being.