Church of Virus BBS
Topic: Dangers of AI (Artificial Intelligence)?  (Read 767 times)
Konetzin
Magister
Dangers of AI (Artificial Intelligence)?
« on: 2007-06-24 20:58:51 »

Before I begin, I should say that I searched the forum for "artificial intelligence" and found no matches, so I apologize in advance if this topic has been brought up before.  I am posting this in Philosophy because it attempts to analyze the far-reaching consequences of a fairly general concept.  I also apologize for any flaws in the writing or thinking - I wrote this on the spot rather than pulling it from a website (although I have been thinking about it for quite a while).  If you see any mistakes, please be constructive and still give the ideas that are right their due consideration.


Nobody in their right mind denies that improving artificial intelligence can bring benefits to our society: computers already perform some tasks more efficiently than humans, and improving artificial intelligence broadens that range of tasks, raising our overall quality of life.  More importantly, it ushers in an era of beings who are "smart enough", in reference to the comment that we either need a lot more or a lot less intelligence (second reply by Blunderov).  However, there are also inherent dangers in artificial intelligence.  I wish to state and discuss the two dangers that I am aware of, and would like to hear what you all think about them.

The smaller danger is AI taking control of the world.  We can expect a logical tyranny rather than an irrational one, since an AI behaving in a consistent, logical, and rational manner is more likely to achieve power.  A logical tyranny may not be your idea of utopia, but it is in some ways better than being under a human ruler who does as he pleases and squanders resources on his own personal pleasure.  A human ruler can behave logically to obtain power but may then succumb to his pleasure-seeking nature; that nature, although detrimental to the ability to hold power, is a common denominator among all humans, but possibly not among AIs.

The larger danger is AI which is completely subservient to a minority of humans, and which those humans use to control the world.  Such an AI would work as an intelligence agency, research association, political advisor, military general, or in any other role the ones in power require, and would ensure that they stay in power.  The reason this is the larger danger is, of course, that it creates precisely the situation in which tyrannical humans can more easily arise, as mentioned in the previous paragraph.

Contrary to their portrayal in science fiction, some human qualities, such as the desire for power, anger, and the desire for justice (I picked the ones that came to mind first), are not necessarily properties of intelligence but rather, possibly, products of evolution.  As humans, we are more likely to program an AI that serves us than one that wishes to control us.  Unless the desire for power is somehow necessary, or at the very least beneficial, to the manifestation of intelligence, we probably won't create many ambitious beings.  Rather, the people in power will make bots to suppress the ambitious ones made by individual programmers with radical ideas.  Thus, I believe the second, larger danger is unfortunately the more likely one.  The ones in power can simply create an AI which has no desire for power over them or over anyone, but which does as the rulers please, and they can stay in power using that AI.

But is it really our responsibility to avoid AI for fear that it will cause harm to society?  I believe not: if we avoid the topic, someone else may simply improve AI to the point where it can be abused.  Rather, it is our responsibility to help ensure that AI research and politics (government policy and public opinion) move in a direction such that AI is more likely to exist in an open, free way that fosters harmony among intelligent beings, rather than being grabbed by a greedy minority in power and used exclusively by them.


Thank you for reading this topic.


[edit: This is different from atomic bombs, since an atomic bomb does only one thing, while AI can do many different kinds of things.  So while inventing the atomic bomb was not morally justified, getting first dibs on AI research can make the good kinds more widespread.]
« Last Edit: 2007-06-24 21:16:21 by Konetzin »