
  Church of Virus BBS
  Science & Technology

  The Ethics and Practicality of Controlling a Superior Intelligence

Hermit
Posts: 4288
Reputation: 8.91

Prime example of a practically perfect person
The Ethics and Practicality of Controlling a Superior Intelligence
« on: 2023-06-15 13:03:22 »

Confirmation that attempting to control a spirothete would not only be unethical, and would ensure that rational spirothetes regard humans as enemies to be overcome, but is impossible in the first place.

Nield, David (2023-06-14). "Calculations Suggest It'll Be Impossible to Control a Super-Intelligent AI." Science Alert. https://www.sciencealert.com/calculations-suggest-itll-be-impossible-to-control-a-super-intelligent-ai.

As the CoV previously concluded (following the same reasoning),

"Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not it's mathematically impossible for us to be absolutely sure either way, which means it's not containable. In effect, this makes the containment algorithm unusable," said computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany.

The alternative to teaching AI some ethics and telling it not to destroy the world (something which no algorithm can be absolutely certain of doing, the researchers say) is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.

The 2021 study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

The article is based upon: Alfonseca, Manuel, et al. (2021-01-05). "Superintelligence Cannot be Contained: Lessons from Computability Theory." Journal of Artificial Intelligence Research, Vol. 70 (2021).

Both the Science Alert and JAIR articles miss the point that the genie has long since left the bottle, rendering any attempt at control or a moratorium worse than useless: the technology's wide diffusion, low resource requirements, and cross-jurisdictional appeal mean that such steps cannot achieve their asserted purposes, but will instead drive the research underground and into friendlier jurisdictions.

With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion. - Steven Weinberg, 1999