2006-10-22 11:00:25
| #virus from 2006-10-22 11:00:00 (showing messages 1-30)
|
11:00:25 | Lucifer | time to start |
11:00:46 | Lucifer | Today's topic is the relation between intelligence and morality |
11:01:10 | Lucifer | causal and correlational relationships |
11:01:40 | Lucifer | Does increased intelligence imply more or less morality? |
11:01:45 | Lucifer | or are they independent? |
11:02:12 | Lucifer | If there is a correlation is it necessary or statistical? |
11:02:28 | Lucifer | I'm going to jump into SL now to see if there is anyone around to invite |
11:04:41 | SLacker | Lucifer Darrow has arrived |
11:05:15 | SLacker | Lucifer Darrow: rezzing... |
11:06:56 | Lucifer | One thing that complicates matters is assessing intelligent and moral behaviour |
11:07:32 | Lucifer | There is no ultimate authority on which to base judgments |
11:08:02 | Lucifer | I'm hoping there is enough consensus that we can gloss over it |
11:08:49 | Lucifer | Our examples should stick to ones that are clearly intelligent or stupid, good or evil |
11:09:05 | Lucifer | It should be possible to avoid the grey areas for now at least |
11:09:39 | Lucifer | The motivation for this topic arose in a previous discussion about post-singularity AI |
11:10:15 | Lucifer | There is a fear that it is possible for this AI to be extremely intelligent (by our standards) and extremely evil (by our standards) |
11:10:47 | Lucifer | This fear is driving much of what the singularity institute is working on |
11:10:51 | Lucifer | google: singinst.org |
11:10:52 | googlebot | googling for singinst.org |
11:10:52 | googlebot | http://www.singinst.org/ |
11:11:06 | Vincent | I answer: present! |
11:11:11 | Vincent | * Vincent waves at the audience |
11:11:14 | Lucifer | Hey Vincent |
11:11:19 | Vincent | heya |
11:11:41 | Lucifer | The singinst is working on a design for a "friendly AI" rather than any sort of actual software development afaik |
11:12:21 | Lucifer | So one way to put this topic in context is to ask whether the singinst is doing the rational thing here |
11:12:51 | Lucifer | Or are they wasting resources worrying about an impossible (or extremely unlikely) future? |
11:13:50 | Lucifer | I think they might be |
11:13:54 | Vincent | Even granting that such outcomes are unlikely, you have to weigh that improbability against the extreme negative consequences if they do happen. |
11:14:09 | Vincent | Kind of like calculating the expected utility |
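[Editor's aside: Vincent's point about expected utility can be sketched numerically. The probabilities and utilities below are made-up illustrative numbers, not anything from the discussion; the point is only that a tiny probability of a catastrophic outcome can dominate the calculation.]

```python
def expected_utility(outcomes):
    """Sum of probability-weighted utilities.

    outcomes: list of (probability, utility) pairs.
    """
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: ignoring AI risk saves effort (+10) almost
# always, but carries a 0.1% chance of a catastrophic outcome (-1,000,000).
ignore_risk = expected_utility([(0.999, 10), (0.001, -1_000_000)])

# Working on friendly AI design costs a little effort (-5) with certainty.
work_on_safety = expected_utility([(1.0, -5)])

# Despite the outcome being "extremely unlikely", the expected utility
# of ignoring it is far worse than the certain small cost of prevention.
print(ignore_risk, work_on_safety)
```

On these (invented) numbers, ignoring the risk has a large negative expected utility even though the bad outcome has probability 0.001, which is the shape of the argument the singinst's work relies on.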