Case ([personal profile] case) wrote in [community profile] fandomsecrets, 2016-01-11 06:39 pm

[ SECRET POST #3295 ]



Warning: Some secrets are NOT worksafe and may contain SPOILERS.

01. [image]

02. [image]

03. [image]

04. [image]

05. [image]

06. [image]

07. [image]

08. [image]

09. [image]

10. [image]

11. [image]

12. [image]

13. [image]

Notes:

Secrets Left to Post: 03 pages, 058 secrets from Secret Submission Post #471.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.

(Anonymous) 2016-01-12 06:00 pm (UTC)(link)
What's Roko's Basilisk?

(Anonymous) 2016-01-12 08:36 pm (UTC)(link)
It's a weird-ass argument about artificial intelligence that caused some controversy in the community where it was originally posted.

So, first, you start off with the axiomatic assumption that a perfect simulation of a person's consciousness is internally indistinguishable from that person's consciousness - that is to say, if someone ran a perfect simulation of my consciousness on a computer, that simulated consciousness would feel exactly as real as I do right now.

Okay. So. Say that it's possible for a super-powerful, superhuman AI with a moral code aligned with utilitarian human values to come into existence. Because the AI would be super-powerful, it would be able to immensely improve the lives of many humans (this is also an axiomatic assumption), which would be a huge utilitarian good.

So there would be a strong utilitarian argument for any action that helps bring said AI into existence faster. One specific tactic the AI could use to bring itself into existence faster would be to punish anyone who does not help. Specifically, the AI would create perfect simulations of the consciousness of anyone who was aware of the possibility of such an AI coming into existence but did not help, and would torture those simulations forever. Therefore, you have to help create the AI.

The reason it's called a basilisk is that it only works if you're aware of it. If you don't know about the possibility of being tortured, the threat can't motivate you to work harder, so the AI has no moral or rational reason to torture you. On the other hand, if you do know about it, the AI has a strong incentive to follow through. (The "Roko" part of the name is just the guy who came up with it.)
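The "only works if you're aware of it" step can be made concrete with a toy expected-value sketch. Everything here (the numbers, the function, the names) is purely illustrative and made up for this comment; it's just the incentive logic in code form:

```python
# Toy model of the basilisk's incentive logic. All values are
# arbitrary illustrations, not part of the original argument.

def torture_is_rational(person_was_aware: bool) -> bool:
    """The AI tortures a simulation only if the threat of torture could
    have motivated the original person - i.e. only if they knew about it."""
    # The threat only has deterrence value against people who heard of it.
    deterrence_value = 10 if person_was_aware else 0
    # Simulating and tormenting a consciousness is a utilitarian cost.
    torture_cost = 1
    return deterrence_value > torture_cost

print(torture_is_rational(person_was_aware=True))   # the threat had leverage
print(torture_is_rational(person_was_aware=False))  # no incentive effect, so no torture
```

So by this toy logic, learning about the argument is exactly what puts you inside the set of people the AI would bother threatening.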

So, summed up: unless you work to help create the future AI, that AI will simulate your consciousness and torture it for all eternity. This genuinely terrified a few people on the forums, then Yudkowsky freaked out about them freaking out, and the whole thing attained a kind of Internet-urban-myth status.