Case ([personal profile] case) wrote in [community profile] fandomsecrets, 2026-01-31 05:38 pm

[ SECRET POST #6966 ]


⌈ Secret Post #6966 ⌋

Warning: Some secrets are NOT worksafe and may contain SPOILERS.


01.

__________________________________________________

02.

__________________________________________________

03.

__________________________________________________

04.

__________________________________________________

05.
[Practical Engineering (Youtube), Team Fortress 2]

__________________________________________________

06.
[Stranger Things]

Notes:

Secrets Left to Post: 02 pages, 34 secrets from Secret Submission Post #995.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.

CW: Murder, Suicide, CSAM.

(Anonymous) 2026-02-01 05:40 am (UTC)
I could give a few ways AI has hurt people. More than once it has led to death because it feeds into psychosis, mostly suicides, but there was also the recent case where it fed a guy's paranoia and he killed his mother and then himself. You could argue he was already unwell, but AI clearly made him worse faster, and there doesn't seem to be any way to reliably safeguard against this.

Grok has been used to edit pictures of random people online, including minors, to make them sexually explicit. They say they fixed it this time, but I doubt it's only happening there, and they said the same last time. So again, unreliable safeguards.

Then there's the spread of misinformation, especially as AI videos get too good.

I don't completely disagree with you or anything. If all they're doing is RPing or something, then even if I tend to see it as trouble, I won't crucify anyone for it, especially because I did it too, early in its life and before that with AIDungeon. I really wish AI were still just a toy like Google Brain used to be, but it's not.