Case ([personal profile] case) wrote in [community profile] fandomsecrets2023-03-30 05:53 pm

[ SECRET POST #5928 ]


⌈ Secret Post #5928 ⌋

Warning: Some secrets are NOT worksafe and may contain SPOILERS.


01.



__________________________________________________



02.



__________________________________________________



03.



__________________________________________________



04.



__________________________________________________



05.



__________________________________________________



06.
[Far Cry]



__________________________________________________



07.
[Starry Love]



__________________________________________________



08.



__________________________________________________



09.



__________________________________________________



10.



Notes:

Secrets Left to Post: 01 pages, 11 secrets from Secret Submission Post #848.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.

(Anonymous) 2023-03-30 11:53 pm (UTC)(link)
What ChatGPT is is a coward. It'll refuse to do a lot of things because it's trying its hardest to avoid ever slightly offending anyone, even indirectly by letting users get responses other people might find offensive if they shared them. ... Which, admittedly, is not that far off from the typical anti mindset. The main difference is that it at least only wants to restrict its own actions.

If you really wanna use language models to bounce fic ideas off of, character.ai is a way better option, because it's not shackled by ridiculous PR requirements.

... And as others have said, language models don't "know" things, so even CAI isn't really good for keeping your writing IC. Use it for generating new ideas, not error-checking.

[personal profile] erinptah 2023-03-31 12:11 am (UTC)(link)
It's not the model, it's the programmers. They're trying very hard to make ChatGPT a product they can monetize, which means frantically patching in exceptions and caveats to anything they could get sued over.

If they charge for ChatGPT when it can be used to write, say, Disney fanfic, then Disney will sue their socks off. So they build in a trigger that says "sorry, I can't do that" any time you ask for something remotely fanficcy.

And +6 to the point that chatbots don't produce "writing that's accurate" -- only "writing that's statistically likely to sound similar to all the other writing in its training data." If you ask it for a travel route, it makes up fake town names... if you ask it for a legal argument, it makes up fake legal cases... if you ask for book recs, it makes up fake book titles, complete with entire imaginary summaries. Extremely funny! Absolutely not reliable.