Case ([personal profile] case) wrote in [community profile] fandomsecrets2023-03-30 05:53 pm

[ SECRET POST #5928 ]


⌈ Secret Post #5928 ⌋

Warning: Some secrets are NOT worksafe and may contain SPOILERS.


01.



__________________________________________________



02.



__________________________________________________



03.



__________________________________________________



04.



__________________________________________________



05.



__________________________________________________



06.
[Far Cry]



__________________________________________________



07.
[Starry Love]



__________________________________________________



08.



__________________________________________________



09.



__________________________________________________



10.

Notes:

Secrets Left to Post: 01 pages, 11 secrets from Secret Submission Post #848.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.

(Anonymous) 2023-03-30 10:52 pm (UTC)(link)
lmao

It's goddamn AI. Just revisit the source material if you're trying to be faithful in terms of characterization.

(Anonymous) 2023-03-30 11:29 pm (UTC)(link)
lmao chatgpt is not gonna be able to keep anything in-character for shit. You can cajole it into writing fanfic for sure, probably even ""harmful behavior"" if you try but it's just, not that good.
Also idk if you're joking calling it an anti, but it really does not have ideologies like that; it's just been made to be absurdly overcautious so people don't get pissy at openai for letting the bot say bad words or whatever.
[personal profile] meadowphoenix 2023-03-30 11:42 pm (UTC)(link)
okay but you know it's not magic right? it can only regurgitate things similar to what it was trained on, and is limited by what the company promoting it thinks it won't be legally liable for in a very conservative way.

(Anonymous) 2023-03-31 02:04 am (UTC)(link)
"it can only regurgitate things similar to what it was trained on"

I think this is slightly understating it?

It can only produce things from within the space defined by the boundaries of the things that it was trained on. But within that space, it is capable of doing things that are functionally new - the things that it produces are not just the things it was trained on shuffled around and rearranged, they are (or at least, can be) distinct from anything in the training corpus.
[personal profile] meadowphoenix 2023-03-31 03:46 am (UTC)(link)
it's almost certainly using ngrams and cluster analysis or some close algorithm; i think "similar" isn't understating its lack of actual syntactic understanding.
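(For what it's worth, a toy n-gram model makes the "similar to its training data" idea concrete: it can emit word sequences that never appear verbatim in the corpus, yet every local transition is lifted straight from it. A minimal sketch; the tiny corpus and order-2 model here are illustrative only, not a claim about what ChatGPT actually runs:)

```python
import random
from collections import defaultdict

# Toy training corpus; every "novel" output below is stitched from its transitions.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Bigram table: word -> list of words that followed it in training.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length, seed=None):
    """Walk the bigram table. Each step is a transition seen in training,
    but the sequence as a whole may never occur verbatim in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the", 6, seed=0))
# One possible output: "the cat sat on the rug" -- a sentence that is not
# in the corpus, even though every adjacent word pair is.
```

(So "novel but similar" isn't a contradiction even for this crude a model: the outputs live inside the space the training data defines.)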

(Anonymous) 2023-03-31 04:12 am (UTC)(link)
Hmmmm, I'm not a technical expert but from what I gather it's generally using much more powerful and advanced techniques? But probably not qualitatively different techniques I guess.

I think my main point is: I don't think the presence or absence of actual syntactic understanding is the determining factor for whether the output is novel. If a process without actual syntactic understanding can produce outputs that are distinct from any of its inputs, it seems sort of irrelevant whether or not actual syntactic understanding is involved.
[personal profile] meadowphoenix 2023-03-31 07:05 am (UTC)(link)
my point is a) syntactic understanding is what OP is expecting imo, and b) why are you framing this as an issue of novelty? what the AI chat responds with, if it's allowed, is the output of algorithms that require it to parse a prompt through similarity matrices and typify based on that parsing. there is no way in such a situation for anything novel not to be similar to its training data, and nothing i said implied that they were distinct. why are they for you?

(Anonymous) 2023-03-31 04:19 am (UTC)(link)
Not really. Its answers may look new, but it still just pulls from its training set for the raw information. It can write new sentences (and does it well, generally), so the end result may sound new, but both the information itself and its ability to write are based on the training set.

(Anonymous) 2023-03-30 11:43 pm (UTC)(link)
Or you could... watch clips of the canon to help keep the characters in character, a bot can't do that for you.

(Anonymous) 2023-03-30 11:53 pm (UTC)(link)
What ChatGPT is is a coward. It'll refuse to do a lot of things because it's trying its hardest to avoid ever even slightly offending anyone, even indirectly by letting users get responses other people might find offensive if they shared them. ... Which, admittedly, is not that far off from the typical anti mindset. Main difference is that it at least only wants to restrict its own actions.

If you really wanna use language models to bounce fic ideas off of, character.ai is a way better option, because it's not shackled by ridiculous PR requirements.

... And as others have said, language models don't "know" things, so even CAI isn't really good for keeping your writing IC. Use it for generating new ideas, not error-checking.
[personal profile] erinptah 2023-03-31 12:11 am (UTC)(link)
It's not the model, it's the programmers. They're trying very hard to make ChatGPT a product they can monetize, which means frantically patching in exceptions and caveats to anything they could get sued over.

If they charge for ChatGPT when it can be used to write, say, Disney fanfic, then Disney will sue their socks off. So they build in a trigger that says "sorry, I can't do that" any time you ask for something remotely fanficcy.

And +6 to the point that chatbots don't produce "writing that's accurate" -- only "writing that's statistically likely to sound similar to all the other writing in its training data." If you ask it for a travel route, it makes up fake town names...if you ask it for a legal argument, it makes up fake legal cases...if you ask for book recs, it makes up fake book titles, complete with entire imaginary summaries. Extremely funny! Absolutely not reliable.
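(The "statistically likely to sound similar" point can be caricatured in a few lines: generation samples from a probability distribution over plausible continuations, and nothing in that step checks whether the pick is real. The candidate titles and scores below are invented for illustration; this is a sketch of softmax sampling, not anyone's actual system:)

```python
import math
import random

# Hypothetical model scores for the next "word" after a prompt like
# "A good book about dragons is". The last title is made up, but the
# sampler has no way to know that -- it only sees scores.
scores = {
    "Eragon": 2.0,
    "Dragonflight": 1.5,
    "Dracopedia": 1.0,
    "The Wyrm of Ashvale": 0.8,  # fabricated title, perfectly samplable
}

def sample(scores, seed=None):
    """Softmax over the scores, then draw one candidate. The model picks
    what *sounds* likely; there is no fact-checking step anywhere."""
    rng = random.Random(seed)
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    r = rng.random() * total
    for w, e in exps.items():
        r -= e
        if r <= 0:
            return w
    return w  # numerical fallback: return the last candidate

print(sample(scores, seed=1))
```

(Lower-scored candidates still get drawn some fraction of the time, which is exactly how a confident-sounding fake title, town, or court case ends up in the output.)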