Case ([personal profile] case) wrote in [community profile] fandomsecrets 2023-03-30 05:53 pm

[ SECRET POST #5928 ]


Warning: Some secrets are NOT worksafe and may contain SPOILERS.


01. [secret image]
02. [secret image]
03. [secret image]
04. [secret image]
05. [secret image]
06. [secret image: Far Cry]
07. [secret image: Starry Love]
08. [secret image]
09. [secret image]
10. [secret image]

Notes:

Secrets Left to Post: 01 pages, 11 secrets from Secret Submission Post #848.
Secrets Not Posted: [ 0 - broken links ], [ 0 - not!secrets ], [ 0 - not!fandom ], [ 0 - too big ], [ 0 - repeat ].
Current Secret Submissions Post: here.
Suggestions, comments, and concerns should go here.

(Anonymous) 2023-03-31 02:04 am (UTC)(link)
"it can only regurgitate things similar to what it was trained on,"

I think this is slightly understating it?

It can only produce things within the space defined by the boundaries of what it was trained on. But within that space, it is capable of producing things that are functionally new - its outputs are not just the training data shuffled around and rearranged; they are (or at least can be) distinct from anything in the training corpus.
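To make that concrete with a toy stand-in - a bigram Markov chain, far cruder than what the real systems use, trained on a corpus invented purely for illustration:

    import random

    # Toy training corpus; real systems train on vastly more text.
    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat chased the dog",
    ]

    # Bigram table: for each word, the words that can follow it.
    follows = {}
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows.setdefault(a, []).append(b)

    def generate(start="the", length=6):
        """Walk the bigram table to produce a sentence."""
        out = [start]
        while len(out) < length and out[-1] in follows:
            out.append(random.choice(follows[out[-1]]))
        return " ".join(out)

    print(generate())  # e.g. "the cat sat on the rug" - not in the corpus

Every bigram in the output comes from the training sentences, but the sentence as a whole can be one that never appears in them - bounded by the corpus without being a copy of any part of it.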

[personal profile] meadowphoenix 2023-03-31 03:46 am (UTC)(link)
it's almost certainly using ngrams and cluster analysis, or some closely related algorithm; i think "similar" isn't understating its lack of actual syntactic understanding.

(Anonymous) 2023-03-31 04:12 am (UTC)(link)
Hmmmm, I'm not a technical expert, but from what I gather it's generally using much more powerful and advanced techniques than that? Though probably not qualitatively different ones, I guess.

I think my main point is, I don't think the presence or absence of actual syntactic understanding is the determining factor for whether the output is novel. If a process without actual syntactic understanding can produce outputs that are distinct from any of its inputs, then it seems sort of irrelevant whether or not actual syntactic understanding is involved.

[personal profile] meadowphoenix 2023-03-31 07:05 am (UTC)(link)
my point is a) syntactic understanding is what OP is expecting imo, and b) why are you framing this as an issue of novelty? what AI chat responds with, if it's allowed, is the output of algorithms that require it to parse a prompt through similarity matrices and typify based on that parsing. there is no way in such a situation for anything novel to not be similar to its training data, and nothing i said implied that novelty and similarity were distinct. why are they for you?
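fwiw, the most stripped-down version of what i mean by parsing a prompt through similarity matrices looks something like this - cosine similarity between a prompt vector and stored vectors (the numbers here are made up for illustration; real models learn high-dimensional embeddings rather than matching a handful of stored examples):

    import numpy as np

    # made-up "embeddings": each text mapped to a vector. real systems
    # learn these representations; these numbers are purely illustrative.
    training = {
        "the cat sat on the mat": np.array([0.9, 0.1, 0.0]),
        "stock prices fell today": np.array([0.0, 0.2, 0.9]),
    }
    prompt = np.array([0.8, 0.3, 0.1])  # stands in for "where did the cat sit?"

    def cosine(a, b):
        """cosine similarity: 1.0 = same direction, near 0 = unrelated."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for text, vec in training.items():
        print(f"{cosine(prompt, vec):.2f}  {text}")
    # ~0.96 for the cat sentence, ~0.19 for the stocks one

everything it can say is shaped by whatever scored as similar, which is why novelty and similarity to the training data aren't opposites.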

(Anonymous) 2023-03-31 04:19 am (UTC)(link)
Not really. Its answers may look new, but it still just pulls from its training set for the raw information. It can write new sentences (and generally does it well), so the end result may sound new, but both the information itself and its ability to write are based on the training set.