Well, since private equity intends to flood the zone with tens of thousands of AI books next year, I'll say it:

We officially need an "AI-free" seal of approval, and one that doesn't allow the same bullshit loopholes the FDA allows re: less than half a gram. Less than half a gram of AI is still fuckin AI.

@hannu_ikonen I think it depends on what authors use the AI to do. I don't like it replacing authors. But it can be a tool to help an author check facts, check locations the story involves, check writing and grammar… I think those uses are fine, as that's something publishers would/should do anyway. I can also see using AI to do research, like a research assistant does. But then the author needs to go through and check all the research and all the citations and build on the AI's work.

@ronaldtootall @hannu_ikonen

LLMs are not reliable enough to "check facts"; that isn't what they are even designed to do well.

What they are designed to do is generate plausible-seeming streams of text similar to existing sets of text. That is all.

There is no logic behind that, no verification. It's pure chance.

Do not use them to check facts, please.
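
To make that concrete, here's a toy sketch (illustrative corpus and code, not anything any vendor ships): a bigram model learns only which word tends to follow which, so it produces fluent-looking sentences with no notion of whether they're true.

```python
# Toy bigram "language model": strings together plausible text purely by
# sampling which word tends to follow which. No facts, no verification.
import random
from collections import defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of italy is rome . "
).split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    out = [start]
    for _ in range(n):
        out.append(random.choice(follows.get(out[-1], ["."])))
    return " ".join(out)

# It will emit "the capital of france is lyon" with the same confidence as
# the correct sentence -- it only knows what tends to follow what.
print(generate("the"))
```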

@futurebird @ronaldtootall @hannu_ikonen Here’s an activity I made to help visualize how LLMs can construct coherent sentences without understanding meaning:

zkolar.xyz/posts/why-cant-llms


@zak @futurebird @ronaldtootall @hannu_ikonen do you have any insight on what we mean when we say we understand things then?

It feels like there's this "language model" (in the more mathematical sense) that is a projection of just another model: the world model. We say we understand because we can map from the language model to the world model. But in a way, it feels like we are just like the LLMs fed with only text: we are fed with only the world model, with no further context, and everything we "understand" is cyclically understood in terms of this world model.


@felipe @zak @futurebird @ronaldtootall @hannu_ikonen The world model you speak of corresponds to empirically testable things and is updated when it fails to do so. The language models don't and aren't.

@dalias @felipe @zak @futurebird @ronaldtootall @hannu_ikonen They do and are though.

That's the training part. The model is trained and then used. It may or may not be training while it's used.

That training is fed a context, just as you do with experimentation. The model is tested against that context, as you do empirically. The model is then adjusted if it needs to be. This is exactly the empirical process.
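
For reference, a rough sketch of the training step being described, as it's commonly implemented; a toy linear model and a made-up batch stand in for an LLM and its corpus here.

```python
# One training step: score the model's prediction against the training
# data, then nudge the weights to reduce the error.
import torch

model = torch.nn.Linear(8, 8)                      # stand-in for an LLM
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, target = torch.randn(4, 8), torch.randn(4, 8)   # stand-in training batch

pred = model(x)                                    # model tested against the data
loss = torch.nn.functional.mse_loss(pred, target)  # how wrong was it?
loss.backward()
opt.step()                                         # adjust the model accordingly
opt.zero_grad()
# The target here comes from the training data itself.
```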

@crazyeddie @felipe @zak @futurebird @ronaldtootall @hannu_ikonen No it's not. This is a grossly inaccurate description of how LLMs are trained and used. The models users interact with are completely static. They are only changed when their overlords decide to change them, not by self-discovery that they were wrong. They don't even have any conception of what "wrong" could mean, because there is no world model, only a language model.
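
A minimal sketch of that point, assuming a typical deployment: at inference time the weights are frozen, so nothing a user says ever changes the model. A toy module stands in for the LLM.

```python
# Deployed models are static: inference never updates the weights.
import torch

model = torch.nn.Linear(8, 8)   # stand-in for a trained, shipped model
model.eval()                    # inference mode

before = [p.clone() for p in model.parameters()]

with torch.no_grad():           # no gradients, no parameter updates
    for _ in range(1000):       # a thousand "conversations"
        _ = model(torch.randn(1, 8))

after = list(model.parameters())
print(all(torch.equal(a, b) for a, b in zip(before, after)))  # True:
# the weights only change when the operator runs a new training job.
```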