

@thomasfuchs

Yeah they really are not trustworthy in this regard

It was one of the things I actually hoped they would do well!

But I haven’t had much luck with it, and research on metrics like this hasn’t either

@clive @thomasfuchs

They have NO understanding NOR reasoning.
They're only text generators.

@Gergovie @clive @thomasfuchs I think that's way too reductive. LLMs absolutely do something that *looks* like understanding and reasoning.

The problem is that we don't have great ways to characterize what it is they *do*, so it's really hard to know when their output is good enough to use in place of actual logic and interpretation.

@Gergovie @clive @thomasfuchs The text that LLMs are trained on is an artifact of understanding and reasoning processes. And to the extent that the text outputs can capture the essence of those processes, LLMs mimic the processes themselves.

@Gergovie @clive @thomasfuchs But because LLMs are so internally complex, we're reduced to discussing them by analogy, and I think that chronically leads to over- and underestimating their utility.

@acjay @thomasfuchs @Gergovie

I think over- and underestimating is a good way of putting it

I’m not as confident as you, though, that the statistical approach that underpins LLMs produces anything like what we could reasonably call understanding

It may well be a *component* of understanding — making associations is key — but it’s not at all clear that it can produce other elements of reasoning: logic, math, semantics, etc

Alan Johnson

@clive @thomasfuchs @Gergovie I think we pretty much agree. It's mimicry of those things. It's extremely unclear that you can even compose LLMs with other subsystems in a rigorous way to address those shortcomings.

@clive @thomasfuchs @Gergovie It reminds me of Prolog a bit. When I first learned it, I was like "holy shit, this is incredible". But then you learn the fundamental limitations, and how the workarounds for those limitations undermine all the good parts. Then you understand why it remains a niche technology.

It's possible we're already pretty close to the local maximum of LLMs as a technology. If so, I still do think it's pretty impressive.

@Gergovie @acjay @thomasfuchs

It can definitely be useful in a bunch of areas for sure

I do wonder what’ll happen a year or so from now — the enormous expense of training and inference on the foundation models doesn’t seem likely to produce profits anywhere close to recouping it, to say nothing of 10xing

I suspect there’ll be some hard conversations

@clive @Gergovie @thomasfuchs

Yeah, things will look a bit different when people have to pay the actual cost of goods for inference, unless models can be trained and run far more efficiently. It's all heavily subsidized right now.

@thomasfuchs @Gergovie @acjay

I never used Prolog, but I knew there were a lot of people who super dug it, and still do!