a thing i don’t get is what is new. i mean, computers have long been much, much “smarter” than humans in, for example, their ability to perform arithmetic, or to remember things. recent AI tools are interesting for sure, but what superior competence of theirs makes these new systems so threatening, compared to older superior competences?
@interfluidity many people believe facility with language is the essence of human cognition. Rather than reevaluate that belief, they have decided that LLMs are superhuman.
@agocke it is easy to imagine, say, 150 years ago, making a case that while many animals in some sense or another “speak”, it is the uniquely human capacity for mathematics that truly distinguishes us from the beasts.
@interfluidity I would argue an abacus kind of refutes that. I really am not sure what's particularly divergent about human cognition. It seems like it combines a lot of things.
@interfluidity that said, there is a particularly interesting kernel of insight -- if LLMs do represent something closer to how we think, we know intelligence arose in an undirected random process (evolution) on relatively short timescales, so the search space can't be that big. There is a chance we are not far from human cognition, whatever its essence is. I still think the main question is whether or not LLMs actually represent how we think.
@agocke if they do represent how we think, does that mean they think?
@interfluidity don't all computers?
@agocke ha! my answer would probably be that it’s up to each of us: it’s an axiom we can accept or not, impervious to derivation or refutation.
@interfluidity the Wittgenstein answer would probably be: this is an artifact of our use of language, and not a real question. Our notion of "think" is an element in a language game that only takes on meaning in context.
@agocke (only humans could invent, use, and make sense of an abacus? to every other species, it was just dried beans that somehow slid loosely on sticks!)