During my academic career (I taught thousands of undergraduate students from a variety of backgrounds), I saw a lot of 'artificial intelligence', usually coming from the mouths of overprivileged white men who thought eloquence was a reasonable substitute for actually knowing or analysing things.
So, forgive me if I am sceptical that AI is the answer to anything; what we need is real intelligence fostered in real people about real things... only then will we solve the world's problems!
William Labov's discoveries (e.g. in The Logic of Non-Standard English), made way before AI, are instructive here - that people mistake a simulation of rational discourse (e.g. a posh accent, 'correct' grammar, extensive vocabulary, etc.) for genuinely rational argument, even when it contains glaring logical errors. A lot of the problem with Generative AI lies not so much in the fact that the AI has learnt a mistaken take on reality, but that we have.
And since most of us - perhaps especially 'tech bros' and IQ enthusiasts - can't actually judge when discourse is indeed 'intelligent', it's inevitable we'll keep mistaking simulation for reality.
@GeofCox @ChrisMayLA6 On top of this, the stuff currently called "AI" (LLMs etc.) is, necessarily and functionally, about optimal simulation - deceiving us that the output is intelligent. Functionally, what it's doing is choosing the sequence of tokens most likely, based on statistics derived from existing human-written works (and now also existing AI slop), to be perceived as plausibly having come from that corpus, conditioned on a partial token sequence.
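A toy sketch of that mechanism, nothing like a real LLM - just a bigram counter in Python over a made-up corpus, with hypothetical names - to show the shape of "pick the statistically likely next token given a partial sequence":

import random
from collections import Counter, defaultdict

# Tiny stand-in for "existing human written works".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each token follows each preceding token (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(sequence):
    # Sample the next token in proportion to how often it followed
    # the last token of the partial sequence in the corpus.
    counts = follows[sequence[-1]]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Continue a partial token sequence, one "likely" token at a time.
sequence = ["the", "cat"]
for _ in range(6):
    sequence.append(next_token(sequence))
print(" ".join(sequence))

The output reads vaguely like the corpus it was counted from, which is the whole trick: plausibility of continuation, not any model of what cats or mats are.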
@dalias That's correct, but what slightly worries me about that argument is: are we really confident vertebrate (including human) brains actually do anything more than that?
@only_ohm This is a common AI fan fallacy, and yes, we are quite sure of that.
The topic is somewhat too large for a toot, but the important ingredients are: the ability to experiment on the physical world and develop a model from the results of that; consequences (i.e. how the above affects survival or future capabilities); concepts that are not modellable as patterns of language tokens; etc.
@only_ohm The sense in which humans know water turns to ice when it gets cold (and find exceptions to that knowledge) comes from being able to observe and test aspects of that, not from stuffing their brains with billions of words of random-provenance slop claiming it's true.
Y'see now I feel a right twerp for not thinking of that myself.