@nini @cstross It's a mixture of everything. It's definitely being hyped, but it's also definitely being put to use (good or bad).
Let's say you have a ton of business records and you want to check if any of them include a specific shady practice. You can't just ask ChatGPT, but you can use it to annotate the records based on a natural language description of the practice.
It won't catch everything, but it will serve as an extra check.
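To make that concrete, here's a minimal sketch of what such an annotation pass might look like. Everything in it is hypothetical: `llm_flag_record` stands in for whatever model call you'd actually make, and the practice description is invented; the stub just makes the sketch runnable.

```python
# Sketch: using an LLM as an *extra* screening pass over records.
# llm_flag_record is a placeholder for a real model call; it's stubbed
# here with a naive keyword check so the example is self-contained.

PRACTICE_DESCRIPTION = (
    "Flag records suggesting invoices were backdated to an earlier quarter."
)

def llm_flag_record(record: str, description: str) -> bool:
    """Placeholder for an LLM call: 'does this record match the description?'"""
    # A real implementation would send `record` and `description` to a
    # model and parse a yes/no answer out of the response.
    return "backdated" in record.lower()

def screen(records: list[str]) -> list[str]:
    """Return the subset of records flagged for human review.

    This is a recall aid, not a verdict: it won't catch everything,
    and anything it flags still needs a person to look at it.
    """
    return [r for r in records if llm_flag_record(r, PRACTICE_DESCRIPTION)]

records = [
    "Invoice 1042 issued on time, paid net-30.",
    "Note: invoice 1043 was backdated to Q3 at the client's request.",
]
print(screen(records))  # flags the second record for review
```

The shape of the workflow is the point: the model narrows the pile, a human makes the call.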
@dalias @nini @cstross Yes, entirely agree.
I didn't say it was a good idea to do it. I just said that it was a use case where the technology works (well enough). That is why "it doesn't actually work" and "there are no use cases" are such dangerous arguments.
People with the legit use cases are going to stop listening to the anti-AI crowd, and do stupid shit like upload sensitive information to an irresponsible company that will train their next model on it.
@dalias @nini @cstross Well, it "works" in the sense that it does what the user wants it to do. It's like a drug that cures your constipation, but the side effect is that your arms fall off. Technically, the drug works.
What Marcus is doing is saying that the drug doesn't cure constipation. Everyone who takes it and sees that it does is going to stop listening to him, and to all the other negative Nellies.
Including the ones warning them about the side effects.