hachyderm.io is one of the many independent Mastodon servers you can use to participate in the fediverse.
Hachyderm is a safe space, LGBTQIA+ and BLM, primarily comprised of tech industry professionals world wide. Note that many non-user account types have restrictions - please see our About page.


How much AI (i.e. LLM enhanced tools) do you want in your *personal* life?

Personal for me means: social media, news, smartphones, apps, games, music, and the like.

(Work would be HR and IT chatbots and code assistants.)

@vincent What do we consider to be LLM enhanced tools?

For instance, I use Google Translate, and other machine translation tools, extensively. They are a significant help. Not as good as a human translation, but significantly more accessible for a lot of languages.

I also find machine transcription helpful in certain circumstances; sometimes I can have trouble distinguishing some of the speech in a video, or there's some in a foreign language and I can get the gist of it from machine transcription and translation (usually not great, but better than nothing). Even when it's not perfect, these kinds of transcripts can provide some value.

These all depend on language models, but aren't necessarily generative AI, where you provide input in natural language and get output as natural language, code, or an image.

It's the generative AI use of LLMs that I find most bothersome, from many standpoints: they are frequently wrong; they produce very average or mediocre results when they are not wrong; they require ridiculous amounts of training data that wasn't legally obtained; and they burn mountains of fossil fuels to train and run.

Anyhow, are the language models used by these translation and transcription services large enough to be considered LLMs? I dunno. A lot of them have been around for much longer than LLMs have, with much smaller, less accurate models; as they have improved over the years, they've been moving to bigger and newer models, improving their accuracy but also increasing my concern about the energy usage and the illegitimate use of data without consent to train them.

@unlambda Google Translate uses LLMs and has for quite a while. At its inception, "AI" as a term wasn't sexy or in vogue.

They've been improving for the same reason that other LLMs are improving: the algorithms are getting better, the hardware is faster, and there is more data to train on. I don't know if they've switched to generative AI, but there's no need to, as that would likely make the translations worse.

Translation LLMs are the only "AI" I find useful, too.

@vincent @unlambda This is another place I don't want LLMs. They ruined Google Translate like all their other products.

It used to do real NLP parsing-based translation, where the output sounded robotic but was precise to the parsing of your input, and it was fairly obvious if it got something wrong.

Now it uses LLMs, so the output sounds convincingly natural, and it's hard to tell that the meaning is wrong or that it introduced bigoted shit into your words, like assuming a person's gender based on their profession.

Brian Campbell

@dalias @vincent It does feel like it's gotten better to me, but I share the concern that it only seems better because it's hallucinating.

That's one of the big problems with software as a service: you can't just compare against old versions to cross-check.

Maybe it's time to check whether there are any worthwhile open-source local models for this task. It would be good to work on de-Googling anyway.

@unlambda @vincent That's how LLMs fool people. The whole optimization problem they're solving is making something that *sounds right*. Worse results are accepted by people who don't understand and account for this just because they "sound better".