
I haven't been very worried about AI, even though I'm a writer.

Why?

Because it takes a while for the law teams employed by the titans of old media to rumble to action, but it was always clear they were coming. These are the teams that don't sue other companies unless they're certain of winning.

And today, the New York Times sued OpenAI for several billion dollars.

nytimes.com/2023/12/27/busines

A lawsuit by The New York Times could test the emerging legal contours of generative A.I. technologies.
The New York Times · "New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work," by Michael M. Grynbaum

BTW: this isn't even the lawsuit OpenAI is *really* scared of.

The House of Mouse has yet to step up to the plate against them or Midjourney or the like.

Modern copyright laws are terrifying, y'all, and the courts have found definitively and repeatedly that AI products are derivative material and cannot be copyrighted. Unless copyright law is completely rewritten (BTW, it needs to be), the *only* thing AI can be used for is to build better search.

Just another tech hustle, like crypto & NFTs.

@Impossible_PhD what I'm curious about is: how much effort is it to re-train such a generative model?

They're gonna get sued again and again, and each time it's gonna end with "remove our stuff from your model and pay us damages, or keep it in and pay us damages and licensing fees." And as far as I understand the way these models work, it's impossible to "remove" anything, because the training data isn't stored inside as discrete units; it all contributes to the biasing of these artificial neurons.

So will they have to re-train their models after each lawsuit with one source fewer? Or how is that gonna work?
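The point about training data not being stored as discrete units can be seen even in a toy model. This is a sketch, not how any real LLM works: the training loop, the "sources," and all names here are made up for illustration. Every example nudges the same shared weight, so one source's contribution is smeared across the model rather than sitting in a retrievable slot:

```python
# Toy illustration: a single linear "neuron" y = w * x trained by SGD
# on examples from two hypothetical sources. Each update modifies the
# same shared weight w, so no source is stored as a removable unit.

def train(examples, epochs=200, lr=0.05):
    """Fit w in y = w * x by stochastic gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            grad = 2 * (w * x - y) * x  # derivative of (w*x - y)**2 w.r.t. w
            w -= lr * grad
    return w

source_a = [(1.0, 2.0)]  # hypothetical "source A" (fits w = 2 exactly)
source_b = [(2.0, 6.0)]  # hypothetical "source B" (fits w = 3 exactly)

w_both = train(source_a + source_b)   # trained on everything
w_only_a = train(source_a)            # what "removal of B" would require

# There is no post-hoc operation on w_both that recovers w_only_a:
# B's examples influenced every single update. "Removing" B means
# retraining from scratch without it.
print(w_both, w_only_a)
```

Even in this one-parameter case, the only way to get the "B removed" weight is to rerun training without B, which is exactly the retraining problem the post asks about, scaled down to one neuron.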

@amberage@eldritch.cafe @Impossible_PhD@hachyderm.io @simontoth@hachyderm.io

they could train models on only copyright-free/copyleft/their own sources, but 1) the quality would be lower, and 2) they still couldn't copyright the results, so they couldn't use them for certain things; they'd just be, like, lazy stock art generators

Edit: copyleft/other licenses still wouldn't be usable for them either. The main point was that they'd be significantly limited by being held to legal uses

Cassandrich

@rachel @Impossible_PhD @amberage @simontoth No, copyleft is even worse for them. It means they have to make the model and all derivative works free under the same license, if they can. If they can't (because of other conflicting legal obligations), they can't distribute the derivative works at all.