
@kurtseifried @joshbressers so I read the paper and, like...

Idk what to think of it, but it is one of the crappiest pieces of research I have seen in a long time. What the heck is a "successful exploit" from their POV?

Also, these are exploits that a script can already carry out. Can someone explain to me what the scary part is here?

At best I can expect a pseudo-DDoS from people trying to reproduce it with hundreds of shit LLMs.

Also, that cost analysis is impressively bad. We know LLM costs are far bigger.

@kurtseifried @joshbressers I will keep being far more afraid for the security of code written with LLM help, especially in the face of research on how it hacks confidence, than of attackers using them.

@Di4na @kurtseifried

The paper isn't great; it leaves out too many important details

But I think the danger or opportunity (it depends on how you look at it) is using something like an LLM to identify security-relevant commits

They can probably do this with a reasonable amount of success (they're a lot better at reading things than they are at writing things)
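A minimal sketch of what that kind of commit triage could look like, assuming a local git checkout and a hypothetical call_llm() helper standing in for whatever model API gets used; the prompt and labels are my assumptions, not anything from the paper:

```python
# Sketch: ask an LLM to flag commits that look security relevant.
# call_llm() is a hypothetical stand-in for whatever model API is
# actually used; the prompt and labels here are assumptions.
import subprocess

PROMPT = (
    "You are triaging git commits. Reply with exactly one word, "
    "SECURITY or OTHER, depending on whether this commit likely "
    "fixes a security issue:\n\n{message}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to a model and return its reply."""
    raise NotImplementedError("wire this up to the model API of your choice")

def recent_commits(repo: str, limit: int = 200) -> list[tuple[str, str]]:
    """Return (sha, message) pairs from `git log` for a local checkout."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"-{limit}", "--pretty=format:%H%x1f%B%x1e"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        tuple(rec.strip().split("\x1f", 1))
        for rec in out.split("\x1e")
        if rec.strip()
    ]

def flag_security_commits(repo: str) -> list[str]:
    """Return SHAs the model labels SECURITY; expect false positives and misses."""
    flagged = []
    for sha, message in recent_commits(repo):
        reply = call_llm(PROMPT.format(message=message))
        if reply.strip().upper().startswith("SECURITY"):
            flagged.append(sha)
    return flagged
```

Even a mediocre classifier in this role produces exactly the flood of "probable vulnerabilities" described below.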

The way we handle vulnerabilities today is pretty broken. The obsession with zero vulnerabilities has created perverse incentives

But now if some bored researcher decides to find a few (tens, hundreds of) thousand probable security vulnerabilities, what will the result be?

The existing systems are already being crushed (look at NVD)

I'm unsure if just outright crushing the existing systems would be good or bad


@joshbressers @kurtseifried "they are a lot better at reading" [citation needed]. I have seen nothing in research or practice that support that claim.

And yes, these systems are dying. And? They were already useless and perfunctory.

@Di4na @joshbressers @kurtseifried

"Reading" in this sense is basically a classification problem. Al/ML is definitely good at that.

@mattdm @joshbressers @kurtseifried yes, but that is not an LLM, and that has massive limits, which we already know. But yes. You can read CVE text and classify it into buckets of potential exploit methods. And?

So what?

@Di4na @kurtseifried

Well, it's easy to declare the vulnerability universe useless and good riddance

Except a lot of existing policy and standards rely on it

Blowing it up will have unexpected consequences

Unfortunately the people involved either don't think there are major problems, or are moving much slower than reality

We're probably going to find out what happens when it blows up

@joshbressers @kurtseifried unexpected consequences for whom? These standards and policies were already actively making things harder to secure.

@joshbressers @kurtseifried OK, I only have second-order directional evidence of that, so I should temper my claims.

But also, the supporting evidence for said standards is not amazing

@Di4na @kurtseifried

Citation needed :P

I mean, sure, the existing standards are horrid for a number of reasons, but things were actively worse before things like PCI. Tons of orgs were collecting credit card details over HTTP and storing them in text files

@joshbressers @kurtseifried yep. Do we have evidence that PCI compliance enforcement is actually how we made progress? Also, would PCI disappearing now change things?

@joshbressers @kurtseifried the fact they were useful before does not mean they are useful today

@Di4na @kurtseifried

It doesn't, but it's also a problem that these conversations always seem to go to this place

You say this stuff is all stupid and dumb

I agree, but I think it's how we start to get better (granted, in some cases it's been decades and we should have more proof)

Then nobody works to actually make anything better

@joshbressers @kurtseifried that is simply not true. We got Let's Encrypt. TLS 1.3, which is being adopted. We got a lot of CSRF handling in frameworks by default. Same for SQLi. We got tagged string templates to handle other injections.

Things progress. But away from what we call security work.

@joshbressers @kurtseifried the secret is to accept that those things not helping does not mean nothing has been done

It is just that we do not classify what has been done as security work.

safetydifferently.com/why-do-t

safetydifferently.com: Why do things go right?

@joshbressers @kurtseifried said otherwise: security is a dynamic property of the system. If the standards do not evolve, then they become a hindrance. The evolution needs to be baked in.