@kurtseifried @joshbressers so I read the paper and, like.
Idk what to think of it, but it's one of the crappiest pieces of research I've seen in a long time. What the heck is a "successful exploit" from their POV?
Also, these are exploits that a script can already pull off. Can someone explain to me what the scary part is here?
At best I'd expect a pseudo-DDoS from people trying to reproduce it with hundreds of shit LLMs.
Also, that cost analysis is impressively bad. We know LLM costs are far higher.
@kurtseifried @joshbressers I will keep being far more afraid for the security of code written with LLM help, especially in the face of research on how it hacks confidence, than of attackers using them.
The paper isn't great; it leaves out too many important details.
But I think the danger or opportunity (it depends on how you look at it) is to use something like an LLM to identify security-relevant commits.
They can probably do this with a reasonable amount of success (they're a lot better at reading things than they are at writing things).
The way we handle vulnerabilities today is pretty broken. The obsession with zero vulnerabilities has created perverse incentives
But now, if some bored researcher decides to find a few (tens, hundreds of) thousand probable security vulnerabilities, what will be the result?
The existing systems are already being crushed (look at NVD)
I'm unsure if just outright crushing the existing systems would be good or bad
@joshbressers @kurtseifried "they are a lot better at reading" [citation needed]. I have seen nothing in research or practice that supports that claim.
And yes, these systems are dying. And? They were already useless and perfunctory.
@Di4na @joshbressers @kurtseifried
"Reading" in this sense is basically a classification problem. AI/ML is definitely good at that.
@mattdm @joshbressers @kurtseifried Yes, but that is not an LLM, and it has massive limits, which we already know. But yes, you can read CVE text and classify it into buckets of potential exploit methods. And?
So what?
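(For concreteness: the kind of "read CVE text, sort it into buckets" task discussed above can be sketched in a few lines. This is a deliberately trivial keyword-rule version with invented categories and keywords, not anything from the paper; an ML classifier would learn the mapping instead of hardcoding it, but the task shape — text in, bucket out — is the same, as are the obvious limits.)

```python
# Toy sketch: bucket CVE-style description text into exploit-method
# categories using keyword rules. Categories and keywords are invented
# for illustration only.
BUCKETS = {
    "memory-corruption": ("buffer overflow", "use after free", "out-of-bounds"),
    "sql-injection": ("sql injection",),
    "xss": ("cross-site scripting", "xss"),
}

def bucket(description: str) -> str:
    """Return the first matching bucket label, or 'unclassified'."""
    text = description.lower()
    for label, keywords in BUCKETS.items():
        if any(kw in text for kw in keywords):
            return label
    return "unclassified"

print(bucket("Heap buffer overflow in the image parser"))   # memory-corruption
print(bucket("Novel logic flaw in session handling"))       # unclassified
```

Anything phrased outside the keyword list falls straight into "unclassified" — which is exactly the kind of limit being pointed at.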