#aisecurity


AI-powered spam? Ah, the classics keep evolving, don't they? 🤖

Just look at AkiraBot – it's a prime example showing how AI isn't just speeding up spam; it's making it way smarter too. We're talking about bots that can slip past Captchas and even whip up personalized messages... Seriously unsettling stuff. 🥴

Now, here's the kicker: a lot of people are still just relying on those basic, run-of-the-mill spam filters. Guess what? That's not gonna cut it anymore. We desperately need more intelligent solutions, the kind that can actually *recognize* AI in action. 🕵️‍♀️
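For a flavor of what "recognizing AI in action" could mean in practice, here's a toy heuristic scorer in Python. The signals and weights are invented for illustration, nowhere near a production detector:

```python
import re

# Hypothetical heuristic scorer: flags messages that combine classic spam
# signals with the "too polished" personalization seen in LLM-generated spam.
# Signal patterns and weights are illustrative assumptions, not a real model.
SIGNALS = {
    "has_link": (re.compile(r"https?://"), 2),
    "urgency": (re.compile(r"\b(act now|limited time|urgent)\b", re.I), 2),
    "generic_flattery": (re.compile(r"\b(impressive website|great content)\b", re.I), 1),
    "seo_pitch": (re.compile(r"\b(boost your (ranking|traffic)|SEO services)\b", re.I), 2),
}

def spam_score(message: str) -> int:
    """Sum the weights of every signal that fires on the message."""
    return sum(weight for pattern, weight in SIGNALS.values() if pattern.search(message))

def looks_like_ai_spam(message: str, threshold: int = 3) -> bool:
    """Flag a message once enough independent signals stack up."""
    return spam_score(message) >= threshold
```

Crude, yes, but the point stands: layered signals beat a single keyword blocklist, and that's the direction smarter filters have to go.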

And let's not forget about awareness training! It's absolutely crucial that users learn how to spot these sophisticated AI spam attempts. Because otherwise? You know those clicks are gonna happen. 🤦‍♂️

So, let's talk: How long do you reckon it'll be before AI spam gets *so* good, *so* convincing, that telling it apart from the real deal becomes nearly impossible? 🤔 What are your thoughts?

Man, this AI phishing stuff is getting wild! 😱

Seriously, any newbie can now whip up a fake login page that looks scarily real in no time flat. And get this: the AI can even *host* the malicious site itself and swipe your credentials. 🤦‍♂️

Speaking as a pentester, I'm telling you, this is a whole new level of phishing. Your typical automated scanners? They often struggle to catch these sophisticated fakes. You've really got to stay sharp out there!
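One cheap sanity check against lookalike login pages: compare the link's hostname against domains you actually trust and flag near misses. A minimal sketch, where the trusted list is just an example:

```python
from urllib.parse import urlparse

# Illustrative sketch: flag login URLs whose hostname is suspiciously close
# to, but not exactly, a domain you trust. The trusted set is an example.
TRUSTED = {"accounts.google.com", "login.microsoftonline.com", "github.com"}

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance; fine for short hostnames."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_suspicious_login_url(url: str) -> bool:
    """A near-miss of a trusted domain (one or two characters off) is a red flag."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED:
        return False
    return any(edit_distance(host, trusted) <= 2 for trusted in TRUSTED)
```

It won't catch everything (homograph attacks with Unicode lookalikes need extra handling), but it's the kind of check a browser extension or mail gateway can run in microseconds.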

So, my question to you is: How are *you* staying safe from these AI-generated attacks? 🤔 Drop your tips below!

SOCs drowning in alerts? Analysts hitting their limits? Totally get it. 🤯 Lots of places are turning to automation, sure, but is it *really* cutting it?

This is where Agentic AI *might* just be a game changer. Hold on though – AI isn't some magic wand you can just wave. It's crucial to really scrutinize what tools you're bringing on board.

Transparency is absolutely key here. Speaking as a pentester, I *need* to understand the 'why' behind an AI's decision, not just the 'what'.
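That "why, not just what" requirement can even be enforced mechanically. A minimal sketch, with invented field names, of refusing any AI triage verdict that doesn't carry an auditable rationale:

```python
from dataclasses import dataclass

# Sketch of a "no why, no action" gate: an agentic triage verdict is only
# accepted if it carries a non-trivial explanation. Field names and the
# verdict vocabulary are invented for illustration, not from any product.
@dataclass
class TriageVerdict:
    alert_id: str
    verdict: str     # e.g. "benign", "suspicious", "escalate"
    rationale: str   # the model's explanation for the verdict

def accept_verdict(v: TriageVerdict, min_rationale_len: int = 40) -> bool:
    """Reject verdicts whose explanation is missing or too thin to audit."""
    if v.verdict not in {"benign", "suspicious", "escalate"}:
        return False
    return len(v.rationale.strip()) >= min_rationale_len
```

A length check is obviously a crude proxy for explanation quality, but even this forces the tooling to surface *something* a human can challenge.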

Ultimately, it's all about helping our clients, right? That’s the bottom line. 💪

So, what's your take? How are you approaching AI in your SOC? Drop your thoughts below!

David Berenstein has joined the Giskard team as DevRel ⭐🐢

David brings valuable experience from his previous roles at Argilla and Hugging Face, where he helped developers discover the joys of working with (synthetic) data. He loves cooking things up with data but also commits a lot of his time to cooking in real life 👨‍🍳 His expertise will be key as we build our LLM Evaluation Hub.

Welcome to the team, David! 🚀

Yo, IT-Sec crowd! ✌️

Anyone else noticing how *everyone* seems to be talking about AI-powered security tools these days? Yeah, it's everywhere. But let's be real for a sec – are they *truly* as amazing as the hype suggests? 🤔

I mean, okay, AI can definitely be useful for spotting anomalies and patterns, no doubt about that. But here's a thought: what happens if the AI itself gets compromised? Or what about when it starts churning out false alarms simply because it doesn't *really* grasp the situation? 🤖

Honestly, I've got my reservations. While automation is certainly nice to have, I'm convinced a skilled pentester, you know, one with actual brainpower and a strategic approach, still outsmarts any AI – at least for the time being. 😎 And look, if AI eventually *does* get significantly better, well, that just means it's time for us to add another skill to our toolkit. 🤷‍♂️

So, what's your perspective on this? Do you see AI completely taking over the pentesting scene, or is that human touch going to remain irreplaceable? 🔥 Let the debate begin!

Seriously, let's talk about these AI-generated "security" reports... Man, they really set off alarm bells for me. 🚨 Sure, AI *can* definitely speed up certain processes, no argument there. But honestly, a proper pentest? That's a whole different beast compared to just running a few automated scans. You need real human expertise and critical thinking behind it.

So many people seem to think AI catches everything, but let's be real – these tools can seriously hallucinate sometimes. They just make stuff up! And what happens then? The client ends up *thinking* their system is locked down tight, when it’s actually got holes wide enough to drive a truck through.

Look, security isn't just some product you buy off the shelf; it's an ongoing *process*. AI should absolutely be part of our toolkit, there to *support* us, not replace us entirely.

And hey, before you blindly trust that shiny AI report? Maybe, just maybe, get an actual human pentester to lay eyes on it too. Better safe than sorry, wouldn't you agree?

What are your own experiences with AI in the IT security world? Are you feeling more skeptical or optimistic about its role? Drop your thoughts below! 👇

AI in the cyber world... kinda crazy, right? 🤯

Look, AI definitely has its upsides, helping us defend better. But let's be real – the threat actors are all over it too. Phishing attempts? They're getting scarily personal. Attacks? Happening faster than ever. And your trusty old standard antivirus? Well... it's probably not cutting it anymore.

As a pentester, I'm seeing this play out daily. There's no doubt AI is making the security game a *lot* trickier. Honestly, if you're not rethinking your strategy right now, you're falling behind. Big time. 🤷‍♂️

That's where concepts like Zero Trust become so vital. But here's the thing: it can't just be lip service. It needs actual implementation! 💪 Time to walk the walk.

So, what's *your* approach? How are you adapting to stay safe in this new landscape? Got any experiences to share? Let me know below! 👇

The replay of our session at Forum INCYBER Europe (FIC) is now online 🎬

Watch our CTO present the initial Phare results - our multilingual and independent LLM benchmark that evaluates hallucination, factual accuracy, bias, and harm potential.

The session features Matteo Dora and Elie Bursztein (Google DeepMind).

Full recording linked below 👇
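To make the idea of an LLM benchmark concrete, here's a toy evaluation loop. To be clear, this is not Phare's methodology, just the generic shape any benchmark shares: ask, compare against a reference, aggregate.

```python
# Toy scoring loop for an LLM benchmark: run each question past the model,
# compare to a reference answer, and report an aggregate score. Real
# benchmarks use far richer checkers than this naive exact match.
def evaluate(model, dataset):
    """dataset: list of (question, reference_answer) pairs;
    model: any callable mapping a question string to an answer string."""
    correct = 0
    for question, reference in dataset:
        answer = model(question)
        if answer.strip().lower() == reference.strip().lower():
            correct += 1
    return correct / len(dataset) if dataset else 0.0
```

The hard part, and what the session covers, is everything this sketch glosses over: grading free-form answers, doing it across languages, and measuring things like bias and harm that have no single reference answer.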

AI in the security field? Yeah, it can definitely lend a hand, BUT let's be real here. Automated tools are just *not* a substitute for an experienced pentester's intuition and skills.

Sure, these tools might flag the obvious vulnerabilities – the low-hanging fruit, if you will. However, the *real* breakthroughs, those crucial "aha!" moments? They almost always come from actual human brainpower and critical thinking.

Plus, think about it: who's actually vetting the results the AI spits out? Without that critical human oversight, you could easily drown in a sea of "findings," completely unsure of what genuinely needs urgent attention. Security is so much more than just hitting 'scan'; it’s a continuous, evolving process! Definitely something to keep in mind.
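One way to keep a human in that loop without drowning: collapse duplicate findings and sort by severity before anyone reads a single one. An illustrative sketch, where the finding shape and severity names are assumptions:

```python
from collections import defaultdict

# Sketch of a human-review queue for AI/scanner output: deduplicate findings
# by title and surface the highest severities first, so a reviewer sees a
# short ordered list instead of a wall of raw "findings". The finding shape
# (dicts with 'title' and 'severity') is an assumption for the example.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def review_queue(findings):
    """findings: iterable of dicts with 'title' and 'severity' keys."""
    counts = defaultdict(int)
    severity = {}
    for f in findings:
        key = f["title"].strip().lower()
        counts[key] += 1
        severity[key] = f["severity"]
    return sorted(
        ({"title": k, "severity": severity[k], "count": n} for k, n in counts.items()),
        key=lambda f: SEVERITY_ORDER.get(f["severity"], 99),
    )
```

The AI still does the grunt work; the human just gets it in a form that's actually reviewable.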

And on a related note, let's not forget the persistent threats out there. State-sponsored cyber warfare is a serious concern, and actors like Russia are definitely a significant force to reckon with in that arena.

So, what's your experience been using AI in pentesting? Drop your thoughts below!

Just completed the AI Red Teamer Job Role Path on Hack The Box Academy!

This path dives deep into the offensive side of AI/ML. Covers prompt injection, model evasion, data poisoning, and more. Highly recommended for anyone exploring the frontier where cybersecurity meets machine learning.

academy.hackthebox.com/achieve

Always learning, always leveling up. 🧠💥
#CyberSecurity #RedTeam #AI #HackTheBox #PromptInjection #LLM #AIsecurity

academy.hackthebox.com · Awarded the badge "AI ninja" (AI Red Teamer path completed)

Hey everyone! Data leaks in AI tools? They're a *real* concern, aren't they? Microsoft's aiming to tackle this within the Edge browser. They plan to check what you're typing *before* it even reaches ChatGPT. They're calling it Inline Data Protection – basically, DLP built right into the browser. Sounds pretty cool! 👍
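Conceptually, an inline DLP gate boils down to scanning text for sensitive patterns before it leaves for an external AI service. A toy sketch in Python; the patterns are a tiny illustrative subset and say nothing about how Edge actually implements it:

```python
import re

# Conceptual sketch of an inline DLP check: scan outgoing text for obviously
# sensitive patterns before it reaches an external AI service. These three
# patterns are an illustrative subset, not a real DLP ruleset.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def dlp_violations(text: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Gate the prompt: only let it leave the browser if nothing matched."""
    return not dlp_violations(text)
```

Real DLP adds context awareness and policy exceptions on top, but the core gate really is this simple: inspect before transmit.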

As a pentester, I've seen firsthand how these things can go sideways. I'm glad that Teams is also getting some beefed-up security features to combat phishing.

However, I'm still thinking... what about Microsoft's own data collection habits? 🤔 It's a case of "trust, but verify," right?

So, what are your thoughts? Do we need more safeguards against the potential risks of AI tools, or from the corporations developing them? Let's discuss!