Did you see this one? A hacker used Claude and ChatGPT to steal a massive amount of sensitive Mexican government data. Many of these popular AI tools are SUPPOSED to have built-in safeguards that prevent malicious use, but as this case proves, those features don't always work.

As a business owner in Aiken, Augusta, or the broader CSRA, you might think a breach of Mexican government data is distant and irrelevant to your day-to-day business operations. But the reality is that this breach is a perfect illustration of the threat to YOUR systems.

The Promise of AI for Cybercrime 

You've heard how the AI revolution will cause massive job displacement, and you've probably read emails from business gurus insisting that 2024…or 2025…or 2026 is THE YEAR YOU NEED to make AI work for you. As we have discussed on this blog, you should be investigating how AI can work for you. It can help significantly with planning, organization, certain content creation, and deep research. But the mass job-displacement narrative is overstated, at least for now. Outside of a few fields like coding and marketing, most roles are being augmented, not replaced.

What doesn’t get nearly enough attention is this: AI dramatically lowers the barrier to cybercrime. 

Reason #1: AI Removes Skill Barriers for Attackers

Key point: Attacks no longer require deep technical expertise. 

In the past, successful cyberattacks required a high level of technical skill. As we've mentioned before, the old image of the highly skilled, computer-obsessed hacker stuck in his basement all day and night, testing his wits against high-level security, is completely outdated. Today, AI can:

  • Generate convincing phishing emails in seconds 
  • Rewrite messages to match a specific tone or industry 
  • Help attackers troubleshoot malware and scripts 

Someone who couldn't write a professional email a year ago can now produce messages that look legitimate, urgent, and tailored to your business… complete with appropriate letterhead and logos.

Businesses are especially vulnerable because these attacks no longer "look sloppy." They look completely normal, just like the everyday communications you have with vendors and clients.

Reason #2: AI Makes Attacks Faster and Harder to Detect 

Key point: Volume plus variation beats human defenses. 

AI doesn't just make attacks easier. It makes them faster and more adaptive. Attackers can generate THOUSANDS of variations of the same message, each slightly different, in a matter of minutes. That makes traditional spam filters and rule-based defenses far less effective. What used to be a numbers game is now a personalization game, and machines are very good at that.

Reason #3: AI Amplifies Existing Weaknesses (It Doesn’t Create New Ones) 

AI isn’t creating entirely new threats. It’s exploiting old, familiar weaknesses: 

  • Poor credentials / password practices 
  • Unpatched systems 
  • Employees without security awareness training 

AI simply makes it faster to find those cracks AND cheaper to exploit them. 

BONUS REASON: Hacking as a Service

Now consider that all of these capabilities are packaged for sale on the dark web. No skill, experience, or experimentation needed. No all-nighters smoking cigarettes in the basement guessing passwords, and no need to single out ONE specific victim.

See, AI helps create these attacks before the stolen data is ever sold, lowers the skill needed to access and use that data, and aids in building scalable toolkits for purchase ("hacking as a service"). So all that someone needs to attack your systems is ill intent and internet access.

The Takeaway 

The takeaway isn't that AI is evil or that businesses should avoid it. The real risk is assuming that yesterday's defenses are enough for today's threats. Cybersecurity has always been about managing risk. AI just accelerates what happens when that risk is ignored. So the story of stolen sensitive Mexican government data isn't some far-off situation; it's a reminder of the new threat environment in which we all live.