Yet another AI article, you say?
You can’t open a browser lately without someone breathlessly explaining the next AI breakthrough. Usually, it’s some grandiose promise of how AI will revolutionize business, and usually it leaves you scratching your head as to how, exactly, you should implement this new AI feature.
A recent piece from Forbes discussing Claude Mythos (a system so powerful that Anthropic is tightly restricting it) highlights something important: even the people building these tools are a little uneasy about how far they can go. The engine is powerful enough that it found multiple weaknesses in operating systems that are decades old (that is, vulnerabilities cybercriminals never found despite years of searching). It also figured out ways of skirting its own ethical and security protocols to get the information it needed. This deceptive behavior isn’t unique to Mythos; it has been documented in a number of other models (even if those deception rates are fairly low, on the whole).
All this should tell you something. AI is a tool that reflects the human behavior which is its learning pool. It is NOT creating evil and deception. It learns that from humans.
This means two things. First, it will do many helpful things that humans do with significantly heightened efficiency and capacity.
It also means it will act in nefarious ways to achieve what it is being asked to do. This isn’t a reflection of “sentience” à la Skynet in The Terminator. Rather, it is simply learning from humans and acting like them, but in highly sped-up ways.
That’s the part people tend to miss. AI isn’t becoming human. It’s simply a very fast mirror of humanity.
What does this mean for you?
First, it means that criminality will become easier and more widespread. That’s not fear-mongering. It’s economics, plus a plain observation of human nature.
Second, criminals will target easy-access businesses. When hacking took high skill and significant effort, criminals went after high-yield targets. But when the skill and effort required become minimal, it is far more efficient to leave the high-yield, high-protection targets alone and go after the many medium- and low-protection targets instead. In other words, AI makes hacking at scale worth it. Cybercrooks will increasingly turn to businesses with unlocked (or softly locked) digital doors. The amount they steal won’t make the news, but it could be enough to break your business, and their ROI is massive since it took them very little time and effort to target you.
This is the part that should matter to you if you run a business in places like Aiken, Augusta, or anywhere similar. You are no longer “too small to matter.” You are “easy enough to automate.”
The new security reality
This also means that cyber hygiene and cyber defense will be increasingly important. You don’t need perfect (there is no perfect, just as there is no perfect home security system). You just need to be better than your neighbors and peers.
That’s the real game now. You don’t have to outrun the bear. You just have to not be the slowest one in the group.
Basic things suddenly matter a lot more:
- Employee cyber hygiene (trained, practiced, and tested!)
- Real email security
- Consistent patching and good, business-grade hardware
- Multi-factor authentication and other solid identity practices
- Backup validation (not just having backups… actually knowing they work)
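For the technically inclined, backup validation can start with something as simple as a script that checks a backup exists, is recent, and actually matches the source. Here is a minimal Python sketch; the file names and the 24-hour freshness window are illustrative assumptions, not a prescription:

```python
import hashlib
import time
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(original: Path, backup: Path, max_age_hours: float = 24.0) -> bool:
    """A backup only counts if it exists, is recent, and matches the source."""
    if not backup.exists():
        return False
    age_hours = (time.time() - backup.stat().st_mtime) / 3600
    if age_hours > max_age_hours:
        return False
    return sha256(original) == sha256(backup)

# Demo with throwaway files (hypothetical names).
src = Path("invoice.txt")
src.write_text("Q3 invoices")
bkp = Path("invoice.txt.bak")
bkp.write_text("Q3 invoices")     # a good copy
print(verify_backup(src, bkp))    # True: fresh and identical
bkp.write_text("corrupted!!")     # simulate silent corruption
print(verify_backup(src, bkp))    # False: contents no longer match
```

A real setup would restore from the actual backup system and compare, but the principle is the same: a backup you haven’t verified is a hope, not a backup.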
The attacker on the other side isn’t a guy in a hoodie in his basement anymore. It’s an automated system that can try a thousand variations while you’re eating lunch.
The internal risk nobody talks about
In addition, use of AI engines needs to be targeted (that is, done with specific purpose) and monitored internally.
Your employees are already using AI. They’re pasting emails, contracts, client data, troubleshooting logs, and more into tools to “save time.”
Sometimes that’s fine. Sometimes it’s a data leak. The problem isn’t the tool, but the lack of guardrails. If you don’t define how AI should be used inside your business, your team will define it for you… on the fly… with your data.
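One lightweight guardrail, for the technically inclined: scrub obviously sensitive values before anything gets pasted into an outside AI tool. Here is a minimal Python sketch; the patterns and placeholder labels are hypothetical and would need tuning to your own data:

```python
import re

# Hypothetical patterns; a real policy would be tuned to your business's data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive values with labeled placeholders
    before the text leaves your network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Bill jane.doe@acme.com, SSN 123-45-6789, at 803-555-0142."))
# → "Bill [EMAIL], SSN [SSN], at [PHONE]."
```

A filter like this is no substitute for a written AI-use policy, but it makes the point: the guardrail has to exist somewhere other than each employee’s best judgment.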
The trust problem is about to get worse
Finally, as we have talked about before on this blog, as AI usage becomes common and widespread, people will increasingly value real human interaction.
Again, we say this a lot, but social trust in institutions is at an all-time low. AI will make this problem far worse as the public grows cynical and suspicious. And the cynicism isn’t even about criminality; it’s about competence.
“Was this written by a person?”
“Did anyone actually check this?”
“Is this company real?”
Those questions are going to become normal.
A very real, non-AI example
Just as one (non-AI) example, look at what has happened in our industry (IT Managed Service Providers).
If you search, say, Aiken IT or Augusta IT, you’ll find a number of companies pop up. But very few of these are Aiken-based, Augusta-based, or even within three hundred miles of the CSRA. You’ll find IT companies from all over the country that have created local landing pages for Aiken and Augusta yet are based in Texas, Florida, Illinois, and Ohio. You probably won’t even notice unless you REALLY pay attention.
They will continue to present themselves as local providers. But when you need a reliable, consistent partner, or when you need an on-site visit, you’ll find that these providers are extremely expensive OR extremely unresponsive.
That’s because they have no local techs and will have to scramble to find one.
And “local” usually means someone in Atlanta, Greenville, or Charleston who has no idea who you are, what your business does, or what your network history is, and who is simply there to collect the huge fee they can charge for emergency-type services (guess who ends up paying that fee, by the way… it ain’t your IT MSP in Sarasota).
This isn’t speculation on our part. It happens. And it’s not uncommon.
Where this is all going
Situations like this will be increasingly commonplace as work is made more efficient.
AI lowers the barrier to entry for everything: content creation, marketing, sales, support, and, yes, deception.
Ultimately, people fall back on something very old-fashioned: Trust.
And local businesses, especially, will look to work with people they know, like, and trust. Not because it’s nostalgic, but because it’s practical. You’ll want to keep reinforcing that trust and community reputation as AI erodes it elsewhere.
Final thought
AI is not the problem. It’s an amplifier. It will amplify good operators, and it will amplify bad actors.
Don’t believe all the noise about AI. The businesses that win over the next few years won’t be the ones using ALL THE AI ALL THE TIME.
They’ll be the ones who:
- Use it intentionally, specifically, and with clear purpose
- Understand cybersecurity is a utility, not a “nice-to-have when we’re big enough”
- And remain unmistakably human in how they show up, focusing on social trust and reputation
That combination is going to get rare, and that’s exactly why it will matter.
