Phishing and AI

AI has captured the public imagination, particularly since ChatGPT brought it to our fingertips. AI has many great possibilities which will undoubtedly improve our lives over time. But I believe that, like many technologies before it, AI will also strengthen the hand of criminals. It may improve the effectiveness and cadence of phishing attacks, and illegal use of AI in cloud platforms may create headaches for cloud account managers.

Over the past few years, multiple data breaches have created a large trove of data for criminals to exploit. Drawing on the dark web, data from malware attacks, phishing campaigns, social engineering, and open-source intelligence, criminals are now better positioned to make data-driven choices about who their victims will be. Not only can they choose the most promising individuals to attack, but they can combine that data with data about institutions with weaker or less effective controls, leading to a perfect storm: individuals who are susceptible to phishing at institutions that are poor at detecting it.

Using machine learning, criminals may be able to process large amounts of this data to find out what they need to know. Algorithms such as Logistic Regression, Random Forest, Support Vector Machines (SVM), Neural Networks and Naive Bayes could help identify targets, provided they have the data to feed them. Bias concerns don't apply unless crime syndicates favour certain groups of victims or are worried about their reputation in the marketplace. Criminals may not be concerned about fairness, transparency, explainability, societal impact, well-being, and other ethical considerations. The hard bit will be data wrangling: the data sources are disparate and undoubtedly in different formats, with many useful data elements missing or incorrect. Finding data scientists, who are scarce in the non-criminal world, may prove a real challenge in the criminal underworld.
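To make the data-wrangling point concrete, here is a minimal sketch of the defensive mirror image of that pipeline: a security team scoring which of its own users are most susceptible to phishing so awareness training can be targeted. Everything here, the column names, the records, and the "clicked a test phish" label, is invented for illustration; the point is how much of the code is cleaning and merging rather than modelling.

```python
# Minimal sketch: merging disparate records and training a susceptibility
# classifier. All data and column names are invented for illustration; a
# real pipeline would spend most of its effort on the wrangling step.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Two "disparate sources" with different keys, formats, and gaps.
hr = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", "c@x.com", "d@x.com"],
    "tenure_years": [1, 7, None, 3],          # missing value to clean up
    "role": ["finance", "it", "finance", "hr"],
})
sim = pd.DataFrame({
    "EMAIL": ["A@X.COM", "b@x.com", "c@x.com", "d@x.com"],  # inconsistent case
    "clicked_test_phish": [1, 0, 1, 0],       # label from a phishing simulation
    "emails_per_day": [120, 40, 95, 60],
})

# Wrangling: normalise the join key, merge, impute, and encode.
sim["email"] = sim.pop("EMAIL").str.lower()
df = hr.merge(sim, on="email")
df["tenure_years"] = df["tenure_years"].fillna(df["tenure_years"].median())
X = pd.get_dummies(df[["tenure_years", "emails_per_day", "role"]])
y = df["clicked_test_phish"]

# A toy logistic regression; with four rows this only demonstrates the
# shape of the workflow, not a usable model.
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])  # per-user susceptibility scores
```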

Once machine learning produces a list of victims, criminals can craft prompts that take account of the victims' preferences and feed them into Generative AI to produce phishing content that sounds professional and convincing. Generative AI is widely available, but guard rails may make it hard to obtain such content without phrasing prompts carefully, which requires some skill. Generative AI may also help in writing code for malicious ends.

Besides scarce skills, other barriers exist to the mass criminal use of AI. Technology is expensive, assuming you pay for it, which criminals probably will not. Cloud providers have warned that compromised accounts are being used for Bitcoin mining, and there is no reason to believe criminals cannot do the same with AI technologies. AI is expensive in the cloud, and victims could include account holders left paying for unauthorised usage of cloud products.
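One practical defence for account managers is simply watching for unexpected spend. The sketch below assumes you can export daily cost totals from your provider's billing tooling (the figures here are made up) and flags days where spend jumps well above the recent average.

```python
# Minimal sketch: flag days where cloud spend jumps well above recent
# history, which can surface unauthorised mining or AI workloads. Assumes
# daily cost totals exported from the provider's billing tooling.
from statistics import mean, stdev

daily_spend = [102, 98, 110, 105, 99, 101, 487, 512]  # invented figures

WINDOW = 5      # days of history to compare against
THRESHOLD = 3   # standard deviations above the mean counts as anomalous

for day in range(WINDOW, len(daily_spend)):
    history = daily_spend[day - WINDOW:day]
    mu, sigma = mean(history), stdev(history)
    if daily_spend[day] > mu + THRESHOLD * sigma:
        print(f"Day {day}: spend {daily_spend[day]} vs recent mean {mu:.0f} - investigate")
```

Note that the second spike slips through because the first one has already dragged the rolling average up; even simple detectors suffer when the data underneath them shifts, which leads neatly to the next point.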

Access to good, real-time data is another potential challenge for criminals: data changes, and they must work harder to obtain fresher data if they want to avoid model drift. Past victims improve their posture, and institutions improve their phishing detection and fraud controls in response to crime. Companies seeking to defeat phishing may also use AI to seek out and prevent it.
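On that defensive note, here is a toy example of AI seeking out phishing: a Naive Bayes text classifier (one of the algorithms listed earlier) trained on a handful of invented messages. A real deployment would train on thousands of labelled emails and retrain regularly on fresh ones, which is exactly the counter to the drift problem above.

```python
# Toy sketch of the defensive side: a Naive Bayes classifier that scores
# messages as phishing or legitimate. The training messages are invented;
# a real system would retrain regularly on fresh mail to limit model drift.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account or it will be suspended",
    "Your invoice is attached, click here to pay immediately",
    "Team lunch moved to 1pm on Thursday",
    "Minutes from yesterday's project meeting attached",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(messages, labels)

probe = ["Click here urgently to verify your payment details"]
print(clf.predict_proba(probe))  # columns: [p(legitimate), p(phishing)]
```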

We are not on the cusp of an AI-driven crime wave, but I don't believe institutions and people should sit back and wait. Bad actors will find ways to exploit this technology, I am sure. If they do so successfully, they will have much better data with which to identify high-quality victims for phishing and other social engineering. Cloud platforms provide AI products, and criminals have not been above exploiting badly managed accounts for Bitcoin mining; they will be quite happy to crank up machine learning tools in the same way, so securing cloud accounts is something to take seriously. The speed and quality of attacks may increase, meaning the speed and quality of defences must improve to match. A lack of skills may slow the pace, but plenty of courses and information about AI are available, often for free, and I am sure criminals learn as well as the rest of us!

These words are my own, and I may be wrong about criminals using, or trying to use, AI, but I don't see why they won't give it a go. I wrote this short article with help from ChatGPT and Grammarly, hoping to provoke some thought on the subject. I look forward to reading your comments if you have them.