From The Desk of the CISO
The Latest AI Cyber Threats and How to Defend Against Them
By Rob Ashcraft, CISO at KeyStone Solutions
The advent of Artificial Intelligence (AI) has dramatically reshaped the cybersecurity landscape, introducing both powerful defensive tools and sophisticated new threats. Cybercriminals are now leveraging AI to orchestrate attacks with unprecedented speed, scale, and stealth. One of the most alarming developments is the use of AI in automated phishing campaigns, where large language models (LLMs) generate highly convincing, personalized emails, messages, and even deepfake audio or video. These AI-powered scams are often indistinguishable from legitimate communications, significantly increasing the success rate of social engineering attacks and enabling even novice attackers to launch effective campaigns.
Beyond enhanced social engineering, AI is being weaponized in other forms. Malicious AI can be used to develop adaptive malware that evades traditional signature-based detection by learning and modifying its behavior. Adversarial AI attacks target the very AI systems used for defense, poisoning the baseline data they learn from or manipulating inputs to mislead threat detection algorithms. Furthermore, the proliferation of AI tools sold as a service on the dark web has lowered the barrier to entry for cybercrime, allowing attackers to automate their operations and accelerate the development of custom intrusion tools, including ransomware and sophisticated bots that mimic human behavior to bypass security measures.
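To make the baseline-poisoning idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the traffic numbers, the three-standard-deviation rule, and the scenario are assumptions chosen for clarity, not details of any real detection product. It shows how an attacker who can feed inflated readings into a detector's learning window can stretch the baseline until a genuinely malicious spike no longer looks anomalous:

    import statistics

    # Toy anomaly detector: flag any reading more than three standard
    # deviations above the mean of its learned baseline window.
    def is_anomalous(baseline, reading):
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline)
        return reading > mean + 3 * stdev

    # Clean baseline: normal outbound traffic, in MB per hour.
    clean_baseline = [48, 52, 50, 47, 53, 49, 51, 50]
    exfiltration = 500  # the large transfer an attacker wants to hide

    print(is_anomalous(clean_baseline, exfiltration))     # True: flagged

    # Poisoned baseline: during the learning period the attacker slowly
    # feeds in inflated readings, stretching what "normal" looks like.
    poisoned_baseline = clean_baseline + [200, 320, 410, 480]

    print(is_anomalous(poisoned_baseline, exfiltration))  # False: slips through

The defensive takeaway is that baselines need integrity controls of their own: outlier filtering on training data, change detection on the learned profile, and periodic revalidation against a trusted reference.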
Deepfake technology, powered by generative AI, presents a particularly nefarious threat. Cybercriminals are using hyper-realistic AI-generated video, audio, and images to impersonate company personnel and legitimate third parties for fraudulent financial transactions, identity theft, and extortion. These deepfakes can bypass traditional biometric security systems and are increasingly difficult to detect with the human eye or ear. The ability to create convincing fake content, whether a CEO requesting an urgent wire transfer, a fake job applicant gaining insider network access, or a fraudulent change to ACH payment information, poses significant risks to financial integrity and corporate reputation.
Defending against these evolving AI cyber threats requires a dynamic, multi-layered approach. Organizations must prioritize robust identity and access controls, including mandatory multi-factor authentication (MFA) and least-privilege policies, as traditional password security alone is no longer sufficient. Regular security awareness training is crucial to educate employees about the new forms of phishing, social engineering, and deepfake scams, emphasizing critical thinking and independent verification of unusual requests. Furthermore, incident response plans must be continuously updated and rigorously tested to account for the rapid escalation and unpredictable nature of AI-powered attacks.
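As one concrete illustration of verification as a control, the following Python sketch encodes the kind of fail-closed rule a payments workflow could enforce against deepfaked wire or ACH change requests. The data model and field names here are hypothetical, invented for this example rather than drawn from any real system; the point is the two checks: confirmation via contact details already on file, never those supplied in the request, plus a fresh MFA challenge from the human approver:

    from dataclasses import dataclass

    # Hypothetical request record; field names are assumptions for
    # illustration, not any real payment platform's API.
    @dataclass
    class PaymentChangeRequest:
        vendor_id: str
        new_account: str
        requested_via: str         # e.g., "email", "phone", "portal"
        callback_verified: bool    # confirmed via a number already on file?
        approver_mfa_passed: bool  # approver completed a fresh MFA challenge?

    def may_process(req: PaymentChangeRequest) -> bool:
        # Never act on contact details supplied inside the request itself.
        if not req.callback_verified:
            return False  # no out-of-band confirmation, no change
        # A cloned voice or deepfaked video cannot complete the approver's MFA.
        if not req.approver_mfa_passed:
            return False
        return True

    # An urgent emailed "CEO" request with no callback fails closed.
    urgent = PaymentChangeRequest("vendor-042", "XXXX-9901", "email", False, True)
    print(may_process(urgent))  # False

Codifying the rule matters because it removes the decision from the moment of social pressure; the system, not the targeted employee, says no.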