The rise of artificial intelligence, particularly Large Language Models (LLMs), has opened new frontiers for innovation—but also for cybercrime. Threat actors are now systematically misusing these advanced tools to develop and scale sophisticated hacking operations, according to recent research from Cisco Talos.
LLMs, widely used for legitimate tasks, are being repurposed to automate phishing, malware generation, vulnerability scanning, and exploitation. Cybercriminals are no longer relying solely on traditional methods; instead, they’re leveraging AI to lower technical barriers, putting advanced capabilities within reach of a broader base of bad actors.
Platforms like Hugging Face now host over 1.8 million models, offering fertile ground for malicious use. Despite safety measures built into mainstream models, hackers employ a range of tactics to bypass these restrictions. These include using uncensored or custom-built models like FraudGPT and DarkestGPT, which offer subscription-based access to tools designed specifically for cybercrime.
Cisco Talos reports that these criminal AI tools are being openly promoted on dark web forums. Some LLMs are integrated with external tools such as Nmap, enabling attackers to automate everything from reconnaissance to exploitation in a seamless manner.
A critical technique in this growing threat is jailbreaking—a process that tricks LLMs into ignoring their ethical safeguards. Cybercriminals use methods such as Base64 encoding, character substitution (leetspeak), multi-language prompts, and role-play scenarios to bypass restrictions. In one case, models like WhiteRabbitNeo were observed generating uncensored malicious code with no safety filters.
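To see why an obfuscation trick like Base64 encoding can slip past naive safeguards, consider this minimal sketch (the prompt string is a harmless placeholder, not an actual jailbreak): the readable words become an opaque token that a simple keyword filter never matches, yet the original text is fully recoverable.

```python
import base64

# A benign placeholder standing in for any prompt text; the point is
# that Base64 turns readable words into an opaque string that a naive
# keyword filter will not match.
prompt = "describe the payload"

# Encode the prompt: the result contains none of the original words.
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")

# A model (or attacker) can trivially reverse the transformation.
decoded = base64.b64decode(encoded).decode("utf-8")

print(encoded)             # opaque Base64 string, e.g. "ZGVzY3JpYmUg..."
print(decoded == prompt)   # True: the content survives the round trip
```

Because the transformation is lossless and trivially reversible, filters that only inspect the surface text of a prompt provide little protection on their own.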
Tactics such as meta-prompting, context manipulation, and disguising harmful code as mathematical problems allow attackers to exploit the LLMs’ core functionality. These prompts often confuse the models into responding as if the malicious request were educational or harmless.
What’s more, AI-driven hacking platforms offer attackers not just technical assistance but also scale—enabling low-skilled users to launch effective cyberattacks while maintaining operational anonymity. With tools like DarkestGPT charging as little as 0.0015 BTC per month, access to powerful, unrestricted AI is becoming increasingly democratized within the cybercrime world.
This new wave of AI-enhanced hacking marks a dramatic evolution in the threat landscape, underscoring the urgent need for tighter controls, real-time monitoring, and responsible AI deployment to prevent widespread abuse.
Stay ahead of emerging cybersecurity threats. For the latest insights and updates on cloud security, follow SOC News.
News Source: Cybersecuritynews.com