The complaint sounds familiar. “I’m disappointed that you are working to incorporate AI garbage into the site,” said one annoyed person in an online message. “No-one is asking for this—we want you to improve the site, stop charging for new features.” Only this time, it’s not a regular user but a member of a cybercrime forum expressing their disdain for low-quality generative AI.
Ben Collier, a security researcher at the University of Edinburgh, has noticed increasing pushback against the use of generative AI on underground cybercrime forums and in hacking groups. His recent study found members complaining that basic cybersecurity concepts are being explained in bullet points, moaning about the volume of low-quality posts, and worrying that Google’s AI search is driving down forum traffic.
These online spaces, often Russian in origin, have long been a haven for scammers trading stolen data and advertising hacking jobs. While some users try to build their reputations by posting AI-generated hacking explainers, others are irritated by the influx of AI posts. “I come here for human interaction,” wrote one frustrated poster.
Interest in using AI for cybercrime has surged since ChatGPT launched in late 2022. Both sophisticated hackers and less capable ones have been trying to use AI in their attacks, with some organized fraudsters boosting their operations with realistic AI face-swapping technology and social engineering messages translated by AI.
Hackers have also discussed the potential capabilities of the latest frontier AI models. Some cybercriminals are wary of AI-generated projects sold on forums and marketplaces because of their weaknesses and vulnerabilities. Despite this frosty reception, others see potential in an AI assistant that could help structure posts and improve grammar.