The cyber threat landscape has experienced a significant shift with the emergence of generative AI models. Netenrich’s research team recently analyzed Darknet forums and discovered the widespread adoption of FraudGPT, an artificial intelligence chatbot service, among cybercriminals.
FraudGPT: AI’s Dark Side Unleashed
FraudGPT is designed exclusively for malicious purposes, offering capabilities such as crafting phishing emails, hacking websites, and stealing bank card data. Access to the service is currently sold on various black markets and through the creator’s Telegram channel.
The chatbot’s promotional materials showcase its ability to compose convincing emails that increase the likelihood of recipients clicking on malicious links, making it an effective tool for launching phishing or business email compromise (BEC) attacks.
A subscription to FraudGPT costs $200 per month or $1,700 per year and grants access to a range of malicious functions: writing malicious code, creating undetectable viruses, finding vulnerabilities, crafting phishing pages, generating scam emails, locating data breaches, and even tutorials on programming and hacking.
FraudGPT is not an entirely novel concept: another AI chatbot, WormGPT, was recently advertised on dark web forums. Cybercriminals are adapting AI models for illicit purposes and selling access to fellow criminals.
The proliferation of FraudGPT and similar malicious tools raises serious concerns about the misuse of artificial intelligence. These chatbots are currently used primarily for phishing attacks, but there is a growing fear that they could evolve to conduct the entire attack cycle automatically, leveraging the speed and methodical precision of AI. This development poses a serious threat to cybersecurity and demands increased vigilance in countering AI-driven cybercrime on the Darknet.