
WormGPT: Don’t Panic!


By Esme Greene

Aug 22, 2023

Large language models (LLMs) have grown more popular, raising concerns about their potential use in malicious activities such as phishing and malware creation. On the Dark Web, LLMs like “WormGPT” and “FraudGPT” advertise the ability to carry out cyberattacks and fraud, which has led to alarming headlines. A closer examination, however, reveals a more nuanced reality.

Unimpressive AI Threat: WormGPT and FraudGPT Lack Conviction

Built on GPT-J, an older LLM, WormGPT is weakly safeguarded and struggles to produce convincing phishing emails. Compared with more sophisticated models such as OpenAI’s GPT-4, it performs poorly and has serious limitations in generating coherent text. These weak results were confirmed in testing by the cybersecurity firm SlashNext.

FraudGPT’s claim to be “cutting-edge” is undermined by the vagueness and exaggeration of its marketing. In practice, it produces unconvincing messages, casting further doubt on its capabilities.

Would-be attackers also have limited access to these malicious LLMs because of restricted distribution and high subscription prices. In addition, forums routinely remove posts advertising them for rule violations, making it even harder to sign up.

Despite the alarming buzz around AI-driven hackers, the real-world impact may not be as severe as anticipated. The restricted availability of these LLMs prevents widespread deployment, and they lack the sophistication needed for large-scale attacks; they are unlikely to bring down governments or companies. As AI technology develops, it is essential to watch for emerging risks while keeping its current constraints in mind.

The likelihood of cyberattacks built on large language models may rise as AI technology matures. Even though today’s malicious LLMs may not pose an immediate large-scale threat, cybersecurity experts and researchers must stay ahead of the curve, anticipate emerging threats, and build strong defenses. Proactive measures, ongoing research, and collaboration will be essential to protect digital landscapes against growing AI-driven cyber threats.

 

Esme Greene

Esme brings a wealth of knowledge and experience to our website, specializing in all aspects of DarkWeb security. With a deep understanding of the intricate workings of the DarkWeb and its associated cybersecurity risks, Esme curates insightful and informative content for our readers.