A user has been advertising Evil-GPT, a generative AI chatbot built with malicious intent, for sale on hacker forums.
According to the seller, Evil-GPT is a Python-based replacement for WormGPT aimed at cybercriminals. Priced at $10, Evil-GPT promises to give users a way to generate malware-related data and code.
AI Battleground: Unleashing Evil-GPT and its Kin
WormGPT became well known for its capabilities in blackhat activities. Developed by an unidentified author and usable anonymously on the Dark Web, it is difficult for law enforcement to trace and to identify the hackers who use it.
In August, an advertisement for Evil-GPT appeared on the Dark Web, continuing the recent trend of AI-based tools designed to meet the needs of hackers.
This phenomenon is not unique: Wolf GPT, a ChatGPT variant marketed as a cutting-edge AI tool for malicious purposes, demonstrated hackers’ strong interest in exploiting such technology.
Hackers adapt quickly as governments attempt to regulate AI chatbots, highlighting the ongoing struggle to strike a balance in AI’s ethical use.
The prevalence of harmful AI chatbots like Evil-GPT and its competitors underscores how the cybersecurity landscape is constantly evolving. As hackers deploy increasingly sophisticated tools, the cybersecurity sector faces the challenge of staying one step ahead to protect individuals and businesses from the potential harm these technologies can cause.