Cybersecurity experts say that hackers are seeking “full control” of artificial intelligence tools.
Cybersecurity experts claim that Dark Web forums have become overrun with discussions about ChatGPT as hackers look for ways to manipulate the AI chatbot.
The number of new posts regarding ChatGPT on the Dark Web, a part of the internet inaccessible through standard web browsers, increased sevenfold between January and February, while thread popularity increased by 145%.
ChatGPT: a hot topic everywhere
The forums cover a range of topics, including how to hack ChatGPT, how to make the AI produce malware, and how to use it in cyberattacks.
In a separate study, the cybersecurity firm Norton warned that criminals would be tempted to use the sophisticated chatbot because it can generate responses that are indistinguishable from those of a human.
One possible AI-assisted attack is phishing, which involves deceiving individuals into divulging sensitive information that could compromise their online accounts.
To circumvent the safeguards put in place by ChatGPT’s developer OpenAI, hackers have been working to persuade the AI to execute such attacks.
Although the AI research company states that its technology is “trained to deny unsuitable requests,” several flaws have already been discovered that allowed users to create simple malware.
Is the public alarmed about such a problem?
Worryingly, discussions on the Dark Web have shifted over the past month from straightforward “cheats” and workarounds – intended to coax ChatGPT into doing something amusing or surprising – to seizing full control of the tool and weaponizing it.
ChatGPT attracted 100 million users worldwide within two months of its release, making it the fastest-growing app to date.