Despite OpenAI’s geofencing restrictions, cybercriminals can still obtain unrestricted access to ChatGPT by using stolen premium accounts.
The security implications of OpenAI’s ChatGPT have been a significant subject of debate among cybersecurity professionals since its release.
The chatbot is built on the GPT family of large language models, trained on vast amounts of internet text, and generates responses that closely mimic human language.
A recent report by Check Point Research has raised fresh concerns about the security of OpenAI’s ChatGPT. Stolen premium accounts for the chatbot are being traded on the dark web, enabling hackers to bypass OpenAI’s geofencing restrictions and gain access to the service.
To gain unauthorized access, cybercriminals are using “account checker” tools that apply brute-force techniques to guess passwords and login credentials.
Tools Used to Trade Stolen ChatGPT Premium Accounts
Cybercriminals used brute-force attacks and ChatGPT account checker tools to break into accounts and scrape user data. Legitimate automation and testing tools such as Selenium and SilverBullet, normally used for web testing, were repurposed for these attacks, and automated pen-testing tools were used to exploit stolen ChatGPT credentials.
Stolen account credentials were checked using automated tools capable of verifying up to 200 credentials per minute, then sold or given away for free on underground forums. These forums also offered services for confirming the validity of the stolen details.
Cybercriminals used illegally obtained ChatGPT Plus account credentials to sell “lifetime upgrades” for $59.99, well above OpenAI’s legitimate monthly price of $20. Shared access to these accounts was offered to other criminals at a discounted rate of $24.99. The underground forums even carried buyer reviews of the stolen ChatGPT premium accounts, further complicating efforts to shut down the trade.
Safety Measures and Regulatory Responses
Tips for staying safe while using ChatGPT include keeping sensitive data out of prompts and creating a separate email account for the service. Meanwhile, regulators are responding: Italy has banned the chatbot, the UK is reviewing its regulations, and Canada is investigating a complaint alleging that ChatGPT collected and used personal information without authorization.
According to Philippe Dufresne, Privacy Commissioner of Canada, AI technology’s impact on privacy is a priority for his office. He stressed the need to keep pace with rapid technological advances, and indeed to stay ahead of them.