AI is sweeping across industries like a wave, opening up new frontiers and leaving regulators scrambling in its wake. It’s easy to see why – with tools like ChatGPT on the rise, the line between humans and machines blurs more each day. However, just when we thought we had our hands full with job displacement debates and drafting digital policies, a new issue sneaks up – ChatGPT accounts being stolen and traded on the dark web.
Some crafty cyber thieves have found a new market, not for gold or diamonds, but for AI-powered personas. These stolen ChatGPT accounts are changing hands in shadowy digital auctions, fueling the rise of cybercrime and identity theft.
Fresh findings from the cyber-sleuths at Singapore-based Group-IB reveal that over 100,000 ChatGPT accounts have been hijacked by info-stealing malware and are up for grabs in the illegal bazaars of the dark web. Forty percent of these leaked accounts trace back to the Asia-Pacific region. India-based credentials took the dubious top spot, contributing more than 12,500 to the total.
The United States isn’t far behind, ranking sixth with nearly 3,000 leaked logins. France, seventh overall, holds the unfortunate honour of being the front-runner for Europe. It’s a stark reminder that the consequences of cybercrime ripple across borders and do not discriminate by income or profession.
Once inside, these digital trespassers get a free pass to all the chats and data stored in the account. In the blink of an eye, a casual chat with your AI buddy can become fodder for some bad actor on the dark web. It’s a reminder that your conversations with your AI pal are not as private as you may have thought, and that sensitive information should never be shared with AI-powered bots or with suspicious actors you come across online.