September 18, 2025
FraudGPT and the Future of Cyber Crime: Proactive Strategies for Protection


Generative artificial intelligence (GenAI) has firmly embedded itself in the workplace. As of 2024, more than two-thirds of organizations in every global region have adopted GenAI. And, as always, cyber criminals are eager to capitalize on a new and potentially powerful piece of technology. Over the past few years, a GenAI tool called FraudGPT has made phishing, hacking, and identity theft as simple as entering an AI prompt.
FraudGPT and similar tools are essentially democratizing cyber crime. Threat actors no longer need years of hacking experience or a programming background — they can leverage the power of GenAI to defraud unwary victims online.
While FraudGPT and its ilk represent a real threat, forward-thinking organizations can take proactive steps to protect their sensitive data. Outsmarting malicious AI requires a combination of powerful cybersecurity tools and savvy employees.
What is FraudGPT?
Think of FraudGPT as a dark mirror of ChatGPT and similar large language models (LLMs). With a few natural-language queries, a threat actor, regardless of technical skill, can exploit known vulnerabilities, craft targeted phishing messages, and spread inventive malware.
Security researchers first discovered FraudGPT in 2023. What they found was a program that functioned almost exactly like ChatGPT, except without any ethical safeguards. A threat actor can ask, in plain English, for a spear-phishing email, a copycat login site, a way to exploit a known vulnerability, or any number of other malicious tools. The reply, also in plain English, gives the user the information they need to carry out effective cyber attacks.
FraudGPT is available for as little as $200 per month, and it’s by no means the only malicious AI tool out there. LLMs are reasonably simple to create, and the tools to do so are widely available. With a good enough set of training data, any bad actor with some coding skills could create a destructive (and reasonably effective) LLM.
Threat actors are already using GenAI to carry out attacks. Phishing attacks have increased by more than 1,000% over the past three years, with targets ranging from multinational corporations to influential government agencies. Attackers also use deepfake video and audio to target finance, healthcare, telecoms, and aviation organizations. The average cost of falling for one of these scams is $450,000.
Strategies to defend against AI cyber threats
AI threat detection
Cybersecurity often boils down to an arms race between security researchers and threat actors, and AI technology is no exception. Just as cyber criminals are using AI to attack, cybersecurity companies are using AI to defend. Lookout, for example, uses AI and machine learning (ML) to identify threats, analyze app behavior, and block malware in real time.
GenAI can also help administrators find and address problems more quickly. Cyber threats emerge and evolve rapidly, and sifting through data on each new scam and vulnerability takes more time than a human being can reasonably spare. AI can analyze patterns and produce meaningful results in a fraction of the time.
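To make the idea concrete, here is a toy sketch of pattern-based detection: a text classifier trained on labeled messages, then used to score a new one. It assumes scikit-learn is installed, uses a four-message hypothetical training set, and is in no way Lookout's actual detection pipeline.

```python
# Toy sketch of ML-based phishing detection: train a text classifier
# on labeled messages, then score new ones. Purely illustrative --
# real systems train on millions of samples, not four.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set
messages = [
    "Your account is locked. Verify your password here immediately",
    "Wire $5,000 today or the deal falls through - CEO",
    "Lunch meeting moved to 1 PM, see updated invite",
    "Attached is the Q3 budget spreadsheet you asked for",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

suspect = "Urgent: confirm your password at this link to keep access"
print(model.predict_proba([suspect])[0][1])  # probability the message is phishing
```

Production systems combine text with app behavior, network signals, and more, but the principle is the same: a model surfaces suspicious patterns far faster than manual review ever could.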
Zero-trust frameworks
While AI-powered scams vary in execution, the end goal is often the same: Threat actors want to steal legitimate login credentials. With those, they can log into an organization's network, identify valuable data to encrypt or steal, and then extort the organization, usually for financial gain.
The best way to prevent unauthorized logins is to implement zero-trust solutions in your network. A zero-trust philosophy assumes that any login — even one with the correct username and password — could be from an unauthorized user. Legitimate employees must prove their identities through frequent challenges, including:
- Requiring multi-factor authentication (MFA) via short message service (SMS) or an authenticator app
- Forcing reauthentication when users try to access certain apps or data
- Denying logins from unknown devices, locations, or IP addresses
- Restricting file access for remote users
Legitimate employees may find these measures inconvenient, but they can almost always provide the necessary information with relative ease or request help from the IT team. Threat actors must work much harder to bypass these controls.
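As a concrete illustration of that philosophy, here is a minimal Python sketch of a zero-trust login check. The LoginAttempt and evaluate_login names, the rule set, and the sample data are all hypothetical simplifications; real zero-trust platforms weigh far more signals and re-evaluate them continuously rather than once at login.

```python
# Minimal sketch of a zero-trust login decision. Hypothetical names
# and rules -- a simplified illustration, not a product implementation.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    username: str
    mfa_passed: bool
    device_id: str
    source_ip: str

KNOWN_DEVICES = {"alice": {"laptop-7f3a"}, "bob": {"phone-91c2"}}
TRUSTED_IP_PREFIXES = ("10.", "192.168.")  # e.g., corporate VPN ranges

def evaluate_login(attempt: LoginAttempt) -> str:
    """Never trust by default: every signal must check out."""
    if not attempt.mfa_passed:
        return "deny"  # a correct password alone is never enough
    if attempt.device_id not in KNOWN_DEVICES.get(attempt.username, set()):
        return "challenge"  # unknown device -> force reauthentication
    if not attempt.source_ip.startswith(TRUSTED_IP_PREFIXES):
        return "challenge"  # unfamiliar network -> step-up verification
    return "allow"

print(evaluate_login(LoginAttempt("alice", True, "laptop-7f3a", "203.0.113.5")))
# -> "challenge": right user, right device, but an unrecognized IP
```

Note the default posture: a correct password on its own never yields "allow." Every additional signal either confirms the user or triggers a step-up challenge.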
Employee awareness training
Malicious GenAI technologies can aid in almost any kind of cyber attack, but FraudGPT and similar LLMs tend to focus on social engineering: phishing, smishing, phony login websites, and related tactics. One of the best ways to safeguard your organization's sensitive information is therefore to train your employees to spot these schemes. Unfortunately, much existing training emphasizes cues that matter far less now that threat actors everywhere have access to tools like FraudGPT. For example, employees are often told to treat spelling and grammar errors as signs of a scam, but GenAI-written messages rarely contain them.
What makes malicious LLMs dangerous is that they can “learn” how a person speaks by observing that individual’s behavior across the internet. A threat actor using FraudGPT could create a convincing facsimile of a message from a coworker. Workers need to be especially aware of:
- Spear-phishing, which uses personal information to fool specific individuals
- Smishing, where threat actors contact targets via text messages instead of email
- Whaling, also called CEO fraud, in which attackers pretend to be a company executive to intimidate the target
While AI allows threat actors to craft more deceptive messages, you can still train your staff to spot signs of fraud. Hold regular cybersecurity seminars and remind employees to look out for common signs of scams. Even FraudGPT can't hide incorrect email addresses, suspicious URLs, artificial deadlines, and vague threats. It also can't prevent employees from checking in with real people before they log into an unfamiliar website or send someone money.
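Some of these red flags lend themselves to automated checks as well. Below is a simple, stdlib-only Python sketch that flags mismatched sender domains, links pointing outside trusted domains, and urgency language. The TRUSTED_DOMAINS set, URGENCY_WORDS list, and red_flags helper are hypothetical examples, not a product feature.

```python
# Stdlib-only sketch of the red flags described above: mismatched
# sender domains, suspicious links, and urgency language.
# The domain and keyword lists are hypothetical examples.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com"}  # your real corporate domain(s)
URGENCY_WORDS = {"immediately", "urgent", "final notice", "act now"}

def red_flags(sender: str, body: str) -> list[str]:
    flags = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain not in TRUSTED_DOMAINS:
        flags.append(f"sender domain not recognized: {sender_domain}")
    for url in re.findall(r"https?://\S+", body):
        host = urlparse(url).hostname or ""
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            flags.append(f"link points outside trusted domains: {host}")
    if any(word in body.lower() for word in URGENCY_WORDS):
        flags.append("urgency language detected")
    return flags

print(red_flags("it-support@examp1e.com",
                "Urgent: reset your password at https://examp1e.com/login immediately"))
```

A scanner this crude would miss plenty, so treat it as a teaching aid: the checks it automates are the same ones employees should run in their heads before clicking.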
Learn about AI threats in hands-on lab sessions
Even if law enforcement eventually shuts down FraudGPT, something similar will replace it. As long as GenAI is widely available, cyber criminals will use it to enhance their attacks. To learn how to identify and counteract AI threats, register for a Lookout Mobile Endpoint Security Hands-on Lab session. Every Wednesday at 11 AM ET, experts from Lookout lead workshops on a variety of timely topics, including the role of AI in cyber crime. You can also schedule a demo with us to see how Lookout software could enhance your overall cybersecurity posture.
