
Fraud and Fraud Prevention in the Age of AI
Dr Maurice Tse and Mr Clive Ho
25 June 2025
Artificial intelligence (AI) is fast becoming a core technology for optimizing efficiency and driving innovation, finding wide applications across sectors such as healthcare, business operations, and public safety. However, like two sides of the same coin, these advances also present opportunities for criminals. As AI technology grows more sophisticated, it is increasingly difficult to guard against hacking, scams, blackmail involving fabricated videos, and the dissemination of disinformation orchestrated by criminal organizations.
Particularly egregious is the abuse of AI in the financial field. By 2027, generative AI is projected to quadruple scam-related losses worldwide. The past decade has witnessed not only an acceleration of digitalization in the financial and banking sectors but also the coronavirus pandemic’s entrenchment of digital banking’s dominant position. This shift has clearly enhanced both service efficiency and business volume, but it has simultaneously given criminals opportunities to commit fraud. In 2023, approximately US$3.1 trillion in illicit funds passed through the global financial system, linked to activities such as human trafficking, drug dealing, and terrorist financing. Losses from bank fraud in the same year were estimated at US$485.6 billion.
As the US Department of the Treasury pointed out in 2024, existing financial risk management frameworks may be insufficient to address the challenges posed by emerging AI technologies. This means that only by using AI against AI can an effective defence mechanism be built.
In this day and age, organized scammers rely on generative AI tools, rather than humans, to craft near-indiscernible phishing emails and deepfake frauds. Last year, there were multiple fraud cases in which impersonation of senior corporate executives duped company staff into remitting vast sums of money to fake accounts. This demonstrates that generative AI has become a key tool for scammers to bypass traditional security measures and manipulate trust. A TransUnion credit report reveals an 80% surge in digital fraud compared with pre-pandemic levels, with credit card scams rising 76% and account takeovers soaring between 81% and 131%.
The US Federal Trade Commission reports that losses from scams broke the US$10 billion mark in 2023, up US$1 billion from the previous year. According to the Nasdaq Global Financial Crime Report, fraud scams and bank fraud schemes in the same year totalled more than US$485 billion in projected losses worldwide (see Note). People aged 20 to 29 fall victim at an even higher rate than those aged 70 and above, indicating that scammers no longer target only the elderly.
“Rug pull” scams are now common in cryptocurrency investments: investors lose every penny when a currency’s developers shut down their projects and flee with all the funds. The globalization and professionalization of organized crime have created a new form of commercialized crime, such as “crime as a service”. INTERPOL has reported that some victims have been lured by fake job advertisements and trafficked to fraud centres in Southeast Asia, South America, and elsewhere. The combination of technology and human exploitation has given rise to a large-scale, industrialized fraud supply chain.
The advent of technologies like synthetic identity generators and automated cryptocurrency account setup tools has not only expedited money laundering but may also jeopardize the financial system as a whole and facilitate the expansion of transnational criminal networks. Money laundering has become a commercialized service, offering tiered operational solutions according to what clients pay. At various stages of the laundering process, high-end customers can use low-activity accounts and conduct multiple small transactions through separate money mule networks to evade regulatory scrutiny.
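This structuring tactic, splitting large sums into transfers that stay just under reporting thresholds, is precisely the kind of pattern that rule-based monitoring can surface. Below is a minimal Python sketch of such a detector; the threshold, window, and account data are all invented for illustration and do not reflect any particular institution’s rules.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical parameters, chosen only for illustration.
REPORTING_THRESHOLD = 10_000   # assumed reporting threshold
NEAR_FRACTION = 0.9            # "just under": within 90-100% of the threshold
WINDOW = timedelta(days=7)     # look-back window
MIN_HITS = 3                   # near-threshold transfers needed to raise a flag

def flag_structuring(transactions):
    """transactions: iterable of (account_id, timestamp, amount) tuples."""
    near_misses = defaultdict(list)
    for account, ts, amount in transactions:
        if NEAR_FRACTION * REPORTING_THRESHOLD <= amount < REPORTING_THRESHOLD:
            near_misses[account].append(ts)

    flagged = set()
    for account, stamps in near_misses.items():
        stamps.sort()
        # Sliding window: MIN_HITS near-threshold transfers within WINDOW.
        for i in range(len(stamps) - MIN_HITS + 1):
            if stamps[i + MIN_HITS - 1] - stamps[i] <= WINDOW:
                flagged.add(account)
                break
    return flagged

txns = [
    ("mule-01", datetime(2025, 6, 1), 9_500),
    ("mule-01", datetime(2025, 6, 2), 9_800),
    ("mule-01", datetime(2025, 6, 4), 9_700),
    ("acct-02", datetime(2025, 6, 1), 120.50),
]
print(flag_structuring(txns))  # {'mule-01'}
```

A real monitoring system would of course correlate such signals across the separate mule accounts mentioned above, which is exactly why criminals spread the transactions out in the first place.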
Faced with enormous volumes of heterogeneous data, financial institutions are often hard-pressed to identify irregularities. Meanwhile, criminals exploit these overwhelming volumes of information to devise hard-to-detect fraud schemes. AI, in the form of large language models (LLMs), video generators, and biometric identification technology, is widely used at every stage, from initial reconnaissance and analysis of defence-system vulnerabilities to the optimization of fraud patterns. The synthesized content can then be misused for criminal purposes, such as money laundering and fraudulent title deed activities.
Scams conducted through online meetings with impersonated financial consultants can clearly have a severe impact on financial services. Existing security measures, including biometric recognition and third-party data verification, now face formidable challenges due to the rapid evolution of AI technology.
It is undeniable that the misuse of generative AI is growing more serious than ever, particularly in scams and cybercrime. In June 2023, WormGPT, a malicious counterpart of ChatGPT with illicit capabilities, made its debut on the dark web. FraudGPT has been revealed as an LLM specifically designed to identify system loopholes, write malicious code, and automatically generate phishing emails. Since the introduction of ChatGPT-4, “jailbreak version” models such as BlackHatGPT and “jailbreaking-as-a-service” platforms have emerged, advancing the harmful use of AI.
Currently, financial institutions rely heavily on mobile devices and mobile banking apps to conduct digital business. While efficiency and convenience have improved significantly, information security risks have also escalated. From user identity verification to one-time passwords, the procedures typically depend on a single device. If that device’s SIM card is compromised by a virus or malicious app, the entire system could be paralysed. As various research studies have pointed out, the potential for criminal exploitation appears almost limitless. The risks are substantial, ranging from manipulative attacks and infrastructure sabotage to the full weaponization of AI systems.
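The single-device dependence is easy to see in how a standard time-based one-time password (TOTP, RFC 6238) works: the entire second factor reduces to one shared secret provisioned onto one phone. The Python sketch below is a bare-bones illustration of the standard algorithm, not any bank’s actual implementation, and the secret shown is invented.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int(at) // step                      # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The shared secret typically lives on exactly one phone; it is a single
# point of failure for the whole second factor. (Invented for illustration.)
device_secret = b"provisioned-once-on-a-single-phone"
print(totp(device_secret, time.time()))
```

Whoever extracts that one secret, for instance via a malicious app on the same device, can generate valid codes at will, which is why compromising a single device can paralyse the entire verification chain.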
As systems grow increasingly complex, AI driven by specific incentives and machine learning could even devise new methods of criminality on its own. As much as this may sound like science fiction, cases of illicit behaviour by AI systems have already occurred in the financial sector due to flawed incentive designs. Yet the existing security framework remains ill-prepared for such risks.
Amid the mushrooming of AI-enabled crime techniques, the authorities should tackle the problem through a four-pronged approach encompassing technology, policy, international collaboration, and education.
In terms of technology, AI-driven surveillance systems serve as the first line of defence. Given their proven practicality, tools for detecting deepfake fabrication and abnormal financial transactions should be further integrated into the information security framework to reduce the risk of large-scale attacks. For example, CryptoTrace is a virtual asset analytics platform jointly developed by the University of Hong Kong and the Hong Kong Police Force to trace cryptocurrency transactions linked to criminal cases effectively. In April 2025, the project was awarded a Gold Medal with the Congratulations of Jury at the International Exhibition of Inventions of Geneva.
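CryptoTrace’s internal methods are not public, but the general idea behind transaction tracing can be illustrated with a toy example: treat the ledger as a directed graph of transfers and walk outward from a flagged address until the funds reach an identifiable off-ramp such as an exchange. The Python sketch below uses entirely invented addresses and amounts and is not a description of CryptoTrace itself.

```python
from collections import deque

# Toy ledger: directed edges (sender -> receiver) with transfer amounts.
# All addresses and figures are invented for illustration.
ledger = {
    "flagged-addr": [("hop-a", 4.0), ("hop-b", 1.5)],
    "hop-a": [("exchange-x", 3.9)],
    "hop-b": [("hop-c", 1.4)],
    "hop-c": [("exchange-x", 1.3)],
    "exchange-x": [],
}

def trace_funds(source, max_hops=4):
    """Breadth-first walk over outgoing transfers from a flagged address."""
    paths, queue = [], deque([(source, [source], 0)])
    while queue:
        addr, path, hops = queue.popleft()
        outgoing = ledger.get(addr, [])
        if not outgoing or hops == max_hops:
            paths.append(path)      # dead end or hop limit: record the trail
            continue
        for nxt, _amount in outgoing:
            if nxt not in path:     # avoid cycling through the same address
                queue.append((nxt, path + [nxt], hops + 1))
    return paths

for p in trace_funds("flagged-addr"):
    print(" -> ".join(p))
# flagged-addr -> hop-a -> exchange-x
# flagged-addr -> hop-b -> hop-c -> exchange-x
```

Real tracing must additionally contend with mixers, cross-chain hops, and the clustering of addresses under common control, which is where AI-assisted analytics add the most value.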
In terms of policy, policymakers should establish a regulatory environment that fosters innovation while preventing AI abuse. The Report on Responsible AI in Financial Markets released in May 2024 emphasizes that despite its wide applications in risk management and predictive analytics, AI has also given rise to risks such as deepfake fabrication, phishing, and algorithm manipulation. It therefore recommends the establishment of a corresponding AI risk management framework.
In terms of international collaboration, given the cross-boundary nature of AI-related crime, joint efforts are essential to addressing these problems. Both INTERPOL and the United Nations advocate unifying AI usage standards, adopting ethical criteria and punishment mechanisms, and strengthening legal enforcement across countries.
In terms of education, it is of utmost importance to strengthen the public’s ability to identify scams and disinformation. Launched by the Hong Kong Police Force, the Scameter Series is designed to facilitate real-time scam detection by the public; it was awarded an International Press Prize and a Gold Medal at the International Exhibition of Inventions of Geneva in April 2025.
While scammers wield AI as a weapon, the community can harness it as a shield. On the one hand, criminals use generative models to produce highly convincing fake invoices and fabricated accounts, thereby fuelling money laundering and scams. On the other, AI algorithms can automatically detect forged documents and flag abnormal transaction patterns, substantially improving risk identification and protection. When used appropriately, AI can present opportunities even amid crises.
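Flagging abnormal transaction patterns need not involve a deep model at all; even a robust statistical baseline catches the crudest anomalies. The Python sketch below applies the modified z-score, based on the median absolute deviation (MAD), to an invented account history; production systems would layer machine-learned models on top of baselines like this.

```python
import statistics

def flag_outliers(amounts, cutoff=3.5):
    """Flag amounts far from an account's typical behaviour, using the
    modified z-score (Iglewicz-Hoaglin): 0.6745 * |x - median| / MAD."""
    median = statistics.median(amounts)
    mad = statistics.median([abs(a - median) for a in amounts])
    if mad == 0:
        return []  # no spread at all: nothing to compare against
    return [(i, a) for i, a in enumerate(amounts)
            if 0.6745 * abs(a - median) / mad > cutoff]

# Invented account history: five routine payments and one glaring outlier.
history = [42.0, 55.3, 38.9, 61.2, 47.5, 9_800.0]
print(flag_outliers(history))  # [(5, 9800.0)]
```

The median-based yardstick matters here: the very outliers being hunted would inflate an ordinary standard deviation and thereby hide themselves.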
Although AI-dominated crime patterns have yet to emerge, the current trajectory of technological development clearly indicates that preventive measures should be implemented immediately to nip potential threats in the bud. To this end, public-private partnership should be made a top priority. Law enforcement agencies, governments, and businesses need to work more closely together to introduce AI-based security systems. Financial institutions and enterprises can integrate risk management and information security measures into AI systems. Meanwhile, through policy guidance and funding support, governments should encourage research and innovation, with a commitment to driving coordinated responses across sectors and national borders.
Note: https://www.nasdaq.com/global-financial-crime-report