Financial Fraud and Artificial Intelligence

Fraud often involves impersonation, and artificial intelligence (AI) has emerged as a potent tool that can effectively mimic human behaviour. This means that fraudsters can leverage AI to enhance the credibility, efficiency, and speed of their attacks.

Like any other technology, AI can be put to fraudulent use: it can impersonate individuals, automate phishing attacks, and manipulate data. Guarding against that exploitation requires robust security measures, together with clear ethical guidelines governing how AI may be used.

AI’s Role in Fraudulent Activities:

At the consumer level, AI can generate scripts that fraudsters use to deceive people over the phone and coax them into making unauthorised bank transfers.

On a larger scale, particularly concerning financial institutions such as banks and lenders, generative AI has the capability to create fabricated videos and photographs of non-existent individuals. In other words, AI can provide deceptive “evidence” to pass identity checks, enabling the opening of fraudulent accounts, execution of unauthorised transfers, and even the creation of (fake) assets or liquidity against which loans can be secured.

Recommended Actions:

The potential for AI to be employed in financial fraud cannot be overstated, especially given the accessibility of powerful AI tools such as ChatGPT that anyone can utilise anonymously. Firms that may be at risk should take the following steps:

  • Scrutinise the authenticity of all identification documentation provided for anti-money laundering (AML) and know your customer (KYC) purposes. If possible, seek information from reputable third parties such as public registries or verification firms instead of relying solely on direct sources. In case of doubts, consult an in-house or external cybersecurity team.
  • Implement measures to ensure that existing clients and customers are not being “spoofed” or impersonated. This may involve employing multi-factor authentication (a brief sketch follows this list) and, in some cases, conducting face-to-face meetings to validate identities.
  • Train vulnerable staff members to recognise patterns indicative of financial fraud. Although the methods employed by fraudsters may have evolved with the use of AI, their underlying goals remain the same. Any unexplained or out-of-character transactions or borrowing without an apparent purpose should be regarded with suspicion, regardless of how convincing the supporting documentation appears.
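By way of illustration, the short sketch below shows one widely used form of multi-factor authentication, time-based one-time passwords (TOTP), implemented in Python with the open-source pyotp library. The secret, account name, and issuer shown are hypothetical placeholders, and a real deployment would sit on top of existing password checks, secure secret storage, and rate limiting; treat this as a minimal sketch under those assumptions rather than a production implementation.

    # Minimal TOTP sketch using the pyotp library (pip install pyotp).
    # The secret and account details below are hypothetical placeholders.
    import pyotp

    # Enrolment: generate a per-customer secret and store it securely server-side.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Provisioning URI for the customer to scan into an authenticator app.
    print(totp.provisioning_uri(name="customer@example.com", issuer_name="ExampleBank"))

    # Verification: before acting on a sensitive instruction, check the
    # six-digit code the customer supplies against the current time window.
    code = input("Enter the code from your authenticator app: ")
    if totp.verify(code, valid_window=1):  # tolerate one step of clock drift
        print("Code accepted - proceed with the request.")
    else:
        print("Code rejected - treat the instruction as potentially fraudulent.")

Even a simple second factor of this kind raises the cost of impersonation considerably: a fraudster armed with convincing AI-generated documents or a cloned voice still needs access to the customer's enrolled device.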

Regulation of AI is inevitably on the horizon. In March 2023, the UK government published a white paper outlining its proposed “pro-innovation” approach to regulating AI. However, AI is already pervasive, and the white paper itself acknowledges that “the pace of change itself can be unsettling.” In the interim, self-help is the best, and only, defence.

Disclaimer: This publication provides a general summary of the law and should not substitute for tailored legal advice based on your specific circumstances.