The Near Future of Fraud

14 December 2023

Artificial Intelligence ("AI") is perhaps the most significant technological innovation since the internet went mainstream. AI and machine learning have featured heavily in our daily lives for some time now: most of us use phone assistants, maps and navigation, help chatbots and digital assistants such as Siri and Alexa. The potential to usefully apply this technology across a range of industries is immense, and it is already happening in education, healthcare, gaming and finance, to name just a few sectors. The release of ChatGPT a year ago was a watershed moment, and in addition to writing articles for us on the near future of fraud (joke!), it is probably the advent of ChatGPT and similar generative AI applications (and some of their well-publicised issues) that has prompted so much argument over whether AI is a force for human advancement or a trigger for the rise of the machines.

Where there is opportunity there will always be bad actors seeking to exploit others for financial gain. A recent Deutsche Telekom video entitled Without Consent, posted on social media, considered this point and provoked a passionate response. The video demonstrated the risks parents take when posting footage of their children on social media, oblivious to the potential future impact. In the film, an adult "Ella", generated by AI from the childhood pictures her parents had posted, tells her parents about the consequences of their actions. Bad actors steal the photos and videos and use them in humiliating and criminal ways. Ella's identity is stolen and used for credit card fraud, decimating her credit rating. AI tools clone her voice for phone scams, leaving her vulnerable to criminal action herself, and her original childhood images are used to create child sexual abuse material ("CSAM"). To her parents, Ella's pictures were warm memories; to others, they are useful data to be manipulated and sold.

So, are parents negligent when they post to their own social media, or is this alarmist, anti-tech propaganda? In reality, images and data are captured every day, through CCTV security footage, online account KYC ("know your customer") checks and public service records. But a parent who shares multiple images of their child (who does not have the capacity to consent), without an adequate understanding of their account's security settings, without knowing who their social media friends are, and without considering the convergence between memory and data, is likely to face tough questions from that child in time. The ability of technology to spread any data worldwide in a nanosecond makes demonstrating parental pride far riskier than it was when it just entailed passing a few snaps around a known friend group (and then taking them back).

This is a data protection issue. For many years now we have been clicking the "Agree" button on various terms and conditions without reading the small print. The benefits of exercising personal caution are obliterated by the frequent hacks of organisations aimed at stealing our personal information for sale on the dark web. The genie is not going back in the bottle. With AI, maintaining one's virtual privacy is a complex issue. Who is responsible for maintaining transparency and trust in a digital world? Do we blame users or tech companies for the malicious use of our data, or law enforcement for failing to pursue the bad actors? Are existing and planned legislation and regulation at all equipped to manage the issues thrown up by AI and deepfakes? Is AI detection technology keeping pace, so that we can identify this manufactured material? Does all of this, of itself, make us angrier? There is an endless feeling of being under attack for one's data: nothing can be done without accepting or rejecting cookies, agreeing terms, or running through a list of "legitimate interests". Legitimate to whom? It is not as if there is much choice to go analogue these days; people are forced into this technology, ready or not.

This is not a dystopian future; it is happening now. It has been reported that AI-generated CSAM is on the rise, as are deepfake revenge porn and fraud, all notably caused by human abuse of an ethically neutral technology. Can humans keep control? Will our baser selves take over through AI? Whether we are equipped to deal with these issues will be an overarching theme at The Fraud Conference in February 2024, jointly organised by R3, the Fraud Advisory Panel and INSOL Europe.

 

Carmel King

Director, Grant Thornton UK
Co-chair, INSOL Europe's Anti-Fraud Forum

 

 

Frances Coulson

Partner, Wedlake Bell LLP
Chair, R3's Fraud Group

 
