August 20, 2024

The ChatGPT Revolution: A new playground for fraudsters?

For over a year now, artificial intelligence (AI) has been at the center of every conversation. Thanks to the rise of ChatGPT and generative AI, it has become accessible to the general public: we can all now generate voices, texts and images of astonishing quality and consistency, for better… or for worse. This article explores how fraudsters have seized on generative AI.

ChatGPT: the new Swiss army knife for fraudsters?
As fraud continues to rise, ChatGPT opens the door to new forms of fraud, starting with the easier creation of sophisticated phishing emails.

Phishing is nothing new. We have all encountered this well-known fraud technique, which aims to deceive us through communications in order to extract sensitive information or get us to click on malicious links. Whether in our personal or professional lives, as in the case of CEO fraud, we are often fortunate enough to spot these fraudulent emails thanks to their lack of context or their spelling mistakes.

Unfortunately, the advent of ChatGPT has the potential to transform this practice, making it both more accessible and more formidable. So much so that the UK's National Cyber Security Centre (NCSC) recently issued a warning:

“By 2025, generative AI and LLMs (large language models) will make it difficult for everyone, regardless of their level of understanding of cybersecurity, to assess whether an email or password reset request is authentic, or to identify phishing, spoofing or social engineering attempts.”

Automated open-source reconnaissance
First, fraudsters can now use generative AI tools to efficiently collect public information about individual users. Social networks are gold mines of personal and professional information that can subsequently be exploited.

Once this information has been collected, fraudsters increase their chances of successful manipulation by asking ChatGPT to tailor its tone and wording accordingly. For example, by analyzing a user’s public interactions on social media, an AI model can generate a message that not only appears to come from a legitimate source but is also peppered with references and nuances that enhance its credibility.

What recourse is still possible?
Faced with this growing threat, businesses and users must take a proactive stance against phishing. Although some tools can assess, with varying accuracy, whether a text was generated by AI (Copyleaks, for example), it is becoming increasingly difficult to distinguish human writing from LLM output. To date, the best defense remains awareness and ongoing training of employees and citizens on these new forms of fraud, which AI only exacerbates.
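To make this concrete, here is a minimal Python sketch of what such automated screening could look like: an inbound email body is run through a publicly available AI-text classifier via the Hugging Face transformers library. The model name and the 0.9 threshold are illustrative assumptions, not a recommendation of any particular detector, and, as noted above, no classifier of this kind is reliable on its own.

```python
# Minimal sketch: flagging an email body with an AI-text classifier.
# Assumption: the Hugging Face model below (labels "Real"/"Fake" per its
# model card) and the 0.9 threshold are illustrative choices only.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def looks_ai_generated(email_body: str, threshold: float = 0.9) -> bool:
    """Return True when the classifier flags the text as likely AI-generated."""
    result = detector(email_body, truncation=True)[0]
    return result["label"] == "Fake" and result["score"] >= threshold

if __name__ == "__main__":
    sample = (
        "Dear customer, we noticed unusual activity on your account. "
        "Please confirm your credentials via the link below."
    )
    print("Likely AI-generated:", looks_ai_generated(sample))
```

Even a high-confidence score from such a check is only one weak signal among others; it should complement employee awareness and training, never replace them.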

ChatGPT, like artificial intelligence in general, is a tool: one that can serve society for good or for ill. To navigate this new era, businesses, regulators and society at large must work together to ensure that advances in AI lead to a future where innovation, privacy, ethics and security go hand in hand, protecting individuals while promoting technological progress.