
    Synthetic identity fraud: The dark side of generative AI

    Generative AI is changing our world - but it also brings dangers. A new challenge: synthetic identity fraud. What is behind it, why is the threat real, and how can companies protect themselves?

    Generative AI - hype and risk

    Generative AI is the hype of the moment. Whether marketing, healthcare or finance - hardly any industry remains untouched by the new technology. The hope: more efficient processes, more creativity and more freedom. But there are two sides to every coin. AI offers companies enormous potential for efficiency gains, but also enables new criminal activities such as synthetic identity fraud. In this case, the theft of personal data and the generation of sophisticated fakes go hand in hand.

    In this article we explain:

    • What synthetic identity fraud is and how it works
    • How AI-powered fraud schemes are hurting businesses and how widespread they really are
    • What this has to do with current plans of the European Union and Switzerland
    • What PXL Vision is doing to protect the security of electronic identification procedures, and
    • What role modern security methods such as NFC and liveness checks play in this.

    What is synthetic identity fraud?

    Identity fraud - the basics from a corporate perspective

    At first glance, classic identity theft and synthetic identity fraud have a lot in common: both involve gaining access to relevant information such as bank account numbers or credit card details and using it to carry out unauthorised transactions (e.g. purchases or transfers). The consequences for the victims are also the same: in addition to direct financial losses, the offences often damage their creditworthiness and reputation.

    Synthetic identity fraud - the methods

    Compared to traditional identity theft, which is based on the stolen identities of real people, synthetic identity fraud goes one step further in terms of methodology: the perpetrators create a completely new identity from a mixture of stolen, manipulated and fictitious information. This combination is made possible by AI-supported technologies.

    Cyber criminals create fake ID documents in which they mix real and fictitious data (e.g. legitimate ID numbers with false names and addresses). In the next step, they use artificial intelligence to create synthetic facial images that perfectly match this fake data.

    In synthetic identity fraud, a distinction is usually made between two methods: manufactured and manipulated identities.

    Manufactured synthetic identities

    Manufactured synthetic identities combine valid, often stolen data records from multiple identities. For example, criminals create a fake identity by combining the names and addresses of three different, but real, people. In addition, fraudsters sometimes add invalid information to these fake identities built from real components. This mixture makes it harder to recognise the identity as fake.
     

    Manipulated synthetic identities

    Manipulated synthetic identities, on the other hand, are created on the basis of a real identity. The perpetrators alter an existing identity document, for example, by manipulating the personally identifiable information on a real ID card.

    Such manipulated identities are usually used to conceal the perpetrator's own credit history and thus appear creditworthy, even when this is not the case in reality. For example, offenders with a negative credit history can create a fictitious identity and use it to apply for mortgages, loans or other credit services.

    How synthetic identity fraud threatens know-your-customer systems

    Synthetic identity fraud methods make it easier to deceive know-your-customer (KYC) systems. Fraud is becoming faster, easier and more scalable than ever before. This happens in two primary ways:

    Declining level of security

    Traditional identity fraud protection solutions are often no longer sufficient to detect and prevent synthetic identity fraud. It's the old cat-and-mouse game: if criminals discover new methods to commit their crimes, organisations have to implement new protection mechanisms to deal with the new threat.
     

    Declining detection rate

    Unlike fraud cases where a real identity is completely stolen, synthetic fraud often has no single, specific victim. As a result, such crimes are often detected and reported less frequently or later. Perpetrators can therefore be active for longer and potentially cause greater damage.

    AI-supported fraud schemes on the rise

    Such AI-supported fraud attempts are no longer just a theoretical concern, but have long been a reality. This is now confirmed by initial surveys on the subject.

    According to a recent Signicat survey of more than 1,000 fraud experts from the financial sector, 76 per cent of respondents see fraud as a bigger problem today than three years ago. 66 per cent consider AI-supported identity fraud to be particularly dangerous.

    Forecasts on the topic of AI-supported fraud are also alarming: a study by Deloitte shows that generative AI could cause fraud losses totalling 40 billion US dollars in the USA alone by 2027.

    Together, these insights paint a clear picture: advances in AI will make it increasingly easy for criminals to generate highly realistic images and videos - and thus contribute to a new wave of digital fraud attempts.

    eID: A new standard with risks

    This new threat situation is a problem because identification processes are increasingly shifting to the digital space. The European Union has paved the way for the European digital identity (eID) with Regulation (EU) 2024/1183. In future, citizens will be able to identify themselves using digital wallets (EUDI wallets).

    Switzerland is also planning its own wallet solution with SWIYU*. However, the transition to digital identity opens up new attack surfaces - precisely because AI-supported fraud schemes are likely to become increasingly sophisticated in the future.

    * In the coined word SWIYU, the syllable SWI stands for Switzerland, the I for 'I', identity and innovation, and the syllable YU for you and unity.

    PXL Vision: With innovative solutions against AI fraud

    As an expert in identity verification, PXL Vision takes the threat of synthetic identities and deepfakes seriously. For fraud prevention, we rely on advanced technologies such as Near Field Communication (NFC) and Liveness Checks to recognise and prevent fraud attempts.
    But what is behind these two terms? We explain:
     

    NFC

    Near Field Communication (NFC) is an international standard for wireless data transfer over short distances. It is based on Radio-Frequency Identification (RFID) technology, which uses electromagnetic induction to transmit data. NFC-based verification processes therefore make it possible to read encrypted data for user verification via an RFID chip integrated into an ID card or identity document. This allows users to retrieve their biometric data from an ID card using their smartphone and validate the authenticity of the document.
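
    To give a sense of how an ID chip protects its contents, here is a minimal Python sketch of one step of this process: the Basic Access Control (BAC) key derivation defined in ICAO Doc 9303, in which the access key is computed from the machine-readable zone (MRZ) of the document. This is a simplified illustration, not PXL Vision's implementation; real readers go on to derive session keys and authenticate against the chip.

```python
import hashlib

# ICAO 9303 check-digit weights, repeated over the field.
WEIGHTS = (7, 3, 1)

def _char_value(c: str) -> int:
    # '<' (filler) counts as 0, digits as themselves, letters as A=10 ... Z=35.
    if c == "<":
        return 0
    if c.isdigit():
        return int(c)
    return ord(c) - ord("A") + 10

def check_digit(field: str) -> str:
    total = sum(_char_value(c) * WEIGHTS[i % 3] for i, c in enumerate(field))
    return str(total % 10)

def bac_seed_key(doc_number: str, birth_date: str, expiry_date: str) -> bytes:
    """Derive the 16-byte BAC seed key from MRZ fields (ICAO Doc 9303 Part 11)."""
    mrz_info = (doc_number + check_digit(doc_number)
                + birth_date + check_digit(birth_date)
                + expiry_date + check_digit(expiry_date))
    return hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]

# Worked example with the sample document number from ICAO Doc 9303.
print(bac_seed_key("L898902C<", "690806", "940623").hex())
```

    Because the key is derived from data printed inside the document, only a party with physical (or optical) access to the ID can unlock the chip and read the biometric data it holds.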

    Interested? Find out more about our NFC module here.

    Liveness Checks

    Liveness checks rely on liveness detection: during a verification procedure, they examine the person 'behind' the camera to ensure it is not, for example, someone merely holding up a printed image of another person's face. To do this, they capture the person's movements or a series of selfies and determine in real time whether it is a real person or an attempt at deception.
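
    As a purely illustrative sketch of the motion idea (not PXL Vision's actual algorithm), a naive check could compare consecutive camera frames: a printed photo held in front of the camera yields almost identical frames, while a live person produces measurable pixel movement. The function names and the threshold below are hypothetical.

```python
import random

def motion_score(frames: list[list[int]]) -> float:
    """Mean absolute pixel difference between consecutive frames (grayscale 0-255)."""
    diffs = [sum(abs(x - y) for x, y in zip(a, b)) / len(a)
             for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

def passes_motion_check(frames: list[list[int]], threshold: float = 2.0) -> bool:
    """A nearly static scene (e.g. a printed photo) produces a near-zero score."""
    return motion_score(frames) > threshold

# Synthetic demo: a perfectly static sequence vs. one with slight movement.
random.seed(0)
base = [random.randrange(256) for _ in range(64 * 64)]
static = [base[:] for _ in range(5)]
moving = [[min(255, max(0, p + random.randint(-20, 20))) for p in base]
          for _ in range(5)]
print(passes_motion_check(static))  # False: no frame-to-frame movement
print(passes_motion_check(moving))  # True: measurable variation
```

    Real liveness detection combines far stronger signals (depth cues, challenge-response gestures, texture analysis), but the principle is the same: look for evidence of a live scene rather than a replayed artefact.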

    Another key component is video injection detection, which identifies manipulated or artificially generated videos used for deception. This is done by analysing metadata, movement patterns and digital artefacts that may indicate manipulation. In addition, we verify the authenticity of documents by analysing their security features and comparing them with official standards, and we monitor confirmed suspicious activities and profiles. Through these proactive measures, we prevent fraud attempts from being scaled up and repeated, and stop attackers from deploying their methods on a broad basis. These comprehensive protection mechanisms not only ensure a high level of security, but also strengthen trust in digital identity verification.

    At the same time, PXL Vision has been working with the Idiap research institute since February 2024 on an innovative solution for deepfake detection. The project is supported by the Swiss innovation promotion agency Innosuisse. By the end of 2025, PXL Vision aims to bring to market the world's first robust AI-supported solution for detecting manipulated facial images and travel documents. The aim is to significantly improve the security of digital identity verification.
     

    Staying vigilant in the digital era

    Generative AI offers companies immense opportunities, but the risks - such as synthetic identity fraud - should not be underestimated. In light of the digital transformation and the introduction of eID solutions in Europe and Switzerland, it is more important than ever to rely on robust security solutions.

    PXL Vision shows how innovative technologies can meet the criminal use of AI head-on and make the digital world a safer place.

    Want to know more about our digital identity verification solutions?
