AI is fighting on both sides of the fake ID war
KYC (Know Your Customer) verification is everywhere now — banks, crypto exchanges, e-commerce, anywhere that needs to confirm you are who you say you are. Companies like Regula Forensics, Veriff, AU10TIX, Persona, and Resistant AI maintain huge databases of ID templates from hundreds of countries. Simple idea: verify the document, verify the person. The problem? Some people don't want to be verified.
AI has supercharged both sides. Criminals use it to make convincing fakes. Verification platforms use it to catch them. Neither side is winning. It's an arms race with no finish line.
How AI Makes Fake IDs
Generative AI has made it shockingly easy to produce fake documents. Tools like Stable Diffusion and underground services can spit out realistic-looking driving licences, passports, and utility bills in minutes. Dark-web markets sell AI-generated ID packages — document plus matching selfie plus video clip — for $30 to $700.
Deepfakes make it worse. Pre-recorded videos and injection attacks are built to fool the "look at the camera and blink" style of liveness check. And it works: AI-generated IDs have successfully bypassed major crypto exchanges and established KYC providers (KYC360, Thistle Initiatives), and document fraud rates in banking have hit 24%, according to SmartSearch.
The Specific Techniques Fraudsters Use
AI-powered document fraud is more sophisticated than just slapping a face onto a template. The current generation of tools can:
- Generate entirely new documents from scratch — generative adversarial networks (GANs) produce ID images that are statistically realistic at the pixel level, including background textures, font rendering, and micro-details that earlier Photoshop forgeries missed
- Clone real documents with face swaps — starting from a genuine document image and replacing only the photo and personal details, preserving all the authentic layout and security feature positioning
- Create matching selfies and video — deepfake models generate a convincing live video of the person "holding" the forged document, complete with natural blinking and head movement, designed specifically to defeat liveness checks
- Inject video streams directly — rather than holding a phone up to a screen, injection attacks feed synthetic video directly into the app's camera API, bypassing any physical screen detection
The barrier to entry has collapsed. What once required a professional forger with specialist equipment now requires a laptop and a subscription to a dark-web service. This is why document fraud reports have surged across every sector that relies on remote identity verification.
How AI Catches Them
The verification side isn't sitting still. Modern systems scan for pixel-level weirdness, lighting inconsistencies, and the telltale motion artefacts that deepfakes produce (GBG, LSEG). Passive liveness detection checks skin texture, depth, and micro-expressions without asking the user to do anything.
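To make "pixel-level weirdness" concrete, here's a crude sketch of one classic forensic heuristic: spliced or pasted regions often carry different noise characteristics from the rest of the image. The function names, block size, and threshold below are illustrative inventions, not any vendor's actual method — real systems use trained models, not a single variance ratio.

```python
import statistics

def block_variances(gray, block=8):
    """Split a grayscale image (a list of rows of 0-255 ints) into
    block x block tiles and return each tile's pixel variance."""
    h, w = len(gray), len(gray[0])
    out = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            pixels = [gray[y + dy][x + dx]
                      for dy in range(block) for dx in range(block)]
            out.append(statistics.pvariance(pixels))
    return out

def noise_inconsistency(gray, block=8, ratio=4.0):
    """Flag the image if any tile's variance exceeds `ratio` times the
    median tile variance -- a toy splice-detection heuristic."""
    variances = block_variances(gray, block)
    median = statistics.median(variances)
    return any(v > ratio * max(median, 1e-9) for v in variances)
```

A uniformly smooth image passes; paste in one patch with wildly different noise and the outlier tile trips the flag. Production detectors apply the same intuition — local statistics that don't match their surroundings — with far more sophisticated features.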
Active checks are tougher — tilt your head, smile, read random text on screen. These are specifically designed to break pre-recorded video attacks. The verification engines also cross-check extracted data against external databases and flag anything that doesn't add up (Kaspersky).
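One concrete example of a data-consistency check is the check-digit scheme built into passport machine-readable zones (MRZ), specified in ICAO Doc 9303: key fields carry a check digit computed with repeating weights 7, 3, 1, so a forger who edits a date of birth or document number without recomputing the digits fails instantly. A minimal implementation:

```python
def mrz_check_digit(field: str) -> int:
    """ICAO Doc 9303 check digit: digits keep their value, A-Z map to
    10-35, the filler '<' counts as 0; weights repeat 7, 3, 1."""
    def val(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch) - ord("A") + 10
    return sum(val(c) * (7, 3, 1)[i % 3] for i, c in enumerate(field)) % 10

def field_consistent(field: str, printed_digit: str) -> bool:
    """True if the field's computed check digit matches the printed one."""
    return mrz_check_digit(field) == int(printed_digit)
```

Verification engines run checks like this on every extracted field before bothering with the more expensive biometric and database lookups.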
The Multi-Layer Defence Stack
No single technique catches every forgery. Modern KYC platforms use a layered approach:
- Document authenticity checks — comparing the submitted document against a database of known genuine templates. Every country's ID has specific layout rules, font sizes, spacing, and colour values. AI models trained on thousands of genuine examples can spot deviations that a human reviewer would miss.
- Optical security feature detection — looking for holograms, UV-reactive elements, microprint, and laser engraving in the document image. The absence of these features, or their presence in the wrong position, is an immediate red flag.
- Biometric comparison — matching the face on the document to the selfie or video provided by the applicant. Modern facial recognition handles variations in lighting, angle, and ageing, but struggles with high-quality deepfakes — which is why liveness detection matters.
- Data cross-referencing — checking that the extracted text (name, date of birth, document number) is consistent across the document and matches external databases where available.
- Metadata and device analysis — examining the image file itself for signs of editing. Genuine phone photos have specific EXIF data patterns, compression artefacts, and resolution profiles. AI-generated images often lack these or have inconsistent metadata.
The most effective systems combine all five layers. A forgery might pass any individual check, but it's extremely difficult to pass all of them simultaneously.
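In code, the layered approach amounts to running every check and requiring all of them to pass. The sketch below is a minimal illustration with stand-in layers keyed off precomputed flags — the names and structure are hypothetical, and a real platform would back each layer with trained models and external database lookups rather than booleans.

```python
from typing import Callable, Dict, Tuple

# Stand-in submission record; a real system would carry document and
# selfie images plus OCR output, not pre-computed pass/fail flags.
Submission = Dict[str, bool]

def verify(sub: Submission,
           layers: Dict[str, Callable[[Submission], bool]]
           ) -> Tuple[bool, Dict[str, bool]]:
    """Run every layer; the submission passes only if all layers pass."""
    results = {name: check(sub) for name, check in layers.items()}
    return all(results.values()), results

# One illustrative check per layer described above.
LAYERS: Dict[str, Callable[[Submission], bool]] = {
    "template_match":    lambda s: s["layout_ok"],
    "security_features": lambda s: s["hologram_ok"],
    "biometric_match":   lambda s: s["face_match_ok"],
    "data_crosscheck":   lambda s: s["fields_consistent"],
    "metadata_analysis": lambda s: s["exif_plausible"],
}
```

The design point is the conjunction: a forgery that fools four layers but has implausible file metadata is still rejected, and the per-layer results tell reviewers exactly which check failed.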
Physical Features: The Bit AI Can't Fake
AI is brilliant at making digital forgeries look convincing on a screen. Physical documents are a different story. Raised text, laser engraving, microprint, UV-reactive inks — you can't reproduce these with software. Holographic overlays need industrial equipment. Polycarbonate cards with embedded chips are extremely difficult to replicate at any scale.
Even skilled forgers miss the small stuff: the exact texture of the laminate, perforation spacing, the way a hologram behaves when you tilt it under a light. These physical details are why real-world inspection, combined with digital analysis, still catches fakes that slip through online checks (AiPrise, VerifyOnline).
For a detailed look at what physical features distinguish genuine cards from fakes, see our guide to detecting fraudulent documents and our breakdown of how modern ID cards are manufactured.
What This Means for Novelty Card Design
The AI arms race in identity verification has a direct impact on the novelty card industry. As detection systems become more sophisticated, it becomes easier for AI to tell a novelty card (which is legal) from a counterfeit document (which isn't). This is actually a good thing for legitimate novelty card makers.
A well-designed novelty card makes no attempt to replicate official security features. It uses original artwork, includes clear "NOVELTY" disclaimers, and doesn't copy the exact layout of any government-issued document. AI verification systems can tell the difference instantly — which is the point. Nobody wants their novelty cards mistaken for the real thing, because that's where legal trouble starts.
The UK legal framework around counterfeit documents is clear: original-design novelty cards are legal; replicas of official documents are not. AI verification technology is making it harder for forgeries to succeed, which reinforces the value of staying on the right side of that line.
The Arms Race Continues
The upshot: no forgery will ever be perfect. Physical security features are simply too complex. AI detection keeps improving, and the combination of digital analysis, database cross-checks, and human review means the verification side has a structural advantage — even if individual fakes occasionally slip through.
What's changed is the volume. AI makes it trivially cheap to produce large numbers of mediocre fakes. Most get caught. But when you can generate thousands, even a 1% success rate is profitable for criminals. The verification industry's response is to layer more checks, reduce false positives, and — increasingly — to detect the AI tools themselves rather than just their output.
For anyone curious about the broader landscape of document fraud, scams, and verification, our information and guidance hub covers the topic from multiple angles.