Most AI detection tools give you a single answer: real or fake. That works for fully synthetic images — a Midjourney landscape, a DALL-E product shot. But what about the harder cases? A real photograph where someone's face has been swapped. A genuine product image with AI-enhanced backgrounds. A news photo where a critical detail has been digitally altered.
These partial manipulations are where the real damage occurs, and a binary classifier can't tell you what's been changed. That's why we built pixel-level heatmap analysis into Sightova.
How It Works
Alongside our primary detection model, Sightova runs a segmentation model that produces a 256×256 probability map for every image analyzed. Each pixel in this map carries a value between 0 and 1 — representing the model's confidence that the corresponding region was generated or modified by AI.
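To make the probability map concrete, here is a minimal sketch of how such a map can be consumed downstream. The array shape and the 0.5 threshold are illustrative choices, not details of Sightova's actual pipeline:

```python
import numpy as np

# Toy 256x256 probability map standing in for the model's output.
# In practice this would come from the Sightova analysis itself.
prob_map = np.zeros((256, 256), dtype=np.float32)
prob_map[64:128, 64:128] = 0.92  # a region the model believes was AI-edited

# Binarize at a chosen threshold to separate edited from unchanged pixels.
THRESHOLD = 0.5
edited_mask = prob_map >= THRESHOLD  # boolean mask of flagged pixels
flagged_pixels = int(edited_mask.sum())
```

Thresholding is the simplest way to turn per-pixel confidences into a hard edited/unchanged decision; the raw values remain available for finer-grained reasoning.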
The result is a visual overlay that maps directly onto the original image. Regions the model identifies as unchanged appear in green. Regions flagged as AI-edited appear in red. The intensity of the color reflects the model's confidence level.
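The green-to-red overlay described above can be reproduced with a few lines of array math. This is a sketch of one plausible blending scheme, not Sightova's actual rendering code; the alpha value and channel mapping are assumptions:

```python
import numpy as np

def overlay_heatmap(image, prob_map, alpha=0.4):
    """Blend a green-to-red confidence overlay onto an RGB image.

    image:    (H, W, 3) uint8 array
    prob_map: (H, W) float array in [0, 1], same spatial size as the image
    Green marks unchanged regions, red marks AI-edited ones, and the
    color intensity tracks the model's confidence.
    """
    h, w = prob_map.shape
    color = np.zeros((h, w, 3), dtype=np.float32)
    color[..., 0] = prob_map * 255          # red channel: edited confidence
    color[..., 1] = (1.0 - prob_map) * 255  # green channel: unchanged confidence
    blended = (1 - alpha) * image.astype(np.float32) + alpha * color
    return blended.clip(0, 255).astype(np.uint8)

# Gray test image with one "edited" corner.
img = np.full((256, 256, 3), 128, dtype=np.uint8)
pm = np.zeros((256, 256), dtype=np.float32)
pm[0:64, 0:64] = 1.0
out = overlay_heatmap(img, pm)
```

In the flagged corner the blended pixels skew red; everywhere else they skew green, giving the reader immediate spatial context.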
Why This Matters
Consider a real-world scenario: an insurance company receives a damage claim with photographic evidence. The image looks authentic at a glance. Our primary model flags it with moderate AI probability. The heatmap reveals exactly what triggered the detection — a specific region of the image shows clear signs of generative manipulation, while the rest of the photograph is genuine.
This is information that a simple "87% AI-generated" score cannot convey. The heatmap transforms abstract probability into spatial, actionable intelligence.
The Technical Foundation
Our heatmap model uses a U-Net architecture with a ResNet-50 encoder, trained on paired images: originals alongside their AI-edited counterparts. The model learned to identify the subtle boundary signatures and texture inconsistencies that arise when generative models modify portions of real photographs.
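The defining pattern of a U-Net is an encoder that downsamples, a decoder that upsamples, and skip connections that carry fine spatial detail across. The toy model below illustrates only that pattern; the actual production model uses a full ResNet-50 encoder and training details that are not public:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style sketch. A real ResNet-50 encoder would replace
    the toy two-layer encoder here; this only shows the
    encode -> decode-with-skip -> per-pixel-sigmoid pattern."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.head = nn.Conv2d(32, 1, 1)  # 16 skip + 16 upsampled channels -> 1 map

    def forward(self, x):
        s = self.enc(x)                  # full-resolution features (skip path)
        d = self.down(s)                 # encoder: downsample by 2
        u = self.up(d)                   # decoder: upsample back
        u = torch.cat([u, s], dim=1)     # U-Net skip concatenation
        return torch.sigmoid(self.head(u))  # per-pixel probability in [0, 1]

model = TinyUNet()
x = torch.randn(1, 3, 256, 256)
prob_map = model(x)  # shape (1, 1, 256, 256)
```

The skip connection is what makes U-Nets well suited to localization tasks like this one: boundary detail lost during downsampling is reintroduced before the final per-pixel prediction.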
Training was performed on a dataset of nearly 370,000 image pairs, ensuring coverage across a wide range of editing techniques: face swaps, background replacements, object insertion, texture regeneration, and more.
Viewing the Heatmap
In the Sightova dashboard, heatmap results appear in a dedicated tab alongside the standard analysis results. The visualization overlays directly on your uploaded image, giving you immediate spatial context. Summary statistics — percentage of edited pixels, mean confidence, and peak confidence — provide a quantitative complement to the visual overlay.
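The three summary statistics follow directly from the probability map. The definitions below are our reading of the dashboard numbers (e.g. averaging confidence only over flagged pixels), not an official specification:

```python
import numpy as np

# Illustrative probability map with one high-confidence edited region.
prob_map = np.zeros((256, 256), dtype=np.float32)
prob_map[100:150, 100:150] = 0.9

edited = prob_map >= 0.5                       # assumed flagging threshold
pct_edited = 100.0 * edited.mean()             # percentage of edited pixels
mean_conf = float(prob_map[edited].mean()) if edited.any() else 0.0
peak_conf = float(prob_map.max())              # single most confident pixel
```

These scalars give automated systems something to threshold on, while the overlay remains the human-readable view.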
For API users, heatmap data is returned as part of the standard response payload, enabling programmatic processing for automated moderation pipelines.
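A moderation pipeline might consume the payload along these lines. The field names (`heatmap`, `values`, and so on) and the escalation rule are hypothetical; consult the API documentation for the real response schema:

```python
import json

# Hypothetical response shape -- field names are illustrative only.
response = json.loads("""
{
  "ai_probability": 0.87,
  "heatmap": {"width": 4, "height": 4,
              "values": [0.1, 0.1, 0.9, 0.9,
                         0.1, 0.1, 0.9, 0.9,
                         0.0, 0.0, 0.2, 0.1,
                         0.0, 0.0, 0.1, 0.0]}
}
""")

values = response["heatmap"]["values"]
flagged = sum(v >= 0.5 for v in values)        # pixels above the threshold
# Example rule: escalate for human review when a sizeable region is flagged.
needs_review = flagged / len(values) > 0.1
```

Because the heatmap arrives with the standard response, no second request is needed to drive spatial rules like this.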
When It's Most Useful
Heatmap analysis is particularly valuable for:
- Deepfake detection — identifying face-swapped regions in otherwise authentic photos
- Forensic investigation — pinpointing exactly which elements of an image were altered
- Content moderation — distinguishing between fully synthetic content and manipulated real photos
- Insurance and legal — providing visual evidence of where manipulation occurred
A binary verdict tells you whether something is wrong. The heatmap tells you where.