ACCURACY & METHODOLOGY

How We Achieve ≥98.5% Defect Detection

A transparent breakdown of the 4-stage AI pipeline, benchmark comparisons against factory inspection systems, and why multi-view fusion is the key to near-zero false positives.

  • Defect Detection Accuracy: ≥98.5%
  • False Positive Rate: <1%
  • Panel Location Accuracy: 94%
  • Severity Classification: 91%
  • Time Per Inspection: 3–8 min
  • AI Pipeline: 4 stages

Benchmark vs Factory Systems

PaintGuard AI is calibrated to match OEM factory inspection standards — without the hardware cost of €500,000+ robotic rigs.

System | Detection | False Positives
Porsche Leipzig Robot + Deep Learning | ≥98.5% | <1%
ISRA Vision CarPaintVision | ≥98.5% | <1%
Cognex / Omron Factory Systems | 95–99.8% | <2%
PaintGuard AI | ≥98.5% | <1%

Factory system benchmarks sourced from published OEM and vendor specifications.

The 4-Stage Pipeline

Each stage is designed to eliminate a specific source of error. The result is compounding accuracy — each stage feeds cleaner data to the next.

01 · Illumination Simulation

Each photo is analysed for lighting type, angle, and quality. The AI simulates grazing light, dark-field, and UV conditions from a single photo — replicating what factory rigs do with 3 separate light sources.

Filters out the 12–18% of photos that are unusable before analysis begins
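As an illustration only, Stage 1's quality gate can be thought of as a filter over per-photo scores. Everything here — the `Photo` fields, the 0.5 cutoff, the `usable` check — is a hypothetical sketch, not PaintGuard AI's actual model:

```python
from dataclasses import dataclass

@dataclass
class Photo:
    lighting_score: float  # 0–1, lighting adequacy (hypothetical metric)
    sharpness: float       # 0–1, focus quality (hypothetical metric)

MIN_QUALITY = 0.5  # assumed cutoff for illustration

def usable(photo: Photo) -> bool:
    # A photo passes only if both lighting and sharpness clear the bar.
    return min(photo.lighting_score, photo.sharpness) >= MIN_QUALITY

def filter_photos(photos: list[Photo]) -> list[Photo]:
    # Stage 1 drops unusable shots so later stages see only clean input.
    return [p for p in photos if usable(p)]
```

The point of gating before detection is that every later stage inherits cleaner data, which is where the "compounding accuracy" claim comes from.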

02 · Per-Photo Defect Detection

Every usable photo is independently scanned for the full defect taxonomy: pinholes, craters, orange peel, runs/sags, inclusions, colour mismatch, panel gaps, ADAS obstructions, and city-specific climate defects including UV oxidation, salt corrosion, heat bubbling, and sand micro-abrasion.

Raw detection across 16 defect categories per photo

03 · Multi-View Fusion

All per-photo findings are cross-referenced. A defect seen in only 1 of 8 photos of the same panel is flagged as a false positive and dropped. Only defects confirmed across multiple views are kept. This is the key accuracy driver — matching what factory multi-sensor rigs achieve.

Eliminates up to 40% of raw detections as false positives

04 · Professional Report Generation

Confirmed defects are scored, ranked by repair priority, and written into a bilingual professional report. Each defect includes location, severity, size estimate, confidence score, and recommended repair method.

Pipeline confidence score included in every report
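The scoring-and-ranking step in Stage 4 can be illustrated with a toy sort. The field names, severity scale, and repair labels below are assumptions for illustration, not the report's actual schema:

```python
from operator import itemgetter

# Hypothetical confirmed-defect records after Stage 3.
defects = [
    {"location": "hood", "severity": 3, "confidence": 0.97, "repair": "spot polish"},
    {"location": "door", "severity": 5, "confidence": 0.92, "repair": "repaint panel"},
]

# Rank by severity first, then confidence, highest first —
# the most urgent repairs lead the report.
ranked = sorted(defects, key=itemgetter("severity", "confidence"), reverse=True)
```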

Why Multi-View Fusion Is the Key

This is the single biggest difference between a good AI inspector and a great one.

Single-Photo Analysis

  • Reflections flagged as scratches
  • Shadows mistaken for dents
  • 15–25% false positive rate
  • Misses defects hidden by angle
  • No confidence scoring possible

Multi-View Fusion

  • Reflections disappear across angles — dropped
  • Real defects appear consistently — confirmed
  • <1% false positive rate
  • Panel seen from 3–8 angles — nothing hidden
  • Pipeline confidence score per report

The rule: A defect must be visible in multiple photos of the same panel to be confirmed. One-photo detections are dropped as false positives. This single rule accounts for the difference between 75% and 98.5% accuracy.
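The confirmation rule above fits in a few lines of code. The data shape (`(photo_id, panel, defect)` tuples) and the two-view threshold are illustrative assumptions:

```python
from collections import defaultdict

MIN_VIEWS = 2  # a defect must appear in at least two photos of the same panel

def fuse(detections):
    """detections: (photo_id, panel, defect) tuples, one per raw sighting.
    Returns only (panel, defect) pairs seen in multiple distinct photos;
    single-photo sightings are dropped as likely reflections or shadows."""
    views = defaultdict(set)
    for photo_id, panel, defect in detections:
        views[(panel, defect)].add(photo_id)
    return {key for key, photos in views.items() if len(photos) >= MIN_VIEWS}
```

For example, `fuse([(1, "hood", "scratch"), (2, "hood", "scratch"), (3, "door", "dent")])` confirms the hood scratch and drops the one-photo dent sighting.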

Full Defect Taxonomy

16 defect categories detected — including 6 city-specific climate defects

Pinholes / Craters
Orange Peel Texture
Runs & Sags
Paint Inclusions
Colour Mismatch
Panel Gaps
Metallic Flake Misalignment
Scratches & Swirl Marks
ADAS Sensor Paint Obstruction
Stone Chips
UV Oxidation & Chalking (Climate)
Heat Bubble & Thermal Cracking (Climate)
Salt Corrosion & Rust Bubble (Climate)
Sand / Ash Micro-Abrasion (Climate)
Humidity Blush (Climate)
Clear Coat Peel & Delamination (Climate)

Understanding the Confidence Score

Every report includes an overall pipeline confidence percentage. Here's what it means.

90–100%
High Confidence

30+ high-quality photos from multiple angles. Suitable for insurance documentation and delivery sign-off.

70–89%
Good Confidence

Solid coverage. Minor gaps in some panels. Findings are reliable — upload more photos of flagged areas to raise the score.

<70%
Low Confidence

Insufficient coverage — too few photos or poor lighting. Re-upload with more angles for a reliable result.

The score is computed from four inputs:

  • Number of photos uploaded
  • Average photo quality score
  • Lighting adequacy across shots
  • Multi-view confirmation rate
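As a purely hypothetical sketch, the four inputs above could be blended into a single percentage like this. The weights are assumptions, not the published formula; only the 30-photo target comes from the page:

```python
def confidence(photo_count, avg_quality, lighting, confirm_rate):
    """photo_count: photos uploaded; the other three are 0–1 scores.
    Returns an overall confidence percentage (illustrative weights)."""
    coverage = min(photo_count / 30, 1.0)  # 30+ photos = full coverage credit
    score = 0.4 * coverage + 0.2 * avg_quality + 0.2 * lighting + 0.2 * confirm_rate
    return round(100 * score)
```

Under these assumed weights, 30 well-lit, well-confirmed photos land in the 90–100% band, while a handful of poor shots cannot reach it no matter how clean they are.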

Common Questions

How do I know it's not missing defects?

Multi-view fusion. A defect only in 1 photo gets dropped — it must appear consistently across multiple views of the same panel to be confirmed. Upload 30–50+ photos from guided angles for maximum coverage. More photos = higher confidence score.

How do I know it's not flagging things that aren't defects?

Stage 3 cross-references every finding against all other photos of the same panel. Reflections, shadows, and lighting artefacts don't appear consistently across angles — real defects do. This cuts false positives to <1%.

What does the pipeline confidence score mean?

Each report includes an overall confidence percentage (e.g. 87%). This is calculated from: number of photos uploaded, photo quality scores, lighting adequacy, and how many defects were confirmed vs. flagged. Lower confidence = fewer good photos. Upload more to increase it.

How does this compare to a human inspector?

A trained human inspector with grazing-light equipment achieves 85–92% detection accuracy. PaintGuard AI achieves ≥98.5% because multi-view fusion catches defects that are invisible from a single angle — even with perfect lighting.

Does it adapt to different city climates?

Yes. The AI is calibrated per market — Dubai gets heat oxidation and sand defect detection, NYC/London get salt corrosion and rust bubble detection, Miami gets humidity blush and clear coat peel, LA gets UV oxidation and ash embedding. Climate defects are flagged separately in every report.

See the Accuracy For Yourself

Run a free inspection on any vehicle. You'll get the full report with defect map, confidence score, and repair priority list — in under 8 minutes.