PaintGuard AI is calibrated to match OEM factory inspection standards — without the hardware cost of €500,000+ robotic rigs.
| System | Detection Rate | False Positives |
|---|---|---|
| Porsche Leipzig Robot + Deep Learning | ≥98.5% | <1% |
| ISRA Vision CarPaintVision | ≥98.5% | <1% |
| Cognex / Omron Factory Systems | 95–99.8% | <2% |
| PaintGuard AI (You) | ≥98.5% | <1% |
Factory system benchmarks sourced from published OEM and vendor specifications.
Each stage is designed to eliminate a specific source of error. The result is compounding accuracy — each stage feeds cleaner data to the next.
**Stage 1:** Each photo is analysed for lighting type, angle, and quality. The AI simulates grazing light, dark-field, and UV conditions from a single photo, replicating what factory rigs do with 3 separate light sources.
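For illustration only, here is a minimal sketch of the kind of per-photo quality check Stage 1 implies, written in Python with NumPy. The metrics and thresholds are assumptions, not PaintGuard's calibrated values, and the actual lighting-simulation method is not published in this section.

```python
import numpy as np

def photo_quality(gray: np.ndarray) -> dict:
    """Score one grayscale photo (2-D array in [0, 1]) on exposure,
    contrast, and sharpness. All thresholds are illustrative
    placeholders, not the product's calibration."""
    exposure = float(gray.mean())   # near 0 = underexposed, near 1 = blown out
    contrast = float(gray.std())    # low spread suggests flat lighting
    # Sharpness proxy: variance of a discrete Laplacian response.
    # Blurry or badly lit photos give a near-constant response.
    lap = (np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0) +
           np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1) - 4 * gray)
    sharpness = float(lap.var())
    usable = 0.2 < exposure < 0.9 and contrast > 0.05
    return {"exposure": exposure, "contrast": contrast,
            "sharpness": sharpness, "usable": usable}
```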
**Stage 2:** Every usable photo is independently scanned for the full defect taxonomy: pinholes, craters, orange peel, runs/sags, inclusions, colour mismatch, panel gaps, ADAS obstructions, and city-specific climate defects including UV oxidation, salt corrosion, heat bubbling, and sand micro-abrasion.
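For reference, the taxonomy named above can be written down as a simple enumeration. This sketch covers only the categories listed in this section, not the full 16-category list.

```python
from enum import Enum

class Defect(Enum):
    # Core paint and assembly defects named in this section.
    PINHOLE = "pinhole"
    CRATER = "crater"
    ORANGE_PEEL = "orange peel"
    RUN_SAG = "run/sag"
    INCLUSION = "inclusion"
    COLOUR_MISMATCH = "colour mismatch"
    PANEL_GAP = "panel gap"
    ADAS_OBSTRUCTION = "ADAS obstruction"
    # City-specific climate defects named in this section.
    UV_OXIDATION = "UV oxidation"
    SALT_CORROSION = "salt corrosion"
    HEAT_BUBBLING = "heat bubbling"
    SAND_MICRO_ABRASION = "sand micro-abrasion"
```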
**Stage 3:** All per-photo findings are cross-referenced. A defect seen in only 1 of 8 photos of the same panel is flagged as a false positive and dropped. Only defects confirmed across multiple views are kept. This is the key accuracy driver, matching what factory multi-sensor rigs achieve.
**Stage 4:** Confirmed defects are scored, ranked by repair priority, and written into a bilingual professional report. Each defect includes location, severity, size estimate, confidence score, and recommended repair method.
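The per-defect fields listed above map naturally onto a record type. The sketch below is hypothetical: the field names, types, and severity scale are illustrative, since the actual report schema is not published here.

```python
from dataclasses import dataclass

@dataclass
class DefectFinding:
    """One confirmed defect as it might appear in a report entry."""
    panel: str            # e.g. "front left door"
    category: str         # one of the defect taxonomy labels
    severity: int         # illustrative scale: 1 (cosmetic) .. 5 (urgent)
    size_mm: float        # estimated longest dimension
    confidence: float     # 0.0 .. 1.0
    repair_method: str    # e.g. "wet sand + polish"
    views_confirmed: int  # photos of this panel showing the defect
```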
This is the single biggest difference between a good AI inspector and a great one.
The rule: A defect must be visible in multiple photos of the same panel to be confirmed. One-photo detections are dropped as false positives. This single rule accounts for the difference between 75% and 98.5% accuracy.
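A minimal sketch of that rule in Python: group per-photo detections by panel, category, and location, then keep only groups seen in at least two distinct photos. The grouping key and the two-view threshold are assumptions; a real fusion stage would also match locations with spatial tolerance.

```python
from collections import defaultdict

def fuse(detections, min_views: int = 2):
    """Multi-view confirmation: keep a defect only if it appears in
    at least `min_views` distinct photos of the same panel.

    `detections` is a list of (photo_id, panel, category, location)
    tuples; the exact key and threshold are illustrative.
    """
    groups = defaultdict(set)
    for photo_id, panel, category, location in detections:
        groups[(panel, category, location)].add(photo_id)
    # Single-photo detections fall below the threshold and are dropped.
    return [key for key, photos in groups.items()
            if len(photos) >= min_views]
```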
16 defect categories detected — including 6 city-specific climate defects
Every report includes an overall pipeline confidence percentage. Here's what it means.
High confidence: 30+ high-quality photos from multiple angles. Suitable for insurance documentation and delivery sign-off.
Medium confidence: Solid coverage with minor gaps in some panels. Findings are reliable; upload more photos of flagged areas to raise the score.
Low confidence: Insufficient coverage, either too few photos or poor lighting. Re-upload with more angles for a reliable result.
How do I know it's not missing defects?
Multi-view fusion. A defect that appears in only one photo is dropped; it must appear consistently across multiple views of the same panel to be confirmed. Upload 30–50+ photos from guided angles for maximum coverage. More photos = higher confidence score.
How do I know it's not flagging things that aren't defects?
Stage 3 cross-references every finding against all other photos of the same panel. Reflections, shadows, and lighting artefacts don't appear consistently across angles — real defects do. This cuts false positives to <1%.
What does the pipeline confidence score mean?
Each report includes an overall confidence percentage (e.g. 87%). This is calculated from: number of photos uploaded, photo quality scores, lighting adequacy, and how many defects were confirmed vs. flagged. Lower confidence = fewer good photos. Upload more to increase it.
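As a rough illustration of how a single percentage could be combined from those four factors, here is one hypothetical weighting. The weights and the 30-photo saturation point are assumptions, not PaintGuard's actual calibration.

```python
def pipeline_confidence(n_photos: int, avg_quality: float,
                        avg_lighting: float, n_confirmed: int,
                        n_flagged: int) -> int:
    """Blend coverage and quality factors into one 0-100 score.
    Inputs mirror the factors listed above; weights are illustrative."""
    coverage = min(n_photos / 30.0, 1.0)   # saturates at 30 photos
    total = n_confirmed + n_flagged
    agreement = n_confirmed / total if total else 1.0
    score = 100 * (0.40 * coverage + 0.25 * avg_quality +
                   0.20 * avg_lighting + 0.15 * agreement)
    return round(score)
```

With this weighting, few photos drag the score down fastest, which matches the guidance above: lower confidence = fewer good photos.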
How does this compare to a human inspector?
A trained human inspector with grazing-light equipment achieves 85–92% detection accuracy. PaintGuard AI achieves ≥98.5% because multi-view fusion catches defects that are invisible from a single angle — even with perfect lighting.
Does it adapt to different city climates?
Yes. The AI is calibrated per market — Dubai gets heat oxidation and sand defect detection, NYC/London get salt corrosion and rust bubble detection, Miami gets humidity blush and clear coat peel, LA gets UV oxidation and ash embedding. Climate defects are flagged separately in every report.
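One plausible way to express that per-market calibration is a configuration map. The sketch below uses only the city-to-defect pairings named above; the real calibration data is not published.

```python
# Illustrative per-market calibration map built from the examples above.
CLIMATE_DEFECTS = {
    "Dubai":  ["heat oxidation", "sand defects"],
    "NYC":    ["salt corrosion", "rust bubbles"],
    "London": ["salt corrosion", "rust bubbles"],
    "Miami":  ["humidity blush", "clear coat peel"],
    "LA":     ["UV oxidation", "ash embedding"],
}

def extra_checks(city: str) -> list[str]:
    """Climate defect categories to enable for a given market."""
    return CLIMATE_DEFECTS.get(city, [])
```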