How AI Verifies Conflict Intelligence
Full transparency on how we use AI-powered source verification, credibility scoring, and extractive reasoning to prevent hallucinations in conflict reporting.
AI Can Hallucinate. We Built a System to Stop It.
Large language models are powerful tools for synthesizing information — but they have a known failure mode: they can generate confident-sounding statements that are factually wrong. In conflict reporting, this isn't just an accuracy problem. It's a safety problem.
A single false claim about a nuclear strike, an assassination, or a city falling to enemy forces — published with the visual authority of an intelligence brief — can spread in minutes. Journalists cite it. Investors act on it. People panic.
We built ConflictZone.io to make conflict intelligence faster and more accessible. But we refused to do it at the cost of accuracy. So we engineered a verification layer directly into our data pipeline.
From Raw Feed to Published Brief
Every intel brief follows the same fixed sequence before it reaches your screen: source ingestion, propaganda-source blocking, tiered credibility scoring, extractive quote verification, and a code-enforced safety gate.
Not All Sources Are Equal
Our credibility scoring weights sources by their editorial standards and track record. A claim confirmed by Reuters scores higher than one confirmed by a regional blog.
Wire services and major international outlets:
- Associated Press (AP)
- BBC News
- Agence France-Presse (AFP)
- Deutsche Welle (DW)
- France 24
- The Guardian / NYT / WaPo

Institutional and research sources:
- Government statements
- Think tanks (ICG, ACLED)
- UN / ICRC / HRW reports

Lowest-weighted signals:
- Social media posts
- Unverified Telegram channels
- Single-source claims
The following sources are blocked entirely and never enter our pipeline regardless of the story: RT (Russia Today), Sputnik News, Tass, PressTV, InfoWars, and other known state-propaganda or systematically unreliable outlets.
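Tiered weighting plus a hard blocklist can be sketched as follows. The specific tier assignments and weight values here are illustrative assumptions, not the production mapping; only the blocklist names are taken from this page. Returning a weight of `0.0` approximates the hard block, since the real system drops these sources before scoring even begins.

```python
# Hypothetical tier weights; the real values are internal to ConflictZone.io.
TIER_WEIGHTS = {1: 1.0, 2: 0.8, 3: 0.5, 4: 0.2}
BLOCKED = {"RT", "Sputnik News", "Tass", "PressTV", "InfoWars"}

SOURCE_TIERS = {                      # illustrative tier assignments
    "Reuters": 1, "Associated Press": 1, "BBC News": 1,
    "Deutsche Welle": 2, "The Guardian": 2,
    "Government statement": 3, "UN report": 3,
    "Telegram channel": 4, "Social media post": 4,
}

def source_weight(name: str) -> float:
    if name in BLOCKED:
        return 0.0                    # hard block: never contributes
    # Unknown outlets default to the lowest tier rather than a middle one.
    return TIER_WEIGHTS[SOURCE_TIERS.get(name, 4)]
```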
What the Score Actually Means
Credibility scores are only generated for high-risk claims — not every brief. For routine conflict updates, no score is shown because the risk of harm from a minor inaccuracy is low.
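Deciding which claims count as "high-risk" could be done many ways; the real classifier is not public. As a deliberately simple illustration, a keyword heuristic over the claim text might look like this (the term list is an assumption drawn from the examples earlier on this page):

```python
# Illustrative heuristic only; the production high-risk classifier is internal.
HIGH_RISK_TERMS = ("nuclear", "assassinat", "chemical weapons",
                   "city falls", "coup", "declaration of war")

def needs_credibility_score(claim: str) -> bool:
    """Score only claims whose inaccuracy could cause real-world harm."""
    text = claim.lower()
    return any(term in text for term in HIGH_RISK_TERMS)
```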
| Score | Badge | What it means |
|---|---|---|
| 80–100 | 🟢 HIGH CREDIBILITY | Multiple Tier 1–2 sources independently confirm with exact matching headlines |
| 55–79 | 🟡 LIKELY CREDIBLE | At least one Tier 1 source confirms, or multiple Tier 2 sources agree |
| 35–54 | 🔴 UNVERIFIED | Single source, Tier 3–4 only, or claim is partially but not fully supported |
| 0–34 | 🔴 LOW CREDIBILITY | No headline explicitly confirms the claim. Score hard-capped by code if no exact quotes found |
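The score-to-badge mapping in the table above is a straight threshold lookup. A minimal sketch, using the published thresholds:

```python
def badge(score: int) -> str:
    # Thresholds copied from the published score table.
    if score >= 80:
        return "HIGH CREDIBILITY"
    if score >= 55:
        return "LIKELY CREDIBLE"
    if score >= 35:
        return "UNVERIFIED"
    return "LOW CREDIBILITY"
```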
The Safety Gate That Bypasses the AI
The most important protection in our system is not a prompt instruction — it is a hard-coded constraint enforced in PHP after Claude responds. This is the difference between asking an AI to behave well and making it impossible for it to behave badly.
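The production gate is written in PHP; the following is a Python sketch of the same idea under the assumption that Claude returns a JSON object with `credibility_score` and `exact_quotes` fields. The essential property is that the cap is applied to the parsed response in ordinary code, after the model has finished.

```python
import json

def enforce_quote_cap(model_response: str) -> dict:
    """Post-response safety gate: runs after the model, not inside the prompt."""
    data = json.loads(model_response)
    quotes = data.get("exact_quotes") or []
    if len(quotes) == 0:
        # Hard cap from the methodology: no verbatim quotes, max score 30/100.
        data["credibility_score"] = min(int(data.get("credibility_score", 0)), 30)
    return data
```

Because this check never consults the model, a hallucinated high score with no supporting quotes simply cannot be published.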
If the exact_quotes array in Claude's response is empty, the credibility score is hard-capped at 30/100, regardless of what score Claude attempted to assign.
Transparency vs. The Field
| Feature | ConflictZone.io | LiveUAMap | ACLED | Bellingcat |
|---|---|---|---|---|
| Published methodology | ✓ This page | ✗ | Partial | ✓ Per-article |
| Real-time credibility score | ✓ 0–100 live | ✗ | ✗ | ✗ |
| Exact source quotes shown | ✓ Verbatim | ✗ | ✗ | ✓ Per-article |
| Propaganda sources blocked | ✓ Hard block | Partial | ✓ | ✓ |
| AI hallucination protection | ✓ Code-enforced | N/A | N/A (human) | N/A (human) |
| Update frequency | Every 5 min | ~2 min | Weekly | Per-investigation |
| Free to access | ✓ | Freemium | ✗ Paid API | ✓ |
What We Cannot Guarantee
Transparency means being honest about what our system cannot do, not just what it can.
Corrections & Feedback
If you believe a credibility score is incorrect, a source has been misattributed, or a brief contains a factual error, please contact us at [email protected]. We review all credibility-related reports within 24 hours.
This methodology page is updated whenever our verification pipeline changes. Last updated: March 2026.
Common Questions About AI Conflict Verification
Understand the Broader Context
Our methodology draws on established practices in open-source intelligence (OSINT), computational journalism, and AI safety research. The Bellingcat Verification Handbook covers manual OSINT techniques for verifying conflict imagery and claims. The GDELT Project monitors global media coverage using automated systems. The ACLED methodology is widely regarded as the academic gold standard for conflict data collection.
ConflictZone.io differs from all of these: we make real-time credibility scoring accessible to anyone, not just trained analysts, with the full methodology publicly auditable on this page.
→ Open the Live Conflict Map
→ Conflict Intelligence Data
→ OSINT Event Tracking
→ Contact the Team