Transparency Report · AI Verification Methodology

How AI Verifies Conflict Intelligence

Full transparency on how we use AI-powered source verification, credibility scoring, and extractive reasoning to prevent hallucinations in conflict reporting.

01 — THE PROBLEM

AI Can Hallucinate. We Built a System to Stop It.

Large language models are powerful tools for synthesizing information — but they have a known failure mode: they can generate confident-sounding statements that are factually wrong. In conflict reporting, this isn't just an accuracy problem. It's a safety problem.

A single false claim about a nuclear strike, an assassination, or a city falling to enemy forces — published with the visual authority of an intelligence brief — can spread in minutes. Journalists cite it. Investors act on it. People panic.

We built ConflictZone.io to make conflict intelligence faster and more accessible. But we refused to do it at the cost of accuracy. So we engineered a verification layer directly into our data pipeline.

02 — THE PIPELINE

From Raw Feed to Published Brief

Every intel brief follows this exact sequence before it reaches your screen:

01
SOURCE INGESTION
RSS feeds from Reuters, AP, BBC, Al Jazeera, DW, France 24 and regional outlets are pulled every 5 minutes. Known propaganda sources (RT, Sputnik, PressTV) are blocked at the feed level before any AI sees them.
02
NOISE FILTER
A keyword blocklist removes celebrity, sports, financial and lifestyle content. Only geopolitical, military and humanitarian headlines pass through.
03
AI BOUNCER (Claude Haiku)
A secondary AI pass reviews remaining headlines and removes anything not genuinely related to armed conflict, military operations or geopolitical crises. Approximate cost: $0.0006 per batch.
04
INTEL BRIEF GENERATION (Claude Haiku)
Claude synthesizes the verified headlines into a structured brief. The prompt enforces strict attribution rules: Claude must use language like "Reuters reports..." or "According to AP..." — never asserting kinetic events as objective facts.
05
HIGH-RISK KEYWORD SCAN
The generated brief is scanned for 20 high-risk terms: nuclear, nuke, assassinated, coup, genocide, chemical weapon, ballistic missile, declaration of war, and others. If any are found, the brief is flagged for credibility verification before publishing.
↓ (only if flagged)
06
CREDIBILITY VERIFICATION (Claude Haiku)
A second, independent AI pass checks the flagged claim against all available headlines. It must extract exact verbatim quotes from source headlines before it is allowed to assign a confirmation. If no exact match exists, the score is hard-capped at 30/100 — a constraint enforced in code, not by the AI.
07
PUBLISH WITH CREDIBILITY BADGE
The brief is published with a visible credibility score, the confirming sources listed by name, and the exact headline that constitutes the evidence. Users can read the raw source directly.
03 — SOURCE TIERS

Not All Sources Are Equal

Our credibility scoring weights sources by their editorial standards and track record. A claim confirmed by Reuters scores higher than one confirmed by a regional blog.

🟢 Tier 1 — Highest
Reuters
Associated Press (AP)
BBC News
Agence France-Presse (AFP)
🔵 Tier 2 — High
Al Jazeera
Deutsche Welle (DW)
France 24
The Guardian / NYT / WaPo
🟡 Tier 3 — Medium
Regional outlets
Government statements
Think tanks (ICG, ACLED)
UN / ICRC / HRW reports
🔴 Tier 4 — Low
Blogs
Social media posts
Unverified Telegram channels
Single-source claims

The following sources are blocked entirely and never enter our pipeline, regardless of the story: RT (Russia Today), Sputnik News, TASS, PressTV, InfoWars, and other known state-propaganda or systematically unreliable outlets.
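In code, the tier table and blocklist amount to a simple lookup performed before scoring. A minimal Python sketch, assuming illustrative names (`SOURCE_TIERS`, `BLOCKED_SOURCES`, `accept_source`) and listing only a subset of the sources named above; the tier numbers mirror the table, while any numeric weighting derived from them is an implementation detail not specified here:

```python
# Partial mirror of the source-tier table above (lower tier = higher trust).
SOURCE_TIERS = {
    "Reuters": 1, "Associated Press": 1, "BBC News": 1, "AFP": 1,
    "Al Jazeera": 2, "Deutsche Welle": 2, "France 24": 2, "The Guardian": 2,
    # Tier 3: regional outlets, government statements, think tanks, UN/NGO reports
    # Tier 4: blogs, social media, unverified Telegram channels
}

# Blocked outright at the feed level, before any AI sees the content.
BLOCKED_SOURCES = {"RT", "Sputnik News", "TASS", "PressTV", "InfoWars"}

def accept_source(name: str) -> bool:
    """Reject blocked propaganda sources; everything else proceeds to tiering."""
    return name not in BLOCKED_SOURCES
```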

04 — CREDIBILITY SCORING

What the Score Actually Means

Credibility scores are only generated for high-risk claims — not every brief. For routine conflict updates, no score is shown because the risk of harm from a minor inaccuracy is low.

Score | Badge | What it means
80–100 | 🟢 HIGH CREDIBILITY | Multiple Tier 1–2 sources independently confirm with exact matching headlines
55–79 | 🟡 LIKELY CREDIBLE | At least one Tier 1 source confirms, or multiple Tier 2 sources agree
35–54 | 🔴 UNVERIFIED | Single source, Tier 3–4 only, or claim is partially but not fully supported
0–34 | 🔴 LOW CREDIBILITY | No headline explicitly confirms the claim. Score hard-capped by code if no exact quotes found
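The score-to-badge mapping above is a straight threshold lookup. A minimal sketch in Python (the production code is PHP, and `badge_for` is a hypothetical name used here for illustration):

```python
def badge_for(score: int) -> str:
    """Map a 0-100 credibility score to its published badge, per the table above."""
    score = max(0, min(100, score))  # clamp to the valid range first
    if score >= 80:
        return "HIGH CREDIBILITY"
    if score >= 55:
        return "LIKELY CREDIBLE"
    if score >= 35:
        return "UNVERIFIED"
    return "LOW CREDIBILITY"
```

Because the input is clamped before the thresholds are applied, an out-of-range score from the model can never produce an out-of-range badge, the same defensive pattern used in the hard constraint described in section 05.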
05 — THE HARD CONSTRAINT

The Safety Gate That Bypasses the AI

The most important protection in our system is not a prompt instruction — it is a hard-coded constraint enforced in PHP after Claude responds. This is the difference between asking an AI to behave well and making it impossible for it to behave badly.

Extractive Reasoning Rules — enforced in code
R1
Claude must extract the exact verbatim text from a provided headline before it is permitted to list that source as a confirmation
R2
If the exact_quotes array in Claude's response is empty, the credibility score is hard-capped at 30/100 — regardless of what score Claude attempted to assign
R3
Claude is explicitly forbidden from using its training data as evidence — only the headlines provided in the current request are valid sources
R4
The exact quote is displayed verbatim on the frontend so any user can verify independently that the AI did not fabricate the confirmation
// PHP enforcement — runs AFTER Claude responds
// Claude cannot talk its way around this
$exact_quotes = $result['exact_quotes'] ?? [];
$score = $result['score'];

// Hard cap: no quotes = score cannot exceed 30
if (empty($exact_quotes)) {
    $score = min($score, 30);
}

// Normalise to valid range regardless
$score = max(0, min(100, (int)$score));
06 — HOW WE COMPARE

Transparency vs. The Field

Feature ConflictZone.io LiveUAMap ACLED Bellingcat
Published methodology ✓ This page Partial ✓ Per-article
Real-time credibility score ✓ 0–100 live
Exact source quotes shown ✓ Verbatim ✓ Per-article
Propaganda sources blocked ✓ Hard block Partial
AI hallucination protection ✓ Code-enforced N/A N/A (human) N/A (human)
Update frequency Every 5 min ~2 min Weekly Per-investigation
Free to access Freemium ✗ Paid API
07 — KNOWN LIMITATIONS

What We Cannot Guarantee

Transparency means being honest about what our system cannot do, not just what it can.

⚠️
SOURCE CONSENSUS ≠ TRUTH
If multiple sources report the same false claim, our system will score it highly. We verify that sources say something — not that what they say is objectively true. Coordinated disinformation across multiple outlets could still pass our filters.
⚠️
BREAKING NEWS GAP
Our RSS feeds update every 5 minutes. In the first minutes of a major event, only one source may have reported it. The credibility score will be low not because the event is false, but because confirmation hasn't propagated yet. Low score ≠ false claim.
⚠️
ROUTINE BRIEFS ARE NOT VERIFIED
Credibility verification only fires for high-risk keywords. Routine conflict updates — troop movements, political statements, economic impacts — are AI-synthesized without a secondary verification pass. Attribution language is enforced in the prompt, but no score is generated.
⚠️
NOT A SUBSTITUTE FOR PROFESSIONAL INTELLIGENCE
ConflictZone.io is a journalistic and educational tool. It is not a substitute for professional intelligence assessments, government advisories, or on-the-ground reporting. Do not use this platform as the sole basis for security, financial or policy decisions.
08 — CONTACT

Corrections & Feedback

If you believe a credibility score is incorrect, a source has been misattributed, or a brief contains a factual error, please contact us at intel@conflictzone.io. We review all credibility-related reports within 24 hours.

This methodology page is updated whenever our verification pipeline changes. Last updated: March 2026.

09 — FREQUENTLY ASKED QUESTIONS

Common Questions About AI Conflict Verification

Does a high credibility score mean the event is definitely true?
No. A high score means multiple trusted sources independently reported it using similar language. Sources can be wrong simultaneously, especially in the first hours of a rapidly evolving situation. Always treat conflict intelligence as a starting point for your own research, not a final verdict.
How is this different from LiveUAMap or ACLED?
LiveUAMap uses human editors to manually pin events — high granularity, slower. ACLED compiles weekly academic datasets — high accuracy, not real-time. ConflictZone.io is the only platform combining real-time AI synthesis with a transparent, public credibility scoring system and full source attribution. We are faster than ACLED, more transparent than LiveUAMap, and free.
Can I use ConflictZone.io data in journalism or research?
Yes, with proper attribution and independent verification. Always link to your primary sources (Reuters, AP, BBC) not to our platform. ConflictZone.io is a discovery and monitoring tool — not a primary source. For academic research, cite the original outlets our system surfaces.
How do I report an incorrect score or factual error?
Email intel@conflictzone.io with the conflict name, the specific claim you believe is incorrect, and any sources you can provide. We review all credibility reports within 24 hours.
10 — FURTHER READING

Understand the Broader Context

Our methodology draws on established practices in open-source intelligence (OSINT), computational journalism, and AI safety research. The Bellingcat Verification Handbook covers manual OSINT techniques for verifying conflict imagery and claims. The GDELT Project monitors global media coverage using automated systems. The ACLED methodology provides the gold standard for conflict data collection from academic sources.

ConflictZone.io differs from all of these: we make real-time credibility scoring accessible to anyone, not just trained analysts, with the full methodology publicly auditable on this page.

→ Open the Live Conflict Map    → Conflict Intelligence Data    → OSINT Event Tracking    → Contact the Team