Brand Facts
How to write brand facts that drive accurate hallucination detection — and why the quality of your fact library determines the quality of your reports.
Brand facts are verified statements about your company that you want AI to get right. They’re how CiteMetrix transforms vague observations like “AI is talking about you” into specific, actionable findings like “Perplexity claims your founding year is 2018; your verified fact says 2014.” Without brand facts, the hallucination detector has nothing to compare AI output against. With a good fact library, it produces a steady stream of work-ready remediation reports.
This article explains what makes a fact useful, how the categorization and severity system works, and how to build a library that pays for itself.
Anatomy of a brand fact
Every fact has three required fields:
- Fact text — the verified statement itself, in plain English. This is what gets compared against AI output.
- Category — one of seven groupings: Founding, Leadership, Products, Pricing, Locations, Achievements, Other. Used for organization, filtering, and severity defaults.
- Severity — Critical, Moderate, or Minor. Drives how the system reacts when a fact is contradicted.
You’ll also see optional fields for source URL (where you can prove the fact, in case a stakeholder questions a remediation report) and notes (internal context for your team).
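If it helps to see the shape all at once, here's a sketch of a fact record in TypeScript. The field names and types are illustrative assumptions, not CiteMetrix's actual schema:

```typescript
// Illustrative sketch only; field names are assumptions, not CiteMetrix's schema.
type FactCategory =
  | "Founding" | "Leadership" | "Products" | "Pricing"
  | "Locations" | "Achievements" | "Other";

type FactSeverity = "Critical" | "Moderate" | "Minor";

interface BrandFact {
  factText: string;        // the verified statement, in plain English
  category: FactCategory;  // one of the seven groupings
  severity: FactSeverity;  // drives how the system reacts to contradictions
  sourceUrl?: string;      // optional: where a stakeholder can verify the fact
  notes?: string;          // optional: internal context for your team
}

const fact: BrandFact = {
  factText: "Acme was founded in 2014 in Austin, Texas, by Jane Smith and John Doe.",
  category: "Founding",
  severity: "Critical",
  sourceUrl: "https://acme.example.com/about",
};
```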
What makes a fact useful
The single biggest determinant of report quality is fact specificity. A vague fact produces vague hallucination reports — or worse, false positives. A specific fact produces precise, actionable findings.
Useful fact: “Acme was founded in 2014 in Austin, Texas, by Jane Smith and John Doe.”
Less useful: “Acme is based in the United States.”
The first fact tells the detector exactly what to look for in AI output. If Perplexity says “Acme was founded in 2018 in Boston,” the detector flags four findings (the wrong year, the wrong city, the wrong state that Boston implies, and the omitted founder names), each one pointing at a specific correction. The second fact gives the detector almost nothing to work with — Boston is in the United States, so technically it’s not a contradiction, even though it’s wrong.
A few rules of thumb:
- Include numbers and dates. Years, percentages, dollar amounts, employee counts. AI hallucinations tend to invent numbers, and your facts let the detector catch the invention.
- Include proper nouns. Names of people, places, products. Misidentifying a CEO is a different kind of error than misstating revenue, and both should be catchable.
- One fact per fact. Don’t write “Acme makes asphalt rejuvenators and was founded in 2014 in Austin.” Split it into three facts (see the sketch after this list). Each one becomes its own check, and each contradiction lands as a separate, addressable item.
- State things AI is likely to get wrong. If your founding year, headquarters city, and CEO’s name are stable and prominent on your homepage, AI will probably get them right — listing them anyway costs nothing. Pricing, current product features, and recent achievements are higher-risk and more important to verify.
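To make the "one fact per fact" rule concrete, here's the compound example above split into independently checkable facts, reusing the illustrative `BrandFact` shape from earlier (the severity assignments are just one reasonable choice):

```typescript
// One compound statement becomes three facts, each its own check.
const acmeFacts: BrandFact[] = [
  { factText: "Acme makes asphalt rejuvenators.",   category: "Products", severity: "Critical" },
  { factText: "Acme was founded in 2014.",          category: "Founding", severity: "Critical" },
  { factText: "Acme was founded in Austin, Texas.", category: "Founding", severity: "Moderate" },
];
```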
The seven categories
Categories aren’t just organizational. They feed into the hallucination detector’s prompt construction (which kinds of contradictions to look hardest for) and into the dashboard’s filtering UI.
| Category | Examples |
|---|---|
| Founding | Year founded, founders’ names, original company name, where the company was founded |
| Leadership | CEO, CFO, founder roles, board members, key executives |
| Products | What you sell, product names, key features, what the products are made of, who the products are for |
| Pricing | List prices, plan tiers, pricing model (subscription vs perpetual, per-seat, per-domain) |
| Locations | Headquarters, regional offices, countries served, distribution centers |
| Achievements | Awards, certifications, notable customers, partnerships, milestones |
| Other | Anything that doesn’t fit cleanly above — company size, ownership structure, mission statement |
If you’re unsure where a fact belongs, use Other. Categorization affects organization more than detection accuracy.
Severity levels
Severity controls how loudly the system reacts when a fact is contradicted by AI output:
- Critical — Contradictions trigger high-priority alerts. Push notifications fire (if you have the PWA installed). The hallucination appears in the urgent section of your weekly digest. Use Critical sparingly — for facts where AI getting it wrong materially harms your business, your brand, or your customers.
- Moderate — Contradictions are tracked and surfaced in reports, but don’t trigger urgent notifications. The default for most facts. Use Moderate for things that should be correct but where a wrong answer doesn’t damage anything immediately.
- Minor — Contradictions are tracked but don’t appear in the urgent section of your reports. Use Minor for facts where AI getting it wrong is annoying but not consequential — historical trivia, deep details that most users won’t ask about.
A useful mental model: Critical = “this would make us issue a press release if it became widely believed.” Moderate = “this is wrong and we want it fixed.” Minor = “huh, that’s not right.”
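In code terms, the routing the three levels imply looks something like this sketch. The function and field names are assumptions, not CiteMetrix internals:

```typescript
// Hypothetical routing sketch; names are assumptions, not CiteMetrix internals.
function routeContradiction(severity: FactSeverity) {
  return {
    // Push notifications fire only for Critical (and only if the PWA is installed)
    pushNotification: severity === "Critical",
    // Only Critical contradictions land in the urgent section of the weekly digest
    urgentDigestSection: severity === "Critical",
    // Every contradiction is tracked and surfaced in reports, whatever the severity
    trackedInReports: true,
  };
}
```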
Building your initial library
For most brands, 8-15 facts is enough to start producing useful reports. Start with these:
Critical (3-5 facts) — the things that must be correct: founding year, current CEO, your single biggest product line, your headquarters.
Moderate (5-10 facts) — the supporting context: leadership team beyond the CEO, secondary product lines, key certifications, regional locations, your most prominent achievements or partnerships.
Minor (0-5 facts) — fill in over time as you discover specific hallucinations you want to track.
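Here's what a starter library along those lines might look like, again using the illustrative `BrandFact` shape. The facts themselves are for a hypothetical brand, not a template:

```typescript
// A hypothetical 9-fact starter library for "Acme" (illustrative only).
const starterLibrary: BrandFact[] = [
  // Critical: the things that must be correct
  { factText: "Acme was founded in 2014.",                                     category: "Founding",     severity: "Critical" },
  { factText: "Jane Smith is Acme's CEO.",                                     category: "Leadership",   severity: "Critical" },
  { factText: "Acme's flagship product is the RoadRenew asphalt rejuvenator.", category: "Products",     severity: "Critical" },
  { factText: "Acme is headquartered in Austin, Texas.",                       category: "Locations",    severity: "Critical" },
  // Moderate: the supporting context
  { factText: "John Doe is Acme's co-founder and CTO.",                        category: "Leadership",   severity: "Moderate" },
  { factText: "Acme is ISO 9001 certified.",                                   category: "Achievements", severity: "Moderate" },
  { factText: "Acme serves customers in the US and Canada.",                   category: "Locations",    severity: "Moderate" },
  { factText: "RoadRenew is sold per gallon, not by subscription.",            category: "Pricing",      severity: "Moderate" },
  // Minor: add over time as you spot hallucinations worth tracking
  { factText: "Acme was originally named Acme Paving Supply.",                 category: "Founding",     severity: "Minor" },
];
```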
You can always add more later. The first time you see CiteMetrix surface a real, specific hallucination caught against one of your facts is usually the moment your facts feel worth the effort.
How facts get used at scan time
When CiteMetrix runs a citation scan and your brand is mentioned in the AI response, the hallucination detector takes the response text plus your full fact library and asks an AI model: “Is anything in this AI output contradicted by these verified facts?”
This is a separate AI call from the citation scan itself, made via Anthropic’s Claude Haiku for speed and cost reasons. If the detector finds contradictions, it captures them with severity, type (incorrect fact, outdated info, fabricated claim, misleading context, critical omission), and the exact passage that triggered the flag.
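Conceptually, that detection call looks something like the sketch below. The prompt wording, the JSON instruction, and the model alias are all assumptions for illustration; CiteMetrix's actual prompt and parsing are internal:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Conceptual sketch; prompt wording, response schema, and model alias are
// assumptions, not CiteMetrix's actual implementation.
const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function detectHallucinations(aiResponseText: string, facts: BrandFact[]) {
  const factList = facts.map((f) => `- [${f.severity}] ${f.factText}`).join("\n");

  const message = await anthropic.messages.create({
    model: "claude-3-5-haiku-latest", // a Haiku-class model, per the article
    max_tokens: 1024,
    messages: [{
      role: "user",
      content:
        `Verified brand facts:\n${factList}\n\n` +
        `AI output to check:\n${aiResponseText}\n\n` +
        `Is anything in this AI output contradicted by these verified facts? ` +
        `For each contradiction, return JSON with severity, type ` +
        `(incorrect_fact | outdated_info | fabricated_claim | misleading_context | critical_omission), ` +
        `and the exact passage that triggered the flag.`,
    }],
  });

  return message.content; // parsing the structured answer is omitted for brevity
}
```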
A handful of consequences:
- The more facts you have, the more thorough the detection. There’s a soft ceiling around 100-150 facts where the detector’s prompt window starts to constrain how thoroughly each fact gets checked. Most brands won’t hit this; if you do, prune the Minor-severity ones first.
- The detector understands paraphrase. AI saying “founded about a decade ago” against a fact stating “founded in 2014” passes in 2024 but gets flagged in 2030 — context-aware comparison, not exact-string matching.
- Fact updates apply going forward. Updating a fact (say, when a CEO changes) doesn’t re-flag previously-stored hallucinations against the old fact. New scans use the new fact; historical hallucinations stay tied to the fact version that detected them.
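One plausible way to picture that last point, with hypothetical record shapes rather than CiteMetrix's real data model:

```typescript
// Hypothetical record shape illustrating why fact updates apply going forward.
interface StoredHallucination {
  passage: string;              // the exact AI passage that was flagged
  factTextAtDetection: string;  // snapshot of the fact as it read when the scan ran
  detectedAt: string;           // ISO 8601 timestamp of the scan
}

// Editing the live BrandFact (say, after a CEO change) alters what future scans
// compare against. Stored hallucinations keep their snapshot of the old fact
// text, so historical findings stay interpretable and are never re-flagged.
```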
What brand facts don’t do
- They don’t influence what AI says about you. They only influence what CiteMetrix flags. Improving how AI describes your brand is a content/SEO problem, not a fact-library problem. The remediation reports point you at the work that does move that needle.
- They don’t validate themselves. If you enter a fact saying “Acme was founded in 2024” and your real founding year is 2014, CiteMetrix will flag every accurate AI response as a hallucination. Garbage in, garbage out. There’s no automatic verification step against an external source of truth.
- They’re not legal documents. Use plain language. The detector parses these the way an AI model would, not the way a lawyer would.
Next steps
- Add or refine your facts in Brand Facts in the dashboard sidebar.
- Once you’ve got a library of 8+ facts, run a scan and review the Hallucinations & Remediation report it produces — that’s when the fact library starts paying for itself.