Getting Started with CiteMetrix

Your first 15 minutes — from adding a domain to seeing your first ModelScore.

CiteMetrix tells you how the major AI platforms — ChatGPT, Claude, Gemini, Perplexity, Grok, Copilot, DeepSeek, Mistral — describe your brand when someone asks about your industry, products, or competitors. It scans those platforms with the queries your customers actually type, captures whether your brand gets cited, scores how accurate and how favorable those citations are, and tells you what to fix.

This guide walks you from a fresh account to your first scan results. Plan on about 15 minutes of active setup, plus a short wait while the first scan runs.

Before you begin

You’ll need three things on hand:

  • The domain you want to track. One per Starter plan, up to 5 on Pro, up to 15 on Agency. Use the production domain — acme.com, not staging.acme.com.
  • Your brand identity in plain language. A few sentences describing what your company does, what makes it distinctive, and a handful of facts you want AI to get right (founding year, headquarters, key products, leadership names). You don’t need perfection here — you can refine these later.
  • API keys for at least one AI provider. CiteMetrix uses your own API keys (BYOK — Bring Your Own Keys) to scan AI platforms on your behalf. You’ll get the best results with Anthropic, OpenAI, Google (Gemini), and either Perplexity or Grok configured. We’ll cover where to get each one in API Keys (BYOK).

If you don’t have any API keys yet, you can still complete the first three steps below and add keys before you run your first scan.

Dashboard home before first domain is added

Step 1: Add your first domain

From the dashboard, click Add Domain in the top-right of the domain picker. You’ll be asked for two fields:

  • Domain — e.g. acme.com. CiteMetrix normalizes this for you, so it’s fine to enter https://www.acme.com/ or acme.com — both resolve to the same record.
  • Brand name (optional but recommended) — the name your customers actually use when they search. If your domain is acmewidgets.com but everyone calls you “Acme”, enter “Acme”. This is the string CiteMetrix looks for when checking whether AI responses cite your brand.
Add Domain dialog with two fields

Click Save. The domain appears in your picker and becomes the active domain for the rest of setup.
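The normalization described above boils down to stripping the scheme, path, and a leading “www.” — this is an illustrative sketch of that idea in Python, not CiteMetrix’s actual code:

```python
from urllib.parse import urlparse

def normalize_domain(raw):
    """Reduce a user-entered domain to a bare hostname (illustrative sketch)."""
    raw = raw.strip().lower()
    # urlparse only fills .netloc when a scheme is present, so add one if missing
    if "://" not in raw:
        raw = "https://" + raw
    host = urlparse(raw).netloc
    # Drop a leading "www." so www.acme.com and acme.com collapse to one record
    if host.startswith("www."):
        host = host[len("www."):]
    return host

assert normalize_domain("https://www.acme.com/") == "acme.com"
assert normalize_domain("acme.com") == "acme.com"
```

Whatever form you type, the record you end up tracking is the bare production hostname.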

Step 2: Add your brand facts

Brand Facts are the heart of CiteMetrix’s hallucination detection. They’re verified statements about your company that you want AI platforms to cite correctly. When CiteMetrix scans a platform and parses the response, it compares what AI said to your brand facts — flagging fabricated claims, outdated info, and contradictions.

Navigate to Brand Facts in the left sidebar and click Add Fact. For each fact:

  • Fact text — the verified statement (e.g., “Acme was founded in 2014 in Austin, Texas”).
  • Category — pick from Founding, Leadership, Products, Pricing, Locations, Achievements, or Other.
  • Severity — how important is it that AI gets this right? Critical facts trigger high-priority alerts when contradicted; Minor facts are tracked but don’t escalate.

Aim for 8-15 facts to start. The most useful ones are specific and verifiable — “headquartered in Austin, Texas” rather than “based in the United States.” You can always add more later, and CiteMetrix’s hallucination detector improves significantly as your fact library grows.

Why this matters: Without brand facts, CiteMetrix can detect that AI is talking about you, but can’t tell whether what it’s saying is correct. With facts in place, you’ll see specific, actionable hallucination reports like “Perplexity claims your founding year is 2018; your verified fact says 2014.”
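To make the founding-year example concrete, here is a heavily simplified sketch of the comparison idea — the fact structure and regex are illustrative assumptions, not CiteMetrix’s actual schema or detection logic:

```python
import re

# A brand fact as described above: verified text, category, severity.
# These field names are hypothetical, for illustration only.
fact = {"text": "Acme was founded in 2014 in Austin, Texas",
        "category": "Founding", "severity": "critical"}

def founding_year(text):
    """Pull a four-digit year out of a 'founded in YYYY' phrase, if present."""
    m = re.search(r"founded in (\d{4})", text, re.IGNORECASE)
    return m.group(1) if m else None

ai_response = "Acme, founded in 2018, makes asphalt additives."

claimed = founding_year(ai_response)
verified = founding_year(fact["text"])
if claimed and verified and claimed != verified:
    # In the real product this surfaces as a hallucination report
    print(f"Contradiction: AI says {claimed}, verified fact says {verified}")
```

The point is the shape of the check: without a verified value on your side of the comparison, there is nothing to contradict.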

Step 3: Configure API keys (BYOK)

Go to Settings → API Keys. You’ll see one row per AI platform. For each provider you want to scan:

  1. Click Get key — it’ll open the provider’s API console in a new tab.
  2. Generate a new API key. Use a descriptive name (e.g., “CiteMetrix”).
  3. Copy it back into the matching field in CiteMetrix and click Save.

Minimum viable setup: Anthropic + OpenAI. That gets you Claude and ChatGPT/Copilot scans. Add Gemini, Perplexity, and Grok keys to cover the rest of the major platforms. Add a DataForSEO login for Google AI Overview detection if your industry’s queries trigger AI Overviews.

CiteMetrix never stores your API keys in plain text — they’re encrypted in your user record and only decrypted when a scan needs to make an API call. Each key is tied to your user account; team members on Pro/Agency plans configure their own.

API Keys settings panel showing provider rows

Step 4: Add your keywords

Keywords are the queries CiteMetrix runs against each AI platform. They should reflect what your customers actually ask — not your internal jargon. Good examples for an asphalt-additives company:

  • “best asphalt rejuvenators 2025”
  • “polymer modified asphalt vs traditional”
  • “what does Acme make”
  • “asphalt sustainability solutions”

Mix branded queries (your name in them) with unbranded queries (the problem your product solves). Branded queries tell you if AI knows about you; unbranded queries tell you if AI mentions you when it should.
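The branded/unbranded split is a simple containment check on your brand name — sketched here with the example keywords from above (the function name and default brand are assumptions for illustration):

```python
def is_branded(query, brand="Acme"):
    """A query is 'branded' if it contains the brand name (case-insensitive)."""
    return brand.lower() in query.lower()

queries = [
    "best asphalt rejuvenators 2025",          # unbranded: does AI surface you?
    "polymer modified asphalt vs traditional",  # unbranded
    "what does Acme make",                      # branded: does AI know you?
    "asphalt sustainability solutions",         # unbranded
]

branded = [q for q in queries if is_branded(q)]
unbranded = [q for q in queries if not is_branded(q)]
```

A roughly 1:3 branded-to-unbranded mix, as in this list, is a reasonable starting ratio — unbranded queries are where visibility is usually won or lost.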

Navigate to Queries in the sidebar. Add 10-25 keywords to start. Each one will be scanned against every AI platform you have keys for, on whatever cadence your plan supports.

Step 5: Run your first scan

From the dashboard, click Run Scan in the top-right. A confirmation modal will tell you:

  • How many keywords will be scanned
  • How many AI platforms (based on configured keys)
  • The estimated number of API calls (= keywords × platforms)

Click Confirm. The scan runs in the background; you’ll see live progress. A typical first scan with 15 keywords and 5 configured platforms takes 2-5 minutes.
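The modal’s API-call estimate is simple arithmetic — one call per keyword-platform pair:

```python
def estimated_api_calls(num_keywords, num_platforms):
    """The confirmation modal's estimate: one call per keyword-platform pair."""
    return num_keywords * num_platforms

# The typical first scan from this guide: 15 keywords across 5 platforms
assert estimated_api_calls(15, 5) == 75
```

Keep this in mind when sizing your keyword list — every keyword you add multiplies across every platform you have keys for.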

When it finishes, the dashboard populates with your first ModelScore, citation breakdown by platform, and any hallucinations the system detected against your brand facts.

Scan in progress modal showing live progress

What to expect after your first scan

Most first-time users see a ModelScore in the 30-50 range. That’s normal. The major AI platforms hold very different views of any given brand, and a fresh scan simply shows you the baseline they collectively hold.

Specifically, look at:

  • AI Citation Score — the percentage of your queries whose AI responses actually mentioned your brand. Sub-20% means AI doesn’t know you well. 60%+ means strong baseline visibility.
  • Hallucinations panel — which AI platforms made claims that contradicted your verified brand facts. Each one becomes an action item.
  • Platform breakdown — which platforms cite you most/least. Often skewed; you’ll see one or two platforms doing the lifting and others ignoring you entirely.
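The citation score is the mention rate across your scanned queries — a minimal sketch, assuming one mentioned-or-not flag per query (the function and data are illustrative):

```python
def citation_score(mentions):
    """Percentage of scanned queries whose AI response mentioned the brand."""
    return 100 * sum(mentions) / len(mentions) if mentions else 0.0

# One boolean per scanned query: did the response cite the brand?
results = [True, False, False, True, False, False, False, True, False, False]
score = citation_score(results)   # 30.0 -- in the "AI doesn't know you well" band
```

A 3-in-10 mention rate like this is a common first-scan baseline; the unbranded queries are usually the ones dragging it down.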

What to do next

In rough order:

  1. Read Understanding Your ModelScore so you know what each component means and how to move it.
  2. Open the most severe hallucination from your first scan and walk through the Remediation panel — it’ll tell you exactly what to fix and where.
  3. Set up Email Digests so you get a weekly snapshot without logging in.
  4. If you’re on a tablet or phone often, install the mobile app (PWA) — push notifications fire when something important changes.

Welcome to CiteMetrix. The first scan is the hardest part; everything after this is iteration.

Last updated: May 7, 2026