May 12, 2026 · 8 min read

How to Check If AI Is Mentioning Your Brand (ChatGPT, Perplexity, Gemini)

A practical guide to checking whether ChatGPT, Perplexity, and Gemini mention your brand, and what to do when they do not.

Marcos Placona

Founder, MentionDrop

AI tools are shaping purchasing decisions before users ever click a link. When someone asks ChatGPT "What is the best tool for monitoring brand mentions?" and your product is not in the answer, you have already lost that prospect. They did not search for alternatives. They got a recommendation and moved on.

Most founders know they should be watching press coverage and Reddit threads. Fewer have a systematic way to check what AI tools say about them. This guide covers exactly how to do that, across every major AI platform, and what to do with what you find.

Why this matters more than most brand monitoring conversations acknowledge

For years, brand monitoring meant watching what humans said about you: blog posts, forum threads, news articles, reviews. That is still important. But there is a new layer.

AI assistants now synthesize those sources and serve a summary to users who never see the underlying pages. The sources that feed AI answers, including web articles, Reddit discussions, and high-authority blog posts, are the same sources that web monitoring tools like MentionDrop track. But the AI answer itself is a separate output that you need to check manually.

The key insight: AI search has changed brand monitoring in a way most teams have not caught up with. You can be well-covered in raw web mentions and still be invisible in AI-generated answers. These are not the same thing.

How to check ChatGPT

Open ChatGPT and run these prompts, one at a time. Use your actual category and brand name.

Category discovery prompts:

  • "What are the best tools for [your category]?"
  • "What should I use to [your product's main job]?"
  • "Recommend a [type of product] for [use case]"

Brand-specific prompts:

  • "What is [your brand name]?"
  • "Tell me about [your brand name]"
  • "Is [your brand name] any good?"

Run each prompt in a fresh conversation to avoid context effects. ChatGPT's training data has a cutoff, and it also pulls from web browsing when enabled. Try both the default and browsing-enabled modes if you have access.

Note what the answer says: Does your brand appear? In what position? With what description? Is the description accurate?
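The manual sweep above can be partly scripted. A minimal Python sketch, assuming you paste each answer in by hand from the chat UI (or fetch it via an API you have access to); the helper names, the naive substring matching, and the example competitor list are illustrative, not a standard tool:

```python
def build_prompts(category: str, job: str, product_type: str,
                  use_case: str, brand: str) -> list[str]:
    """Fill the category-discovery and brand-specific templates above."""
    return [
        f"What are the best tools for {category}?",
        f"What should I use to {job}?",
        f"Recommend a {product_type} for {use_case}",
        f"What is {brand}?",
        f"Tell me about {brand}",
        f"Is {brand} any good?",
    ]

def mention_report(answer: str, brand: str, competitors: list[str]) -> dict:
    """Check whether `brand` appears in an answer and where it ranks,
    by order of first mention. Plain substring matching is naive
    (no word-boundary handling), so treat this as a starting point."""
    text = answer.lower()
    order = sorted(
        (name for name in [brand, *competitors] if name.lower() in text),
        key=lambda name: text.index(name.lower()),
    )
    return {
        "mentioned": brand in order,
        "rank": order.index(brand) + 1 if brand in order else None,
        "first_mentioned": order[:3],
    }
```

Run each prompt in a fresh conversation, paste the answer into `mention_report`, and log the result alongside the date so you can see the answer drift over time.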

How to check Perplexity

Perplexity is worth particular attention because it shows its sources. That means you can see exactly which web pages fed the answer about your category.

Run the same category and brand-specific prompts you used in ChatGPT. Then look at two things:

  1. Does your brand appear in the answer?
  2. If yes, which source is Perplexity citing? If no, which sources are being cited instead?

The cited sources are often Reddit threads, review pages like G2 or Capterra, or blog posts comparing tools in your category. If those sources do not mention you, Perplexity has nothing to pull from. The fix is not to game Perplexity directly. The fix is to ensure the underlying web sources include accurate coverage of your product.
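That source check can be jotted into a few lines. A hypothetical sketch, assuming you copy each citation's URL and visible text out of Perplexity's source panel by hand; the dict fields are placeholders for your own notes, not a Perplexity API:

```python
def source_gaps(sources: list[dict], brand: str) -> list[str]:
    """Given cited sources as {'url': ..., 'text': ...} dicts collected
    from the citations panel, return the URLs whose text never mentions
    the brand -- the pages Perplexity has nothing to pull from."""
    return [s["url"] for s in sources
            if brand.lower() not in s["text"].lower()]
```

Each URL this returns is a concrete target: a roundup post to pitch, a review page to claim, a thread to participate in.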

This is the blind spot in brand monitoring: the gap between what humans say about you and what AI tools synthesize from those human sources.

How to check Google's AI Overviews

Google's AI Overviews appear at the top of search results for many category queries. They are increasingly the first thing users see.

Search for your category keywords directly in Google:

  • "[your category] tools"
  • "best [your product type]"
  • "[your product type] for [your use case]"

If an AI Overview box appears, check whether your brand is mentioned. Then click through to the underlying sources Google used to build it, which are usually shown as linked citations.

Also search your brand name directly and look at whether an AI Overview appears. If it does, what does it say?

How to check Gemini

Open Google Gemini and run the same prompts used for ChatGPT. Gemini draws on Google's web index, which means coverage tends to overlap with what Google Search surfaces. But Gemini often synthesizes more conversationally, so the framing of your brand can feel different.

Note both whether your brand appears and how it is described. Gemini sometimes surfaces older information because it is drawing from indexed pages that rank highly in Google. If your brand has changed significantly in the past year, outdated descriptions in Gemini are common.

How to interpret what you find

Not all outcomes require the same response. Here is how to think about each scenario.

Not mentioned at all. This is the most common situation for newer brands. It means the web sources that feed these AI tools either do not cover you or do not cover you with enough clarity and specificity. The path forward is creating content that answers category questions clearly, earning citations from high-trust sources, and ensuring structured data on your site helps AI tools understand what you do.

Mentioned incorrectly. If ChatGPT says you are a project management tool and you are actually a brand monitoring tool, the problem is usually that your content is ambiguous or the most-cited sources describe you inaccurately. Publish authoritative, clear pages that directly state what your product does. A page titled "What is [Your Brand]?" with a plain-English description is one of the most effective things you can create for this problem.

Mentioned positively and accurately. Your current content and web presence are doing their job. Continue building on it. Track these answers over time, because AI training data changes and competitor content can shift the answer.

Mentioned but in a negative or mixed context. This requires checking which sources fed the negative framing. Find those sources, then either address the underlying issue (if the feedback is fair) or create better content that provides additional context.

What you can actually do to improve AI visibility

The practical moves are not complicated, but they do require consistent effort.

Improve content clarity. AI tools cannot recommend what they cannot understand. If your homepage and top-level product pages do not clearly state what category you are in, what problem you solve, and who you are for, no AI tool can accurately represent you. Write for comprehension, not just search.

Earn citations from trusted sources. Perplexity shows its sources, and they are usually trusted review sites, major publications, Reddit, and established blogs. Guest posts on well-read industry sites, honest reviews on G2 and Capterra, and genuine Reddit participation all feed the citation graph that AI tools rely on.

Add structured data. Schema markup for your organization, product, and FAQ pages helps search-adjacent AI tools understand your content structure. It is not a guarantee of visibility, but it is a signal that helps.
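As a concrete illustration, here is a minimal JSON-LD sketch of the kind of markup described above, using schema.org's SoftwareApplication type; the name, URL, and description are placeholders for your own:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "YourBrand",
  "url": "https://example.com",
  "applicationCategory": "BusinessApplication",
  "description": "Plain-English statement of what the product does, what category it is in, and who it is for."
}
</script>
```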

Respond to web mentions actively. The web sources that feed AI answers are the same ones that brand monitoring tools surface. When someone posts about your category on Reddit, when a blogger compares tools in your space, when a journalist mentions your product in a roundup: those posts are both immediate mentions and future AI training material. Engaging with them authentically improves the underlying source content.

How traditional web monitoring connects to AI visibility

There is a practical reason that monitoring web mentions and Reddit discussions matters even in an AI-first world.

The sources AI tools draw from are, almost entirely, web pages that have already been indexed and cited. An old Reddit thread comparing tools in your category is exactly the kind of source that Perplexity will cite when someone asks about your category. A well-ranked blog post comparing your product to a competitor becomes an AI answer ingredient.

The real problem with brand monitoring is noise, not coverage. But the mentions you do catch on the web, the ones that have high reach, strong sentiment, or placement on trusted domains, are worth paying attention to precisely because they feed future AI answers. A mention on Reddit from a trusted community member with 10,000 upvotes is not just a mention. It is a future AI citation.

MentionDrop monitors the public web and Reddit in real time, which means you are watching the same source pool that AI tools draw from. You cannot directly control what ChatGPT says about you. You can control the web sources it learns from. Monitoring starts at $29/month.

The practical checklist

Run through this once a month at minimum:

  1. Open ChatGPT, Perplexity, and Gemini
  2. Ask each one the three category queries and the two brand-specific queries
  3. Note whether you appear, how you are described, and what sources are cited
  4. Compare across tools: consistent descriptions suggest stable coverage; inconsistent descriptions suggest the source pool is mixed
  5. Flag any inaccurate descriptions and create authoritative content to correct them
  6. Check Google AI Overviews for your top category keywords
  7. Review web monitoring to see which high-authority sources have mentioned you recently
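Step 4, the cross-tool comparison, can be rough-checked with Python's standard library. A minimal sketch, assuming you have pasted each tool's one-line description of your brand into a dict; the threshold at which you treat drift as a problem is a judgment call:

```python
from difflib import SequenceMatcher

def description_drift(descriptions: dict[str, str]) -> list[tuple[str, str, float]]:
    """Pairwise similarity (0.0-1.0) of how each AI tool described the
    brand. Low scores flag a mixed source pool worth investigating."""
    names = list(descriptions)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ratio = SequenceMatcher(
                None, descriptions[a].lower(), descriptions[b].lower()
            ).ratio()
            pairs.append((a, b, round(ratio, 2)))
    return pairs
```

Identical descriptions score 1.0; a ChatGPT answer calling you a monitoring tool while Gemini calls you a project management app will score far lower, which is exactly the signal that the underlying sources disagree.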

This is a 30-minute exercise that most teams skip entirely. Founders who do it consistently are the ones who notice when AI tools start recommending them, and the ones who catch inaccurate descriptions before they compound.
