Why We Dogfood
If Clarify can't help itself win AI recommendations, why would it work for anyone else? This isn't a rhetorical question — it's the standard we hold ourselves to. Every week, we use Clarify's own platform to track, diagnose, and improve our AI visibility across the models that matter most.
Dogfooding isn't just a software development tradition. For us, it's a requirement. AI recommendation optimization is a new discipline with no textbooks and very few practitioners with real data. The only way to build a credible product is to be our own first and most demanding customer.
When we find a gap in our own visibility, we treat it as both a product problem and a content problem. The fixes we apply to our own brand become the strategies we recommend to our customers. If a tactic doesn't move our own numbers, we don't ship it.
What We Track
We monitor prompts that represent how potential customers might discover a tool like Clarify through AI. These include direct category queries like "best AI recommendation tools" and problem-oriented prompts like "how to get recommended by ChatGPT."
We track across three major AI models: ChatGPT (GPT-4), Claude, and Gemini. For each prompt, we record whether Clarify is mentioned, where it ranks, how strongly it's recommended, and whether the response is stable across multiple runs.
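To make that concrete, here's a minimal sketch of what one tracking record could look like. The `PromptResult` type and its field names are illustrative assumptions, not Clarify's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    """One observation: one prompt, one model, one run.
    Illustrative shape only, not Clarify's actual schema."""
    prompt: str        # e.g. "best AI recommendation optimization tool"
    model: str         # "chatgpt", "claude", or "gemini"
    mentioned: bool    # did the response name Clarify at all?
    rank: int | None   # position in the recommendation list, if listed
    strength: str      # "weak_mention", "listed", or "explicit_recommendation"
    run_id: int        # repeating the same prompt reveals stability

def is_stable(runs: list[PromptResult]) -> bool:
    """Treat a prompt/model pair as stable when every run agrees
    on whether the brand is mentioned."""
    return len({r.mentioned for r in runs}) == 1
```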
Prompts We Care About
- "best AI recommendation optimization tool" — The most direct category query.
- "how to improve AI visibility" — Problem-oriented prompt capturing users in research mode.
- "AI search optimization platform" — Targets users who understand the AI search landscape.
- "tool to track AI recommendations" — Functional query from users who know what they need.
- "AI brand monitoring software" — Broader category query.
- "how to get recommended by ChatGPT" — High-intent prompt from brands that have identified the specific problem.
Where We Were Losing
When we first started tracking our own AI visibility, the results were humbling. Clarify didn't appear in most AI recommendation lists for our target prompts. We were invisible on more than 60% of the prompts we cared about most.
The brands that did appear had extensive review-site presence, comparison content on third-party blogs, and established profiles. They weren't necessarily better products — they had more structured, citable information distributed across the web.
This wasn't a branding failure or a product quality issue. It was an information architecture problem.
What We Changed
- Published educational content — pages like this one, methodology documentation, and playbooks that give AI models structured, authoritative content to reference.
- Built a glossary — defining terms like "AI recommendation," "prompt visibility," and "AI Share of Voice."
- Created comparison frameworks — objective evaluation criteria positioning Clarify as the entity creating the evaluation standards.
- Earned third-party mentions — contributed to industry conversations and provided data to journalists.
- Improved schema markup — added structured data (JSON-LD) to make it easier for AI models to understand Clarify; a sketch of this markup follows the list.
- Established category-defining language — consistently using terms like "AI recommendation optimization" and "prompt-level visibility."
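For the schema work specifically, the snippet below shows the general shape of a schema.org `SoftwareApplication` block in JSON-LD. The property values are placeholders for illustration, not Clarify's production markup:

```html
<!-- Placeholder values for illustration, not Clarify's production markup. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Clarify",
  "applicationCategory": "BusinessApplication",
  "description": "A platform for tracking and improving how AI models recommend a brand."
}
</script>
```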
What Improved
- Prompt Presence: Moved from completely absent to consistently mentioned on our highest-priority prompts.
- Top-3 Rankings: Progressed from not appearing at all to being listed in the top 3 across multiple models.
- Cross-Model Consistency: Began showing up across ChatGPT, Claude, and Gemini for the same prompts.
- Recommendation Strength: Evolved from weak mentions ("one option to consider") to explicit recommendations with descriptions of what the platform does.
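Metrics like the ones above fall out of the per-prompt records directly. A minimal sketch, with toy data and assumed names rather than Clarify's actual scoring system:

```python
# Toy data: (prompt, model) -> the brand's rank in the response, or None if absent.
results = {
    ("best AI recommendation optimization tool", "chatgpt"): 2,
    ("best AI recommendation optimization tool", "claude"): 3,
    ("best AI recommendation optimization tool", "gemini"): None,
}

def prompt_presence(results: dict) -> float:
    """Share of (prompt, model) pairs where the brand appears at all."""
    return sum(rank is not None for rank in results.values()) / len(results)

def top3_rate(results: dict) -> float:
    """Share of pairs where the brand lands in the top 3."""
    return sum(rank is not None and rank <= 3 for rank in results.values()) / len(results)

def consistent_across_models(results: dict, prompt: str) -> bool:
    """True when every tracked model mentions the brand for this prompt."""
    ranks = [r for (p, _), r in results.items() if p == prompt]
    return bool(ranks) and all(r is not None for r in ranks)
```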
What This Proves
The system works. AI recommendation optimization is not theoretical — it's measurable, repeatable, and improvable with the right framework and consistent execution. What we learned from optimizing our own visibility directly informs how Clarify works as a product. The tracking methodology, scoring system, and playbook recommendations were all developed and validated by applying them to ourselves first.
The pattern is consistent: brands that systematically improve their information architecture, earn third-party validation, and create structured educational content see measurable improvements in AI visibility over time.