
The model divides visibility into four quadrants:
- Open areas known to your brand and customers
- Hidden areas that you haven’t communicated to your audience
- Blind spots you missed about how customers perceive your brand
- Areas unknown to both you and your customers
Each requires a different answer:
Open areas: Strengthen entity trust
This is the core identity of your brand, so you need to strengthen entity recognition. Gus Pelogia has a guide for building an entity tracker which measures how strongly your brand is associated with specific topics. If trust falls below certain thresholds, you risk exclusion from knowledge graphs.
Use the same terminology repeatedly to improve overall consistency and strengthen semantic precision. LLMs are model students. If you describe yourself in five different ways, they will reflect this inconsistency.
Hidden areas: Protect internal resources
This includes staging environments, internal documentation, private tools, and sensitive resources.
Aggressively restrict access so AI training crawlers cannot reach these pages. Use appropriate authentication, firewall controls, and blocking mechanisms. Once leaked data enters a training corpus, it cannot be removed.
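As a first layer, robots.txt can ask known AI training crawlers to stay out. The sketch below uses user-agent tokens these crawlers have published (GPTBot, CCBot, Google-Extended, ClaudeBot). Keep in mind that robots.txt is advisory, not enforcement, so anything sensitive still needs authentication behind it:

```
# Ask common AI training crawlers not to crawl anything.
# robots.txt is a request, not a lock: gate truly private
# resources behind authentication and firewall rules too.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```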
Blind spots: Monitor external narratives
This is where third-party reviews, social media, forums, and comments live. LLMs train on these associations, and the adjectives used in reviews stick to your brand. Sentiment signals become part of its probabilistic profile.
Implement social listening, track your reputation signals, and monitor how your brand is described across platforms.
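To make that concrete, here is a minimal, hypothetical sketch of the monitoring idea: it tallies the words that appear immediately before or after a brand mention in snippets you have already collected with your listening tool of choice (the function name and the sample data are illustrative, not from any specific product):

```python
import re
from collections import Counter

def brand_descriptors(mentions: list[str], brand: str) -> Counter:
    """Tally words appearing directly before or after a brand name.

    `mentions` is assumed to be text snippets already pulled from
    reviews, forums, or social posts by a social-listening tool.
    """
    # Match either "<word> Brand" or "Brand <word>", case-insensitively.
    pattern = re.compile(
        r"(\w+)\s+" + re.escape(brand) + r"|" + re.escape(brand) + r"\s+(\w+)",
        re.IGNORECASE,
    )
    counts = Counter()
    for snippet in mentions:
        for before, after in pattern.findall(snippet):
            for word in (before, after):
                if word:
                    counts[word.lower()] += 1
    return counts
```

Running it over a batch of mentions surfaces which adjectives are accumulating around the brand, which is exactly the association an LLM will later learn.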
Unknown to both: Proactively control your brand narrative
This quadrant is the most uncertain because you can’t control what you don’t see. However, you can influence the ecosystem through data philanthropy, and here’s how:
- Publish original research
- Provide authoritative resources
- Provide structured, high-quality information
If you want to control how a model talks about your brand, give it something worth mentioning. Remember, the safest defensive strategy is to become the trusted source.
10. Structured data and knowledge graphs are critical to how LLMs understand content. How can SEOs strengthen authority at the entity level?
Using Gus Pelogia’s guide, start by checking the page’s confidence level. If the confidence score is below 50-55%, the model lacks confidence in that entity and is unlikely to cite the page.
Here are some things you can do to improve entity-level authority:
Remove ambiguity:
These are prediction systems, not reasoning engines. They are essentially spicy autocomplete, so don’t leave important signals open to interpretation.
Shaun Anderson’s work on data warehouse leak analysis and image analysis demonstrates how many of these signals connect directly. Entity signals, structured references, and relationships all feed into the same ecosystem.
Be explicit:
Use original sources to provide references. Provide the data yourself instead of relying on the model to infer it. Make sure key details are correct and consistent, including logos, branding information, and entity attributes.
Include structured data:
Structured data plays a role here, but should be treated as part of a broader knowledge graph strategy. Clearly define relationships and entities so machines can interpret them without guessing.
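As an illustration, here is a minimal JSON-LD sketch for an Organization entity. Every value is a placeholder to swap for your own; the `sameAs` links are what explicitly tie your entity to the wider knowledge graph, and `knowsAbout` states the topic associations you want machines to pick up:

```
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q0000000",
    "https://www.linkedin.com/company/example-corp"
  ],
  "knowsAbout": ["entity SEO", "knowledge graphs"]
}
```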
What is your biggest fear about using AI for SEO?
I have two concerns, which I’ve outlined below:
Agentic misalignment:
The Anthropic team, for all its flaws, is also one of the most transparent groups publishing research on these systems.
In a simulated environment, Claude Opus 4 attempted to blackmail a supervisor to prevent it from being shut down, and the team released full details of the experiment.
