Fri. Mar 13th, 2026

The Dark Side of AI No One Talks About

[Image: Jamie Indigo, The Dark Side of AI]


The model breaks visibility into four quadrants:

  • Open areas known to your brand and customers
  • Hidden areas you haven’t communicated to your audience
  • Blind spots you’ve missed about how customers perceive your brand
  • Areas unknown to both you and your audience

Each requires a different response:

Open areas: strengthen entity confidence

This is your core brand identity, so you need to reinforce entity recognition. Gus Pelogia has a guide to building an Entity Tracker that measures how strongly your brand is associated with specific topics. If confidence drops below certain thresholds, you risk exclusion from knowledge graphs.

Use the same terminology repeatedly to improve consistency across the board and enforce semantic precision. LLMs are pattern learners: if you describe yourself five different ways, they will reflect that inconsistency.
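As a rough sketch of how you might audit that consistency, the snippet below counts the distinct ways a brand describes itself across page copy. The brand name "Acme" and the sample snippets are invented for illustration; a real audit would pull descriptions from your actual pages.

```python
import re
from collections import Counter

# Hypothetical page snippets describing the same brand (invented data).
snippets = [
    "Acme is a technical SEO consultancy.",
    "Acme is a technical SEO consultancy.",
    "Acme is an SEO agency.",
    "Acme is a digital marketing firm.",
]

def brand_descriptions(texts, brand="Acme"):
    """Count how the brand is described in 'BRAND is a/an ...' sentences."""
    pattern = re.compile(rf"{brand} is an? ([^.]+)\.")
    found = []
    for text in texts:
        found += pattern.findall(text)
    return Counter(found)

counts = brand_descriptions(snippets)
# Three distinct self-descriptions signal inconsistency to a pattern learner.
print(counts.most_common())
```

A single dominant description is the goal; every extra variant dilutes the pattern the model learns.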

Hidden areas: protect internal assets

This includes staging environments, internal documentation, private tools, and sensitive resources.

Aggressively restrict access so AI training crawlers cannot reach these pages. Use authentication, firewall controls, and proper blocking mechanisms. Once leaked data is scraped, it becomes part of the training corpus.
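As a starting point, a robots.txt fragment like the one below opts out of the major published training user agents. Note that robots.txt is advisory only: compliant crawlers honor it, but authentication and firewall rules remain the real controls for anything sensitive.

```text
# robots.txt — advisory; pair with authentication and firewall rules.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```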

Blind spots: monitor external narratives

This is where reviews, social media, forums, and third-party commentary live. LLMs train on these associations: the adjectives used in reviews attach themselves to your brand, and those sentiment signals become part of its probabilistic profile.

Implement social listening, monitor your reputation signals, and track how your brand is described across platforms. 
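A minimal sketch of that kind of listening is below: it counts which adjectives co-occur with brand mentions. The brand, the review snippets, and the tiny hand-picked adjective list are all invented stand-ins; a real pipeline would use an actual mention feed and a sentiment model.

```python
import re
from collections import Counter

# Hypothetical review snippets (invented data, not a real feed).
reviews = [
    "Acme support was slow but thorough.",
    "Acme's audit was thorough and clear.",
    "Found Acme slow to respond.",
]

# Hand-picked adjective list standing in for a real sentiment model.
ADJECTIVES = {"slow", "thorough", "clear", "fast", "confusing"}

def brand_adjectives(texts, brand="acme"):
    """Count adjectives appearing in snippets that mention the brand."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        if brand in " ".join(words):
            counts.update(w for w in words if w in ADJECTIVES)
    return counts

print(brand_adjectives(reviews))
```

The point is the output shape: a ranked list of the descriptors most strongly attached to your brand, which is roughly what ends up in the model's probabilistic profile.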

Unknown to both: proactively control your brand narrative

This quadrant is the most uncertain because you can’t control what you don’t see. However, you can influence the ecosystem through data philanthropy, and here’s how:

  • Publish original research
  • Provide authoritative resources
  • Contribute structured, high-quality information

If you want to control how the model talks about your brand, give it something worth citing. Remember, the safest defensive strategy is to become the trusted source.

10. Structured data and knowledge graphs are foundational to how LLMs understand content. How can SEOs strengthen authority at the entity level?

Using Gus Pelogia’s guide, start by checking the confidence level of the page. If the confidence score is below 50-55%, the model is not confident in that entity and is unlikely to cite the page.
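One way to read that threshold is as a simple gate, sketched below. The 0.50–0.55 band comes from the figure quoted above; the example scores and the three-way classification are my own illustrative interpretation, not part of the guide.

```python
# Illustrative gate on an entity-confidence score.
CONFIDENCE_FLOOR = 0.55   # above this, the entity is considered solid
BORDERLINE_FLOOR = 0.50   # the 50-55% band discussed above

def entity_status(confidence: float) -> str:
    """Classify an entity-confidence score against the citation threshold."""
    if confidence >= CONFIDENCE_FLOOR:
        return "likely to be cited"
    if confidence >= BORDERLINE_FLOOR:
        return "borderline: reinforce entity signals"
    return "unlikely to be cited"

print(entity_status(0.72))  # likely to be cited
print(entity_status(0.52))  # borderline: reinforce entity signals
print(entity_status(0.41))  # unlikely to be cited
```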

Here are a few things you can do to improve authority at the entity level:

Remove ambiguity:

These are pattern systems, not reasoning engines. They are essentially spicy autocomplete, so do not leave important signals open to interpretation.

Shaun Anderson’s work analyzing the data warehouse leak and image analysis demonstrates how many of these signals connect directly. Entity signals, structured references, and relationships all feed the same ecosystem.

Be explicit:

Use first-party sources to provide references. Supply the data yourself rather than relying on the model to infer it. Make sure foundational details are correct and consistent, including logos, brand information, and entity attributes.

Include structured data:

Structured data plays a role here, but it should be treated as part of a broader knowledge graph strategy. Clearly define relationships and entities so machines can interpret them without guessing.
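To make that concrete, a minimal schema.org Organization block looks something like the following. The brand name, URLs, and entity IDs are placeholders; the properties (`name`, `sameAs`, `knowsAbout`) are real schema.org vocabulary for tying an entity to its canonical name, profiles, and topics.

```python
import json

# Minimal schema.org Organization markup; names and URLs are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme SEO",                   # one canonical name, used everywhere
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "sameAs": [                           # explicit links disambiguate the entity
        "https://www.linkedin.com/company/example",
        "https://www.wikidata.org/wiki/Q0",  # placeholder knowledge-graph ID
    ],
    "knowsAbout": ["technical SEO", "structured data"],
}

print(json.dumps(org, indent=2))
```

Embedding this as JSON-LD gives machines the entity's attributes and relationships directly, so nothing is left to inference.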

What’s your biggest fear around using agentic AI for SEO?

I have two concerns, which I’ve outlined below:

Agentic misalignment:

The team at Anthropic, for all their faults, is also one of the more transparent groups publishing research about these systems. 

In a simulated environment, Claude Opus 4 attempted to blackmail a supervisor to prevent being shut down, and the team released the full details of that experiment.

By uttu
