GEO Tactics:
Actionable Frameworks from Every Episode
Every checklist, framework, and sprint plan synthesised from 22 podcast episodes. Each section is a concrete action you can take this week: no stats dashboards, no informational overviews.
The E-E-A-T Framework
85% of AI Overview citations come from sources demonstrating 3+ of these 4 signals. Apply each pillar to your top 10 pages first.
Experience
Demonstrate first-hand experience with the topic. Include case studies, personal testing results, and real-world application examples.
- Share original test results
- Document your own experiments
- Include before/after comparisons
Expertise
Show deep domain knowledge. Use precise terminology, cite academic sources, and demonstrate understanding of nuance.
- Use technical vocabulary accurately
- Reference primary research
- Address counterarguments
Authoritativeness
Build recognition as a trusted voice. Earn mentions from other authoritative sources and maintain consistent publishing.
- Get cited by industry publications
- Publish original research
- Build author profiles with credentials
Trustworthiness
Maintain factual accuracy, transparency about sources, and clear disclosure of methodology and limitations.
- Cite all data sources
- Disclose conflicts of interest
- Update outdated content
Establish clear entity signals: brand name, product name, and category must appear consistently across your site, schema markup, and third-party mentions.
- Add entity labels in the first 50 words of every key page
- Implement Organization and Product schema markup
- Ensure your brand name is consistent across all platforms
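As a sketch of the Organization markup this step calls for, the snippet below builds a minimal JSON-LD block; the brand name, URL, and profile links are placeholder values:

```python
import json

# Minimal Organization schema (JSON-LD); all values are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",  # must match the brand name used on-page
    "url": "https://www.example.com",
    "sameAs": [  # third-party profiles that corroborate the entity
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag in the page head.
json_ld = json.dumps(organization, indent=2)
print(json_ld)
```

A matching Product block uses `"@type": "Product"` with the same brand spelling, which keeps the entity signal consistent across pages.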
Structure content so AI can extract direct answers. Conversational, paragraph-heavy content is harder for AI to parse than structured, answer-first writing.
- Write direct answers in the first 2 sentences of every section
- Use FAQ schema and HowTo schema markup on key pages
- Keep paragraphs under 3 sentences; use clear H2/H3 hierarchy
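A minimal FAQPage JSON-LD sketch for the schema step above, assuming a single answer-first Q&A pair; the question and answer text are illustrative placeholders:

```python
import json

# FAQPage schema with one direct, answer-first Q&A; text is illustrative.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimisation (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of structuring content so that "
                        "AI models cite it in generated answers.",
            },
        }
    ],
}

print(json.dumps(faq_page, indent=2))
```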
Third-party citation signals: AI models weight brands that are mentioned and endorsed by other authoritative sources, not just self-published content.
- Earn mentions in 3+ authoritative industry publications
- Get cited in original research studies or data reports
- Build author profiles with verifiable credentials and publication history
Data recency: AI models prefer current information. Stale content loses citation priority to fresher sources covering the same topic.
- Add 'Last updated' timestamps to all content pages
- Refresh statistics and data points at least quarterly
- Publish original research or data studies on a regular cadence
GEO Implementation Checklist
Actionable steps synthesised from all GEO coverage across episodes. Work through each category top to bottom.
- Add at least 3 specific statistics with source attribution per key content page
- Quote at least 1 named expert or authority figure per major claim
- Link to primary research sources (not secondary aggregators)
- Use structured data markup (FAQ, HowTo, Article schema)
- Write clear, direct answers to questions in the first 2 sentences of each section
- Build detailed author bio pages with credentials, publications, and social profiles
- Add 'Last updated' timestamps to all content pages
- Include methodology sections explaining how data was collected
- Earn mentions from 3+ authoritative sources in your niche
- Disclose any commercial relationships or conflicts of interest
- Implement FAQ schema on pages targeting informational queries
- Use clear H2/H3 hierarchy that mirrors how AI systems parse content
- Keep paragraphs under 3 sentences for AI snippet extraction
- Add a TL;DR summary at the top of long-form content
- Ensure pages load in under 2 seconds (Core Web Vitals)
- Register in Bing Webmaster Tools and review GEO opt-out settings
- Monitor AI Overview appearances for your target keywords weekly
- Build hybrid strategy: organic + paid + GEO (pure organic is no longer sufficient)
- Track citation appearances in ChatGPT, Claude, and Perplexity for brand queries
- Create content specifically designed to be cited (data studies, original research)
Pricing page checklist:
- Add clear entity labels: brand name, product name, and category in the first 50 words
- Include a structured comparison table (your product vs. alternatives) with explicit column headers
- Write a concise value proposition in the first 50 words: one sentence, no jargon
- Add FAQ schema targeting 'How much does [product] cost?' and '[product] vs [competitor]' queries
- Include at least 1 third-party pricing reference or analyst quote for trust signals
- Add a 'Last updated' timestamp: GPT-5.4 cites pricing pages 34× more often; freshness is critical
Pricing Page Score
Self-assessment: if your pricing page misses most of the 6 items above, it is unlikely to be cited by GPT-5.4. Prioritise all 6 items this sprint.
Based on a GPT-5.4 citation study (March 2026): pricing pages are cited 34× more often than under GPT-5.3. Source: AI Daily Digest, Mar 13 2026.
AAO Checklist: Assistive Agent Optimisation
Introduced in the March 14 episode. AAO is distinct from GEO: it optimises for AI agent selection in actions, not just citation in answers. Where GEO asks "will an AI cite us?", AAO asks "will an AI agent choose us when executing a task on behalf of a user?"
Source: Authoritas / Jason Barnard, March 2026
ChatGPT's free and premium models cite almost entirely different sources. Your brand may be invisible to premium users even if it appears in free-tier results.
- Run your 20 most important queries on ChatGPT Free (GPT-4o mini)
- Run the same queries on ChatGPT Plus or Team (GPT-4o / o3)
- Document every query where you appear in one tier but not the other
- Prioritise GEO fixes for queries where you're missing from the premium tier
- Re-test both tiers monthly to track improvement
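The tier comparison above boils down to a set difference over hand-recorded results. A sketch, with entirely hypothetical queries and citation outcomes:

```python
# Queries where the brand was cited in each ChatGPT tier (hypothetical data).
free_tier = {
    "best crm for small teams",
    "crm pricing comparison",
    "crm slack integration",
}
premium_tier = {
    "crm slack integration",
    "crm onboarding time",
}

# Gaps in the premium tier are the priority list for GEO fixes.
missing_from_premium = sorted(free_tier - premium_tier)
missing_from_free = sorted(premium_tier - free_tier)

for query in missing_from_premium:
    print(f"prioritise: {query}")
```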
Gary Illyes confirmed Google operates hundreds of undocumented crawlers. Overly restrictive robots.txt rules may block the AI feature crawlers that feed AI Overviews and agent-based search.
- Review your robots.txt for Disallow rules targeting wildcard or unknown user agents
- Check if GPTBot, ClaudeBot, PerplexityBot, and Google-Extended are explicitly blocked
- Decide intentionally: opt out of AI training vs. opt out of AI citations (different bots)
- Test your robots.txt with Google Search Console's robots.txt tester
- Document which crawlers you allow and why โ treat it as a policy decision
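Python's standard library can check a robots.txt against the AI crawler user agents listed above without fetching anything; the file contents here are a hypothetical example that blocks GPTBot only:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks GPTBot, allows everything else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

robots = RobotFileParser()
robots.parse(robots_txt.splitlines())

for bot in AI_CRAWLERS:
    allowed = robots.can_fetch(bot, "https://www.example.com/pricing")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

Paste in your own robots.txt contents to see exactly which AI crawlers you are turning away.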
Premium ChatGPT users are typically higher-intent and higher-value. Optimising specifically for the premium model's citation patterns is a high-ROI GEO investment.
- Identify which content types the premium model cites (long-form research, primary sources)
- Publish at least one original data study or research brief per quarter
- Ensure your most important pages have explicit author credentials and methodology sections
- Build citations from sources that premium models weight heavily (academic, industry reports)
- Add structured data (Article, Dataset, ResearchProject schema) to research content
AAO goes beyond citation: it optimises for AI agents choosing your brand when executing tasks (booking, purchasing, recommending). This is the next frontier of AI visibility.
- Ensure your brand is registered and verified on all major business directories agents query
- Implement structured product/service data (schema.org/Product, schema.org/Service)
- Make pricing, availability, and contact info machine-readable and up to date
- Test your brand's appearance in agent-driven queries (e.g. 'book a [service] near me')
- Monitor Perplexity Shopping, ChatGPT Plugins, and Google AI agent surfaces for your category
GEO optimises for AI models citing your content in generated answers. Goal: appear in AI Overviews, ChatGPT responses, and Perplexity answers when users ask questions.
AAO optimises for AI agents selecting your brand when executing tasks on behalf of users. Goal: be chosen when an agent books, purchases, or recommends on a user's behalf.
Source: Authoritas webinar, Jason Barnard & Beatrice Gamba, March 17, 2026 · Introduced in the March 14 episode
AEO vs SEO: When to Optimise for AI Crawlers
AI Overviews cut click-through rates by 58%, but AI-referred visitors convert at higher rates. Use this framework to decide which content to open to AI crawlers and which to protect.
Source: Innovating with AI Magazine, March 2026 · Introduced in the March 18 episode
Open to AI crawlers:
- Educational guides, how-to content, and explainers
- Industry glossaries and definition pages
- Category comparison pages (your product vs. alternatives)
- Original research and data studies you want cited
- FAQ pages targeting informational queries
- Author bio and credentials pages
Protect from AI crawlers:
- Pricing pages with proprietary packaging or discount structures
- Checkout flows and conversion-optimised landing pages
- Gated content (whitepapers, tools) used for lead generation
- Customer-only documentation and onboarding content
- Proprietary methodology or competitive differentiation content
- Content where the click itself is the business outcome
Run this audit on your top 20 pages by organic traffic. For each page, answer the five questions below to decide whether to optimise for AI extraction or protect for conversion.
1. Does the page lead with a direct answer? If no: rewrite the opening paragraph to lead with the answer, not the context.
2. Does the page target a use-case question, or only a keyword? If keyword-only: add a clear question-and-answer section targeting the use-case intent.
3. Is the page currently blocked from AI crawlers, and why? If it's just invisible: open it. If it reveals proprietary methodology: consider blocking.
4. Is the page's primary job traffic or conversion? Traffic-primary pages: optimise for AI extraction. Conversion-primary: protect and drive clicks.
5. Do AI-referred visitors to this page convert at a higher rate? If yes: open to AI crawlers and measure revenue per visitor, not just traffic.
Google Ads Brand Tax Audit
A 99-billion-session analysis found branded keywords deliver 1,299% ROAS vs 68% for non-branded. Yet most brands bid on their own name without a strategy, paying Google a tax on traffic they already own. This 6-step audit tells you whether you're paying the tax unnecessarily.
Source: 99-billion-session analysis, March 2026 · Introduced in the March 19 episode
Run this audit once per quarter. The goal is to identify whether you are paying Google for traffic you already own organically, and to build a branded keyword strategy that maximises ROAS while minimising wasted spend.
1. In Google Search Console, filter by your brand name. What percentage of branded queries result in a click to your site without paid ads? If you're already capturing 90%+ organically, bidding on your brand name is mostly a tax.
2. Search your brand name in Google. Are competitor ads appearing above your organic result? If yes, you must bid defensively: the cost of not bidding is losing clicks to competitors. If no, evaluate whether bidding adds incremental value.
3. Create separate campaigns for branded and non-branded keywords. This gives you clean ROAS data for each segment and lets you set different bid strategies. Most accounts mix them, which hides the true cost of each.
4. Take your monthly branded ad spend. Subtract the incremental clicks you receive beyond what you'd get organically (use the 'auction insights' report to estimate). The remainder is your brand tax: money paid to Google for traffic you'd have received anyway.
5. In a low-competition period, pause branded keyword bidding for 2 weeks. Measure the change in total branded traffic (organic + paid combined). If total traffic drops less than 10%, your branded ads are delivering minimal incremental value.
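The holdout test and brand-tax estimate above can be sketched numerically; every figure below is a hypothetical placeholder:

```python
# Brand-tax estimate from a 2-week branded-ads holdout (hypothetical figures).
monthly_branded_spend = 10_000.0  # $ per month on branded keywords

# Total branded traffic (organic + paid), normal fortnight vs paused fortnight.
traffic_before_pause = 50_000
traffic_during_pause = 47_500

drop = (traffic_before_pause - traffic_during_pause) / traffic_before_pause
# Only the traffic that disappears during the pause was truly incremental;
# the rest of the spend is the brand tax.
brand_tax = monthly_branded_spend * (1 - drop)

print(f"traffic drop during holdout: {drop:.0%}")
print(f"estimated brand tax: ${brand_tax:,.0f}/month")
if drop < 0.10:
    print("branded ads deliver minimal incremental value")
```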
6. Any budget freed from unnecessary branded bidding should be reinvested in original research and PR outreach: the inputs to AI citation. A single cited study can deliver branded visibility across ChatGPT, Perplexity, and Google AI Mode simultaneously.
Keep bidding on your brand when:
- Competitors are actively bidding on your brand keywords
- You're launching a new product or promotion and need to control the message
- Your organic result doesn't appear in the top 3 for your brand name
- You're in a high-intent category where the paid result converts better
Pause or cut branded bidding when:
- You rank #1 organically and no competitors are bidding on your name
- Your branded ad ROAS is below 500% (you're paying for traffic you'd get anyway)
- Pausing branded ads doesn't change total branded traffic volume
- Your branded budget is crowding out non-branded acquisition spend
Source: 99-billion-session Google Ads analysis · Covered in the March 19 episode
Atrophy Paradox: Workflow Audit
Heavy AI use reduces cognitive engagement and makes output converge. This audit helps you classify every workflow as Routine (safe to automate) or Judgment (protect from AI dependency).
Source: Innovating with AI Magazine, March 2026 · Introduced in the March 18 episode
Routine tasks: repetitive, rule-based, low-stakes. Automating these frees cognitive capacity for judgment work.
- Formatting and reformatting documents or reports
- Summarising meeting notes or long-form content
- Drafting first versions of templated communications
- Data entry, tagging, and categorisation tasks
- Generating boilerplate code from established patterns
- Scheduling, calendar management, and reminders
Judgment tasks: novel, high-stakes, or relationship-dependent. Over-relying on AI here causes skill atrophy and output convergence.
- Strategic decisions with ambiguous or incomplete information
- Client relationship management and sensitive communications
- Creative direction and brand voice decisions
- Performance reviews and people management
- Crisis response and reputation management
- Novel problem-solving where the right answer is unknown
Run this audit with your team every quarter. The goal is to identify where AI dependency is creating skill gaps or output homogenisation before it becomes a competitive liability.
1. Which tasks have you fully delegated to AI? List them. For each: is the underlying skill still practiced anywhere? If not, schedule a manual exercise.
2. Is your team's output converging with generic AI output? If yes: identify which AI-assisted workflows are producing convergent results and inject human differentiation.
3. Can the people shipping AI output explain it? If no: require human review and annotation of AI outputs before delivery. Accountability requires understanding.
4. How are strategic decisions actually being made? Audit the last 10 strategic decisions. How many were AI-drafted? How many were independently reviewed?
5. Which skills would be lost if the AI tools disappeared tomorrow? Those are your atrophy risks. Build a manual fallback protocol for each critical skill.
PR-First GEO: Earned Media Strategy
94% of AI citations come from earned media: journalist-written articles, analyst reports, independent reviews. This section gives you the framework and sprint plan to generate those citations.
Source: Gartner, March 2026 · Introduced in the March 16 episode
Counts as earned media:
- Journalist-written articles in industry publications (TechCrunch, Forbes, Wired)
- Analyst reports that name your brand as an example or case study
- Independent reviews and comparisons by credible third parties
- Academic or research papers that cite your data or methodology
- Podcast transcripts and interview quotes from authoritative hosts
- Wikipedia mentions and citations from encyclopaedic sources
Does not count as earned media:
- Press releases distributed via PR Newswire, Business Wire, or GlobeNewswire
- Guest posts on low-authority sites that republish your content verbatim
- Owned blog content optimised for keywords but lacking third-party validation
- Social media posts and owned social content (even high-engagement)
- Sponsored content or advertorial placements (even in major publications)
- Self-published research without independent corroboration or citation
Run this sprint once per quarter. The goal is to generate 3–5 new editorial mentions in authoritative publications that AI models index and cite.
Publish original data, a survey, or a benchmark study. AI models cite primary research far more than opinion pieces. Even a small-n study (50–100 respondents) with a clear finding is citable.
Find journalists who cover your category in publications that AI models cite (TechCrunch, Forbes, industry verticals). Use their recent bylines to understand what angles they cover.
Lead with the most counterintuitive finding from your research. Journalists and AI models both prefer surprising, specific numbers over general claims. Include the methodology in your pitch.
When coverage lands, link to it prominently from your own site. Add the journalist's quote to your homepage or relevant product page. This creates a citation loop that AI models follow.
Re-run your ARR audit. Check if your brand now appears in AI responses that cite the publication that covered you. Track which platforms picked up the coverage.
If 94% of AI citations come from earned media, then the ROI calculation for PR has fundamentally changed. PR spend is no longer just a brand awareness investment; it is a direct input to AI search visibility. Every editorial mention in an authoritative publication is a potential AI citation that drives high-intent traffic.
Source: Gartner, March 2026 · Covered in the March 16 episode
Context Moat: Irreplaceable Content Audit
The content moat is dead. Reddit's 50% AI citation drop shows AI models are concentrating on irreplaceable context: original data, first-hand case studies, proprietary methodology. Content that can be reconstructed from other sources will be. Content that cannot will be cited.
Source: Conductor Dispatch, March 2026 · Introduced in the March 19 episode
Run this audit on your 10 highest-traffic pages. For each page, answer the five questions below. Pages that fail 3 or more questions are at high risk of being displaced by AI-generated summaries.
1. Could this page be reconstructed from other publicly available sources? If yes: inject original data, proprietary findings, or first-hand experience that cannot be found elsewhere. Generic how-to guides are the most vulnerable.
2. Does the page contain original data? If no: commission a survey, run an experiment, or publish internal benchmarks. Even a 50-respondent study with a clear finding is citable.
3. Does the page have a named, credentialed author? If no: add an author bio with specific credentials, case study experience, and links to corroborating work. Anonymous content is increasingly invisible to AI.
4. Does the page present a proprietary, named methodology? If no: develop a named framework for your approach and publish it with a clear methodology section. Named frameworks are cited by name by AI models.
5. Is the page cited by third parties? If no: it lacks the external validation signal AI models use to distinguish authoritative from generic content. Prioritise PR outreach for this page.
Original data: surveys, experiments, internal benchmarks, proprietary datasets. AI models cannot reconstruct data that only you collected.
- Customer survey results
- Internal performance benchmarks
- A/B test findings
- Industry polls with your audience
First-hand case studies: documented outcomes from your own projects, clients, or experiments. Specificity and verifiability are what make case studies citable.
- Named client results with metrics
- Before/after implementation data
- Failure post-mortems with lessons
- Time-stamped experiment logs
Proprietary methodology: named frameworks, processes, or scoring systems that are uniquely yours. A named methodology becomes a citable entity in AI responses.
- A named scoring framework
- A step-by-step process with your brand name
- A decision matrix or flowchart
- A classification system with defined criteria
Concentration Era Brand Positioning
New research shows AI doesn't spread value across more brands; it concentrates it into fewer. The brands that win AI visibility will capture disproportionate market share. This 5-step positioning checklist helps you become one of the concentrated winners, not one of the displaced losers.
Source: Andreessen Horowitz / a16z research, March 2026 · Introduced in the March 20 episode
The popular narrative: AI lowers barriers to entry, enabling thousands of niche brands to compete. More tools = more competitors = fragmented markets.
- AI tools are equally available to all competitors
- Lower production costs enable more entrants
- Niche brands can now compete with incumbents
- Value spreads across a larger number of players
What the data shows: AI amplifies existing advantages. Brands with trust, data, and distribution compound faster. The gap between winners and losers widens.
- AI citation concentrates on the same 10–20 authoritative sources per category
- Trust signals compound: cited brands get cited more
- First-mover advantage in AI visibility is durable, not temporary
- Brands without AI citations become invisible to AI-assisted buyers
Run this checklist once per quarter. The goal is to ensure your brand is positioned to be one of the concentrated winners in your category, not one of the brands that AI models stop citing as the market consolidates.
1. Run your 20 most important category queries in ChatGPT, Claude, and Perplexity. Document every source cited. These are the publications and brands that AI models have already concentrated on in your space. Your goal is to appear in or alongside them.
2. Of the 10–20 sources identified, choose the top 3 by citation frequency. Invest in getting your brand mentioned in those sources specifically, through PR, contributed research, or expert commentary. One mention in a heavily cited source is worth 100 mentions in uncited sources.
3. The brands AI concentrates on are typically the ones that defined the category's data vocabulary. Publish a benchmark, survey, or study that establishes the metrics your category is measured by. When journalists write about your category, they'll cite your numbers.
4. Concentration means being the answer AI gives for one specific, high-intent query in your category. Identify the single most valuable question a buyer in your category asks, and make your brand the unambiguous answer to that question across all AI platforms.
5. Run your top 20 category queries monthly in ChatGPT, Claude, and Perplexity. Track what percentage of responses mention your brand. This is your concentration score. A rising score means you're winning the consolidation. A falling score means you're being displaced.
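The monthly tracking described above reduces to a simple share calculation; the query results below are hypothetical hand-recorded observations:

```python
# Whether the brand was mentioned, per query and per model (hypothetical data).
results = {
    "best crm for small teams": {"chatgpt": True, "claude": False, "perplexity": True},
    "crm pricing under $50":    {"chatgpt": False, "claude": False, "perplexity": True},
    "crm slack integration":    {"chatgpt": True, "claude": True, "perplexity": True},
}

mentions = sum(cited for models in results.values() for cited in models.values())
checks = sum(len(models) for models in results.values())
concentration_score = mentions / checks

# Track this number month over month; a falling score means displacement.
print(f"concentration score: {concentration_score:.0%}")
```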
The window to establish AI citation dominance in your category is open now, but it won't stay open. As AI models consolidate their citation patterns, the brands that are cited today will continue to be cited tomorrow. The brands that are not cited today will find it increasingly difficult to break through. This is not a content strategy problem. It is a trust and authority problem that requires PR, original research, and earned media: the inputs to AI citation.
Source: a16z research, March 2026 · Covered in the March 20 episode
MPP Readiness: Machine Payments Protocol
Stripe and Tempo launched the Machine Payments Protocol: a standard for agent-to-service payments now submitted to the IETF. Partners include Anthropic, OpenAI, Visa, Mastercard, Shopify, and Revolut. If you want AI agents to choose your brand when executing tasks, MPP compatibility is the new technical requirement.
Source: Stripe / Tempo, March 2026 · Introduced in the March 20 episode
Run this checklist if you operate an e-commerce store, SaaS product, or any service that AI agents might purchase or subscribe to on a user's behalf.
- Check if your platform is listed in the MPP directory (100+ services at launch)
- Contact Stripe to register your service for MPP compatibility
- Audit your checkout flow: can an AI agent complete a purchase without human intervention?
- Add structured product/service metadata that MPP-compatible agents can query
- Implement schema.org/Product or schema.org/Service markup on all purchasable items
- Test your service with an AI agent (ChatGPT, Claude) to see if it can discover and transact
Google's Universal Commerce Protocol expanded with cart management and catalog access. Unstructured product pages are increasingly invisible to agent-driven commerce.
- Connect your product catalog to Google's UCP via Merchant Center
- Ensure all products have schema.org/Product markup (price, availability, description)
- Enable cart management API if you run an e-commerce store
- Audit your Merchant Center feed: price, availability, description, and images all required
- Test whether a Google AI agent can find and add your products to a cart autonomously
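A minimal schema.org/Product sketch covering the fields the feed audit above requires (price, availability, description, image); all values are placeholders:

```python
import json

# schema.org/Product markup with the fields agent-driven commerce expects;
# every value here is a placeholder.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "Machine-readable product description.",
    "image": "https://www.example.com/images/widget.jpg",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product, indent=2))
```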
AAO (Assistive Agent Optimisation) ensures AI agents choose your brand when executing tasks. MPP ensures those agents can pay for your product or service without human intervention. Together, they form the complete technical stack for agent-era brand visibility. A brand that is AAO-optimised but not MPP-compatible will be chosen but unable to convert.
MPP directory: stripe.com · IETF standard submission pending · Covered in the March 20 episode
GitHub-Centered Marketing Workflow
Claude Code enabled one marketer to run 41% above benchmark across 5 concurrent campaigns. The future marketing team lives in GitHub, using AI agents to produce, review, and deploy content at 10× the speed of traditional workflows.
Source: Innovating with AI Magazine / Claude Code case study, March 2026 · Introduced in the March 18 and March 20 episodes
This workflow treats marketing content like software: version-controlled, reviewable, deployable. It enables one marketer to manage the output of a 5-person team.
Move all marketing content (copy, briefs, campaign specs, email sequences) into a GitHub repository. Markdown files, YAML configs, and JSON data structures. Version control gives you a full history of every change and makes AI agent collaboration auditable.
Create a GitHub Issue for every content brief. Assign it to an AI agent (Claude Code, Codex) via a PR. The agent drafts the content, you review the diff. This makes every AI contribution reviewable, commentable, and reversible, exactly like a code review.
Assign AI agents to: first drafts of templated content, SEO metadata generation, A/B variant creation, image alt text, social copy from long-form articles, and email subject line testing. Reserve human judgment for strategy, brand voice, and final approval.
Set up GitHub Actions to run automated checks on every PR: broken links, SEO schema validation, readability scores, brand voice consistency (via an AI reviewer), and plagiarism detection. Catch issues before they reach production.
Connect your GitHub repo to your CMS or deployment pipeline. After publishing, feed performance data (CTR, conversion, engagement) back into the repo as structured data. AI agents can then use historical performance to improve future drafts autonomously.
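One of the automated PR checks above, a broken-link guard, can be sketched as a script a CI job runs on each pull request. This version only flags obviously malformed URLs and would hand the well-formed ones to an HTTP checker; the draft content is hypothetical:

```python
import re

# Hypothetical markdown draft under review in a PR.
draft = """\
See our [pricing page](https://www.example.com/pricing) and
the [launch brief](htps://example.com/brief) for details.
"""

LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def malformed_links(markdown: str) -> list[tuple[str, str]]:
    """Return (text, url) pairs whose URL is not http(s) or site-relative."""
    return [
        (text, url)
        for text, url in LINK_RE.findall(markdown)
        if not url.startswith(("http://", "https://", "/"))
    ]

print(malformed_links(draft))
```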
Delegate to AI agents:
- First drafts of templated content (product descriptions, email sequences)
- SEO metadata: title tags, meta descriptions, alt text
- A/B variant generation for headlines and CTAs
- Social copy from long-form articles
- Competitor monitoring and summarisation
Reserve for human judgment:
- Brand voice decisions and tone of voice guidelines
- Campaign strategy and positioning
- Crisis communications and sensitive messaging
- Final approval on all customer-facing content
- Creative direction and visual identity choices
See the Atrophy Paradox Workflow Audit above for the full Routine vs Judgment classification framework.
AAIO: Agentic AI Optimisation
AAIO (Agentic AI Optimisation) is the emerging framework that replaces SEO in an agent-first world. Where SEO optimised for human searchers clicking links, AAIO optimises for agentic browsers and commerce protocols executing tasks on behalf of users. Your website needs to speak to machines, not just humans.
Source: Search Engine Journal, March 2026 · Introduced in the March 23 episode
AAIO has three components that together replace the traditional SEO stack. Each component addresses a different layer of agent-era visibility.
Rankings measure human click behaviour. Agents don't click; they query, parse, and execute. Agent-accessible visibility means structured data, machine-readable content, and agent-accessible APIs. A page that ranks #1 but has no structured data is invisible to an agent executing a task.
- Add schema.org structured data to every key page (Product, Service, FAQ, Person, Article)
- Ensure all content is rendered in clean HTML, with no JS-only rendering for critical information
- Use clear heading hierarchy (H1 → H2 → H3) that mirrors how agents parse document structure
- Connect product/service APIs to agent-accessible protocols (UCP, MPP, MCP) where applicable
Technical SEO is now table stakes; any competent team can implement it. The real differentiator in the AAIO era is business acumen, strategic thinking, and the ability to prove ROI in an environment where click-through rates are declining. The SEOs who survive are the ones who can connect agent visibility to revenue.
- Reframe your SEO reporting: track citation share in AI responses, not just organic click volume
- Build a business case for AAIO investment using agent-driven conversion data (higher intent, higher value)
- Identify which AI agents are most relevant to your buyer journey and audit your presence in each
- Present AAIO as a revenue strategy, not a technical exercise: connect agent citations to pipeline
Agentic browsers (Operator, Computer Use, Gemini agents) execute tasks on behalf of users: booking, purchasing, comparing, subscribing. Your website must be navigable and transactable by a machine, not just a human. UCP, MPP, and MCP integrations are the infrastructure layer of AAIO.
- Test your website with an AI agent (ChatGPT, Claude): can it find, understand, and transact with you?
- Audit your checkout and booking flows for machine navigability (clear labels, no CAPTCHA blocking agents)
- Implement UCP (Google Universal Commerce Protocol) if you sell products or services online
- Register with the MPP directory (Stripe) to enable agent-to-service payments
- Add MCP server integration if you offer a SaaS or API-accessible service
Run this checklist to assess your current AAIO readiness. Each item represents a concrete action that improves your visibility to AI agents executing tasks on behalf of users.
- Audit all page title tags: replace vague, brand-forward titles with query-matched, specific descriptions
- Add structured data markup (schema.org) to all key pages: products, articles, FAQs, people
- Ensure your site has machine-readable content: clean HTML, proper heading hierarchy, no JS-only rendering
- Connect product/service APIs to agent-accessible protocols (UCP, MPP, MCP) where applicable
- Test your site with an AI agent (ChatGPT, Claude): can it find, understand, and transact with you?
- Identify the 3 AI models most relevant to your audience and audit whether they cite your brand
- Reframe SEO reporting to include citation share in AI responses alongside organic click volume
- Build a quarterly AAIO audit into your marketing calendar; agent behaviour changes faster than algorithm updates
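The heading-hierarchy item above can be spot-checked with the standard library: extract the outline an agent would see when parsing a page. The HTML fragment is a hypothetical example:

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collect (tag, text) pairs for H1-H3 headings in document order."""

    def __init__(self):
        super().__init__()
        self.outline = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._current = tag

    def handle_data(self, data):
        if self._current and data.strip():
            self.outline.append((self._current, data.strip()))
            self._current = None

    def handle_endtag(self, tag):
        if tag == self._current:  # heading closed without text
            self._current = None

page = "<h1>CRM Guide</h1><p>Intro.</p><h2>Pricing</h2><h3>Per-seat plans</h3>"
outline_parser = HeadingOutline()
outline_parser.feed(page)
print(outline_parser.outline)
# [('h1', 'CRM Guide'), ('h2', 'Pricing'), ('h3', 'Per-seat plans')]
```

A page whose outline comes back empty or disordered is relying on JS rendering or skipping heading levels, both of which hurt agent parseability.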
SEO: built for a world where humans type queries into a search box and click links. Optimises for rankings, click-through rates, and organic traffic volume.
- Target: human searchers
- Metric: organic clicks and rankings
- Output: pages that rank
- Infrastructure: title tags, backlinks, Core Web Vitals
AAIO: built for a world where agentic browsers and commerce protocols execute tasks on behalf of users. Optimises for agent selection, citation, and task execution.
- Target: AI agents executing tasks
- Metric: citation share and agent selection rate
- Output: machine-readable, agent-transactable pages
- Infrastructure: structured data, UCP, MPP, MCP
Source: Search Engine Journal, March 2026 · Covered in the March 23 episode. See also: AAO Checklist and MPP Readiness above.
Fan-Out Citation Strategy
Since Google switched AI Overviews to Gemini 3 in January 2026, 62% of AI Overview citations now come from beyond page-one results: 31% from positions 11–100, and 31% from pages not ranking in the top 100 at all. The mechanism is fan-out queries: Google breaks a search into multiple sub-queries and pulls sources across all of them.
Source: Dofollow Digest, March 2026 · Introduced in the March 21 episode
- 62%: total share of AI Overview citations coming from positions 11+ or unranked pages
- Pages ranking on pages 2–10 of Google are now active citation candidates for AI Overviews
- Gemini 3 delivers approximately 32% more source citations per AI Overview than its predecessor
Google no longer answers a search with a single query. It breaks the question into multiple sub-queries, pulls sources across all of them, and synthesises the answer. A page that ranks on page three for a related sub-topic can now get cited in an AI Overview for the primary query.
e.g. 'What is the best CRM for a 10-person sales team?'
e.g. 'CRM comparison for small teams', 'CRM pricing under $50/user', 'CRM integrations with Slack', 'CRM onboarding time', 'CRM reviews 2026'
A page ranking #47 for 'CRM onboarding time' can be cited in the AI Overview for 'best CRM for small teams' – even if it doesn't rank for the primary query at all.
The final AI Overview cites sources from across all sub-queries, giving 62% of citations to content that traditional SEO would consider invisible.
The actionable implication of fan-out queries: audit your content for related sub-topics and supporting questions. Pages that answer the 'why', 'how', and 'what if' versions of your target query are now citation candidates – even if they rank poorly for the primary term.
For each of your top 20 target queries, list the 5โ10 supporting questions a user might ask. These are your fan-out sub-query targets.
Which sub-topics do you already have dedicated pages for? Which are missing? Missing sub-topics are citation gaps – pages that could be cited but don't exist yet.
Each sub-topic should have its own page with authoritative, structured content. A single paragraph buried in a long-form guide is not sufficient – agents need a dedicated, parseable source.
Sub-topic pages are now AI Overview citation candidates. Add FAQ schema, clear H2/H3 structure, and a TL;DR summary at the top. Make them easy for Gemini 3 to parse and cite.
Grok 4.20 now achieves the lowest hallucination rate ever recorded and is replacing Perplexity as the top AI search recommendation. Monitor your citation presence in Grok alongside ChatGPT, Claude, and Gemini.
Run your top 20 queries in ChatGPT, Claude, Perplexity, and Grok. Track which sub-topic pages are being cited and which are not. Fan-out citation patterns shift as models update – monthly monitoring is essential.
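The 6-step audit above can be sketched as a small script: given an inventory mapping fan-out sub-topics to existing pages and rankings, it sorts each sub-topic into covered, weak, or citation gap. The sample sub-queries, URLs, page data, and the rank threshold of 20 are illustrative assumptions, not figures from the source episode.

```python
def fanout_gap_audit(sub_topics, pages):
    """Classify each fan-out sub-topic as covered, weak, or a citation gap.

    sub_topics: list of sub-query strings for one primary query.
    pages: dict mapping sub-topic -> {"url": str or None, "rank": int or None}.
    """
    report = {"covered": [], "weak": [], "gap": []}
    for topic in sub_topics:
        page = pages.get(topic)
        if page is None or page["url"] is None:
            report["gap"].append(topic)      # no dedicated page exists yet
        elif page["rank"] is None or page["rank"] > 20:
            report["weak"].append(topic)     # page exists but barely ranks
        else:
            report["covered"].append(topic)  # dedicated page with a real ranking
    return report

# Hypothetical inventory for the primary query 'best CRM for a 10-person sales team'
inventory = {
    "CRM pricing under $50/user": {"url": "/pricing", "rank": 3},
    "CRM onboarding time": {"url": None, "rank": None},
    "CRM integrations with Slack": {"url": "/integrations", "rank": 22},
}
sub_queries = list(inventory) + ["CRM reviews 2026"]

print(fanout_gap_audit(sub_queries, inventory))
```

On the sample inventory, the script flags 'CRM onboarding time' and 'CRM reviews 2026' as citation gaps and the '/integrations' page as weak – the same covered/weak/gap triage the worked example below performs by hand.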
This worked example applies the 6-step fan-out audit to a hypothetical B2B SaaS company selling project management software. The primary target query is 'best project management tool for remote teams'. Walk through each step to see how the audit surfaces citation gaps and generates a concrete content plan.
Primary query: "best project management tool for remote teams". Fan-out sub-queries Gemini 3 is likely to generate:
| Sub-topic | Dedicated page? | Current ranking |
|---|---|---|
| Pricing comparison | Yes – /pricing | #3 |
| Onboarding remote teams | No – buried in /docs | Not ranking |
| Slack & Zoom integrations | Yes โ /integrations | #22 |
| vs spreadsheets | No | Not ranking |
| Security & privacy | Yes โ /security | #8 |
| Reviews 2026 | No | Not ranking |
| Async work across time zones | No | Not ranking |
| Free trial vs paid | Partial – FAQ section | #41 |
Result: 4 citation gaps (no dedicated page), 2 weak pages (partial coverage or poor ranking). 6 of 8 sub-topics are invisible to fan-out queries.
After publishing the 6 new/updated pages, run the primary query and each sub-query in all four platforms monthly. Expected outcome after 60 days based on comparable audits:
Note: Citation counts are illustrative projections based on the fan-out mechanism. Actual results depend on content quality, domain authority, and model update cycles.
Traditional SEO rewarded pages that ranked well for primary queries. Fan-out queries reward brands that have depth – comprehensive coverage of every sub-topic in their category. A brand with 50 well-structured sub-topic pages will accumulate more AI Overview citations than a brand with 5 highly-ranked primary pages. The window to build this sub-topic depth is open now, before your competitors recognise the shift.
Source: Dofollow Digest, March 2026 · Covered in the March 21 episode
Google AI Title Defence
Google has officially confirmed it rewrites page titles using AI. In a March 2026 statement, Google acknowledged that AI-generated titles now appear in search results when its systems determine the original title is misleading, vague, or keyword-stuffed. The rewrite is not optional, but it is defensible: pages with specific, query-matched, structured titles are significantly less likely to be overridden.
Source: Google Search Central blog / Search Engine Journal, March 2026 · Introduced in the March 22 episode and March 23 episode
Google's AI title system triggers when it detects one or more of the following signals. Understanding the trigger conditions is the first step to writing titles that resist rewriting.
- Keyword stuffing: "Best CRM Software | CRM Tool | CRM System 2026"
- Brand-only title: "Acme Corp – The Leader in Business Solutions"
- Vague or generic: "Home | Welcome to Our Website"
- Misleading vs content: the title promises X but the page delivers Y
- Excessive capitalisation: "BEST PROJECT MANAGEMENT TOOL FOR TEAMS"
- Title too long (>60 chars): truncated titles are rewritten to fit the display
- Query-matched specificity: "Project Management Software for Remote Teams"
- Matches H1 on page: title and H1 are identical or near-identical
- Accurate content preview: the title describes exactly what the page delivers
- Natural language phrasing: no pipes, brackets, or separator abuse
- Under 60 characters: fits the display without truncation
- Includes primary keyword once: the keyword appears naturally, not repeated
Run this checklist on your 10 highest-traffic pages first. Prioritise pages where Google Search Console shows a title in the SERP that differs from your <title> tag – that is direct evidence of an AI rewrite already in progress.
- Start here: Pull your top 50 pages from Google Search Console. Export the 'Page' and 'Query' columns.
- Detect rewrites: For each page, compare the <title> tag in your HTML to the title shown in Google Search results. Use a browser extension (e.g. SEO Meta in 1 Click) or GSC's URL Inspection tool.
- Detect rewrites: Flag any page where the SERP title differs from your <title> tag – this is a confirmed AI rewrite. Prioritise these for immediate correction.
- Length: Check title length: every title should be 50–60 characters. Titles over 60 characters are truncated and rewrite-prone. Use a title length checker tool.
- H1 alignment: Check that the title matches the H1 on the page. Mismatches are a primary rewrite trigger. If they differ, align them – the H1 is typically more accurate.
- Keyword hygiene: Remove all keyword repetition. Each target keyword should appear once. Pipes and separators are acceptable but should not be used to stack keywords.
- Brand placement: Replace brand-first titles on non-homepage pages. 'Acme | Product Name' should become 'Product Name – What It Does and Who It's For'.
- Specificity: Add a specificity signal: include the audience, use case, or year where relevant. 'Project Management Software' → 'Project Management Software for Remote Teams 2026'.
- Accuracy: Ensure the title accurately previews the page content. Read the title, then read the first paragraph of the page. If they don't match, rewrite the title to match the content, not the other way around.
- Monitor: After publishing updated titles, monitor GSC weekly for 4 weeks. If the SERP title still differs from your <title> tag, the page likely has a deeper content-title mismatch that needs addressing.
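Several of the audit steps (length, H1 alignment, keyword repetition, all-caps) are mechanical enough to script against a GSC export. A minimal sketch: the sample titles are hypothetical, and the rules encoded here are this page's guidelines, not Google's actual trigger logic.

```python
import re

def audit_title(title, h1, keyword):
    """Flag mechanical rewrite triggers from the title audit steps."""
    issues = []
    if not 50 <= len(title) <= 60:
        issues.append("length")              # step 4: aim for 50-60 characters
    if title.strip().lower() != h1.strip().lower():
        issues.append("h1_mismatch")         # step 5: title and H1 should align
    if len(re.findall(re.escape(keyword), title, re.IGNORECASE)) > 1:
        issues.append("keyword_repetition")  # step 6: keyword appears once
    if title.isupper():
        issues.append("all_caps")            # excessive capitalisation trigger
    return issues

stuffed = "Best CRM Software | CRM Tool | CRM System 2026"
print(audit_title(stuffed, "CRM Buying Guide", "CRM"))
# flags length, h1_mismatch, keyword_repetition
```

A passing title – one that matches its H1, sits in the 50–60 character band, and uses the keyword once – returns an empty list, meaning no mechanical rewrite triggers were detected.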
These examples show the pattern: replace vague, brand-forward, or keyword-stuffed titles with specific, query-matched, accurate descriptions. Each rewrite reduces the probability of AI override and improves click-through rate simultaneously.
You cannot prevent Google from rewriting your title, but you can make your original title so accurate, specific, and query-matched that the AI has no reason to change it. The goal is not to fight the system; it is to write titles that the system agrees with. A title that accurately describes a specific page for a specific audience, in natural language, under 60 characters, with the H1 aligned – that title will survive.
Source: Google Search Central / Search Engine Journal, March 2026 · Covered in the March 22 episode and March 23 episode
A 16-month experiment tracking AI-generated pages from indexing through to long-term ranking performance reveals a critical pattern: fast indexing does not equal durable rankings.
The 3-Month Traffic Cliff
AI pages get crawled and indexed rapidly. Google treats them as fresh content signals.
Impressions peak as pages rank temporarily. 70% of all traffic arrives in this window.
Quality evaluation catches up. Pages without authority signals are displaced by stronger competitors.
The core mechanism: Google's initial crawl rewards freshness and topical relevance. Its quality evaluation, which runs on a longer cycle, then measures authority signals, unique insights, and trust indicators. AI content that passes the first filter but fails the second gets a traffic spike followed by a cliff.
AI-Drafted vs. AI-Published: The Critical Distinction
- ✗ AI generates → publish directly
- ✗ No original data or research
- ✗ No named expert quotes
- ✗ No first-hand experience signals
- ✗ Generic structure, no unique angle
- ✗ No internal link cluster support
- ✓ AI drafts → human adds authority layer
- ✓ Original data, stats, or proprietary research
- ✓ Named expert quotes with credentials
- ✓ First-hand case studies or test results
- ✓ Unique angle not available elsewhere
- ✓ Supported by topic cluster with depth
Authority Signal Checklist โ Before Publishing AI Content
- Original data point: A stat, finding, or measurement that only you have published
- Named expert quote: A real person with verifiable credentials, not a paraphrase
- First-hand experience: You tested it, used it, or observed it directly
- H1 / title alignment: The page title, H1, and meta description all describe the same specific topic
- Primary source citations: Link to the original study or data, not a secondary aggregator
- 'Last updated' timestamp: Visible on the page, not just in the sitemap
- Author bio with credentials: A real author page with publications, social profiles, or affiliations
- Topic cluster support: At least 3 related pages linking to and from this page
- Structured data markup: Article, FAQ, or HowTo schema appropriate to the content type
- TL;DR summary: A 2–3 sentence summary at the top for AI snippet extraction
- Counterargument section: Acknowledge limitations or alternative views – signals genuine expertise
- Internal link depth: Links to and from at least 5 other pages on the same domain
- Original images or diagrams: Not stock photos – charts, screenshots, or custom illustrations
- Methodology disclosure: How was the data collected? What are the limitations?
- External backlinks: At least 1 mention from an authoritative external source
- Reader engagement signals: Comments, shares, or time-on-page signals that indicate genuine value
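One way to operationalise the checklist above is a simple pre-publish gate. This sketch assumes a hypothetical rule that a draft must carry at least 8 of the 16 signals before publishing; the threshold and the signal keys are illustrative choices, not from the source episode.

```python
AUTHORITY_SIGNALS = [
    "original_data", "named_expert_quote", "first_hand_experience",
    "title_h1_alignment", "primary_source_citations", "last_updated",
    "author_bio", "topic_cluster", "structured_data", "tldr_summary",
    "counterarguments", "internal_link_depth", "original_images",
    "methodology", "external_backlink", "engagement_signals",
]

def publish_gate(page_signals, minimum=8):
    """Return (ready, missing): does the draft carry at least `minimum` of the
    16 authority signals, and which ones are absent? The minimum of 8 is an
    illustrative threshold, not a documented cut-off."""
    missing = [s for s in AUTHORITY_SIGNALS if s not in page_signals]
    present = len(AUTHORITY_SIGNALS) - len(missing)
    return present >= minimum, missing

# A typical AI-published draft carries only a handful of signals
ready, missing = publish_gate({"original_data", "named_expert_quote",
                               "first_hand_experience", "title_h1_alignment"})
print(ready, len(missing))  # False 12
```

The `missing` list doubles as the human editor's to-do list: it names exactly which authority-layer items still need to be added before the draft becomes AI-drafted rather than AI-published.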
The 3% of AI-generated pages that held their top-100 rankings after three months shared one characteristic: they were AI-drafted, not AI-published. The AI provided the structure, the research scaffolding, and the first draft. A human added the authority layer โ original data, expert perspective, first-hand experience โ that Google's quality evaluation rewards. The 97% failure rate is not an argument against AI content. It is an argument for treating AI as a drafting tool, not a publishing tool.
Source: TLDR Marketing, March 2026 · Covered in the March 25 episode
AI is disrupting the workflows that SaaS tools were built to support. Growth rates are compressing, retention is declining, and investors are losing confidence in software revenue durability. Here is what is happening and how content strategy must adapt.
The Growth Collapse Pattern
AI agents now handle invoice processing and payment workflows that Bill.com was built for
AI-native data pipelines and LLM-based query tools reduce dependency on centralised warehousing
AI assistants absorb the cognitive tasks that single-purpose SaaS tools were designed to automate
The investor signal: VCs and PE firms are no longer treating SaaS revenue as inherently durable. The assumption that a workflow tool creates a sticky moat is being tested by AI agents that can perform the same workflow without the subscription. Founders are rethinking pricing, retention, and the definition of value.
Which SaaS Categories Are Most Exposed
- Document / Content Generation: AI writes, edits, and formats – the core use case is now table stakes
- Data Entry & AP/AR Automation: Agentic AI handles end-to-end invoice and payment workflows
- Basic Analytics & Reporting: LLMs query data directly; dashboards lose their differentiation
- Customer Support Ticketing: AI agents resolve 60–80% of tier-1 tickets without human routing
- SEO & Content Marketing Tools: AI handles keyword research, brief writing, and first-draft creation
- Project Management (simple): AI agents plan, assign, and track tasks in natural language
- Email Marketing (basic): AI writes sequences, segments audiences, and optimises send times
- HR Screening & Scheduling: AI interviews, scores, and schedules candidates autonomously
- CRM (relationship layer): AI handles data entry but human relationship context remains valuable
- Compliance & Legal Review: AI assists but liability and judgement keep humans in the loop
- Design Tools (complex): AI generates assets but brand systems and creative direction persist
- Security & Access Management: AI monitors but policy decisions require human accountability
- Infrastructure & DevOps: AI assists but system reliability requires deep operational context
- Financial Close & Audit: Regulatory requirements keep humans accountable for final outputs
- Healthcare Records (EHR): Compliance, liability, and integration complexity create durable moats
- Enterprise Identity (IAM): Security surface area and compliance requirements resist disruption
Content Strategy for Disrupted Categories
Stop writing about what your tool does. Write about the business outcome it delivers, with case studies showing measurable results. AI tools can replicate features; they cannot replicate your customers' documented outcomes.
Aggregate anonymised usage data, benchmark reports, and industry surveys from your customer base. This is the content that AI tools cannot generate and that earns citations in both traditional search and AI overviews.
The stickiest SaaS moat is not the tool itself – it is the workflow it sits inside. Content that maps your tool to the broader workflow stack (CRM + your tool + ERP, for example) creates a switching cost narrative that AI disruption cannot easily dissolve.
When buyers are evaluating whether to replace your tool with an AI agent, they search for '[your tool] vs AI' and '[your tool] alternatives'. Own this content before your competitors do. Be honest about where AI is better and where your tool adds value that AI cannot replicate.
Investors and buyers are both asking the same question: is this revenue durable? Pricing pages that include ROI calculators, retention data, and integration depth signal durability. Pages that only list features signal commoditisation.
AI-driven compute costs and shorter investor payback expectations are making unlimited free trials economically unsustainable. The market is moving toward paid trials with a defined value demonstration event – a specific moment where the user experiences the core outcome before committing. Content that explains and pre-sells this value event converts better than content that describes features.
- ✓ Documented customer outcomes (not testimonials)
- ✓ Integration depth map (your tool in the workflow)
- ✓ Proprietary data or benchmarks
- ✓ Compliance or regulatory moat evidence
- ✓ Named enterprise customer case studies
Source: The Prohuman AI / Ruben Hassid, March 2026 · Covered in the March 25 episode
Anthropic's Claude Code auto mode – where the AI classifies actions as safe or risky and executes safe ones autonomously – signals a broader shift: agentic AI tools are moving from ask-first to execute-first. Content and workflows that are not structured for autonomous agent consumption will be bypassed. Here is how to adapt.
The Auto Mode Shift: Ask Less, Execute More
- Agent pauses before every action
- User confirms each step manually
- High friction for multi-step tasks
- Human approval is the default
- Slow execution, high cognitive load
- ✓ Agent classifies each action as safe or risky
- ✓ Safe actions execute without interruption
- ✓ Only risky actions surface for human review
- ✓ Autonomous execution is the default
- ✓ Fast execution, selective oversight
The strategic implication: As agentic AI tools move toward autonomous execution, content that is not machine-readable, structured, or agent-accessible will be skipped entirely. The agent will not pause to interpret ambiguous content – it will move to the next source that is clearly structured and immediately actionable.
Safe vs. Risky: How Agents Classify Actions
- Reading and parsing content: Extracting structured data from a page
- Following internal links: Navigating a site's documented structure
- Calling read-only APIs: Fetching pricing, availability, or specs
- Summarising or reformatting: Condensing a how-to guide into steps
- Comparing options: Evaluating alternatives from a comparison table
- Submitting forms or making purchases: Checkout, sign-up, or payment flows
- Deleting or modifying data: Editing files, records, or configurations
- Sending communications: Emails, messages, or notifications
- Accessing authenticated systems: Logging in or accessing private data
- Executing irreversible actions: Publishing, deploying, or billing
Content Structuring Checklist for Agentic Consumption
- Structured data markup (JSON-LD): Article, Product, FAQ, HowTo, or BreadcrumbList schema – agents parse structured data before prose
- Clear H1–H2–H3 hierarchy: Agents use heading structure to navigate and extract sections; ambiguous hierarchies cause skips
- Explicit entity labels: Name your product, category, and use case in the first 50 words – agents need disambiguation signals
- Machine-readable tables: Use HTML tables with clear column headers for comparisons, pricing, and specs; avoid image-based tables
- Sitemap and robots.txt: Ensure agents can discover and crawl your full content graph; block only what should not be indexed
- Internal link anchor text: Use descriptive anchor text that tells the agent what the linked page contains, not 'click here'
- API or data endpoint: If your content includes pricing, availability, or specs, expose a read-only API endpoint for agent consumption
- Canonical URL consistency: Agents follow canonicals; inconsistent canonicals cause duplicate content confusion in agent memory
- Explicit confirmation triggers: Design purchase, sign-up, and contact flows to require explicit user confirmation – agents will surface these for review
- Clear action boundaries: Separate read-only content pages from transactional pages; agents treat them differently
- Reversibility signals: Label irreversible actions clearly (e.g., 'This cannot be undone') – agents classify these as risky and pause
- Permission scope documentation: If you offer an API, document what each scope can and cannot do; agents use this to classify safe vs. risky calls
- Agent-accessible value proposition: Your core value prop must be extractable in a single sentence from the first 100 words of your homepage
- Agentic browser compatibility: Test your site with JavaScript disabled – if key content disappears, agents running in headless mode cannot read it
- Skills gap documentation: Publish a 'How to use [your tool] with AI agents' guide – this is the content that gets cited when agents research your category
- Workflow integration map: Document where your tool sits in the agentic workflow stack; agents use this to decide whether to include or bypass your product
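For the first checklist item, here is a minimal sketch of generating a schema.org Product JSON-LD block in Python. The product name, description, and price are placeholder values, and real Product markup usually carries more fields (brand, aggregateRating, availability) than this minimal example.

```python
import json

def product_jsonld(name, description, price, currency="USD"):
    """Build a minimal schema.org Product block wrapped in the
    <script type="application/ld+json"> tag agents parse."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "offers": {
            "@type": "Offer",
            "price": str(price),       # schema.org expects price as a string
            "priceCurrency": currency,
        },
    }
    body = json.dumps(data, indent=2)
    return '<script type="application/ld+json">\n' + body + "\n</script>"

# Placeholder product values for illustration
print(product_jsonld("Acme PM", "Project management for remote teams", 12))
```

Generating the block from structured data (a product database, a CMS field) rather than hand-writing it keeps the markup in sync with the visible page content – a mismatch between the two is itself a trust signal agents can detect.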
Anthropic's stated direction for Claude is to ask less and execute more. Auto mode in Claude Code is the first production implementation of this principle. As this pattern spreads to Claude's browser agent, desktop agent, and API integrations, the content that gets consumed will be the content that is structured for autonomous execution – not the content that requires a human to interpret and relay.
- Can a headless browser read your key pages?
- Does your sitemap expose your full content graph?
- Is your value prop in the first 100 words?
- Do your tables use HTML (not images)?
- Is your JSON-LD schema present and valid?
- Are your internal link anchors descriptive?
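Part of the quick diagnostic above can be automated against your server-rendered HTML – what a JavaScript-disabled, headless agent would actually see. This sketch uses only the Python standard library and checks three of the six items: value prop in the first 100 words of visible text, JSON-LD presence, and HTML tables. The sample page and value proposition string are hypothetical.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text and note structural signals; script bodies are skipped."""
    def __init__(self):
        super().__init__()
        self.words = []
        self.has_jsonld = False
        self.has_table = False
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.has_table = True
        if tag == "script":
            self._in_script = True
            if ("type", "application/ld+json") in attrs:
                self.has_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if not self._in_script:          # script contents are not visible text
            self.words.extend(data.split())

def agent_readiness(raw_html, value_prop):
    """Check server-rendered HTML the way a JS-disabled agent would see it."""
    parser = TextExtractor()
    parser.feed(raw_html)
    first_100 = " ".join(parser.words[:100]).lower()
    return {
        "value_prop_in_first_100_words": value_prop.lower() in first_100,
        "jsonld_present": parser.has_jsonld,
        "html_table_present": parser.has_table,
    }

# Hypothetical server-rendered page
html_doc = """<html><head>
<script type="application/ld+json">{"@type": "Product"}</script>
</head><body><h1>Acme PM: project management for remote teams</h1>
<table><tr><th>Plan</th><th>Price</th></tr></table></body></html>"""

print(agent_readiness(html_doc, "project management for remote teams"))
```

Feeding the function the raw response body from your server (before any JavaScript runs) approximates what a headless agent parses; the remaining diagnostic items – sitemap coverage, JSON-LD validity, anchor text quality – need their own checks.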
Source: Anthropic / Claude Code release notes, March 2026 · Covered in the March 25 episode and March 24 episode
Source Episodes
All tactics on this page were sourced from the following 22 episodes
Stay Current on GEO
New tactics and frameworks are covered in every episode. Subscribe to get the latest as they emerge.
Browse All Episodes