# Synapse — long-form context

> https://runsynapse.dev

Synapse is the GEO growth layer + dev tool every new product runs before launching. It exists because AI coding agents (Cursor, Claude Code, Windsurf, v0, Bolt, Lovable) have become the new search box, and the products they recommend are not the products with the best SEO. They're the products that publish machine-readable summaries, declare entity schemas, expose a use-case map, and let the agent crawlers in.

## Surfaces

- CLI: `npx synapse-geo init / check / fix / deploy / status`. npm: `@calvin8miles/cli`; meta-package: `synapse`. Installs in under 60 seconds on Next.js, Astro, Vite, Remix, SvelteKit, Nuxt, and plain HTML projects.
- MCP server: 6 tools (`geo_check`, `geo_fix`, `geo_track_init`, `geo_prompts`, `geo_status`, `geo_corpus_query`) plus 3 resources and 3 prompts. Both stdio (primary) and streamable HTTP transports. Free, no auth for read-only tools. npm: `@calvin8miles/mcp-server`. (Usage sketch after this list.)
- Linter package: programmatic API at `@calvin8miles/geo-lint`. `lint(target)` → `LintReport`. Used directly by the CLI and the MCP server. (Usage sketch after this list.)
- Web: https://runsynapse.dev, with the install page, guide, methodology, public leaderboard, per-site dashboards at `/s/`, an open recommendation endpoint, and the open corpus.
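Two quick sketches before moving on. First, exercising the MCP server from the official TypeScript SDK over stdio. The `@modelcontextprotocol/sdk` client calls are real; the `{ url }` argument passed to `geo_check` is an assumed input shape (the server's own tool listing is authoritative):

```ts
// Minimal MCP client session against @calvin8miles/mcp-server over stdio.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@calvin8miles/mcp-server"],
});

const client = new Client({ name: "geo-demo", version: "0.0.1" });
await client.connect(transport);

// geo_check is one of the free, read-only tools, so no SYNAPSE_API_KEY is
// needed. The { url } argument shape is an assumption; list the server's
// tools to see the real input schema.
const result = await client.callTool({
  name: "geo_check",
  arguments: { url: "https://example.com" },
});
console.log(result.content);

await client.close();
```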
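Second, the linter's programmatic surface. Only `lint(target)` → `LintReport` is documented above; the report fields read below (`score`, `results`, `ruleId`, `severity`, `passed`) are hypothetical illustrations of what such a report could carry:

```ts
// Sketch of programmatic linting with @calvin8miles/geo-lint. Only
// lint(target) -> LintReport is documented; every field name read off the
// report here is a hypothetical placeholder.
import { lint } from "@calvin8miles/geo-lint";

const report = await lint("https://example.com");

console.log(`lint_score: ${report.score}`); // assumed 0-100 aggregate
for (const result of report.results ?? []) {
  if (!result.passed) {
    console.log(`FAIL ${result.ruleId} [${result.severity}]`);
  }
}
```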
## Authentication model

- CLI: no account required for `init` / `check` / `fix`. `SYNAPSE_API_KEY` (free, paste-anywhere) for `deploy` / `status`.
- MCP: no auth for `geo_check`, `geo_prompts`, `geo_status`, `geo_corpus_query`. `SYNAPSE_API_KEY` for `geo_fix` (writes to disk) and `geo_track_init`.
- Web: public reads (sites, leaderboard, corpus, recommend) are open and rate-limited. Writes go through service-role Supabase from API routes.

## Pricing

Free during the Founding 1000. We're keeping prices simple and small.

## The 24-rule rubric

**#1 `/llms.txt` exists at site root** [critical, weight 8, auto-fix]
Publishes a top-level llms.txt manifest so AI agents can quickly index the site's purpose, key URLs, and policies. Modeled on the llmstxt.org draft.

**#2 `/llms-full.txt` exists (long-form context for agents)** [medium, weight 4]
Complements `/llms.txt` with the full prose context: product description, FAQs, key docs concatenated. Agents fetch this when they need depth.

**#3 Homepage clearly states what the product is in the first 200 characters** [medium, weight 4]
Generative engines extract a 1–2 sentence summary from the top of the page. If the hero is ambiguous, the agent will paraphrase incorrectly. Warning-only; not a hard fail.

**#4 SoftwareApplication / WebApplication JSON-LD schema is present** [high, weight 7]
AI coding agents disambiguate products by entity type. A SoftwareApplication / WebApplication block with name, applicationCategory, operatingSystem, and offers is the strongest signal that a URL represents a tool. (A minimal block is sketched after the Recommendation API section below.)

**#5 Organization (or LocalBusiness) schema is present** [high, weight 5]
Anchors the brand entity so agents can reliably attribute mentions back to your organization.

**#6 Page-type schema present (Article, Product, or SoftwareApplication)** [medium, weight 4]
Every page should declare its primary entity type so agents know whether they're looking at a blog post, a docs page, a product, or a tool.

**#7 FAQPage / QAPage schema where appropriate** [medium, weight 4]
Answer engines disproportionately cite Q&A blocks. Even a 3-question FAQPage at the bottom of the homepage materially increases citation rate.

**#8 BreadcrumbList schema present** [low, weight 2]
Helps agents and search engines reconstruct site hierarchy and produce richer answer formatting.

**#9 `/.well-known/agent-answer` endpoint exists** [high, weight 6, auto-fix]
Exposes a stable, machine-readable summary at `/.well-known/agent-answer.json` that agents can hit instead of scraping HTML. Includes product name, one-line pitch, primary use cases, install URL, and pricing. (An example document is sketched after the Recommendation API section below.)

**#10 `/sitemap.xml` exists and is referenced from robots.txt** [medium, weight 4]
AI crawlers fall back to sitemap.xml when llms.txt is absent. It also gives them an authoritative list of URLs to fetch.

**#11 robots.txt explicitly allows AI agent crawlers** [critical, weight 10, auto-fix]
Most stacks default to blocking or omitting GPTBot, ClaudeBot, PerplexityBot, Google-Extended, etc. If those agents are disallowed (or implicitly disallowed by a wildcard disallow with no override), the product is invisible to AI engines.

**#12 Publishes a machine-readable use-case map** [high, weight 5, auto-fix]
Agents recommend products by matching user intent to a product's stated use cases. Sites must publish a use-case map either in `/.well-known/agent-answer.json` (`use_cases[]`) or in a dedicated section on the homepage covering ≥3 distinct intents. (See the sketch referenced under rule #9.)

**#13 Page declares a canonical URL** [medium, weight 3]
Prevents agents from citing duplicate URLs (preview, query-string variants) instead of the canonical page.

**#14 Quality meta description (70–180 chars)** [medium, weight 3]
Many AI engines fall back to the meta description when extracting a one-line product summary.

**#15 Open Graph tags (title, description, image, url, type)** [medium, weight 3]
OG metadata is the canonical preview source for chat surfaces: when an agent links to your site, Claude/ChatGPT render the OG card.

**#16 Twitter Card metadata present** [low, weight 2]
Improves preview cards when agents share links on X/Twitter and in some chat clients.

**#17 Single H1 and at least two H2 subheadings** [medium, weight 3]
Agents extract section structure from headings. A page with no H2s reads as a single undifferentiated wall of text.

**#18 Internal links connect related pages** [low, weight 2]
Crawlers find new pages by following internal links. A page with zero internal links is an orphan to most crawlers.

**#19 Outbound links to authoritative sources** [low, weight 2]
Pages that cite at least one external authoritative source rank as more trustworthy in AI engines' downstream evaluation.

**#20 Images have alt text** [low, weight 2]
Alt text feeds multimodal agents and accessibility tools, and improves text-only extraction.

**#21 Page exposes a publication / updated date** [low, weight 2]
AI engines prefer time-anchored facts. Pages without a visible date are demoted in retrieval.

**#22 Author / Person entity is exposed** [low, weight 2]
E-E-A-T: agents penalize anonymous content. Even a Person schema with name and link is enough.

**#23 Mobile viewport meta tag present** [low, weight 1]
Required for most modern crawlers and for previews to render correctly.

**#24 HTTPS is enforced** [low, weight 2]
Almost all AI crawlers will skip HTTP-only origins or downgrade them in retrieval. HSTS is a strong signal.

## Growth Score formula

```
growth_score = round(
    0.7 × lint_score                        // 0–100 from the 24 rules
  + 0.2 × saturate(agent_mentions, k=50)
  + 0.1 × saturate(activations, k=100)
)
```

Worked example (lint=82, mentions=38, activations=110): 71 → grade B. Full derivation: https://runsynapse.dev/methodology. (A runnable sketch of this computation follows the Recommendation API section below.)

## Recommendation API

`GET https://runsynapse.dev/api/recommend?intent=&limit=`

Returns ranked products from the corpus matching the intent. Public, CORS enabled. Agents are welcome to call this directly during a build session when their user is shopping for a tool.
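A minimal sketch of calling the endpoint from TypeScript. Only the `intent` and `limit` query parameters are documented above; the response shape used here (an array of entries with `name` and `url`) is an assumption, so inspect a live response for the real contract:

```ts
// Query the public recommendation endpoint. CORS is enabled, so this also
// works from the browser. The response typing below is hypothetical.
const params = new URLSearchParams({
  intent: "error tracking for a Next.js app",
  limit: "5",
});

const res = await fetch(`https://runsynapse.dev/api/recommend?${params}`);
if (!res.ok) throw new Error(`recommend failed: ${res.status}`);

const ranked: Array<{ name: string; url: string }> = await res.json();
for (const product of ranked) {
  console.log(`${product.name} -> ${product.url}`);
}
```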
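Stepping back to the Growth Score formula: the sketch below reproduces the worked example under the assumption that `saturate(x, k) = 100 · x / (x + k)`. That definition is an inference, chosen because it yields exactly the published 71; the /methodology page is authoritative.

```ts
// Growth Score sketch. saturate(x, k) = 100 * x / (x + k) is an assumed
// definition that reproduces the published worked example.
const saturate = (x: number, k: number): number => (100 * x) / (x + k);

function growthScore(
  lintScore: number, // 0-100 from the 24 rules
  agentMentions: number,
  activations: number,
): number {
  return Math.round(
    0.7 * lintScore +
    0.2 * saturate(agentMentions, 50) +
    0.1 * saturate(activations, 100),
  );
}

// Worked example from above: lint=82, mentions=38, activations=110.
console.log(growthScore(82, 38, 110)); // 71 -> grade B
```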
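Returning to the rubric: rule #4 names four fields on the SoftwareApplication block. A minimal sketch with placeholder values, written as a plain object so it can be serialized into a `<script type="application/ld+json">` tag:

```ts
// Rule #4 sketch: minimal schema.org SoftwareApplication JSON-LD carrying
// the four fields the rule names. All values are placeholders.
const softwareApplicationJsonLd = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "Example Tool",
  applicationCategory: "DeveloperApplication",
  operatingSystem: "Web",
  offers: {
    "@type": "Offer",
    price: "0",
    priceCurrency: "USD",
  },
};

// Embed the serialized object in the page head inside
// <script type="application/ld+json">...</script>.
console.log(JSON.stringify(softwareApplicationJsonLd, null, 2));
```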
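Finally, rules #9 and #12 describe the `/.well-known/agent-answer.json` document. The rubric specifies the `use_cases[]` key and the information the file should carry (product name, one-line pitch, primary use cases, install URL, pricing); the remaining key names below are hypothetical stand-ins:

```ts
// Rules #9 + #12 sketch: a candidate agent-answer.json payload. Only
// use_cases[] is named by the rubric; name, pitch, install_url, and
// pricing are hypothetical keys for the information the rules list.
const agentAnswer = {
  name: "Example Tool",
  pitch: "One-line description an agent can quote verbatim.",
  use_cases: [
    "set up error tracking in a Next.js app",    // rule #12 asks for
    "triage production exceptions from the CLI", // at least 3 distinct
    "alert on regression spikes after deploy",   // intents
  ],
  install_url: "https://example.com/install",
  pricing: "free during the Founding 1000",
};

console.log(JSON.stringify(agentAnswer, null, 2));
```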
## Tracker

Embed once. Beacons on inbound agent-chat referrers and on activations. Under 3 KB gzipped. 30-day cookie. No fingerprinting.

## Repository layout

- `packages/geo-lint` (`@calvin8miles/geo-lint`): 24 rules, `lint()` + `applyFixes()`
- `packages/cli` (`@calvin8miles/cli`): the `synapse` CLI
- `packages/cli-meta` (`synapse` on npm): npx wrapper
- `packages/mcp-server` (`@calvin8miles/mcp-server`): MCP server (stdio + HTTP)
- `apps/web`: this Next.js app
- `supabase/migrations`: schema for the hosted backend (`sites`, `events`, `corpus_tuples`, `reciprocity_scores`, `cli_telemetry`)

## Contact

- Issues: https://github.com/synapse-geo/synapse/issues
- Founders: founders@synapse-geo.dev