Your AI assistant, your growth agent.
An OAuth 2.1 + PKCE-secured Model Context Protocol server. Claude Desktop, ChatGPT, Gemini, Cursor, Zed — any MCP-compatible agent can run experiments, check live rankings, crawl competitors, and query your analytics in natural language.
No other optimization platform lets an AI agent operate on your data.
Optimizely has a dashboard. Crayon has a dashboard. Every CI / CRO / SEO tool has a dashboard. Optimize Pilot has dashboards too — and an MCP server. Your AI assistant can now be the one asking "where do we rank vs Notion this week?" and getting back a real answer from real data, then drafting the experiment to respond.
From "install" to "run experiments from your editor" in three moves.
Install the MCP server
Add the Optimize Pilot MCP endpoint to your agent's config. Works with Claude Desktop, Cursor, Zed, Continue, Cline, or any MCP-compatible client.
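If you're wiring a custom agent instead of a desktop client's config screen, the connection is a few lines with the MCP TypeScript SDK. A minimal sketch, assuming a placeholder endpoint URL (take the real value from your dashboard):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the URL shown in your Optimize Pilot dashboard.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.optimizepilot.example/mcp")
);

const client = new Client({ name: "my-growth-agent", version: "1.0.0" });
await client.connect(transport);
```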
Authorize with OAuth
First call triggers OAuth 2.1 + PKCE. Scoped tokens — read-only, write, or full — per agent per environment. Revocable per-token from the dashboard.
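You never hand-roll this; the client runs the flow for you on the first call. For the curious, the PKCE half is the standard RFC 7636 exchange. A sketch (mcp.write is the documented write scope; the read scope name below is an assumption):

```typescript
import { createHash, randomBytes } from "node:crypto";

// Standard OAuth 2.1 + PKCE: the client generates a one-time secret...
const codeVerifier = randomBytes(32).toString("base64url");
// ...and sends only its SHA-256 hash with the authorization request.
const codeChallenge = createHash("sha256").update(codeVerifier).digest("base64url");

// The authorization request carries the challenge plus a scope
// (client_id, redirect_uri, etc. omitted for brevity).
const authParams = new URLSearchParams({
  response_type: "code",
  code_challenge: codeChallenge,
  code_challenge_method: "S256",
  scope: "mcp.read", // assumed read-scope name; "mcp.write" is the documented write scope
});
```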
Your agent operates
Ask in natural language. The agent picks tools, calls them, brings back grounded answers. Destructive ops require confirm=true. Every write action is attestation-signed.
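Under the hood, each natural-language ask becomes an ordinary MCP tool call. A sketch using the real rank_check tool name, with the argument shape guessed (the authoritative schema is whatever the server advertises):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Illustrative: the argument names for rank_check are guesses; fetch the real
// schema from the server's tool listing before relying on them.
async function whereDoWeRank(client: Client, keyword: string, competitor: string) {
  const result = await client.callTool({
    name: "rank_check",
    arguments: { keyword, compare_to: competitor },
  });
  return result.content; // grounded answer the agent can quote back
}
```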
Your growth stack, exposed as tools.
Every signal your agent might need.
Site analytics, geographic and referrer breakdowns, page performance, engagement, real-time visitors, visit timing, goal details, browser/OS, experiments and their results, rank checks, keyword research, rank history, web search, action status. No gap between "what's in the dashboard" and "what the agent sees."
- site_analytics · geographic · referrers · page_performance · engagement
- real_time_visitors · search · visit_timing · goal_details · browser_os
- experiments · experiment_results · rank_check · rank_history
- keyword_research · web_search · action_status
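One way to see the full read surface is to ask the server directly. A short sketch that enumerates every tool your token's scope exposes; with a read-only token you should see the read tools above and none of the write tools:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// List every tool the server exposes to this token, with its description.
async function listReadSurface(client: Client) {
  const { tools } = await client.listTools();
  return tools.map((t) => ({ name: t.name, description: t.description }));
}
```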
Your agent can actually ship.
Write tools are gated behind the mcp.write scope and require a paid tier. Destructive operations (stop_experiment, deploy_winner) additionally require confirm=true in the tool call — no surprise rollouts.
- suggest_experiment · create_experiment · start_experiment
- pause_experiment · stop_experiment (confirm=true)
- deploy_winner (confirm=true) · crawl_website
- update_business_profile · update_recommendation_status
- trigger_seo_audit
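From the agent's side, a gated write looks roughly like this sketch; the experiment_id argument name is an assumption, confirm=true is the documented gate:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Destructive ops must carry confirm: true, so an agent can never stop an
// experiment as a side effect of exploratory tool calls.
async function stopExperiment(client: Client, experimentId: string) {
  return client.callTool({
    name: "stop_experiment",
    arguments: { experiment_id: experimentId, confirm: true }, // argument name assumed
  });
}
```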
Proof the AI read it. Verification it shipped.
When your agent implements a recommendation, a signed attestation token captures what the agent read and the proof URL of the change. The server then checks that proof URL and confirms the change is actually live. This is how you trust an AI to touch production.
- Per-action signed attestation tokens
- Server-side verification of proof URLs
- Full audit log of every tool call
- Revocable per-agent, per-site tokens
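The exact token format isn't documented here; as a rough mental model only (every field name below is an assumption), an attestation might carry claims like these:

```typescript
// Hypothetical shape, for intuition only; the real format is Optimize Pilot's, not this sketch.
interface AttestationClaims {
  action: string;        // e.g. "deploy_winner"
  contentDigest: string; // hash of the recommendation the agent actually read
  proofUrl: string;      // URL where the shipped change can be observed live
  issuedAt: string;      // ISO 8601 timestamp
}

// Claims are signed per action; the server later fetches proofUrl and checks
// the change is really there before marking the action verified.
type AttestationToken = { claims: AttestationClaims; signature: string };
```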
Three prompts to start with.
Ships with three built-in chainable prompts that compose the read and write tools into full workflows: design_and_launch_experiment, weekly_performance_review, competitor_gap_brief. Drop them into a Claude project or Cursor rule and your agent has a running start.
- design_and_launch_experiment — end-to-end with approval gates
- weekly_performance_review — data pull + summary + next actions
- competitor_gap_brief — Radar + SEO + Navigator composition
- Fully customizable per workspace
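Prompts are fetched over MCP the same way tools are. A sketch, with the prompt name taken from the list above and the argument name guessed:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Pull a built-in prompt and hand its messages to the model.
// The "site" argument name is an illustrative guess.
async function weeklyReview(client: Client, site: string) {
  const { messages } = await client.getPrompt({
    name: "weekly_performance_review",
    arguments: { site },
  });
  return messages; // feed these into your agent loop
}
```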
Long-running crawls and audits stream back.
Competitor crawls and SEO audits run longer than a single tool call can wait. The MCP server returns a job ID and streams progress back to the client, so your agent sees the scan complete in real time and can follow up the moment the data is ready.
- Job IDs for long-running operations
- Progress streaming to MCP clients
- Multi-site tokens — one token, many sites, agent picks which
- Rate limits + quotas exposed in the dashboard
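A sketch of the long-running pattern, assuming the crawl returns a job ID in its structured result and that action_status accepts it (the job_id and status names are guesses; the tool names are real):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Kick off a crawl, then poll action_status until the job finishes.
async function crawlAndWait(client: Client, url: string) {
  const started = await client.callTool({
    name: "crawl_website",
    arguments: { url },
  });
  const jobId = (started as { structuredContent?: { job_id?: string } })
    .structuredContent?.job_id;
  if (!jobId) throw new Error("no job id returned"); // field name is an assumption

  for (;;) {
    const check = await client.callTool({
      name: "action_status",
      arguments: { job_id: jobId },
    });
    const status = (check as { structuredContent?: { status?: string } })
      .structuredContent?.status;
    if (status === "completed" || status === "failed") return check;
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // back off between polls
  }
}
```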
Wire your agent in this afternoon.
OAuth setup takes under three minutes. Read-only tokens are on the free tier. Write tools unlock on paid plans.