Analytics · 9 min read

I Let My AI Coding Agent Read My GA4 Data Live. One Session Permanently Changed How I Build, Optimise, and Report.

A full-stack developer connects an agentic IDE to Google Analytics 4 using MCP — surfacing bot traffic, a broken sitemap, and tracking gaps in one session. Includes setup steps, gotchas, and honest verdict.

Naveen Gaur
May 10, 2026

Most developers treat analytics and development as two separate jobs. Build in the IDE. Check GA4 later. Spot a problem. Switch back to code. Fix it. Repeat.

That separation is where bad decisions live — and where revenue quietly leaks.

For solo developers and small agencies especially, the cost of that context switching adds up fast. Jumping between analytics dashboards, documentation, and your codebase creates friction that slows debugging and increases the chance of missing signals that matter.

On a recent client project, I connected my AI coding agent directly to Google Analytics 4 using MCP (Model Context Protocol). In a single working session, it surfaced suspicious traffic patterns distorting conversion data, a discoverability issue on a key conversion page, and tracking gaps I would have caught much later the standard way.

This is what changed — and how you can replicate it.

Why the Standard Analytics Workflow Costs You More Than You Think

The problem isn't that GA4 is bad. The problem is that it lives in a different tab from the place where fixes actually happen.

Your dashboard tells you what happened. It doesn't tell you why — and it doesn't sit next to the code that produced the problem. So you form hypotheses, switch contexts, make changes, wait for data to refresh, and check again. That loop works. But it's slow, fragmented, and depends entirely on you asking the right question of the right report at the right time.

Three problems consistently fall through that gap:

Traffic quality issues — aggregate numbers look healthy enough at a glance, so you don't investigate further.

Conversion blockers — the page exists, the form works, but something upstream is broken.

Tracking bugs — discovered after deployment, when fixing them costs more time than catching them early would have.

For freelancers and agencies maintaining multiple client sites, these gaps compound. Faster identification of anomalies, tighter implementation feedback loops, and quicker tracking validation reduce hours spent on repetitive investigation — creating more room for higher-value optimisation work instead of manual reporting overhead.

What if your AI agent could surface these issues while you're still working?

The Idea: Give Your Agent Near-Real-Time Access to Analytics

MCP (Model Context Protocol) is an open standard that lets AI agents connect directly to external tools and near-real-time data sources. Think of it as a structured bridge between your coding agent — whether you use Cursor, Windsurf, Google Antigravity, or another agentic IDE — and live data sources like GA4.

Connect them, and your agent stops working only from what you tell it. It starts reading active analytics signals from inside the same environment where you write code.

Instead of manually building reports in GA4 and switching back to your IDE, you can ask the agent questions directly during a working session:

  • "Which landing pages have high traffic but weak engagement?"
  • "Which pages receive page_view events but no form_start events?"
  • "Which traffic sources show unusually low engagement duration?"

The agent queries GA4 through the MCP server, processes the results, and surfaces patterns — without you leaving the development environment once.
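Under the hood, those questions become GA4 Data API report requests. Here's a minimal sketch of the kind of query the MCP server issues for the first question, using Google's official Python client (google-analytics-data). The property ID is a placeholder and authentication uses the Application Default Credentials covered in the setup below; in practice the agent constructs this for you.

from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

# Picks up Application Default Credentials automatically.
client = BetaAnalyticsDataClient()

# "Which landing pages have high traffic but weak engagement?"
request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="landingPage")],
    metrics=[Metric(name="sessions"), Metric(name="engagementRate")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
)

for row in client.run_report(request).rows:
    print(row.dimension_values[0].value, [m.value for m in row.metric_values])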

The shift is simple but significant: instead of asking "what does GA4 say?" after the build is done, you ask "what does the data say while I'm still building?"

That also changes what an ongoing maintenance or optimisation retainer can realistically include — because the analytics layer is no longer disconnected from implementation work.

What One Session Actually Found

I was auditing a client site I had full legitimate access to — a site I already knew well. Even then, the agent surfaced behavioural signals and patterns that the standard dashboard workflow had either missed or would have taken significantly longer to investigate manually.

1. Traffic patterns suggested a meaningful share of non-human sessions

The agent identified anomalies consistent with low-quality or automated traffic: sessions concentrated in locations that, for a new site with no established audience in those regions, were statistically unusual. Combined with low engagement metrics and session behaviour that didn't match normal user flows, the data pointed strongly toward a significant share of bot or crawler traffic.

This matters because inflated session counts distort your conversion analysis. A page that appears to convert poorly may actually be performing better than the numbers suggest once low-quality traffic is separated from genuine user behaviour.

The agent didn't definitively identify bot traffic — GA4 doesn't expose IP-level data — but it surfaced enough signal to make the hypothesis worth investigating further. That distinction matters: the value here is acceleration, not automation. The agent compressed hours of pattern analysis into minutes.
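If you want to reproduce that triage yourself, the underlying query is simple: break sessions down by location and compare engagement. A minimal sketch with the same Python client; the property ID is a placeholder and the specific metrics are my choice, not a fixed recipe.

from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()

# Sessions by country. Spikes from regions where a new site has no
# audience, paired with near-zero engagement, are the signal to chase.
request = RunReportRequest(
    property="properties/123456789",  # placeholder
    dimensions=[Dimension(name="country")],
    metrics=[
        Metric(name="sessions"),
        Metric(name="engagedSessions"),
        Metric(name="averageSessionDuration"),
    ],
    date_ranges=[DateRange(start_date="7daysAgo", end_date="today")],
)

for row in client.run_report(request).rows:
    print(row.dimension_values[0].value, [m.value for m in row.metric_values])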

2. A conversion page showed a discoverability gap

The agent flagged a page receiving page_view events but near-zero conversion activity. That discrepancy led to a deeper investigation of how the page was being found. Traffic sources reaching the page were thin, organic entry was almost absent, and for a new site that pattern pointed toward a discoverability problem rather than a page quality problem.

Further inspection confirmed the sitemap hadn't been updated correctly, which likely contributed to weak organic visibility. The agent surfaced the analytics signal. The diagnosis came from combining that signal with implementation context, which is exactly what this workflow enables and what a manual analytics review alone cannot do.

3. Event tracking could be validated inside the build loop

As tracking-related code was being reviewed, we could verify whether click and form_start events were firing as expected — without leaving the IDE. The feedback loop compressed from "deploy, switch tabs, wait for data, come back" to "edit, validate, continue."
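For illustration, the equivalent check outside the agent is a realtime report, which returns events from roughly the last 30 minutes. A sketch, assuming the same Python client and a placeholder property ID; the MCP server's actual tool call may differ.

from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    Dimension, Metric, RunRealtimeReportRequest,
)

client = BetaAnalyticsDataClient()

# Did my test click and form_start events register just now?
request = RunRealtimeReportRequest(
    property="properties/123456789",  # placeholder
    dimensions=[Dimension(name="eventName")],
    metrics=[Metric(name="eventCount")],
)

for row in client.run_realtime_report(request).rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)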

For client work, that matters operationally. Tracking mistakes are caught earlier, debugging becomes faster, and implementations become more reliable.

How to Set This Up

The setup pattern is the same regardless of which agentic IDE you use.

Step 1 — Get your GA4 Property ID
Admin → Property Settings → numeric Property ID. Note this is the numeric property ID, not the G-XXXXXXXXXX measurement ID. Wrong ID means wrong data.

Step 2 — Enable both APIs in Google Cloud Console

  • analyticsdata.googleapis.com — runs reports
  • analyticsadmin.googleapis.com — reads property configuration and custom dimensions

Both are required. Enabling only the Data API leaves your agent partially blind — it can run reports but can't read full property context. A partial setup that appears functional is worse than a failed one because it creates false confidence in incomplete analysis.
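If you have the gcloud CLI installed, both can be enabled in one command (the project ID is a placeholder):

gcloud services enable analyticsdata.googleapis.com analyticsadmin.googleapis.com --project=your-project-id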

Step 3 — Authenticate securely
Use Application Default Credentials (ADC):

gcloud auth application-default login

Keeps credentials out of your codebase and is the cleanest option for local development.

Security note: Create a dedicated service account with only the Analytics Data and Admin scopes required — avoid using your personal Google account credentials for client projects. Audit service account access regularly and never commit credentials to your repository.
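A minimal sketch of that with the gcloud CLI. The account name and project ID are placeholders, and you still need to grant the new account read access to the GA4 property under Admin → Property Access Management:

gcloud iam service-accounts create ga4-mcp-reader --project=your-project-id

# Your user needs the Service Account Token Creator role on the
# account for impersonation to work.
gcloud auth application-default login --impersonate-service-account=ga4-mcp-reader@your-project-id.iam.gserviceaccount.com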

Step 4 — Install and configure the MCP server
Install the GA4 MCP server and configure it in your IDE's MCP settings, pointing it at your Google Cloud Project ID and the correct property.
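The exact file name and schema depend on your IDE, but most follow the same mcpServers shape. A sketch of what the entry can look like; the server name and the GA4_PROPERTY_ID variable are illustrative, so check your server's docs for the names it actually reads (the -u flag is explained in the gotchas below):

{
  "mcpServers": {
    "analytics-mcp": {
      "command": "python",
      "args": ["-u", "-m", "analytics_mcp"],
      "env": {
        "GOOGLE_CLOUD_PROJECT": "your-project-id",
        "GA4_PROPERTY_ID": "123456789"
      }
    }
  }
}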

Step 5 — Stay actively involved
This is what most guides underplay. Your agent will make mistakes — wrong fixes, misread documentation, assumptions that don't fit your property. Treat it as a capable junior developer: fast and useful, but requiring direction. The quality of analysis you get out is directly proportional to how actively you guide it.

Three Gotchas Before You Start

Silent hangs from Python buffering. If the server connects but the agent sits thinking indefinitely with no error, add the -u flag to your Python command in mcp_config.json: python -u -m analytics_mcp (the Step 4 sketch above already includes it). Python buffers stdout by default, and MCP communicates over stdout and needs unbuffered output, so the failure mode is no error message, just an infinite wait.

Metric format mismatch. If you see Input validation error: {'name': 'conversions'} is not of type 'string' — pass metrics as plain strings ["sessions", "conversions"], not as objects. Older GA4 API documentation shows the object format; the MCP server expects strings.
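In other words, the tool arguments should be shaped like the first line below, not the second:

Accepted: {"metrics": ["sessions", "conversions"]}
Rejected: {"metrics": [{"name": "conversions"}]}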

Missing Admin API. Enable both APIs from the start. Without the Admin API, the agent can run reports but lacks property-level context — and incomplete analysis can produce misleading conclusions.

What Permanently Changed

Before this workflow, analytics and development were sequential tasks. Build. Check data. Hypothesise. Return to code. Repeat.

Now the analytics layer sits directly alongside the implementation layer. The agent can surface behavioural signals while I'm still working on the code that affects them. Decisions happen faster, with more context, grounded in active analytics signals rather than dashboard summaries reviewed hours later.

The core shift is this: AI-native development collapses the gap between implementation, analytics, and optimisation. That's not a tool feature. It's a different way of working — and one that applies equally whether you're maintaining a client WordPress site, optimising a Next.js application, or debugging a conversion funnel that isn't behaving the way the code suggests it should.

For client work, this creates a different category of service. An audit that connects near-real-time GA4 signals to implementation context — surfacing traffic anomalies, tracking gaps, and conversion blockers in a single session — is not the same as a manual analytics review. It's faster, more specific, and maps directly to the outcomes clients care about.

Should You Do This?

If you build or optimise client sites: yes. The setup takes an afternoon the first time. Twenty minutes the second. What you get is a permanently tighter loop between what your code does and what your data shows.

If you run a business: find a developer who can set this up. The questions this workflow answers — what's affecting your conversion rate, whether your traffic quality is sound, whether your tracking is telling the truth — are worth answering on real data, not dashboard snapshots.


I offer AI-native analytics audits as part of new client engagements — near-real-time GA4 signals connected to your codebase, surfacing what standard reporting misses. Currently taking on 2–3 new projects. Get in touch.

Connected an AI agent to your analytics workflow yet? What did it surface that a standard review missed? Drop a comment below — always interested in comparing notes.

