CoreDash API: Query Real User Core Web Vitals Data

Query your real user Core Web Vitals data programmatically. Use it from scripts, CI pipelines, or let your AI agent diagnose performance issues automatically.

Arjen Karel, Core Web Vitals Consultant
Last update: 2026-03-17


Your performance data, anywhere you need it

CoreDash collects Core Web Vitals from real users visiting your site. The API gives you access to that same data from any tool, script, or AI agent. Three tools, JSON in, JSON out.

The most interesting use case is connecting your AI. The CoreDash API speaks the Model Context Protocol (MCP), which means AI tools like Claude, Cursor, and Windsurf can query your real user data directly. Ask your AI "why is my LCP slow on mobile?" and it pulls the actual field data to answer.

We built CWV Superpowers on top of this. It is an AI agent that combines your CoreDash field data with Chrome DevTools to diagnose and fix Core Web Vitals issues. The API is what makes that possible.

But you do not need an AI agent. A curl command works just as well.

Authentication

Every request needs an API key in the Authorization header:

Authorization: Bearer cdk_YOUR_API_KEY

To get a key:

  1. Log in at app.coredash.app
  2. Go to your project, then AI Insights, then Connect Your AI
  3. Click Create API Key and copy it. It is only shown once.

Keys start with cdk_ and are scoped to a single project. You can create multiple keys and revoke them from the same page.

Request format

The API uses JSON-RPC 2.0. Every request is a POST to:

https://app.coredash.app/api/mcp

The request body looks like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_metrics",
    "arguments": { }
  }
}

The id field can be any number or string. It gets echoed back in the response. There are three tools: get_metrics, get_timeseries, and get_histogram.
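
The envelope is simple enough to assemble in any language. Here is a minimal Python sketch using only the standard library; the helper names `build_request` and `call_tool` are illustrative, not part of any official client:

```python
import json
import urllib.request

ENDPOINT = "https://app.coredash.app/api/mcp"

def build_request(tool, arguments=None, request_id=1):
    """Assemble the JSON-RPC 2.0 envelope for a tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments or {}},
    }

def call_tool(api_key, tool, arguments=None):
    """POST the envelope to the MCP endpoint and return the raw JSON-RPC response."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_request(tool, arguments)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `build_request("get_metrics", {"metrics": "LCP", "date": "-7d"})` produces the same body as the curl examples below.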

get_metrics: current performance

Returns the current Core Web Vitals values with good/improve/poor ratings. This is the tool you use for "what is my LCP right now?" type questions.

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| metrics | string | LCP,INP,CLS,FCP,TTFB | Comma-separated metrics to return |
| percentile | string | p75 | p50, p75, p80, p90, or p95 |
| filters | object | {} | Filter by dimensions (see Dimensions below) |
| group | string | (none) | Group results by a dimension key to compare segments |
| date | string | -31d | Time range: -6h, today, -1d, -7d, -31d |
| limit | number | 100 | Max segments when grouping (max 500) |

Example: get all metrics

curl -X POST https://app.coredash.app/api/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdk_YOUR_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "get_metrics",
      "arguments": {}
    }
  }'

The raw response is a JSON-RPC wrapper:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{
      "type": "text",
      "text": "{ ... JSON string ... }"
    }]
  }
}

The actual data is a JSON string inside the text field. Parsed, it looks like this:

{
  "period": "last 31 days",
  "percentile": "p75",
  "metrics": {
    "LCP": {
      "value": 2450,
      "unit": "ms",
      "rating": "improve",
      "distribution": { "good": 61.2, "improve": 22.4, "poor": 16.4 }
    },
    "INP": {
      "value": 180,
      "unit": "ms",
      "rating": "good",
      "distribution": { "good": 82.1, "improve": 12.3, "poor": 5.6 }
    },
    "CLS": {
      "value": 0.08,
      "unit": "",
      "rating": "good",
      "distribution": { "good": 74.5, "improve": 18.2, "poor": 7.3 }
    }
  }
}

The distribution object tells you what percentage of real page loads fall into each rating. This is often more useful than the p75 value alone. An LCP of 2450ms with 61% good means most users have a fine experience, but the tail is dragging the p75 down.
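
Because of the double encoding, you parse JSON twice: once for the HTTP body, then again for the text field. A small Python helper makes that explicit; the name `unwrap_result` is illustrative:

```python
import json

def unwrap_result(response: dict) -> dict:
    """Extract the tool payload from a JSON-RPC response.

    The payload is JSON-encoded twice: once as the HTTP body,
    and again as a string inside result.content[0].text.
    """
    if "error" in response:
        err = response["error"]
        raise RuntimeError(f"{err['code']}: {err['message']}")
    return json.loads(response["result"]["content"][0]["text"])
```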

Example: compare mobile vs desktop LCP

Use the group parameter to split results by any dimension. This is how you find out whether your LCP problem is a mobile problem:

curl -X POST https://app.coredash.app/api/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdk_YOUR_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "get_metrics",
      "arguments": {
        "metrics": "LCP",
        "group": "d",
        "date": "-7d"
      }
    }
  }'

Parsed response:

{
  "period": "last 7 days",
  "percentile": "p75",
  "groupedBy": "d",
  "groupName": "Device Type",
  "segments": [
    {
      "segment": "mobile",
      "value": "mobile",
      "metrics": {
        "LCP": {
          "value": 3200, "unit": "ms", "rating": "improve",
          "distribution": { "good": 52.3, "improve": 28.1, "poor": 19.6 }
        }
      }
    },
    {
      "segment": "desktop",
      "value": "desktop",
      "metrics": {
        "LCP": {
          "value": 1800, "unit": "ms", "rating": "good",
          "distribution": { "good": 78.5, "improve": 15.2, "poor": 6.3 }
        }
      }
    }
  ]
}

Mobile at 3200ms, desktop at 1800ms. The aggregate would show 2500ms and you would think "not great, but not terrible." The grouped view shows the real story: desktop is fine, mobile needs work.
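
Scripted comparisons are straightforward once the response is parsed. A sketch, assuming the grouped response shape shown above (the function name is illustrative):

```python
def segment_metric(parsed: dict, metric: str) -> dict:
    """Map each segment name to its value for one metric,
    e.g. {'mobile': 3200, 'desktop': 1800} for the response above."""
    return {
        seg["segment"]: seg["metrics"][metric]["value"]
        for seg in parsed["segments"]
        if metric in seg["metrics"]
    }
```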

Example: filter to a specific page on mobile

Combine filters to narrow down to exactly the traffic you care about:

curl -X POST https://app.coredash.app/api/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdk_YOUR_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "get_metrics",
      "arguments": {
        "metrics": "LCP,CLS",
        "filters": { "ff": "/checkout", "d": "mobile" },
        "date": "-7d"
      }
    }
  }'

get_timeseries: performance over time

Returns metric values bucketed over time with automatic trend detection. This is the tool you use for "has my LCP gotten worse?" and "did that deploy fix the regression?"

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| metrics | string | LCP,INP,CLS,FCP,TTFB | Comma-separated metrics |
| percentile | string | p75 | Which percentile |
| filters | object | {} | Filter by dimensions |
| date | string | -31d | Time range |
| granularity | string | day | Bucket size: hour, 6hours, day, week |

Example: LCP trend over the last 7 days

curl -X POST https://app.coredash.app/api/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdk_YOUR_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
      "name": "get_timeseries",
      "arguments": {
        "metrics": "LCP",
        "date": "-7d",
        "granularity": "day"
      }
    }
  }'

Parsed response:

{
  "period": "last 7 days",
  "percentile": "p75",
  "granularity": "day",
  "dataPoints": 7,
  "timeseries": [
    { "date": "2026-03-10T00:00:00.000Z", "LCP": { "value": 2600, "unit": "ms", "rating": "improve" } },
    { "date": "2026-03-11T00:00:00.000Z", "LCP": { "value": 2450, "unit": "ms", "rating": "improve" } },
    { "date": "2026-03-12T00:00:00.000Z", "LCP": { "value": 2300, "unit": "ms", "rating": "good" } }
  ],
  "summary": {
    "LCP": {
      "recent": 2350,
      "previous": 2680,
      "change": -12.3,
      "trend": "improving",
      "unit": "ms"
    }
  }
}

The summary compares the second half of the period to the first half. Trend values are improving (more than 5% better), stable (within 5%), or regressing (more than 5% worse). This is what makes the timeseries endpoint useful for automated monitoring: you do not need to parse the data points yourself to know if things are getting worse.
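
That makes a CI gate trivial: call get_timeseries, read the summary, and fail the build on any regressing metric. A sketch under the summary shape shown above (the function names are illustrative):

```python
import sys

def regressing_metrics(summary: dict) -> list:
    """Return metrics whose trend is 'regressing', worst change first."""
    bad = [(m, s["change"]) for m, s in summary.items() if s["trend"] == "regressing"]
    return [m for m, _ in sorted(bad, key=lambda pair: -pair[1])]

def gate(summary: dict) -> None:
    """Exit nonzero so the CI job fails when any metric regressed."""
    bad = regressing_metrics(summary)
    if bad:
        print(f"Performance regression in: {', '.join(bad)}")
        sys.exit(1)
```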

get_histogram: distribution shape

Returns the distribution of a single metric as ~40 buckets with counts per range. This is the tool you use when the p75 looks fine but you suspect a long tail, or when you want to see the full shape of your performance data.

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| metric | string | (required) | Single metric: LCP, INP, CLS, FCP, or TTFB |
| filters | object | {} | Filter by dimensions |
| date | string | -31d | Time range |

Note: unlike get_metrics, this takes a single metric (not metrics). One metric per request.

Example: LCP distribution on mobile

curl -X POST https://app.coredash.app/api/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdk_YOUR_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
      "name": "get_histogram",
      "arguments": {
        "metric": "LCP",
        "filters": { "d": "mobile" },
        "date": "-7d"
      }
    }
  }'

Parsed response:

{
  "period": "last 7 days",
  "metric": "LCP",
  "unit": "ms",
  "filters": { "d": "mobile" },
  "buckets": [
    { "from": 0, "to": 250, "count": 1250, "rating": "good" },
    { "from": 250, "to": 500, "count": 3400, "rating": "good" },
    { "from": 500, "to": 750, "count": 2800, "rating": "good" },
    { "from": 2500, "to": 2750, "count": 890, "rating": "improve" },
    { "from": 4000, "to": 4250, "count": 120, "rating": "poor" },
    { "from": 9750, "to": null, "count": 15, "rating": "poor" }
  ],
  "total": 45000
}

Each bucket has from/to boundaries, a count of estimated page loads in that range, and a rating based on where the bucket sits relative to Core Web Vitals thresholds. The last bucket has to: null because it is the open-ended tail.

Bucket widths are fixed per metric: LCP uses 250ms, INP uses 25ms, CLS uses 0.025, FCP uses 200ms, TTFB uses 125ms.

This is useful for understanding the shape of your data. A p75 of 2400ms could mean most users are around 2400ms, or it could mean 60% are under 1000ms and a chunk of slow mobile traffic is pulling the tail. The histogram tells you which.
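
A quick way to eyeball that shape in a terminal is to render the buckets as bars and sum the counts per rating. A sketch, assuming the parsed histogram shape above (the function names are illustrative):

```python
def render_histogram(parsed: dict, width: int = 40) -> str:
    """Render buckets as ASCII bars, scaled to the largest bucket."""
    buckets = parsed["buckets"]
    peak = max(b["count"] for b in buckets)
    lines = []
    for b in buckets:
        bar = "#" * max(1, round(b["count"] / peak * width))
        hi = "+" if b["to"] is None else b["to"]  # open-ended tail bucket
        lines.append(f"{b['from']:>6}-{hi:<6} {bar} {b['count']}")
    return "\n".join(lines)

def rating_share(parsed: dict, rating: str) -> float:
    """Percentage of page loads whose bucket carries the given rating."""
    hits = sum(b["count"] for b in parsed["buckets"] if b["rating"] == rating)
    return round(hits / parsed["total"] * 100, 1)
```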

Dimensions

Use these keys in filters or as the group value. Filtering narrows the data to a specific segment. Grouping splits the results so you can compare segments side by side.

General

| Key | Name | Example values |
|---|---|---|
| d | Device Type | mobile, desktop |
| cc | Country | US, NL, DE (ISO 3166-1 alpha-2) |
| ff | Pathname | /products, /checkout (null = /) |
| u | Full URL | Supports * wildcards, [neq] prefix for negation |
| qs | Query String | The ?key=value part |
| lb | Page Label | Custom label from the RUM snippet |
| browser | Browser | Chrome, Safari, Firefox |
| os | Operating System | Android, iOS, Windows |
| nt | Navigation Type | navigate, back_forward, reload |
| fv | Visitor Type | 0 = repeat, 1 = new visitor |
| li | Logged In Status | 0 = logged out, 1 = logged in, 2 = admin |
| no | Navigation Origin | 1 = same origin, 2 = cross origin |
| ab | A/B Test | Custom test label |

Device and network

| Key | Name | Unit |
|---|---|---|
| m | Device Memory | GB |
| dl | Network Speed | Mbps |
| ccs | Client Capability Score | 1 = Excellent, 2 = Good, 3 = Moderate, 4 = Limited, 5 = Constrained |
| redir | Redirect Count | count |

Metric attribution

These dimensions tell you what caused a metric value. Group by lcpel to see which elements become the LCP across your pages. Group by inpel to find the interactions that produce the worst INP.

| Key | Name | For metric |
|---|---|---|
| lcpel | LCP Element | LCP |
| lcpet | LCP Element Type | LCP: text, image, background-image, video |
| lcpprio | LCP Priority | LCP: 1 = Preloaded, 2 = High fetchpriority, 3 = Not preloaded, 4 = Lazy loaded |
| lcpurl | LCP Image URL | LCP |
| inpel | INP Element | INP |
| inpit | INP Input Type | INP |
| inpls | INP Load State | INP |
| lurl | LOAF Script URL | INP |
| clsel | CLS Element | CLS |

Filter examples

{ "d": "mobile" }
{ "ff": "/checkout", "d": "desktop" }
{ "cc": "US", "browser": "Chrome" }
{ "u": "[neq]*/admin/*" }

Metrics reference

| Metric | Name | Unit | Good | Needs improvement | Poor |
|---|---|---|---|---|---|
| LCP | Largest Contentful Paint | ms | < 2500 | 2500 to 4000 | > 4000 |
| INP | Interaction to Next Paint | ms | < 200 | 200 to 500 | > 500 |
| CLS | Cumulative Layout Shift | (unitless) | < 0.1 | 0.1 to 0.25 | > 0.25 |
| FCP | First Contentful Paint | ms | < 1800 | 1800 to 3000 | > 3000 |
| TTFB | Time to First Byte | ms | < 800 | 800 to 1800 | > 1800 |

The default percentile is p75. This is what Google uses for Core Web Vitals ranking. If 75% of your page loads are below the threshold, you pass.

Using the API as an MCP server

The API endpoint is a fully compatible MCP server. If your AI tool supports MCP (Claude Code, Cursor, Windsurf, and others), you can connect it directly. The AI then has access to get_metrics, get_timeseries, and get_histogram as tools and can query your field data as part of any conversation.

This is how CWV Superpowers works: it connects to CoreDash via MCP, pulls your real user data, opens your site in Chrome, and traces the exact cause of a slow metric. The API provides the "what is happening in production" part, Chrome provides the "why is it happening" part.

You can also connect the MCP server to your own AI setup. Point your MCP client at https://app.coredash.app/api/mcp with your API key, and your AI can answer questions like "which pages have the worst INP on mobile?" using actual field data instead of guessing.

Rate limits

Limits are per project per day and reset at midnight UTC.

| Plan | Daily requests |
|---|---|
| Trial | 150 |
| Starter | 500 |
| Standard | 500 |
| Pro | 500+ |
| Enterprise | 500+ |

150 requests per day on the trial plan is plenty for manual exploration and AI-assisted debugging. If you run automated monitoring in CI, the paid plans give you 500 per day.

Error handling

Errors come back as JSON-RPC error objects:

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": { "code": -32001, "message": "Invalid or revoked API key." }
}

| Code | HTTP status | Meaning |
|---|---|---|
| -32001 | 401 | Bad or missing API key |
| -32002 | 429 | Rate limit exceeded |
| -32600 | 400 | Malformed request |
| -32601 | 200 | Unknown method |
| -32602 | 200 | Unknown tool or missing params |
| -32603 | 500 | Internal server error |

If you get -32001, check that your key starts with cdk_ and that you have not revoked it. If you get -32002, you have hit the daily limit. Wait for the midnight UTC reset or upgrade your plan.
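
In scripts it helps to separate configuration errors (fix the key) from quota errors (wait for the reset). A sketch, assuming the error shape above; the exception names are illustrative:

```python
class ApiKeyError(Exception):
    """-32001: the key is bad or revoked; retrying will not help."""

class RateLimited(Exception):
    """-32002: daily quota exhausted; retry after the midnight UTC reset."""

def raise_for_error(response: dict) -> dict:
    """Raise a typed exception for JSON-RPC errors, otherwise return the response."""
    err = response.get("error")
    if err is None:
        return response
    if err["code"] == -32001:
        raise ApiKeyError(err["message"])
    if err["code"] == -32002:
        raise RateLimited(err["message"])
    raise RuntimeError(f"{err['code']}: {err['message']}")
```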