CoreDash API: Query Real User Core Web Vitals Data
Query your real user Core Web Vitals data programmatically. Use it from scripts or CI pipelines, or let your AI agent diagnose performance issues automatically.

Your performance data, anywhere you need it
CoreDash collects Core Web Vitals from real users visiting your site. The API gives you access to that same data from any tool, script, or AI agent. Three tools, JSON in, JSON out.
The most interesting use case: connecting your AI. The CoreDash API speaks the Model Context Protocol (MCP), which means AI tools like Claude, Cursor, and Windsurf can query your real user data directly. Ask your AI "why is my LCP slow on mobile?" and it pulls the actual field data to answer.
We built CWV Superpowers on top of this. It is an AI agent that combines your CoreDash field data with Chrome DevTools to diagnose and fix Core Web Vitals issues. The API is what makes that possible.
But you do not need an AI agent. A curl command works just as well.
Authentication
Every request needs an API key in the Authorization header:
Authorization: Bearer cdk_YOUR_API_KEY
To get a key:
- Log in at app.coredash.app
- Go to your project, then AI Insights, then Connect Your AI
- Click Create API Key and copy it. It is only shown once.
Keys start with cdk_ and are scoped to a single project. You can create multiple keys and revoke them from the same page.
Request format
The API uses JSON-RPC 2.0. Every request is a POST to:
https://app.coredash.app/api/mcp

The request body looks like this:
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "get_metrics",
"arguments": { }
}
}

The id field can be any number or string. It gets echoed back in the response. There are three tools: get_metrics, get_timeseries, and get_histogram.
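For scripts, the envelope is simple to build programmatically. A minimal Python sketch; the build_request helper is illustrative, not part of any SDK:

```python
import json

def build_request(tool, arguments=None, request_id=1):
    """Build a JSON-RPC 2.0 tools/call envelope for the CoreDash API."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments or {}},
    }

# Serialized, this is exactly the body you POST to /api/mcp.
body = json.dumps(build_request("get_metrics", {"metrics": "LCP"}))
```

Send it with any HTTP client, remembering the Content-Type and Authorization headers shown above.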
get_metrics: current performance
Returns the current Core Web Vitals values with good/improve/poor ratings. This is the tool you use for "what is my LCP right now?" type questions.
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
metrics | string | LCP,INP,CLS,FCP,TTFB | Comma-separated metrics to return |
percentile | string | p75 | p50, p75, p80, p90, or p95 |
filters | object | {} | Filter by dimensions (see Dimensions below) |
group | string | none | Group results by a dimension key to compare segments |
date | string | -31d | Time range: -6h, today, -1d, -7d, -31d |
limit | number | 100 | Max segments when grouping (max 500) |
Example: get all metrics
curl -X POST https://app.coredash.app/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdk_YOUR_API_KEY" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "get_metrics",
"arguments": {}
}
}'

The raw response is a JSON-RPC wrapper:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"content": [{
"type": "text",
"text": "{ ... JSON string ... }"
}]
}
}

The actual data is a JSON string inside the text field. Parsed, it looks like this:
{
"period": "last 31 days",
"percentile": "p75",
"metrics": {
"LCP": {
"value": 2450,
"unit": "ms",
"rating": "improve",
"distribution": { "good": 61.2, "improve": 22.4, "poor": 16.4 }
},
"INP": {
"value": 180,
"unit": "ms",
"rating": "good",
"distribution": { "good": 82.1, "improve": 12.3, "poor": 5.6 }
},
"CLS": {
"value": 0.08,
"unit": "",
"rating": "good",
"distribution": { "good": 74.5, "improve": 18.2, "poor": 7.3 }
}
}
}

The distribution object tells you what percentage of real page loads fall into each rating. This is often more useful than the p75 value alone. An LCP of 2450ms with 61% good means most users have a fine experience, but the tail is dragging the p75 down.
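In a script you have to unwrap that nesting yourself: parse the JSON-RPC reply, then parse the JSON string inside it. A Python sketch, where unwrap is a hypothetical helper:

```python
import json

def unwrap(rpc_response):
    """Parse the JSON string nested inside result.content[0].text."""
    return json.loads(rpc_response["result"]["content"][0]["text"])

# Shape of a raw reply, as documented above (values abbreviated).
rpc_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{
        "type": "text",
        "text": '{"percentile": "p75", "metrics": {"LCP": {"value": 2450, "rating": "improve"}}}',
    }]},
}
data = unwrap(rpc_response)
```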
Example: compare mobile vs desktop LCP
Use the group parameter to split results by any dimension. This is how you find out whether your LCP problem is a mobile problem:
curl -X POST https://app.coredash.app/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdk_YOUR_API_KEY" \
-d '{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "get_metrics",
"arguments": {
"metrics": "LCP",
"group": "d",
"date": "-7d"
}
}
}'

Parsed response:
{
"period": "last 7 days",
"percentile": "p75",
"groupedBy": "d",
"groupName": "Device Type",
"segments": [
{
"segment": "mobile",
"value": "mobile",
"metrics": {
"LCP": {
"value": 3200, "unit": "ms", "rating": "improve",
"distribution": { "good": 52.3, "improve": 28.1, "poor": 19.6 }
}
}
},
{
"segment": "desktop",
"value": "desktop",
"metrics": {
"LCP": {
"value": 1800, "unit": "ms", "rating": "good",
"distribution": { "good": 78.5, "improve": 15.2, "poor": 6.3 }
}
}
}
]
}

Mobile at 3200ms, desktop at 1800ms. The aggregate would show something around 2500ms and you would think "not great, but not terrible." The grouped view shows the real story: desktop is fine, mobile needs work.
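With grouped results in hand, finding the worst segment is a few lines of code. A Python sketch over the parsed response above; slowest_segment is illustrative:

```python
def slowest_segment(parsed, metric="LCP"):
    """Return (segment, value) for the highest value of the given metric."""
    return max(
        ((s["segment"], s["metrics"][metric]["value"]) for s in parsed["segments"]),
        key=lambda pair: pair[1],
    )

# Abbreviated version of the grouped response above.
parsed = {"segments": [
    {"segment": "mobile", "metrics": {"LCP": {"value": 3200}}},
    {"segment": "desktop", "metrics": {"LCP": {"value": 1800}}},
]}
worst, value = slowest_segment(parsed)
```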
Example: filter to a specific page on mobile
Combine filters to narrow down to exactly the traffic you care about:
curl -X POST https://app.coredash.app/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdk_YOUR_API_KEY" \
-d '{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "get_metrics",
"arguments": {
"metrics": "LCP,CLS",
"filters": { "ff": "/checkout", "d": "mobile" },
"date": "-7d"
}
}
}'

get_timeseries: performance over time
Returns metric values bucketed over time with automatic trend detection. This is the tool you use for "has my LCP gotten worse?" and "did that deploy fix the regression?"
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
metrics | string | LCP,INP,CLS,FCP,TTFB | Comma-separated metrics |
percentile | string | p75 | Which percentile |
filters | object | {} | Filter by dimensions |
date | string | -31d | Time range |
granularity | string | day | Bucket size: hour, 6hours, day, week |
Example: LCP trend over the last 7 days
curl -X POST https://app.coredash.app/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdk_YOUR_API_KEY" \
-d '{
"jsonrpc": "2.0",
"id": 4,
"method": "tools/call",
"params": {
"name": "get_timeseries",
"arguments": {
"metrics": "LCP",
"date": "-7d",
"granularity": "day"
}
}
}'

Parsed response:
{
"period": "last 7 days",
"percentile": "p75",
"granularity": "day",
"dataPoints": 7,
"timeseries": [
{ "date": "2026-03-10T00:00:00.000Z", "LCP": { "value": 2600, "unit": "ms", "rating": "improve" } },
{ "date": "2026-03-11T00:00:00.000Z", "LCP": { "value": 2450, "unit": "ms", "rating": "improve" } },
{ "date": "2026-03-12T00:00:00.000Z", "LCP": { "value": 2300, "unit": "ms", "rating": "good" } }
],
"summary": {
"LCP": {
"recent": 2350,
"previous": 2680,
"change": -12.3,
"trend": "improving",
"unit": "ms"
}
}
}

The summary compares the second half of the period to the first half. Trend values are improving (more than 5% better), stable (within 5%), or regressing (more than 5% worse). This is what makes the timeseries endpoint useful for automated monitoring: you do not need to parse the data points yourself to know if things are getting worse.
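The same classification is easy to reproduce locally, for example to fail a CI job on a regression. A sketch assuming the 5% thresholds above, for lower-is-better metrics:

```python
def classify_trend(recent, previous):
    """Classify a lower-is-better metric using a 5% change threshold."""
    change = (recent - previous) / previous * 100
    if change < -5:
        return "improving"
    if change > 5:
        return "regressing"
    return "stable"

# From the summary above: recent 2350ms vs previous 2680ms, about -12.3%.
trend = classify_trend(2350, 2680)
```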
get_histogram: distribution shape
Returns the distribution of a single metric as ~40 buckets with counts per range. This is the tool you use when the p75 looks fine but you suspect a long tail, or when you want to see the full shape of your performance data.
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
metric | string | required | Single metric: LCP, INP, CLS, FCP, or TTFB |
filters | object | {} | Filter by dimensions |
date | string | -31d | Time range |
Note: unlike get_metrics, this takes a single metric (not metrics). One metric per request.
Example: LCP distribution on mobile
curl -X POST https://app.coredash.app/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdk_YOUR_API_KEY" \
-d '{
"jsonrpc": "2.0",
"id": 5,
"method": "tools/call",
"params": {
"name": "get_histogram",
"arguments": {
"metric": "LCP",
"filters": { "d": "mobile" },
"date": "-7d"
}
}
}'

Parsed response:
{
"period": "last 7 days",
"metric": "LCP",
"unit": "ms",
"filters": { "d": "mobile" },
"buckets": [
{ "from": 0, "to": 250, "count": 1250, "rating": "good" },
{ "from": 250, "to": 500, "count": 3400, "rating": "good" },
{ "from": 500, "to": 750, "count": 2800, "rating": "good" },
{ "from": 2500, "to": 2750, "count": 890, "rating": "improve" },
{ "from": 4000, "to": 4250, "count": 120, "rating": "poor" },
{ "from": 9750, "to": null, "count": 15, "rating": "poor" }
],
"total": 45000
}

Each bucket has from/to boundaries, a count of estimated page loads in that range, and a rating based on where the bucket sits relative to Core Web Vitals thresholds. The last bucket has to: null because it is the open-ended tail.
Bucket widths are fixed per metric: LCP uses 250ms, INP uses 25ms, CLS uses 0.025, FCP uses 200ms, TTFB uses 125ms.
This is useful for understanding the shape of your data. A p75 of 2400ms could mean most users are around 2400ms, or it could mean 60% are under 1000ms and a chunk of slow mobile traffic is pulling the tail. The histogram tells you which.
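You can also roll the buckets up into an overall good/improve/poor split, which should roughly match the distribution object from get_metrics. A Python sketch; rating_shares is illustrative:

```python
def rating_shares(histogram):
    """Percentage of page loads per rating, summed from bucket counts."""
    totals = {}
    for bucket in histogram["buckets"]:
        totals[bucket["rating"]] = totals.get(bucket["rating"], 0) + bucket["count"]
    grand = sum(totals.values())
    return {rating: round(100 * count / grand, 1) for rating, count in totals.items()}

# Abbreviated buckets in the shape shown above.
histogram = {"buckets": [
    {"from": 0, "to": 250, "count": 1250, "rating": "good"},
    {"from": 2500, "to": 2750, "count": 890, "rating": "improve"},
    {"from": 9750, "to": None, "count": 360, "rating": "poor"},
]}
shares = rating_shares(histogram)
```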
Dimensions
Use these keys in filters or as the group value. Filtering narrows the data to a specific segment. Grouping splits the results so you can compare segments side by side.
General
| Key | Name | Example values |
|---|---|---|
d | Device Type | mobile, desktop |
cc | Country | US, NL, DE (ISO 3166-1 alpha-2) |
ff | Pathname | /products, /checkout (null = /) |
u | Full URL | Supports * wildcards, [neq] prefix for negation |
qs | Query String | The ?key=value part |
lb | Page Label | Custom label from the RUM snippet |
browser | Browser | Chrome, Safari, Firefox |
os | Operating System | Android, iOS, Windows |
nt | Navigation Type | navigate, back_forward, reload |
fv | Visitor Type | 0 = repeat, 1 = new visitor |
li | Logged In Status | 0 = logged out, 1 = logged in, 2 = admin |
no | Navigation Origin | 1 = same origin, 2 = cross origin |
ab | A/B Test | Custom test label |
Device and network
| Key | Name | Unit |
|---|---|---|
m | Device Memory | GB |
dl | Network Speed | Mbps |
ccs | Client Capability Score | 1=Excellent, 2=Good, 3=Moderate, 4=Limited, 5=Constrained |
redir | Redirect Count | count |
Metric attribution
These dimensions tell you what caused a metric value. Group by lcpel to see which elements become the LCP across your pages. Group by inpel to find the interactions that produce the worst INP.
| Key | Name | For metric |
|---|---|---|
lcpel | LCP Element | LCP |
lcpet | LCP Element Type | LCP: text, image, background-image, video |
lcpprio | LCP Priority | LCP: 1=Preloaded, 2=High fetchpriority, 3=Not preloaded, 4=Lazy loaded |
lcpurl | LCP Image URL | LCP |
inpel | INP Element | INP |
inpit | INP Input Type | INP |
inpls | INP Load State | INP |
lurl | LOAF Script URL | INP |
clsel | CLS Element | CLS |
Filter examples
{ "d": "mobile" }
{ "ff": "/checkout", "d": "desktop" }
{ "cc": "US", "browser": "Chrome" }
{ "u": "[neq]*/admin/*" }

Metrics reference
| Metric | Name | Unit | Good | Needs improvement | Poor |
|---|---|---|---|---|---|
LCP | Largest Contentful Paint | ms | < 2500 | 2500 to 4000 | > 4000 |
INP | Interaction to Next Paint | ms | < 200 | 200 to 500 | > 500 |
CLS | Cumulative Layout Shift | | < 0.1 | 0.1 to 0.25 | > 0.25 |
FCP | First Contentful Paint | ms | < 1800 | 1800 to 3000 | > 3000 |
TTFB | Time to First Byte | ms | < 800 | 800 to 1800 | > 1800 |
The default percentile is p75. This is what Google uses for Core Web Vitals ranking. If 75% of your page loads are below the threshold, you pass.
Using the API as an MCP server
The API endpoint is a fully compatible MCP server. If your AI tool supports MCP (Claude Code, Cursor, Windsurf, and others), you can connect it directly. The AI then has access to get_metrics, get_timeseries, and get_histogram as tools and can query your field data as part of any conversation.
This is how CWV Superpowers works: it connects to CoreDash via MCP, pulls your real user data, opens your site in Chrome, and traces the exact cause of a slow metric. The API provides the "what is happening in production" part, Chrome provides the "why is it happening" part.
You can also connect the MCP server to your own AI setup. Point your MCP client at https://app.coredash.app/api/mcp with your API key, and your AI can answer questions like "which pages have the worst INP on mobile?" using actual field data instead of guessing.
Rate limits
Limits are per project per day and reset at midnight UTC.
| Plan | Daily requests |
|---|---|
| Trial | 150 |
| Starter | 500 |
| Standard | 500 |
| Pro | 500+ |
| Enterprise | 500+ |
150 requests on the trial plan is plenty for manual exploration and AI-assisted debugging. If you are running automated monitoring in CI, the paid plans give you 500 or more per day.
Error handling
Errors come back as JSON-RPC error objects:
{
"jsonrpc": "2.0",
"id": 1,
"error": { "code": -32001, "message": "Invalid or revoked API key." }
}

| Code | HTTP status | Meaning |
|---|---|---|
-32001 | 401 | Bad or missing API key |
-32002 | 429 | Rate limit exceeded |
-32600 | 400 | Malformed request |
-32601 | 200 | Unknown method |
-32602 | 200 | Unknown tool or missing params |
-32603 | 500 | Internal server error |
If you get -32001, check that your key starts with cdk_ and that you have not revoked it. If you get -32002, you have hit the daily limit. Wait for the midnight UTC reset or upgrade your plan.
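In scripts it pays to turn these codes into distinct exceptions so a CI job can tell an auth problem from a rate limit. A Python sketch; check_response and RateLimited are hypothetical names:

```python
class RateLimited(Exception):
    """Daily quota exhausted; resets at midnight UTC."""

def check_response(rpc_response):
    """Return the result, or raise a typed error for known JSON-RPC codes."""
    error = rpc_response.get("error")
    if error is None:
        return rpc_response["result"]
    code = error["code"]
    if code == -32001:
        raise PermissionError(error["message"])  # bad or revoked API key
    if code == -32002:
        raise RateLimited(error["message"])
    raise RuntimeError(f"JSON-RPC error {code}: {error['message']}")
```

A caller can then retry after the reset on RateLimited but fail fast on PermissionError, since a revoked key will never recover on its own.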