Performance Implementation
End-to-end performance practices: coding, pre-commit checks, post-deploy monitoring, and team reporting
A complete performance workflow covers four stages: writing performant code, validating before merge, monitoring in production, and reporting to the team.
1. Writing High-Performance Code
Performance starts at the coding phase: adopt patterns that prevent regressions before they reach review.
React Performance Patterns
import { memo, useMemo, useCallback, lazy, Suspense } from 'react';
// Memoize expensive child components
const ExpensiveList = memo(function ExpensiveList(
  { items, onSelect }: { items: Item[]; onSelect: (id: string) => void }
) {
  return (
    <ul>
      {items.map(item => (
        <li key={item.id} onClick={() => onSelect(item.id)}>{item.name}</li>
))}
</ul>
);
});
function Dashboard({ data }: { data: RawData }) {
// Memoize derived data to avoid recalculation on every render
const processed = useMemo(
() => data.items.filter(i => i.active).sort((a, b) => b.score - a.score),
[data.items]
);
// Stable callback reference for child components
const handleSelect = useCallback((id: string) => {
selectItem(id);
}, []);
return <ExpensiveList items={processed} onSelect={handleSelect} />;
}
// Route-level code splitting
const Settings = lazy(() => import('./pages/Settings'));
function App() {
return (
<Suspense fallback={<Skeleton />}>
<Settings />
</Suspense>
);
}
Virtualized Lists
Render only visible items for long lists to keep the DOM small and scrolling smooth.
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';
function VirtualList({ items }: { items: Item[] }) {
const parentRef = useRef<HTMLDivElement>(null);
const virtualizer = useVirtualizer({
count: items.length,
getScrollElement: () => parentRef.current,
estimateSize: () => 48,
overscan: 5,
});
return (
<div ref={parentRef} style={{ height: '500px', overflow: 'auto' }}>
<div style={{ height: `${virtualizer.getTotalSize()}px`, position: 'relative' }}>
{virtualizer.getVirtualItems().map(virtualRow => (
<div
key={virtualRow.key}
style={{
position: 'absolute',
top: 0,
transform: `translateY(${virtualRow.start}px)`,
height: `${virtualRow.size}px`,
width: '100%',
}}
>
{items[virtualRow.index].name}
</div>
))}
</div>
</div>
);
}
Debounce and Throttle
Limit the rate of expensive operations triggered by user input.
function debounce<T extends (...args: any[]) => void>(fn: T, ms: number): T {
let timer: ReturnType<typeof setTimeout>;
return ((...args: Parameters<T>) => {
clearTimeout(timer);
timer = setTimeout(() => fn(...args), ms);
}) as unknown as T;
}
function throttle<T extends (...args: any[]) => void>(fn: T, ms: number): T {
let last = 0;
return ((...args: Parameters<T>) => {
const now = Date.now();
if (now - last >= ms) {
last = now;
fn(...args);
}
}) as unknown as T;
}
// Usage
const handleSearch = debounce((query: string) => {
fetchResults(query);
}, 300);
const handleScroll = throttle(() => {
updateScrollPosition();
}, 100);
CSS Performance
/* Use transform/opacity for animations (GPU-composited, no layout/paint) */
.card-enter {
transform: translateY(20px);
opacity: 0;
transition: transform 0.3s ease, opacity 0.3s ease;
will-change: transform, opacity;
}
.card-enter-active {
transform: translateY(0);
opacity: 1;
}
/* Use content-visibility for off-screen sections */
.below-fold-section {
content-visibility: auto;
contain-intrinsic-size: 0 500px;
}
/* Avoid expensive selectors in hot paths */
/* Bad */ .container > div > ul > li > a { }
/* Good */ .nav-link { }
Image and Asset Best Practices
// Use next/image or responsive images with modern formats
import Image from 'next/image';
function Hero() {
return (
<Image
src="/hero.webp"
alt="Hero"
width={1200}
height={600}
priority // Preload above-the-fold images
placeholder="blur" // Show blur placeholder while loading
/>
);
}
// For plain HTML: responsive images with format fallback
function ResponsiveImage() {
return (
<picture>
<source srcSet="/photo.avif" type="image/avif" />
<source srcSet="/photo.webp" type="image/webp" />
<img
src="/photo.jpg"
alt="Photo"
loading="lazy"
decoding="async"
width={800}
height={600}
/>
</picture>
);
}
ESLint Performance Rules
Add static analysis rules to catch performance issues during development.
// .eslintrc.json
{
"plugins": ["react-hooks"],
"rules": {
"react-hooks/exhaustive-deps": "warn",
"react-hooks/rules-of-hooks": "error",
"no-await-in-loop": "warn",
"no-constant-binary-expression": "error"
}
}
2. Pre-Commit Performance Checks
Catch performance regressions before code is merged.
Bundle Size Analysis
Track bundle size on every pull request to prevent creep.
// package.json
{
"scripts": {
"analyze": "ANALYZE=true next build",
"size": "size-limit",
"size:check": "size-limit --check"
},
"size-limit": [
{ "path": ".next/static/chunks/**/*.js", "limit": "300 kB", "gzip": true },
{ "path": ".next/static/css/**/*.css", "limit": "50 kB", "gzip": true }
],
"devDependencies": {
"size-limit": "^11.0.0",
"@size-limit/preset-app": "^11.0.0",
"@next/bundle-analyzer": "^15.0.0"
}
}
// next.config.ts — enable bundle analyzer on demand
import bundleAnalyzer from '@next/bundle-analyzer';
const withBundleAnalyzer = bundleAnalyzer({
enabled: process.env.ANALYZE === 'true',
});
export default withBundleAnalyzer({ /* next config */ });
Git Hook for Size Check
Lint and format staged files at commit time with husky + lint-staged, and enforce the size budget before every push.
// package.json
{
"lint-staged": {
"*.{ts,tsx}": ["eslint --fix", "prettier --write"]
}
}
# .husky/pre-push
npm run size:check
Lighthouse CI in Pull Requests
Run Lighthouse automatically on every PR to gate merges on performance scores.
// lighthouserc.js
module.exports = {
ci: {
collect: {
url: [
'http://localhost:3000/',
'http://localhost:3000/dashboard',
],
startServerCommand: 'npm run start',
numberOfRuns: 3,
},
assert: {
assertions: {
'categories:performance': ['error', { minScore: 0.9 }],
'first-contentful-paint': ['error', { maxNumericValue: 1500 }],
'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
'total-blocking-time': ['error', { maxNumericValue: 300 }],
'unused-javascript': 'warn',
},
},
upload: {
target: 'temporary-public-storage',
},
},
};
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [pull_request]
jobs:
lighthouse:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- run: npm ci && npm run build
- name: Lighthouse CI
uses: treosh/lighthouse-ci-action@v12
with:
configPath: ./lighthouserc.js
uploadArtifacts: true
temporaryPublicStorage: true
Bundle Size Bot on PR
Post a bundle diff comment on every pull request.
# .github/workflows/bundle-size.yml
name: Bundle Size
on: [pull_request]
jobs:
size:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- run: npm ci
- uses: andresz1/size-limit-action@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
script: npm run size
This posts a comment like:
📦 Bundle Size
| File | Before | After | Diff |
|----------------|---------|---------|----------|
| main.js | 142 kB | 145 kB | +3 kB ⚠️ |
| vendor.js | 98 kB | 98 kB | 0 |
| styles.css | 22 kB | 21 kB | −1 kB ✅ |
3. Post-Deploy Performance Monitoring
After code reaches production, monitor real users to catch issues that synthetic tests miss.
Real User Monitoring (RUM) Setup
Collect Core Web Vitals from every page load and send them to your analytics backend.
// lib/rum.ts
import { onCLS, onINP, onLCP, onFCP, onTTFB, type Metric } from 'web-vitals';
interface RUMPayload {
name: string;
value: number;
rating: 'good' | 'needs-improvement' | 'poor';
delta: number;
id: string;
url: string;
timestamp: number;
connection?: string;
deviceType: string;
}
function getDeviceType(): string {
const ua = navigator.userAgent;
if (/Mobi|Android/i.test(ua)) return 'mobile';
if (/Tablet|iPad/i.test(ua)) return 'tablet';
return 'desktop';
}
function report(metric: Metric) {
const payload: RUMPayload = {
name: metric.name,
value: metric.value,
rating: metric.rating,
delta: metric.delta,
id: metric.id,
url: location.href,
timestamp: Date.now(),
connection: (navigator as any).connection?.effectiveType,
deviceType: getDeviceType(),
};
// sendBeacon is reliable even during page unload
const blob = new Blob([JSON.stringify(payload)], { type: 'application/json' });
navigator.sendBeacon('/api/rum', blob);
}
export function initRUM() {
onCLS(report);
onINP(report);
onLCP(report);
onFCP(report);
onTTFB(report);
}
// components/rum-provider.tsx — mount inside app/layout.tsx to initialize on the client
'use client';
import { useEffect } from 'react';
export function RUMProvider({ children }: { children: React.ReactNode }) {
useEffect(() => {
import('../lib/rum').then(({ initRUM }) => initRUM());
}, []);
return <>{children}</>;
}
Server-Side Metrics Ingestion
Store raw RUM events in a time-series–friendly format for later aggregation.
// app/api/rum/route.ts
import { NextRequest, NextResponse } from 'next/server';
interface RUMEvent {
name: string;
value: number;
rating: string;
url: string;
timestamp: number;
connection?: string;
deviceType: string;
}
export async function POST(req: NextRequest) {
const event: RUMEvent = await req.json();
// Validate and sanitize
if (!['CLS', 'INP', 'LCP', 'FCP', 'TTFB'].includes(event.name)) {
return NextResponse.json({ error: 'Invalid metric' }, { status: 400 });
}
// Write to your storage (ClickHouse, BigQuery, PostgreSQL, etc.)
await metricsStore.insert({
metric: event.name,
value: event.value,
rating: event.rating,
page: new URL(event.url).pathname,
device: event.deviceType,
connection: event.connection ?? 'unknown',
timestamp: new Date(event.timestamp),
});
return NextResponse.json({ ok: true });
}
Synthetic Monitoring with Scheduled Checks
Run Lighthouse on key pages on a schedule to detect infrastructure-level regressions.
# .github/workflows/scheduled-perf.yml
name: Scheduled Performance Check
on:
schedule:
- cron: '0 */6 * * *' # Every 6 hours
jobs:
perf-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run Lighthouse
uses: treosh/lighthouse-ci-action@v12
with:
urls: |
https://example.com/
https://example.com/dashboard
budgetPath: ./budget.json
uploadArtifacts: true
- name: Notify on regression
if: failure()
uses: slackapi/slack-github-action@v1
with:
payload: |
{
"text": "⚠️ Performance regression detected in scheduled check. <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View details>"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Performance Alerting
Set up threshold-based alerts so the team is notified immediately when metrics degrade.
// services/perf-alerting.ts
interface AlertRule {
metric: string;
threshold: number;
percentile: number;
windowMinutes: number;
channel: 'slack' | 'email' | 'pagerduty';
}
const rules: AlertRule[] = [
{ metric: 'LCP', threshold: 2500, percentile: 75, windowMinutes: 60, channel: 'slack' },
{ metric: 'INP', threshold: 200, percentile: 75, windowMinutes: 60, channel: 'slack' },
{ metric: 'CLS', threshold: 0.1, percentile: 75, windowMinutes: 60, channel: 'slack' },
{ metric: 'LCP', threshold: 4000, percentile: 75, windowMinutes: 15, channel: 'pagerduty' },
];
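These rules are delivered through a notify(channel, payload) helper that this guide never defines. A minimal sketch for the slack channel, posting to an incoming webhook (the SLACK_WEBHOOK_URL variable and the severity-to-icon mapping are assumptions), might look like:

```typescript
interface AlertPayload {
  title: string;
  message: string;
  severity: 'warning' | 'critical';
}

// Pure formatting helper, split out so it can be tested without network access.
export function formatAlertText(payload: AlertPayload): string {
  const icon = payload.severity === 'critical' ? '🚨' : '⚠️';
  return `${icon} *${payload.title}*\n${payload.message}`;
}

export async function notify(
  channel: 'slack' | 'email' | 'pagerduty',
  payload: AlertPayload
): Promise<void> {
  if (channel === 'slack') {
    // SLACK_WEBHOOK_URL is an assumed env var pointing at a Slack incoming webhook.
    await fetch(process.env.SLACK_WEBHOOK_URL!, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: formatAlertText(payload) }),
    });
    return;
  }
  // Email / PagerDuty delivery is out of scope for this sketch.
  throw new Error(`notify: channel "${channel}" not implemented`);
}
```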
async function evaluateAlerts() {
for (const rule of rules) {
const value = await metricsStore.getPercentile(
rule.metric,
rule.percentile,
rule.windowMinutes
);
if (value > rule.threshold) {
await notify(rule.channel, {
title: `${rule.metric} p${rule.percentile} exceeded threshold`,
message: `Current: ${value.toFixed(1)} | Threshold: ${rule.threshold}`,
severity: rule.channel === 'pagerduty' ? 'critical' : 'warning',
});
}
}
}
4. Performance Reporting for the Team
Aggregate real-user data into periodic reports so the team has visibility into performance trends.
Data Aggregation Queries
Calculate p50 / p75 / p95 values, segmented by page and device, for a given time range.
// services/perf-report.ts
export interface PerformanceSummary {
metric: string;
page: string;
device: string;
p50: number;
p75: number;
p95: number;
goodPct: number; // % rated "good"
poorPct: number; // % rated "poor"
sampleCount: number;
}
export async function buildReport(
startDate: Date,
endDate: Date
): Promise<PerformanceSummary[]> {
// Example SQL (ClickHouse dialect; adapt quantile/countIf for BigQuery or PostgreSQL)
const query = `
SELECT
metric,
page,
device,
quantile(0.50)(value) AS p50,
quantile(0.75)(value) AS p75,
quantile(0.95)(value) AS p95,
countIf(rating = 'good') / count() * 100 AS good_pct,
countIf(rating = 'poor') / count() * 100 AS poor_pct,
count() AS sample_count
FROM rum_events
WHERE timestamp BETWEEN $1 AND $2
GROUP BY metric, page, device
ORDER BY metric, page, device
`;
return metricsStore.query(query, [startDate, endDate]);
}
Automated Weekly Report
Send a digest to Slack or email every Monday with key metrics and trends.
// scripts/weekly-perf-report.ts
import { buildReport, type PerformanceSummary } from '../services/perf-report';
interface TrendData {
metric: string;
currentP75: number;
previousP75: number;
change: number;
changePct: number;
status: 'improved' | 'regressed' | 'stable';
}
async function generateWeeklyReport() {
const now = new Date();
const thisWeekStart = new Date(now.getTime() - 7 * 86400000);
const lastWeekStart = new Date(now.getTime() - 14 * 86400000);
const [current, previous] = await Promise.all([
buildReport(thisWeekStart, now),
buildReport(lastWeekStart, thisWeekStart),
]);
const trends = calculateTrends(current, previous);
const message = formatSlackReport(trends, current);
await sendSlackMessage(process.env.SLACK_PERF_CHANNEL!, message);
}
function calculateTrends(
current: PerformanceSummary[],
previous: PerformanceSummary[]
): TrendData[] {
const metrics = ['LCP', 'INP', 'CLS'];
return metrics.map(metric => {
const curr = current.filter(r => r.metric === metric);
const prev = previous.filter(r => r.metric === metric);
const currP75 = weightedAvg(curr, 'p75');
const prevP75 = weightedAvg(prev, 'p75');
const change = currP75 - prevP75;
const changePct = prevP75 !== 0 ? (change / prevP75) * 100 : 0;
return {
metric,
currentP75: currP75,
previousP75: prevP75,
change,
changePct,
status: Math.abs(changePct) < 3 ? 'stable' :
change < 0 ? 'improved' : 'regressed',
};
});
}
function weightedAvg(rows: PerformanceSummary[], field: keyof PerformanceSummary): number {
const total = rows.reduce((sum, r) => sum + (r[field] as number) * r.sampleCount, 0);
const count = rows.reduce((sum, r) => sum + r.sampleCount, 0);
return count > 0 ? total / count : 0;
}
Slack Report Format
function formatSlackReport(trends: TrendData[], details: PerformanceSummary[]): object {
const statusIcon = (s: string) =>
s === 'improved' ? '✅' : s === 'regressed' ? '🔴' : '➖';
const header = '*📊 Weekly Performance Report*\n' +
`_${new Date().toLocaleDateString()} — Real user data_\n\n`;
const overview = '*Core Web Vitals (p75)*\n' +
'```\n' +
'Metric | This Week | Last Week | Change\n' +
'--------|-----------|-----------|--------\n' +
trends.map(t =>
`${t.metric.padEnd(7)} | ` +
`${formatValue(t.metric, t.currentP75).padEnd(9)} | ` +
`${formatValue(t.metric, t.previousP75).padEnd(9)} | ` +
`${statusIcon(t.status)} ${t.changePct > 0 ? '+' : ''}${t.changePct.toFixed(1)}%`
).join('\n') +
'\n```\n\n';
const topPages = '*Slowest Pages (LCP p75)*\n' +
details
.filter(d => d.metric === 'LCP')
.sort((a, b) => b.p75 - a.p75)
.slice(0, 5)
.map((d, i) => `${i + 1}. \`${d.page}\` — ${d.p75.toFixed(0)} ms (${d.sampleCount} samples)`)
.join('\n') +
'\n\n';
const byDevice = details
  .filter(d => d.metric === 'LCP')
  .reduce((acc, d) => {
    acc[d.device] = acc[d.device] || { total: 0, good: 0, count: 0 };
    acc[d.device].total += d.p75 * d.sampleCount;
    acc[d.device].good += d.goodPct * d.sampleCount;
    acc[d.device].count += d.sampleCount;
    return acc;
  }, {} as Record<string, { total: number; good: number; count: number }>);
const deviceBreakdown = '*Device Breakdown*\n' +
  Object.entries(byDevice)
    .map(([device, { total, good, count }]) =>
      `${device}: LCP p75 ≈ ${(total / count).toFixed(0)} ms, ${(good / count).toFixed(1)}% good`)
    .join('\n');
return {
  blocks: [
    { type: 'section', text: { type: 'mrkdwn', text: header + overview + topPages + deviceBreakdown } },
  ],
};
}
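generateWeeklyReport above also calls a sendSlackMessage helper that this guide leaves undefined. A minimal sketch against Slack's chat.postMessage Web API (SLACK_BOT_TOKEN is an assumed env var for a bot that has joined the target channel) might look like:

```typescript
// Hypothetical sendSlackMessage() helper assumed by generateWeeklyReport.
// Pure body builder, split out so it can be tested without network access.
export function buildSlackRequestBody(
  channel: string,
  message: { blocks?: object[] } | string
): Record<string, unknown> {
  return typeof message === 'string' ? { channel, text: message } : { channel, ...message };
}

export async function sendSlackMessage(
  channel: string,
  message: { blocks?: object[] } | string
): Promise<void> {
  const res = await fetch('https://slack.com/api/chat.postMessage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json; charset=utf-8',
      Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`,
    },
    body: JSON.stringify(buildSlackRequestBody(channel, message)),
  });
  const data = await res.json();
  if (!data.ok) throw new Error(`chat.postMessage failed: ${data.error}`);
}
```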
function formatValue(metric: string, value: number): string {
if (metric === 'CLS') return value.toFixed(3);
return `${value.toFixed(0)} ms`;
}
Scheduled Report via CI
# .github/workflows/perf-report.yml
name: Weekly Performance Report
on:
schedule:
- cron: '0 9 * * 1' # Every Monday at 09:00 UTC
jobs:
report:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- run: npm ci
- name: Generate and send report
run: npx tsx scripts/weekly-perf-report.ts
env:
METRICS_DB_URL: ${{ secrets.METRICS_DB_URL }}
SLACK_PERF_CHANNEL: ${{ secrets.SLACK_PERF_CHANNEL }}
SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
Grafana Dashboard (Optional)
For teams using Grafana, expose a JSON dashboard definition alongside the codebase.
// grafana/perf-dashboard.json (simplified)
{
"title": "Frontend Performance — Core Web Vitals",
"panels": [
{
"title": "LCP p75 Over Time",
"type": "timeseries",
"targets": [{ "rawSql": "SELECT time, quantile(0.75)(value) FROM rum_events WHERE metric='LCP' GROUP BY time ORDER BY time" }]
},
{
"title": "INP p75 Over Time",
"type": "timeseries",
"targets": [{ "rawSql": "SELECT time, quantile(0.75)(value) FROM rum_events WHERE metric='INP' GROUP BY time ORDER BY time" }]
},
{
"title": "CLS p75 Over Time",
"type": "timeseries",
"targets": [{ "rawSql": "SELECT time, quantile(0.75)(value) FROM rum_events WHERE metric='CLS' GROUP BY time ORDER BY time" }]
},
{
"title": "Good / Needs Improvement / Poor Distribution",
"type": "piechart",
"targets": [{ "rawSql": "SELECT rating, count() FROM rum_events WHERE timestamp > now() - INTERVAL 7 DAY GROUP BY rating" }]
}
]
}
Tooling Reference
Commonly used tools that can be installed and used directly, organized by workflow stage.
Development (Writing Code)
| Tool | Purpose | Install |
|---|---|---|
| React DevTools Profiler | Visualize component render times and identify unnecessary re-renders | Browser extension |
| why-did-you-render | Log unexpected React re-renders to console during development | npm i -D @welldone-software/why-did-you-render |
| Chrome DevTools Performance | Record and analyze runtime performance, layout shifts, long tasks | Built into Chrome |
| Import Cost | Show imported package size inline in your editor | VS Code extension |
why-did-you-render Setup
// src/wdyr.ts — import at the top of your entry file (dev only)
import React from 'react';
if (process.env.NODE_ENV === 'development') {
const { default: whyDidYouRender } = await import(
'@welldone-software/why-did-you-render'
);
whyDidYouRender(React, {
trackAllPureComponents: true,
logOnDifferentValues: true,
});
}
// Mark specific components to track
const MyComponent = React.memo(function MyComponent(props: Props) {
return <div>{props.value}</div>;
});
MyComponent.whyDidYouRender = true;
Pre-Commit (CI / Local Checks)
| Tool | Purpose | Install |
|---|---|---|
| size-limit | Fail CI when JS/CSS bundle exceeds budget | npm i -D size-limit @size-limit/preset-app |
| @next/bundle-analyzer | Generate interactive treemap of bundle contents | npm i -D @next/bundle-analyzer |
| source-map-explorer | Analyze bundle composition from source maps | npm i -D source-map-explorer |
| webpack-bundle-analyzer | Interactive treemap for Webpack-based projects | npm i -D webpack-bundle-analyzer |
| Lighthouse CI | Run Lighthouse in CI and assert score thresholds | npm i -D @lhci/cli |
| bundlephobia | Check package size before installing a dependency | Web: bundlephobia.com |
source-map-explorer
// package.json
{
"scripts": {
"analyze:sme": "next build && source-map-explorer .next/static/chunks/*.js"
}
}
Lighthouse CI (Local)
Run Lighthouse locally before pushing to validate against budgets.
# Install globally or as a devDependency
npm i -D @lhci/cli
# Run against local build
npx lhci autorun --config=lighthouserc.js
Post-Deploy (Production Monitoring)
| Tool | Purpose | Install / Access |
|---|---|---|
| web-vitals | Collect Core Web Vitals from real users | npm i web-vitals |
| Sentry Performance | Tracing + Web Vitals + error correlation | npm i @sentry/react |
| Datadog RUM | Full RUM with session replay and error tracking | npm i @datadog/browser-rum |
| SpeedCurve | Synthetic + RUM dashboards with budget alerts | SaaS |
| New Relic Browser | RUM + distributed tracing | SaaS |
| Google CrUX | Free real-user Chrome data for your origin | BigQuery / API |
| PageSpeed Insights API | On-demand Lighthouse + CrUX field data | REST API (free) |
Sentry Performance (Quick Start)
import * as Sentry from '@sentry/react';
Sentry.init({
dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
integrations: [
Sentry.browserTracingIntegration(),
Sentry.replayIntegration(), // required for the replay sample rates below
],
tracesSampleRate: 0.2, // 20% of page loads
replaysSessionSampleRate: 0.1,
});
Datadog RUM (Quick Start)
import { datadogRum } from '@datadog/browser-rum';
datadogRum.init({
applicationId: process.env.NEXT_PUBLIC_DD_APP_ID!,
clientToken: process.env.NEXT_PUBLIC_DD_CLIENT_TOKEN!,
site: 'datadoghq.com',
service: 'my-frontend',
env: process.env.NODE_ENV,
sessionSampleRate: 100,
sessionReplaySampleRate: 20,
trackUserInteractions: true,
trackResources: true,
trackLongTasks: true,
defaultPrivacyLevel: 'mask-user-input',
});
PageSpeed Insights API
Query field (CrUX) and lab data on demand — useful for scripted checks or dashboards.
// scripts/psi-check.ts
interface PSIResponse {
loadingExperience: {
metrics: Record<string, { percentile: number; category: string }>;
};
lighthouseResult: {
categories: { performance: { score: number } };
};
}
async function checkPageSpeed(url: string): Promise<PSIResponse> {
const apiUrl = `https://www.googleapis.com/pagespeedonline/v5/runPagespeed` +
`?url=${encodeURIComponent(url)}` +
`&strategy=mobile` +
`&category=PERFORMANCE` +
`&key=${process.env.PSI_API_KEY}`;
const res = await fetch(apiUrl);
return res.json();
}
// Usage
const result = await checkPageSpeed('https://example.com');
console.log('Performance score:', result.lighthouseResult.categories.performance.score);
console.log('LCP (field p75):', result.loadingExperience.metrics.LARGEST_CONTENTFUL_PAINT_MS.percentile);
CrUX API
Query real Chrome user experience data aggregated over 28 days — free, no SDK needed.
// scripts/crux-check.ts
async function queryCrUX(origin: string) {
const res = await fetch(
`https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`,
{
method: 'POST',
body: JSON.stringify({
origin,
metrics: [
'largest_contentful_paint',
'interaction_to_next_paint',
'cumulative_layout_shift',
],
}),
}
);
const data = await res.json();
for (const [metric, info] of Object.entries(data.record.metrics)) {
const { percentiles, histogram } = info as any;
console.log(`${metric}: p75=${percentiles.p75} good=${(histogram[0].density * 100).toFixed(1)}%`);
}
}
// Usage
await queryCrUX('https://example.com');
// largest_contentful_paint: p75=1823 good=78.2%
// interaction_to_next_paint: p75=142 good=91.5%
// cumulative_layout_shift: p75=0.04 good=88.7%
Vercel Performance Suite
Vercel provides a set of integrated performance tools for Next.js projects. They work out of the box on Vercel-hosted sites and require minimal configuration.
| Tool | Purpose | Install / Access |
|---|---|---|
| @vercel/speed-insights | Real-user Core Web Vitals per page, with score tracking | npm i @vercel/speed-insights |
| @vercel/analytics | Privacy-friendly web analytics (page views, visitors, referrers) | npm i @vercel/analytics |
| @vercel/otel | OpenTelemetry tracing for server-side (API routes, SSR, middleware) | npm i @vercel/otel |
| Vercel Toolbar | Dev overlay showing feature flags, visual feedback, and performance hints | Built into Vercel deployments |
| Vercel Image Optimization | Automatic WebP/AVIF conversion, resizing, and CDN caching via next/image | Built into Vercel |
Speed Insights (Real User Web Vitals)
Collects LCP, INP, CLS, FCP, TTFB from real users and surfaces per-page scores in the Vercel dashboard.
// app/layout.tsx
import { SpeedInsights } from '@vercel/speed-insights/next';
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
<html lang="en">
<body>
{children}
<SpeedInsights />
</body>
</html>
);
}
The <SpeedInsights /> component automatically reports Core Web Vitals on every page load. Results appear in Vercel Dashboard → Project → Speed Insights, including:
- Per-route p75 scores for each Core Web Vital
- Score trend over time (daily / weekly)
- Device and connection breakdown
- "Real Experience Score" (0–100) combining all vitals
To filter or enrich events before they are reported, pass a custom beforeSend:
<SpeedInsights
beforeSend={(data) => {
// Optionally filter or enrich
return data;
}}
/>
Web Analytics (Traffic + Performance)
Privacy-friendly analytics (no cookies) that pairs traffic data with performance metrics.
// app/layout.tsx
import { Analytics } from '@vercel/analytics/next';
import { SpeedInsights } from '@vercel/speed-insights/next';
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
<html lang="en">
<body>
{children}
<Analytics />
<SpeedInsights />
</body>
</html>
);
}
This gives the team a unified view of traffic and performance in the Vercel dashboard — no separate analytics platform needed for basic needs.
OpenTelemetry Tracing (Server-Side)
Trace server-side performance: API routes, SSR rendering, middleware, and external service calls.
// instrumentation.ts (Next.js instrumentation hook)
import { registerOTel } from '@vercel/otel';
export function register() {
registerOTel({
serviceName: 'my-next-app',
});
}
// next.config.ts — enable the instrumentation hook
const nextConfig = {
experimental: {
instrumentationHook: true, // not needed on Next.js 15+, where instrumentation.ts is stable
},
};
export default nextConfig;
Once enabled, Vercel automatically captures traces for:
- Server-side rendering (SSR) duration
- API route execution time
- Middleware latency
- External fetch calls (with fetch auto-instrumentation)
Traces appear in Vercel Dashboard → Project → Observability. For custom spans:
import { trace } from '@opentelemetry/api';
const tracer = trace.getTracer('my-app');
export async function getProducts() {
return tracer.startActiveSpan('getProducts', async (span) => {
try {
const products = await db.query('SELECT * FROM products');
span.setAttribute('product.count', products.length);
return products;
} finally {
span.end();
}
});
}
Vercel Toolbar
The Vercel Toolbar appears automatically on preview deployments and provides:
- Performance hints — flags slow pages and layout shifts
- Visual feedback — leave comments on preview deploys for the team
- Feature flags — toggle flags per-session (requires Vercel Flags SDK)
For local development, install the toolbar package:
npm i @vercel/toolbar
// next.config.ts
import { withVercelToolbar } from '@vercel/toolbar/plugins/next';
export default withVercelToolbar()({
// your next config
});
Vercel Deployment Summary
Every deployment on Vercel includes an automatic performance summary:
Deployment Performance Summary
├── Speed Insights Score: 92 / 100
├── Build Duration: 45s
├── Cold Start (Serverless): 120ms
├── Edge Middleware: 3ms
├── Static Assets: 142 files, CDN-cached
└── Image Optimization: 28 images auto-converted to WebP
This is available in the deployment details page without any configuration.
Reporting
| Tool | Purpose | Access |
|---|---|---|
| CrUX Dashboard | Free auto-updating Data Studio dashboard from Chrome field data | Google Data Studio template |
| Grafana | Self-hosted dashboards connected to your metrics store | Open source |
| SpeedCurve | Automated performance reports with budget tracking | SaaS |
| Calibre | Performance monitoring with team-focused reporting | SaaS |
CrUX Dashboard (Zero Setup)
Google provides a free, auto-updating dashboard built on CrUX data:
- Go to the CrUX Dashboard template in Looker Studio
- Enter your origin (e.g. https://example.com)
- Dashboard auto-populates with monthly Core Web Vitals data
- Share the link with the team — it updates automatically
This requires no code, no SDK, and covers all Chrome users on your origin.
Best Practices
Performance Implementation Guidelines
- Shift left — catch issues in code review and CI, not production
- Set budgets — define thresholds for bundle size and Core Web Vitals; fail CI on breach
- Measure real users — synthetic tests are useful, but RUM data reflects actual experience
- Automate reporting — weekly digests keep the team aligned without manual effort
- Segment data — break metrics down by page, device, and connection to find targeted improvements
- Alert on regressions — use p75 thresholds to trigger immediate notifications
- Track trends — week-over-week comparisons reveal gradual drift before it becomes critical
- Close the loop — link performance improvements back to user experience and business outcomes
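As a concrete artifact for the "set budgets" guideline: the scheduled Lighthouse check earlier passes budgetPath: ./budget.json, but the budget file itself is never shown. A minimal sketch of a Lighthouse budget file could be the following, with illustrative numbers mirroring the thresholds used earlier (300 kB of script, 2500 ms LCP, 300 ms TBT), not recommendations:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "total-blocking-time", "budget": 300 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "stylesheet", "budget": 50 },
      { "resourceType": "total", "budget": 1000 }
    ]
  }
]
```

In the Lighthouse budgets format, resource-size budgets are specified in kilobytes and timing budgets in milliseconds.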