Mock APIs · Testing · API Drift

Mock API vs Production API: How to Guarantee They Stay in Sync

Kiran Mayee · March 25, 2025 · 8 min read

Mock APIs are supposed to be faithful replicas of your production endpoints. But in practice, they're snapshots in time — frozen the day you created them while the real API continues to evolve.

This is the mock-production synchronisation problem, and every team that separates frontend and backend development hits it eventually.

How Mock APIs Fall Out of Sync

It starts innocently. The backend team ships a new version:

  • A field gets renamed: firstName → first_name (camelCase to snake_case migration).
  • A nullable field becomes required.
  • A string enum gets a new value that the frontend doesn't handle.
  • A nested object gains a new child that changes the shape of the response.
  • Pagination metadata moves from headers to the response body.

None of these changes break the backend's own tests. The contract in the OpenAPI spec might even be updated — but nobody re-generates the mock.
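To make the failure mode concrete, here's a hedged sketch of the first bullet: a camelCase-to-snake_case rename slipping past a type-checked frontend at runtime. The `User` type and `greeting` helper are hypothetical consumer code, not part of any real API.

```typescript
// Hypothetical consumer code, typed against the old (mocked) response shape.
interface User {
  firstName: string; // production has since renamed this to `first_name`
}

function greeting(user: User): string {
  // Compiles cleanly and passes every mock-backed test. Against production,
  // `user.firstName` is undefined and the UI renders "Hello, undefined".
  return `Hello, ${user.firstName}`;
}

// What the mock still returns vs. what production now returns:
const fromMock = { firstName: "Ada" } as User;
const fromProd = { first_name: "Ada" } as unknown as User;

console.log(greeting(fromMock)); // "Hello, Ada"
console.log(greeting(fromProd)); // "Hello, undefined"
```

The type system can't save you here: the mock satisfies the type, so the drift is invisible until a real response arrives.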

The Testing Gap

Here's the testing stack most teams rely on:

  • Unit tests — test individual functions in isolation. Don't catch API shape changes.
  • Integration tests — test against mock servers or fixtures. Pass even when production has changed.
  • E2E tests — test against staging. But staging might be outdated or broken for unrelated reasons.
  • Contract tests (Pact) — require both provider and consumer to be in sync. Break down when one side doesn't update.

The gap is clear: no test in the standard stack compares what the mock returns against what production actually returns.

The Automated Sync Approach

The solution is automated comparison. Here's how it works with moqapi.dev:

1. Define the Production URL

Point your mock API at the corresponding production endpoint:

Production: https://api.myapp.com/v1/users
Mock:       https://moqapi.dev/api/invoke/{projectId}/users-api/users

2. Enable Drift Detection

Toggle drift detection in the mock API settings. The system will periodically call both endpoints and compare the responses structurally.
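The core of that comparison can be sketched roughly as a recursive shape diff. This is a simplified illustration under my own assumptions, not moqapi.dev's actual implementation; `diffShape` and its issue labels are modeled on the report format shown in the next step.

```typescript
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

interface Drift {
  path: string;
  issue: "field_missing_in_production" | "field_missing_in_mock" | "type_mismatch";
}

// Recursively compare the *shape* of two JSON values, ignoring data values.
function diffShape(mock: Json, prod: Json, path = "$"): Drift[] {
  const kind = (v: Json) =>
    Array.isArray(v) ? "array" : v === null ? "null" : typeof v;
  if (kind(mock) !== kind(prod)) return [{ path, issue: "type_mismatch" }];

  if (Array.isArray(mock) && Array.isArray(prod)) {
    // Compare the first element of each array as a representative sample.
    if (mock.length && prod.length) return diffShape(mock[0], prod[0], `${path}[0]`);
    return [];
  }

  if (kind(mock) === "object") {
    const m = mock as { [k: string]: Json };
    const p = prod as { [k: string]: Json };
    const drifts: Drift[] = [];
    for (const key of Object.keys(m)) {
      if (!(key in p)) drifts.push({ path: `${path}.${key}`, issue: "field_missing_in_production" });
      else drifts.push(...diffShape(m[key], p[key], `${path}.${key}`));
    }
    for (const key of Object.keys(p)) {
      if (!(key in m)) drifts.push({ path: `${path}.${key}`, issue: "field_missing_in_mock" });
    }
    return drifts;
  }
  return []; // same primitive type: values may differ, but the shape matches
}
```

Running this against the firstName/first_name example yields three drifts: one field missing in production and two new fields missing in the mock.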

3. Review the Drift Report

When differences are found, you get a detailed report:

{
  "endpoint": "/users",
  "driftScore": 72,
  "fields": [
    {
      "path": "$.data[0].firstName",
      "issue": "field_missing_in_production",
      "severity": "critical",
      "suggestion": "Field renamed to first_name in production"
    },
    {
      "path": "$.data[0].avatar_url",
      "issue": "field_missing_in_mock",
      "severity": "info",
      "suggestion": "New field in production, not yet in mock"
    }
  ]
}

4. Auto-Update or Alert

Based on the severity, you can configure the system to:

  • Alert only — send a Slack/Discord notification with the drift report.
  • Block deployments — fail the CI pipeline if critical drift is detected.
  • Auto-update mock — update the mock API schema to match production (for non-breaking additions).
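A "block deployments" gate can be as small as a script that parses the drift report and fails the build on critical entries. The sketch below assumes a report shaped like the example above; how you fetch the report and wire it into CI is up to your pipeline.

```typescript
// Shape of the drift report shown earlier (simplified).
interface DriftField { path: string; severity: "critical" | "warning" | "info" }
interface DriftReport { endpoint: string; driftScore: number; fields: DriftField[] }

// Policy: block the deploy only on breaking (critical) drift.
function shouldBlockDeploy(report: DriftReport): boolean {
  return report.fields.some((f) => f.severity === "critical");
}

const report: DriftReport = {
  endpoint: "/users",
  driftScore: 72,
  fields: [
    { path: "$.data[0].firstName", severity: "critical" },
    { path: "$.data[0].avatar_url", severity: "info" },
  ],
};

if (shouldBlockDeploy(report)) {
  console.error(`Critical drift detected on ${report.endpoint}`);
  // In a real pipeline you would call process.exit(1) here to fail the build.
}
```

Keeping the policy in one small function makes it easy to tighten later, e.g. to also block on warnings above a driftScore threshold.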

What Gets Compared

The comparison goes beyond simple JSON diff. Here's what's checked:

  • Response status codes — mock returns 200, production returns 201? That's a mismatch.
  • Field presence — every field in the mock response should exist in production, and vice versa.
  • Data types — a field that's a string in the mock but a number in production is a critical drift.
  • Nested structures — objects that become arrays, or depths that change.
  • Array element shapes — the schema of items inside arrays is compared.
  • Header differences — content-type, pagination headers, rate limit headers.

Handling Intentional Differences

Not every difference is a problem. Mock APIs often return different data values (that's the point). The comparison focuses on structural differences, not value differences. You can also suppress specific paths:

// Suppress known intentional differences
{
  "suppressions": [
    { "path": "$.meta.request_id", "reason": "Dynamic value" },
    { "path": "$.data[*].id", "reason": "Different IDs expected" }
  ]
}
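Matching a suppression pattern like `$.data[*].id` against a concrete drift path can be done with a small wildcard-to-regex translation. This is a sketch of one plausible approach, not the platform's actual matcher; `isSuppressed` is a name I've made up for illustration.

```typescript
// Match a concrete drift path (e.g. "$.data[3].id") against suppression
// patterns (e.g. "$.data[*].id"), where [*] matches any array index.
function isSuppressed(path: string, suppressions: { path: string }[]): boolean {
  return suppressions.some(({ path: pattern }) => {
    // Escape regex metacharacters, then turn the literal "[*]" into "[<digits>]".
    const escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    const regex = new RegExp("^" + escaped.replace(/\\\[\\\*\\\]/g, "\\[\\d+\\]") + "$");
    return regex.test(path);
  });
}

const suppressions = [
  { path: "$.meta.request_id" },
  { path: "$.data[*].id" },
];

console.log(isSuppressed("$.data[3].id", suppressions));   // true  (wildcard index)
console.log(isSuppressed("$.data[3].name", suppressions)); // false (still reported)
```

Escaping before expanding the wildcard matters: `$`, `.`, and `[` are all regex metacharacters, so a naive string-to-regex conversion would match far more than intended.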

Real-World Impact

A startup with 12 mock APIs ran their first drift scan and found:

  • 3 critical — field renames that would crash the frontend.
  • 7 warnings — new optional fields that the frontend wasn't rendering.
  • 15 info — additional fields in production that were harmless.

All 3 critical issues were in endpoints that the team had manually tested "recently." The drift had been introduced in the last two sprints without anyone noticing.

Key Takeaways

  • Mock APIs are snapshots — they drift from production automatically over time.
  • Traditional testing stacks don't compare mock responses against production responses.
  • Automated drift detection bridges this gap by structurally comparing both endpoints.
  • Severity classification prevents alert fatigue — focus on breaking changes, monitor the rest.
  • Suppression rules handle intentional differences without false positives.

Keep your mocks honest at moqapi.dev/signup.


About the Author

Kiran Mayee

Founder and sole developer of moqapi.dev. Full-stack engineer with deep experience in API platforms, serverless runtimes, and developer tooling. Built moqapi to solve the mock data and deployment friction she experienced firsthand building production APIs.
