REST API Pagination Patterns: Complete Reference

Type: Software Reference Confidence: 0.92 Sources: 8 Verified: 2026-02-24 Freshness: 2026-02-24

TL;DR

Constraints

Quick Reference

| Pattern | Performance at Scale | Consistency | Jump to Page | Implementation | Best For |
|---|---|---|---|---|---|
| Offset/Limit | Poor (O(offset+limit) scan) | Weak (page drift on inserts) | Yes | ?offset=20&limit=10 | Small datasets (<10K), admin UIs |
| Cursor (opaque token) | Excellent (O(limit) constant) | Strong (stable position) | No | ?cursor=eyJpZCI6MTB9&limit=10 | Large datasets, real-time feeds, mobile infinite scroll |
| Keyset/Seek | Excellent (O(limit) constant) | Strong (stable position) | No | ?after_id=42&after_date=2026-01-15&limit=10 | Large sorted datasets, time-series, logs |
| Page Number | Poor (offset in disguise) | Weak (same as offset) | Yes | ?page=3&per_page=25 | Simple UIs, small static datasets |
| Time-Based | Good (index on timestamp) | Moderate (clock skew risk) | No | ?since=2026-01-01T00:00:00Z&limit=50 | Event streams, audit logs, sync APIs |
| Hybrid (cursor + page count) | Good (cursor + cached count) | Strong for data, eventual for count | Limited (first/last only) | ?cursor=abc&limit=10 + total_count header | APIs needing both efficiency and total counts |

Decision Tree

START
|-- Dataset size > 1M rows?
|   |-- YES --> Use Cursor or Keyset pagination
|   |   |-- Need to expose sort values in URL (transparent)?
|   |   |   |-- YES --> Keyset/Seek (after_id + after_sort_key)
|   |   |   +-- NO --> Cursor with opaque base64 token
|   +-- NO --> continue below
|-- Dataset size 10K-1M rows?
|   |-- YES --> Do clients need page-jump (go to page N)?
|   |   |-- YES --> Offset/Limit with cached total count
|   |   +-- NO --> Cursor pagination (simpler, more consistent)
|   +-- NO --> continue below
|-- Dataset < 10K rows?
|   |-- YES --> Do clients need page numbers in UI?
|   |   |-- YES --> Page Number (?page=N&per_page=M)
|   |   +-- NO --> Offset/Limit (simple, good enough)
|   +-- NO --> continue below
+-- Is data time-series or event log?
    |-- YES --> Time-Based (?since=timestamp&limit=N)
    +-- NO --> DEFAULT: Cursor pagination (safest general choice)
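The tree above can be sketched as a small helper function. This is purely illustrative (choosePattern and its option names are not from any library), and it reads the time-series branch as taking precedence over the size checks:

```javascript
// Sketch of the decision tree as code. All names are illustrative.
function choosePattern({
  rowCount = 0,
  needsPageJump = false,
  needsPageNumbers = false,
  isTimeSeries = false,
  transparentKeys = false,
} = {}) {
  if (isTimeSeries) return 'time-based';            // ?since=timestamp&limit=N
  if (rowCount > 1_000_000) {
    return transparentKeys ? 'keyset' : 'cursor';   // never offset at this scale
  }
  if (rowCount > 10_000) {
    return needsPageJump ? 'offset' : 'cursor';     // offset only with cached counts
  }
  if (rowCount > 0) {
    return needsPageNumbers ? 'page-number' : 'offset'; // small data: simple is fine
  }
  return 'cursor';                                  // unknown size: safest default
}
```

For example, choosePattern({ rowCount: 5_000_000 }) yields 'cursor', matching the tree's large-dataset branch.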

Step-by-Step Guide

1. Choose your pagination pattern

Evaluate your dataset size, consistency needs, and client requirements using the decision tree above. For most modern APIs serving mobile or SPA clients, cursor-based pagination is the recommended default. [src1]

Verify: Answer these three questions: (1) How many rows? (2) Do clients need page-jump? (3) Is data frequently inserted/deleted?

2. Design the response envelope

Every paginated endpoint should return a consistent response shape. Include the data array, pagination metadata, and navigation links. [src5]

{
  "data": [...],
  "pagination": {
    "has_more": true,
    "next_cursor": "eyJpZCI6NDIsImNyZWF0ZWRfYXQiOiIyMDI2LTAxLTE1VDEwOjMwOjAwWiJ9",
    "limit": 25
  },
  "links": {
    "next": "/api/v1/items?cursor=eyJpZCI6NDJ9&limit=25"
  }
}

Verify: Ensure has_more is false on the last page and cursors are null when no more pages exist.
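The Verify step can be automated with a small invariant check run against any response body. This is a sketch assuming the exact envelope shape shown above (validateEnvelope is a hypothetical helper, not a library function):

```javascript
// Checks the pagination envelope invariants from Step 2.
// Returns an array of violation messages; empty means the envelope is valid.
function validateEnvelope(body) {
  const errors = [];
  if (!Array.isArray(body.data)) errors.push('data must be an array');
  const p = body.pagination || {};
  if (typeof p.has_more !== 'boolean') errors.push('has_more must be a boolean');
  // Invariant: a next_cursor exists exactly when more pages exist
  if (p.has_more === true && p.next_cursor == null) {
    errors.push('has_more=true requires a non-null next_cursor');
  }
  if (p.has_more === false && p.next_cursor != null) {
    errors.push('last page must have next_cursor=null');
  }
  if (!Number.isInteger(p.limit) || p.limit < 1) {
    errors.push('limit must be a positive integer');
  }
  return errors;
}
```

Running this in integration tests against the first and last pages of each endpoint catches the most common envelope regressions.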

3. Create the database index

Keyset and cursor pagination only perform well if the sort columns are indexed. Create a composite index matching your ORDER BY clause. [src2]

-- For pagination sorted by created_at DESC, id DESC
CREATE INDEX idx_items_pagination ON items (created_at DESC, id DESC);

Verify: EXPLAIN ANALYZE should show Index Scan, not Seq Scan.

4. Implement cursor encoding/decoding

Encode the last row's sort values into an opaque, base64-encoded cursor token. Decode on the server to build the WHERE clause. [src4]

function encodeCursor(row) {
  return Buffer.from(JSON.stringify({
    id: row.id, created_at: row.created_at
  })).toString('base64url');
}
function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString());
}

Verify: decodeCursor(encodeCursor({ id: 42, created_at: '2026-01-15' })) returns original values.
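Note that decodeCursor as written will throw on a malformed or truncated token, turning a bad client request into a 500. A defensive variant (illustrative, not from the source) returns null so the handler can respond 400 instead:

```javascript
// Defensive cursor decoding: invalid input yields null instead of throwing.
function safeDecodeCursor(cursor) {
  try {
    const decoded = JSON.parse(Buffer.from(cursor, 'base64url').toString());
    // Reject tokens missing the expected sort keys
    if (decoded == null || decoded.id == null || decoded.created_at == null) {
      return null;
    }
    return decoded;
  } catch {
    // Not valid base64url/JSON -- treat as a bad cursor
    return null;
  }
}
```

The handler can then map a null result to HTTP 400 ("invalid cursor") rather than letting the exception surface as a server error.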

5. Build the query with keyset filtering

Use row-value comparison or compound WHERE clauses to seek to the correct position. [src2]

-- PostgreSQL: row-value comparison (matches the composite index from Step 3)
SELECT id, title, created_at FROM items
WHERE (created_at, id) < (:last_created_at, :last_id)
ORDER BY created_at DESC, id DESC
LIMIT :page_size;

-- Equivalent compound form for databases that lack row-value
-- comparison (e.g., SQL Server) or do not optimize it well:
--   WHERE created_at < :last_created_at
--      OR (created_at = :last_created_at AND id < :last_id)

Verify: Run with EXPLAIN -- the plan should use the composite index from Step 3.

6. Add server-side limit enforcement

Never trust client-provided limits. Clamp to a maximum and provide a sensible default. [src5]

const MAX_LIMIT = 100;
const DEFAULT_LIMIT = 25;
function sanitizeLimit(requested) {
  const limit = parseInt(requested, 10) || DEFAULT_LIMIT;
  return Math.min(Math.max(1, limit), MAX_LIMIT);
}

Verify: sanitizeLimit(500) returns 100, sanitizeLimit(undefined) returns 25.

Code Examples

Node.js/Express: All Three Pagination Styles

// Input:  GET /api/items?mode=cursor&cursor=abc&limit=25
// Output: { data: [...], pagination: { has_more, next_cursor, limit } }

const express = require('express'); // ^4.18
const app = express();
// `db` is assumed to be a pg-style client (e.g., a node-postgres Pool)
const MAX_LIMIT = 100, DEFAULT_LIMIT = 25;

app.get('/api/items', async (req, res) => {
  const limit = Math.min(Math.max(1, parseInt(req.query.limit, 10) || DEFAULT_LIMIT), MAX_LIMIT);
  const mode = req.query.mode || 'cursor';
  let query, params;

  if (mode === 'offset') {
    const offset = Math.max(0, parseInt(req.query.offset, 10) || 0);
    query = 'SELECT * FROM items ORDER BY created_at DESC, id DESC LIMIT $1 OFFSET $2';
    params = [limit + 1, offset];
  } else if (mode === 'keyset') {
    const afterId = parseInt(req.query.after_id, 10);
    const afterDate = req.query.after_date;
    if (afterId && afterDate) {
      query = `SELECT * FROM items WHERE (created_at, id) < ($1, $2)
               ORDER BY created_at DESC, id DESC LIMIT $3`;
      params = [afterDate, afterId, limit + 1];
    } else {
      query = 'SELECT * FROM items ORDER BY created_at DESC, id DESC LIMIT $1';
      params = [limit + 1];
    }
  } else {
    // Cursor -- opaque base64 token (recommended default)
    let where = '';
    params = [limit + 1];
    if (req.query.cursor) {
      const { id, created_at } = JSON.parse(
        Buffer.from(req.query.cursor, 'base64url').toString()
      );
      where = 'WHERE (created_at, id) < ($2, $3)';
      params.push(created_at, id);
    }
    query = `SELECT * FROM items ${where} ORDER BY created_at DESC, id DESC LIMIT $1`;
  }

  // Fetch limit+1 rows: the extra row signals whether another page exists
  const { rows } = await db.query(query, params);
  const hasMore = rows.length > limit;
  const data = hasMore ? rows.slice(0, limit) : rows;
  const last = data[data.length - 1];
  const nextCursor = hasMore && mode === 'cursor'
    ? Buffer.from(JSON.stringify({ id: last.id, created_at: last.created_at })).toString('base64url')
    : null;
  res.json({ data, pagination: { has_more: hasMore, next_cursor: nextCursor, limit } });
});

Python/FastAPI: Cursor Pagination with SQLAlchemy

# Input:  GET /items?cursor=eyJpZCI6NDJ9&limit=25
# Output: { "data": [...], "pagination": { "has_more": true, "next_cursor": "...", "limit": 25 } }

import base64, json
from datetime import datetime
from fastapi import FastAPI, Query  # fastapi>=0.109
from sqlalchemy import select, and_  # sqlalchemy>=2.0

app = FastAPI()
MAX_LIMIT = 100
DEFAULT_LIMIT = 25

def encode_cursor(row) -> str:
    payload = json.dumps({"id": row.id, "created_at": row.created_at.isoformat()})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(cursor: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(cursor.encode()).decode())

@app.get("/items")
async def list_items(cursor: str | None = Query(None), limit: int = Query(DEFAULT_LIMIT, ge=1, le=MAX_LIMIT)):
    # Assumes an Item ORM model and an async `db` session are defined elsewhere
    stmt = select(Item).order_by(Item.created_at.desc(), Item.id.desc())
    if cursor:
        decoded = decode_cursor(cursor)
        created = datetime.fromisoformat(decoded["created_at"])
        # Equivalent to the row-value comparison (created_at, id) < (created, id)
        stmt = stmt.where(and_(
            Item.created_at <= created,
            ~and_(Item.created_at == created, Item.id >= decoded["id"])
        ))
    stmt = stmt.limit(limit + 1)
    rows = (await db.execute(stmt)).scalars().all()
    has_more = len(rows) > limit
    data = rows[:limit] if has_more else rows
    next_cursor = encode_cursor(data[-1]) if has_more and data else None
    return {"data": [i.to_dict() for i in data], "pagination": {"has_more": has_more, "next_cursor": next_cursor, "limit": limit}}

Go: Keyset Pagination with database/sql

// Input:  GET /items?after_id=42&after_ts=2026-01-15T10:30:00Z&limit=25
// Output: JSON { "data": [...], "pagination": { "has_more": true, ... } }

import (
    "database/sql"
    "net/http"
    "strconv"
    "time"
)

func listItems(db *sql.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
        if limit <= 0 {
            limit = 25
        }
        if limit > 100 {
            limit = 100
        }
        afterID, _ := strconv.Atoi(r.URL.Query().Get("after_id"))
        afterTS := r.URL.Query().Get("after_ts")
        var rows *sql.Rows
        var err error
        if afterID > 0 && afterTS != "" {
            ts, tsErr := time.Parse(time.RFC3339, afterTS)
            if tsErr != nil {
                http.Error(w, "invalid after_ts: expected RFC 3339 timestamp", http.StatusBadRequest)
                return
            }
            rows, err = db.Query(
                `SELECT id, title, created_at FROM items
                 WHERE (created_at, id) < ($1, $2)
                 ORDER BY created_at DESC, id DESC LIMIT $3`, ts, afterID, limit+1)
        } else {
            rows, err = db.Query(
                `SELECT id, title, created_at FROM items
                 ORDER BY created_at DESC, id DESC LIMIT $1`, limit+1)
        }
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer rows.Close()
        // ... scan rows, detect has_more, return JSON
    }
}

Anti-Patterns

Wrong: Using OFFSET on large tables

-- BAD -- database scans and discards 1M rows, returns 25
SELECT * FROM items ORDER BY created_at DESC LIMIT 25 OFFSET 1000000;
-- At offset 1M: ~2-5 seconds (vs. 2ms for first page)

Correct: Use keyset/seek to jump directly

-- GOOD -- database seeks directly via index, reads only 25 rows
SELECT * FROM items
WHERE (created_at, id) < ('2026-01-15T10:30:00Z', 42)
ORDER BY created_at DESC, id DESC LIMIT 25;
-- Constant ~2ms regardless of position

Wrong: Exposing raw database IDs as cursors

// BAD -- leaks internal IDs, allows enumeration
app.get('/api/items', (req, res) => {
  const cursor = parseInt(req.query.cursor);
  db.query('SELECT * FROM items WHERE id < $1 ORDER BY id DESC LIMIT 25', [cursor]);
});

Correct: Use opaque, encoded cursor tokens

// GOOD -- opaque token hides internals
app.get('/api/items', (req, res) => {
  const { id, created_at } = JSON.parse(Buffer.from(req.query.cursor, 'base64url').toString());
  db.query('SELECT * FROM items WHERE (created_at, id) < ($1, $2) ORDER BY created_at DESC, id DESC LIMIT 25',
    [created_at, id]);
});

Wrong: No server-side limit enforcement

// BAD -- client can request limit=999999
app.get('/api/items', (req, res) => {
  const limit = parseInt(req.query.limit); // unbounded!
  db.query('SELECT * FROM items LIMIT $1', [limit]);
});

Correct: Clamp limit to a safe maximum

// GOOD -- enforce bounds server-side
const MAX_LIMIT = 100;
app.get('/api/items', (req, res) => {
  const limit = Math.min(Math.max(1, parseInt(req.query.limit) || 25), MAX_LIMIT);
  db.query('SELECT * FROM items ORDER BY id DESC LIMIT $1', [limit]);
});

Wrong: Computing COUNT(*) on every request

// BAD -- full table scan on every request for total_count
const [{ count }] = await db.query('SELECT COUNT(*) FROM items'); // slow on large tables

Correct: Cache total count or use has_more flag

// GOOD -- fetch limit+1 rows to detect has_more; cache count separately
const { rows } = await db.query('SELECT * FROM items ORDER BY id DESC LIMIT $1', [limit + 1]);
const hasMore = rows.length > limit;
// getOrSet is an illustrative cache helper: (key, loader, ttlSeconds)
const total = await cache.getOrSet('items_count', () => db.query('SELECT COUNT(*)...'), 60);

Common Pitfalls

Diagnostic Commands

-- Check if pagination query uses an index (PostgreSQL)
EXPLAIN ANALYZE SELECT * FROM items
WHERE (created_at, id) < ('2026-01-15', 42)
ORDER BY created_at DESC, id DESC LIMIT 25;

-- Compare offset vs keyset performance
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM items ORDER BY created_at DESC LIMIT 25 OFFSET 100000;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM items
WHERE (created_at, id) < ('2025-06-01', 500000)
ORDER BY created_at DESC, id DESC LIMIT 25;

-- Check index existence for pagination columns
SELECT indexname, indexdef FROM pg_indexes
WHERE tablename = 'items' AND indexdef LIKE '%created_at%';

-- Monitor slow paginated queries (requires the pg_stat_statements extension)
SELECT query, calls, mean_exec_time, rows
FROM pg_stat_statements
WHERE query LIKE '%OFFSET%' AND mean_exec_time > 100
ORDER BY mean_exec_time DESC LIMIT 10;

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Dataset > 100K rows and growing | Dataset is small and static (<1K rows) | Return all results in a single response |
| Clients consume data sequentially (infinite scroll) | Clients need random page access (go to page 47) | Offset/Limit with cached total count |
| Data changes frequently (real-time feeds, logs) | Data is immutable with known total | Simple offset pagination |
| Building a public API for third parties | Internal admin dashboard with small datasets | Page number pagination for simplicity |
| Need consistent results during concurrent writes | GraphQL API | Relay cursor connections specification |

Important Caveats

Related Units