Rate Limiting
Protect serve({ queries }) runtimes with middleware-based rate limiting.
Use rateLimit(...) to protect your runtime from abuse. In the main query + serve path you apply it as runtime middleware; for a single query, the builder-compatible .use(...) surface is still available.
Global rate limiting
Apply one policy to the whole runtime:
import { initServe, rateLimit } from '@hypequery/serve';

const { query, serve } = initServe({
  context: () => ({ db }),
  middlewares: [
    rateLimit({
      windowMs: 60_000,
      max: 100,
    }),
  ],
});

This is a good default for public APIs and browser-facing runtimes.
Per-query rate limiting
If you need rate limiting on one specific query, use the builder-compatible middleware surface:
const adminMetrics = query
  .use(
    rateLimit({
      windowMs: 60_000,
      max: 20,
    })
  )
  .query(async ({ ctx }) => {
    return ctx.db.table('metrics').select('*').execute();
  });

Per-tenant or custom keys
Use keyBy when the limit should be scoped by tenant, user, or some other request-derived identity:
const api = serve({
  queries: { activeUsers },
  middlewares: [
    rateLimit({
      windowMs: 60_000,
      max: 50,
      keyBy: (ctx) => ctx.auth?.tenantId ?? null,
    }),
  ],
});

If keyBy returns null, the request skips rate limiting.
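The null-skip contract is easy to see in isolation. A plain-function sketch of the keyBy callback above (the Ctx shape and the tenantKey name are illustrative, not part of the library):

```typescript
// Hypothetical context shape, for illustration only.
interface Ctx {
  auth?: { tenantId?: string };
}

// Mirrors the keyBy callback: returns a per-tenant key,
// or null to signal that the request should skip rate limiting.
const tenantKey = (ctx: Ctx): string | null =>
  ctx.auth?.tenantId ?? null;

console.log(tenantKey({ auth: { tenantId: 'acme' } })); // 'acme'
console.log(tenantKey({})); // null: request skips rate limiting
```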
Options
windowMs sets the time window.
max sets the maximum allowed hits in that window.
keyBy controls how the rate-limit key is derived.
store lets you use a custom backend instead of the in-memory store.
headers enables or disables rate-limit headers on 429 responses.
message customizes the 429 response message.
failOpen controls whether store failures should skip limiting or fail the request.
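How windowMs and max interact follows standard fixed-window semantics. A minimal self-contained sketch of that interaction (not the library's implementation; the function name and the explicit now parameter are illustrative):

```typescript
// Fixed-window counting: each key gets up to `max` hits per
// `windowMs`-long window, then the counter resets.
function makeFixedWindowLimiter(windowMs: number, max: number) {
  const hits = new Map<string, { count: number; resetAt: number }>();
  return (key: string, now: number): boolean => {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      // First hit in a fresh window: start a new counter.
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // reject once max is exceeded
  };
}

const allow = makeFixedWindowLimiter(60_000, 2);
console.log(allow('k', 0));      // true
console.log(allow('k', 1_000));  // true
console.log(allow('k', 2_000));  // false: third hit inside the window
console.log(allow('k', 61_000)); // true: a new window has started
```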
Custom store
Use a custom RateLimitStore when you need a shared backend such as Redis:
import type { RateLimitStore } from '@hypequery/serve';

const redisStore: RateLimitStore = {
  increment: async (key, windowMs) => {
    // increment in Redis
    return 1;
  },
  getTtl: async (key) => {
    // return remaining ttl
    return 60_000;
  },
  reset: async (key) => {
    // clear stored counter
  },
};
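For local development or tests, the same interface can back an in-memory implementation. A minimal sketch, with RateLimitStore redeclared locally so it runs standalone; the fixed-window counting semantics and the createMemoryStore name are assumptions, not library behavior:

```typescript
// Redeclared locally for a self-contained sketch;
// in real code, import RateLimitStore from '@hypequery/serve'.
interface RateLimitStore {
  increment: (key: string, windowMs: number) => Promise<number>;
  getTtl: (key: string) => Promise<number>;
  reset: (key: string) => Promise<void>;
}

// In-memory store with fixed-window expiry: increment counts hits
// per key, getTtl reports time until the window resets, reset clears.
function createMemoryStore(): RateLimitStore {
  const entries = new Map<string, { count: number; expiresAt: number }>();
  return {
    increment: async (key, windowMs) => {
      const now = Date.now();
      const entry = entries.get(key);
      if (!entry || now >= entry.expiresAt) {
        entries.set(key, { count: 1, expiresAt: now + windowMs });
        return 1;
      }
      entry.count += 1;
      return entry.count;
    },
    getTtl: async (key) => {
      const entry = entries.get(key);
      return entry ? Math.max(0, entry.expiresAt - Date.now()) : 0;
    },
    reset: async (key) => {
      entries.delete(key);
    },
  };
}
```

A store built this way would be passed through the store option, e.g. rateLimit({ windowMs: 60_000, max: 50, store: createMemoryStore() }).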