Embedded Runtime
Run hypequery definitions directly inside your application runtime
We've simplified the serve API in v0.2. If you're looking for the older builder-first API, see Migrate from v0.1.x Serve API and v0.1.x Serve API.
You don't need HTTP to execute hypequery metrics. Every serve({ queries }) export exposes an embedded runtime so you can call queries directly from SSR routes, cron jobs, queues, or AI agents.
Lifecycle
- Initialization – Import your `analytics/queries.ts` (or equivalent) so your `initServe()` + `serve()` module runs. This wires up middleware, auth strategies, tenant config, docs/OpenAPI, etc. Make sure env vars are loaded before the import (e.g., `import 'dotenv/config'`).
- Context creation – Whenever you call the runtime, hypequery builds a `ctx` object that includes the request metadata, auth context (if any), tenant helpers, and whatever you return from the `context` factory (e.g., `db`, cache clients, tracing IDs).
- Middleware + hooks – Global and per-endpoint middleware run around your query, just as they do for HTTP requests. Lifecycle hooks (`onRequestStart`, `onRequestEnd`, etc.) fire as well, so logging and metrics stay consistent.
- Execution – The query resolver executes against your ClickHouse connection (or any other resources you injected). If the resolver returns a value, hypequery serializes it exactly as it would for an HTTP response.
- Hot reload expectations – During development the CLI reloads your `queries.ts` file automatically, so edits are picked up immediately. In production you control deployments; keep your `serve()` module instantiated once per process to avoid re-registering endpoints.
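The initialization step above can be sketched as a minimal worker entrypoint. This is illustrative, not prescriptive: it assumes a `dotenv`-based env setup, a `./analytics/queries` module that exports `api` from `serve()`, and an `activeUsers` endpoint like the one used in the examples below.

```ts
// Load env vars FIRST so the serve() module sees ClickHouse creds on import.
import 'dotenv/config';

// Importing this module runs your initServe() + serve() code once per process,
// registering endpoints, middleware, auth strategies, and tenant config.
import { api } from './analytics/queries';

async function main() {
  // Every call after the import goes through the embedded runtime:
  // validation, middleware, and hooks all run in-process, no HTTP involved.
  const rows = await api.run('activeUsers');
  console.log(rows);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The ordering matters: if `import 'dotenv/config'` came after the queries import, the `serve()` module would initialize with missing credentials.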
Example Flows
Background Job / Cron Task
```ts
import { api } from '../analytics/queries';

export async function syncDailyRevenue() {
  const result = await api.run('dailyRevenue', {
    input: { start: '2024-01-01', end: '2024-01-31' },
    context: { jobId: crypto.randomUUID() },
  });

  await warehouse.insert('daily_metrics', result);
}
```
`api.run(key, options)` runs the endpoint in-process with the same validation, middleware, and hooks as HTTP (aliases: `api.execute`, `api.client`). `input` must match the endpoint's `input` schema; if validation fails, the method throws an error containing the validation issues. `context` lets you inject per-call data (job IDs, loggers, cache handles) that your middleware or resolver can read.
API Handler (SSR / Server Action)
```ts
import { api } from '../../analytics/queries';

export async function GET() {
  const result = await api.run('activeUsers');
  return Response.json(result);
}
```
This pattern keeps HTTP thin: the route handler just forwards inputs to `api.run` and returns the result. You still benefit from Zod validation, middleware, and hooks.
Test or Staging Harness
```ts
import { api } from '../analytics/queries';
import { describe, it, expect } from 'vitest';

describe('activeUsers metric', () => {
  it('returns the most recent rows', async () => {
    const result = await api.run('activeUsers', {
      input: { limit: 10 },
    });

    expect(result).toHaveLength(10);
  });
});
```
Embedding metrics directly makes automated tests trivial: no HTTP servers to spin up, yet you still exercise the entire hypequery stack.
Safety Checklist
- Environment variables – Load creds before importing `analytics/queries.ts`. In ESM/TS projects the easiest option is `import 'dotenv/config'` in your entrypoint.
- Auth context – If you rely on `auth` strategies, pass a `request` shape to `api.run` via `options.request`. That ensures the strategy receives headers/tokens.
- Tenant enforcement – Global/per-endpoint `tenant` configs still apply. For background jobs that legitimately bypass tenant checks, disable the tenant config on that endpoint.
- Error handling – `api.run` throws when the endpoint would have returned an error response. Wrap calls in try/catch to handle validation failures or ClickHouse errors gracefully.
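The auth-context and error-handling items above can be combined in one sketch. The header name and the shape of `options.request` here are assumptions for illustration; check what your configured auth strategy actually reads before copying this.

```ts
import { api } from '../analytics/queries';

export async function runAsUser(token: string) {
  try {
    // Forward a request-like shape so auth strategies can read the token,
    // just as they would from an incoming HTTP request.
    return await api.run('activeUsers', {
      input: { limit: 10 },
      request: { headers: { authorization: `Bearer ${token}` } }, // assumed header name
    });
  } catch (err) {
    // api.run throws wherever HTTP would have returned an error response:
    // validation failures, auth rejections, or ClickHouse errors all land here.
    console.error('activeUsers failed', err);
    throw err;
  }
}
```

Because errors surface as thrown exceptions rather than HTTP status codes, background jobs should decide explicitly whether to retry, alert, or rethrow.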
With these patterns you can run hypequery definitions anywhere in your runtime.