ClickHouse Connection Management in Node.js
ClickHouse's Node.js client speaks HTTP rather than a stateful wire protocol, so connection management works differently than with Postgres or MySQL. Here's the right approach for Node.js.
Teams coming from Postgres or MySQL usually start by asking where the ClickHouse connection pool goes. That framing is slightly off. ClickHouse over HTTP has a different cost model, so the practical question is how to reuse the client instance and tune concurrency.
ClickHouse Speaks HTTP, Not a Stateful Wire Protocol
Postgres and MySQL maintain persistent TCP connections. Each connection represents state on the server — an authenticated session, a transaction context. Connection pools exist to reuse those expensive persistent connections instead of establishing a new TCP handshake and auth round-trip on every query.
ClickHouse's HTTP interface listens on port 8123 (8443 for HTTPS), and that's what the Node.js client uses. Each request is a stateless HTTP request/response: there's no server-side session to maintain between requests. This means:
- No connection pool is needed in the traditional sense. There's no equivalent of a Postgres connection that stays open between queries.
- What does matter: HTTP keep-alive (reusing TCP connections for multiple HTTP requests), concurrency limits, and timeouts.
The Singleton Pattern
The correct pattern is to create a single ClickHouse client instance at module load time and reuse it everywhere. The client manages an internal HTTP agent that handles keep-alive connections under the hood.
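With the official @clickhouse/client package, a minimal module-level singleton might look like this (the URL and environment variable names are placeholders):

```typescript
// client.ts: one client per process, created at module load time.
import { createClient } from '@clickhouse/client';

export const clickhouse = createClient({
  url: process.env.CLICKHOUSE_URL ?? 'http://localhost:8123',
  username: process.env.CLICKHOUSE_USER ?? 'default',
  password: process.env.CLICKHOUSE_PASSWORD ?? '',
  // The client maintains a pool of keep-alive sockets; this caps how many
  // concurrent HTTP requests (and therefore in-flight queries) it will run.
  max_open_connections: 10,
});
```

Every module that imports `clickhouse` shares the same instance and its keep-alive sockets.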
With hypequery, you wrap this once when initialising the query builder:
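A sketch of that initialisation. `createQueryBuilder()` is hypequery's entry point; the import path, option names, and the `db` export name here are assumptions to be checked against hypequery's docs:

```typescript
// db.ts: initialise the query builder once at module level and export it.
// Import path and option names are illustrative.
import { createQueryBuilder } from '@hypequery/clickhouse';

export const db = createQueryBuilder({
  host: process.env.CLICKHOUSE_HOST ?? 'http://localhost:8123',
  username: process.env.CLICKHOUSE_USER ?? 'default',
  password: process.env.CLICKHOUSE_PASSWORD ?? '',
});
```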
Import db wherever you need it: the underlying HTTP agent is shared, and keep-alive is handled automatically.
What Not to Do
The naive pattern — creating a new client per request — wastes resources and adds latency:
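For contrast, a sketch of the anti-pattern using @clickhouse/client (the `events` table and query are illustrative):

```typescript
import { createClient } from '@clickhouse/client';

// BAD: a fresh client, and with it a fresh HTTP agent, on every call.
export async function getStats() {
  const client = createClient({ url: 'http://localhost:8123' });
  try {
    const result = await client.query({
      query: 'SELECT count() AS n FROM events',
      format: 'JSONEachRow',
    });
    return await result.json();
  } finally {
    await client.close(); // and forgetting this leaks sockets on top
  }
}
```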
Each call to createClient() allocates a new HTTP agent, which means no keep-alive benefit, no shared connection reuse, and unnecessary GC pressure.
Configuring Timeouts
ClickHouse queries can range from milliseconds to minutes depending on the query. Set timeouts at two levels:
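A sketch of both levels with @clickhouse/client. `request_timeout` is the client-side HTTP timeout in milliseconds; `max_execution_time` is a ClickHouse setting, enforced by the server, in seconds. The table name and timeout values are illustrative:

```typescript
import { createClient } from '@clickhouse/client';

const client = createClient({
  url: 'http://localhost:8123',
  // Client side: fail the HTTP request if no response arrives in 30 s.
  request_timeout: 30_000, // milliseconds
  // Server side: ClickHouse aborts the query itself after 25 s,
  // even if the Node.js client has already gone away.
  clickhouse_settings: { max_execution_time: 25 }, // seconds
});

// Per-query overrides work too, e.g. for endpoints that must stay fast:
export async function recentEvents() {
  const result = await client.query({
    query: 'SELECT * FROM events ORDER BY ts DESC LIMIT 100',
    format: 'JSONEachRow',
    clickhouse_settings: { max_execution_time: 5 },
  });
  return result.json();
}
```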
Setting max_execution_time on the server side is important — it prevents runaway queries from holding connections and consuming server resources even if the Node.js client disconnects.
Handling Errors and Retries
ClickHouse is generally reliable but network blips happen. A simple retry wrapper for transient errors:
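A minimal, self-contained sketch; the error-code list and backoff values are illustrative choices:

```typescript
// Retry helper for transient, network-level failures only.
const TRANSIENT = new Set(['ECONNRESET', 'ECONNREFUSED', 'ETIMEDOUT', 'EPIPE']);

export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      const code = (err as NodeJS.ErrnoException).code;
      // Query errors (bad SQL, missing columns) are not retryable: rethrow.
      if (!code || !TRANSIENT.has(code)) throw err;
      lastErr = err;
      if (i < attempts - 1) {
        // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}
```

Wrap individual queries, e.g. `withRetry(() => client.query({ query, format: 'JSONEachRow' }))`.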
Only retry on network-level errors. Don't retry on ClickHouse query errors (syntax errors, missing columns) — those won't succeed on retry.
In a Next.js / Serverless Environment
Serverless functions don't have long-running processes, so module-level singletons work differently. In Next.js App Router:
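A common sketch is to cache the client on `globalThis`, the same pattern used for other module-level clients in Next.js, so dev-mode hot reloads reuse one instance instead of creating a new client per reload. File path and names are illustrative:

```typescript
// lib/clickhouse.ts
import { createClient, type ClickHouseClient } from '@clickhouse/client';

// Stash the client on globalThis so Next.js hot reloads in development
// don't allocate a new client (and new sockets) on every recompile.
const globalForClickHouse = globalThis as unknown as {
  clickhouse?: ClickHouseClient;
};

export const clickhouse =
  globalForClickHouse.clickhouse ??
  (globalForClickHouse.clickhouse = createClient({
    url: process.env.CLICKHOUSE_URL ?? 'http://localhost:8123',
  }));
```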
In AWS Lambda, cold starts will create a new client. Warm invocations reuse it. This is fine — the overhead is a single object allocation, not a TCP handshake.
Summary
- ClickHouse uses HTTP — think singleton client and keep-alive, not Postgres-style pooling
- Create one client per process, reuse it everywhere
- Configure `max_open_connections` to control concurrency
- Set both client-side and server-side timeouts
- Only retry on transient network errors, not query errors
- hypequery's `createQueryBuilder()` wraps the client; call it once at module level