Optimizing Latency
This page provides recommendations for minimizing round-trip latency when interacting with the GX Exchange API. These techniques are primarily relevant for market makers, high-frequency traders, and latency-sensitive applications.
Architecture Overview
```
Client --> TLS 1.3 --> Load Balancer --> REST API (port 4001)
Client --> TLS 1.3 --> Load Balancer --> WebSocket (port 4000)
```

The GX Exchange matching engine processes orders in under 1 millisecond. The dominant latency component for most integrations is network round-trip time.
Network Optimization
Co-location
For the lowest possible latency, co-locate your trading infrastructure in the same data center or cloud region as the GX Exchange API servers. Contact the team for co-location details.
Keep-Alive Connections
Reuse HTTP connections to avoid TLS handshake overhead on every request. All modern HTTP libraries support connection pooling by default.
```typescript
// Node.js -- use a persistent agent
import { Agent } from "https";

const agent = new Agent({
  keepAlive: true,
  maxSockets: 10,
});

// Note: Node's built-in fetch ignores the `agent` option; this pattern
// requires a fetch implementation that supports it (e.g. node-fetch).
const response = await fetch("https://api.gx.exchange/info", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ type: "allMids" }),
  // @ts-ignore
  agent,
});
```

```python
# Python -- reuse a session
import requests

session = requests.Session()  # Reuses TCP connections

# All requests through the session share connections
mids = session.post("https://api.gx.exchange/info",
                    json={"type": "allMids"}).json()
```

DNS Caching
Cache DNS resolutions locally to avoid lookup latency on each request. Most operating systems handle this automatically, but verify your container or serverless environment does not perform fresh lookups per request.
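If your environment does perform fresh lookups, a small in-process cache is usually enough. The sketch below wraps any async resolver in a TTL cache; `withDnsCache` is an illustrative helper (not part of any SDK), and in practice the wrapped function could delegate to `dns.promises.lookup`.

```typescript
// Minimal TTL cache around an async DNS-style resolver.
// `resolve` is any (hostname) => Promise<address> function.
type Resolver = (host: string) => Promise<string>;

function withDnsCache(resolve: Resolver, ttlMs: number): Resolver {
  const cache = new Map<string, { addr: string; expires: number }>();
  return async (host: string) => {
    const hit = cache.get(host);
    if (hit && hit.expires > Date.now()) return hit.addr; // cache hit: no lookup
    const addr = await resolve(host);
    cache.set(host, { addr, expires: Date.now() + ttlMs });
    return addr;
  };
}
```

Node's socket connect options also accept a custom `lookup` function (with a callback signature), so a cache like this can be wired into an HTTP agent if needed.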
WebSocket vs REST
For order management, WebSocket post requests can be faster than REST because:
- No connection setup — the WebSocket is already established.
- No TLS handshake — TLS negotiation happens once at connection time.
- Bidirectional — responses arrive on the same connection without polling.
Use the WebSocket post method for latency-critical order placement:
```json
{
  "method": "post",
  "id": 1,
  "request": {
    "type": "exchange",
    "payload": { ... }
  }
}
```

See WebSocket Post Requests for details.
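Because the WebSocket is bidirectional, the client must match each response back to the request that produced it via the `id` field. A thin client-side sketch of that correlation follows; the response envelope shape (`{ channel: "post", data: { id, response } }`) is an assumption here, so adjust it to the actual wire format.

```typescript
// Correlates WebSocket "post" responses with pending requests by id.
type Pending = { resolve: (v: unknown) => void; reject: (e: Error) => void };

class PostClient {
  private nextId = 1;
  private pending = new Map<number, Pending>();

  // `send` transmits raw text over an already-open WebSocket.
  constructor(private send: (text: string) => void) {}

  // Send a post request; resolves when the matching response arrives.
  request(payload: unknown): Promise<unknown> {
    const id = this.nextId++;
    const msg = JSON.stringify({
      method: "post",
      id,
      request: { type: "exchange", payload },
    });
    return new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
      this.send(msg);
    });
  }

  // Feed every incoming WebSocket message through here.
  handleMessage(text: string): void {
    const msg = JSON.parse(text);
    if (msg.channel !== "post") return;
    const entry = this.pending.get(msg.data.id);
    if (!entry) return;
    this.pending.delete(msg.data.id);
    entry.resolve(msg.data.response);
  }
}
```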
Signature Pre-computation
EIP-712 signature computation involves hashing and elliptic curve operations. Pre-compute as much as possible:
- Cache the domain separator hash. The EIP-712 domain is static and can be hashed once at startup.
- Pre-compute the type hash. The Agent type definition does not change between requests.
- Parallelize signing. If placing multiple independent orders, sign them in parallel.
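The third point can be sketched as below; `signAction` is a stand-in for your actual EIP-712 signer (e.g. a wallet's typed-data signing call), not a real API.

```typescript
// Sign several independent order actions concurrently.
type SignFn = (action: unknown) => Promise<string>;

async function signAll(actions: unknown[], signAction: SignFn): Promise<string[]> {
  // Promise.all overlaps the signing operations instead of awaiting each in turn.
  return Promise.all(actions.map((a) => signAction(a)));
}
```

One caveat: pure-JavaScript elliptic curve signing is CPU-bound, so `Promise.all` only helps if the signer offloads work (native bindings, a worker pool, or a remote signer); for pure-JS signing, true parallelism requires `worker_threads`.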
```typescript
// Pre-compute domain separator at startup
import { ethers } from "ethers";

const DOMAIN_HASH = ethers.TypedDataEncoder.hashDomain({
  name: "GXExchange",
  version: "1",
  chainId: 42069,
  verifyingContract: "0x0000000000000000000000000000000000000000",
});

// Reuse DOMAIN_HASH for all subsequent signings
```

Order Batching
Submit multiple orders in a single request to reduce the number of round trips:
```json
{
  "type": "order",
  "orders": [
    { "a": 0, "b": true, "p": "67000.0", "s": "0.005", "r": false, "t": { "limit": { "tif": "Gtc" } } },
    { "a": 0, "b": true, "p": "66990.0", "s": "0.005", "r": false, "t": { "limit": { "tif": "Gtc" } } },
    { "a": 0, "b": true, "p": "66980.0", "s": "0.005", "r": false, "t": { "limit": { "tif": "Gtc" } } }
  ],
  "grouping": "na"
}
```

This places three orders in one network round trip rather than three.
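Assembling such a payload programmatically is straightforward. The field names (`a`, `b`, `p`, `s`, `r`, `t`) follow the wire format shown above; `buildOrderAction` itself is an illustrative helper, not an SDK function.

```typescript
// One order in the wire format used by the batched order action.
interface OrderWire {
  a: number;           // asset index
  b: boolean;          // true = buy
  p: string;           // price
  s: string;           // size
  r: boolean;          // reduce-only
  t: { limit: { tif: string } };
}

// Wrap any number of orders into a single batched action.
function buildOrderAction(orders: OrderWire[], grouping = "na") {
  return { type: "order", orders, grouping };
}

// Ladder of three bids: one round trip instead of three.
const ladder: OrderWire[] = ["67000.0", "66990.0", "66980.0"].map((p) => ({
  a: 0, b: true, p, s: "0.005", r: false, t: { limit: { tif: "Gtc" } },
}));
const action = buildOrderAction(ladder);
```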
Cancel Batching
Similarly, batch cancel requests:
```json
{
  "type": "cancel",
  "cancels": [
    { "a": 0, "o": 12345 },
    { "a": 0, "o": 12346 },
    { "a": 0, "o": 12347 }
  ]
}
```

Local Orderbook Maintenance
Instead of polling the orderbook via REST, maintain a local copy using WebSocket:
- Subscribe to the l2Book channel for the markets you trade.
- The first message is a full snapshot.
- Apply subsequent incremental updates to your local copy.
- Use the local book for pricing decisions without any network latency.
```typescript
const localBook: Map<string, { bids: any[]; asks: any[] }> = new Map();

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.channel === "l2Book") {
    localBook.set(msg.data.coin, {
      bids: msg.data.levels[0],
      asks: msg.data.levels[1],
    });
  }
};

// Access local book with zero latency
function getBestBid(coin: string): string | null {
  const book = localBook.get(coin);
  return book?.bids[0]?.px ?? null;
}
```

Timing Breakdown
Typical latency components for a co-located client:
| Component | Latency |
|---|---|
| Network round trip (co-located) | < 1 ms |
| Network round trip (same region) | 1–5 ms |
| Network round trip (cross-region) | 20–100 ms |
| TLS handshake (first request) | 5–15 ms |
| TLS handshake (resumed) | 1–3 ms |
| EIP-712 signature computation | 1–3 ms |
| Matching engine processing | < 1 ms |
| JSON serialization/deserialization | < 1 ms |
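As a rough sanity check, these components can be combined into a per-request budget. The figures below are the table's upper bounds for a same-region client, not measurements, and the arithmetic is purely illustrative.

```typescript
// Rough per-request latency budget (ms), using the upper-bound
// figures from the table above.
const COST_MS = {
  rttSameRegion: 5,
  tlsFirst: 15,
  sign: 3,
  engine: 1,
  json: 1,
};

// Cold request: pays a fresh TLS handshake on top of everything else.
const coldMs =
  COST_MS.tlsFirst + COST_MS.rttSameRegion + COST_MS.sign + COST_MS.engine + COST_MS.json;

// Warm request over a kept-alive connection: no handshake at all.
const warmMs = COST_MS.rttSameRegion + COST_MS.sign + COST_MS.engine + COST_MS.json;
```

Under these assumptions a warm request is less than half the cost of a cold one, which is why connection reuse appears near the top of the summary below.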
Summary
| Technique | Impact |
|---|---|
| Co-location | Reduces network RTT to < 1 ms |
| WebSocket post (vs REST) | Eliminates per-request connection overhead |
| Connection keep-alive | Avoids repeated TLS handshakes |
| Order/cancel batching | Reduces number of round trips |
| Signature pre-computation | Saves 1–2 ms per request |
| Local orderbook via WS | Eliminates data fetch latency entirely |