Rate Limits

The Synthetix API implements rate limiting to ensure fair usage, prevent abuse, and maintain system stability. Rate limits are enforced per subaccount for order placement operations and per IP address for WebSocket connections.

Overview

Rate limiting protects the API infrastructure and ensures consistent performance for all users. Understanding these limits and implementing proper handling is essential for building robust trading applications.

Order Rate Limiting

Default Limits

| Limit Type | Value | Scope | Window |
| --- | --- | --- | --- |
| Order Placement | 100 orders/second | Per subaccount | 1 second (sliding window) |
| Burst Capacity | Up to limit value | Per subaccount | Immediate |

How It Works

The order rate limiter uses a token bucket algorithm (see the sketch after this list):

  1. Per-Subaccount Tracking: Each subaccount maintains an independent rate limiter
  2. Sliding Window: Limits are calculated over a 1-second rolling window
  3. Burst Support: Allows initial bursts up to the limit, then refills at the configured rate
  4. Automatic Cleanup: Idle rate limiters (unused for 10 minutes) are cleaned up every 5 minutes
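
For intuition, the snippet below is a minimal client-side sketch of the same token bucket idea. It is illustrative only, not the server's implementation, and the class and parameter names are ours:

// Illustrative sketch of a token bucket; not the server-side implementation.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSecond: number) {
    this.tokens = capacity; // start full, which is what allows an initial burst
    this.lastRefill = Date.now();
  }

  // Refill proportionally to elapsed time, capped at capacity.
  private refill() {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
  }

  // Returns true and consumes `count` tokens if enough are available.
  tryConsume(count = 1): boolean {
    this.refill();
    if (this.tokens >= count) {
      this.tokens -= count;
      return true;
    }
    return false;
  }
}

// With the default limits, a per-subaccount bucket would look like:
const bucket = new TokenBucket(100, 100); // 100-order burst, refilled at 100 orders/second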

Batch Order Counting

When placing multiple orders in a single request, each order in the batch counts toward your rate limit:

{
  "params": {
    "action": "placeOrders",
    "subAccountId": "1867542890123456789",
    "orders": [
      { "symbol": "BTC-USDT", "side": "buy", "quantity": "0.1" },
      { "symbol": "ETH-USDT", "side": "buy", "quantity": "1.0" },
      { "symbol": "BTC-USDT", "side": "sell", "quantity": "0.05" }
    ]
  }
}

This request counts as 3 orders against your rate limit, not as a single request.
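
If you track your budget client-side, count the batch size rather than the request. The helper below is a hypothetical sketch that reuses the TokenBucket sketch above and the placeOrders helper used elsewhere on this page:

// Consume one token per order in the batch, not one per request.
async function placeBatchWithBudget(orders: Order[], bucket: TokenBucket) {
  while (!bucket.tryConsume(orders.length)) {
    // Not enough budget for the whole batch yet; wait briefly and re-check.
    await new Promise(resolve => setTimeout(resolve, 100));
  }
  return placeOrders({ orders });
}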

WebSocket Rate Limiting

Connection Limits

| Limit Type | Default Value | Scope | Configurable |
| --- | --- | --- | --- |
| Connections | 100 per IP | Per IP address | Server-side only |
| Subscriptions | 1000 per IP | Per IP address | Server-side only |

WebSocket Order Placement

Orders placed via WebSocket are subject to the same per-subaccount rate limits as REST API requests. The rate limiter tracks all order placement operations regardless of the connection method.
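
Because the limit follows the subaccount rather than the transport, the same client-side throttling you use for REST should wrap your WebSocket sends as well. The sketch below assumes an already-open connection and mirrors the placeOrders params shown above; authentication fields (nonce, signature) are omitted and the exact WebSocket message envelope may differ:

// Sketch only: sends a placeOrders message over an already-open WebSocket.
// Auth fields are omitted; the real envelope may differ.
function sendOrdersOverWebSocket(ws: WebSocket, subAccountId: string, orders: Order[]) {
  // These orders count against the same per-subaccount budget as REST orders.
  ws.send(JSON.stringify({
    params: {
      action: 'placeOrders',
      subAccountId,
      orders
    }
  }));
}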

Rate Limit Errors

HTTP Status Code: 429

When you exceed the rate limit, the API returns a 429 Too Many Requests response:

{
  "status": "error",
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Order rate limit exceeded. Please wait before placing more orders. Current rate: 95/100 orders per second",
    "details": {
      "current_count": "95",
      "limit": "100",
      "window": "1s",
      "message": "Rate limit exceeded: 95/100 orders used in the current second"
    }
  },
  "request_id": "abc123...",
  "timestamp": "2025-01-01T00:00:00Z"
}

Error Response Fields

| Field | Description |
| --- | --- |
| code | Always RATE_LIMIT_EXCEEDED for rate limit violations |
| message | Human-readable error with current usage information |
| details.current_count | Number of orders used in the current window |
| details.limit | Your current rate limit (orders per second) |
| details.window | Time window for the limit (always 1s) |
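
If you prefer a typed client, the interfaces below simply mirror the payload shown above; the type names themselves are not part of the API:

// Shape of the 429 payload shown above.
interface RateLimitErrorDetails {
  current_count: string; // orders used in the current window (string-encoded)
  limit: string;         // your current limit, in orders per second
  window: string;        // always "1s"
  message: string;
}

interface RateLimitErrorResponse {
  status: 'error';
  error: {
    code: 'RATE_LIMIT_EXCEEDED';
    message: string;
    details: RateLimitErrorDetails;
  };
  request_id: string;
  timestamp: string;
}

// Narrowing helper: true when a parsed response body is a rate limit error.
function isRateLimitError(body: any): body is RateLimitErrorResponse {
  return body?.status === 'error' && body?.error?.code === 'RATE_LIMIT_EXCEEDED';
}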

Handling Rate Limits

1. Implement Retry Logic with Backoff

When you receive a 429 response, implement exponential backoff:

async function placeOrderWithRetry(order: Order, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await placeOrder(order);
      return response;
    } catch (error: any) {
      if (error.status === 429 && attempt < maxRetries - 1) {
        // Exponential backoff starting at 1 second: 1s, 2s, 4s, ...
        const delay = 1000 * Math.pow(2, attempt);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      // Non-429 error, or retries exhausted: surface the error to the caller
      throw error;
    }
  }
}

2. Client-Side Rate Limiting

Implement client-side rate limiting to prevent hitting server limits:

class OrderRateLimiter {
  private queue: Array<() => Promise<any>> = [];
  private processing = false;
  private ordersPerSecond: number;
  private interval: number;
 
  constructor(ordersPerSecond: number = 50) {
    this.ordersPerSecond = ordersPerSecond;
    this.interval = 1000 / ordersPerSecond;
  }
 
  async execute<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          const result = await fn();
          resolve(result);
        } catch (error) {
          reject(error);
        }
      });
      this.processQueue();
    });
  }
 
  private async processQueue() {
    if (this.processing || this.queue.length === 0) return;
 
    this.processing = true;
    while (this.queue.length > 0) {
      const task = this.queue.shift();
      if (task) await task();
      await new Promise(resolve => setTimeout(resolve, this.interval));
    }
    this.processing = false;
  }
}
 
// Usage
const rateLimiter = new OrderRateLimiter(50); // 50 orders/sec (conservative)
 
await rateLimiter.execute(() => placeOrder({
  symbol: 'BTC-USDT',
  side: 'buy',
  quantity: '0.1'
}));

3. Monitor Your Usage

Track your order placement rate to avoid hitting limits:

class RateLimitMonitor {
  private orders: number[] = [];
 
  recordOrder() {
    const now = Date.now();
    // Keep only orders from the last second
    this.orders = this.orders.filter(timestamp => now - timestamp < 1000);
    this.orders.push(now);
  }
 
  getCurrentRate(): number {
    const now = Date.now();
    this.orders = this.orders.filter(timestamp => now - timestamp < 1000);
    return this.orders.length;
  }
 
  canPlaceOrder(limit: number = 100): boolean {
    return this.getCurrentRate() < limit;
  }
}
 
// Usage
const monitor = new RateLimitMonitor();
 
if (monitor.canPlaceOrder()) {
  await placeOrder(orderData);
  monitor.recordOrder();
} else {
  console.warn('Approaching rate limit, waiting...');
  await new Promise(resolve => setTimeout(resolve, 1000));
}

4. Batch Orders Efficiently

Group related orders together to maximize throughput:

// Good: Batch related orders
const orders = [
  { symbol: 'BTC-USDT', side: 'buy', quantity: '0.1', orderType: 'limitGtc', price: '45000' },
  { symbol: 'ETH-USDT', side: 'buy', quantity: '1.0', orderType: 'limitGtc', price: '2500' }
];
 
await placeOrders({ orders });
 
// Avoid: Placing orders individually when batching is possible
for (const order of orders) {
  await placeOrder(order); // Each call counts separately
}

Best Practices

Use WebSocket for High-Frequency Trading

WebSocket connections offer advantages for high-frequency scenarios:

  • Persistent connection reduces overhead
  • Lower latency for order placement
  • Real-time order updates without polling
  • Same rate limits apply, but lower per-request overhead

Implement Graceful Degradation

Handle rate limit responses gracefully:

async function placeOrderSafely(order: Order) {
  try {
    return await placeOrder(order);
  } catch (error) {
    if (error.status === 429) {
      // Rate limited - wait and retry
      await new Promise(resolve => setTimeout(resolve, 1000));
      return await placeOrder(order);
    }
    throw error;
  }
}

Optimize for Your Use Case

Different trading strategies require different approaches:

| Strategy Type | Recommended Approach |
| --- | --- |
| Market Making | Use conservative client-side limits (50-70 orders/sec) |
| Algorithmic Trading | Implement adaptive rate limiting based on market conditions |
| Manual Trading | Rate limits unlikely to be an issue |
| Batch Operations | Group orders by market/direction when possible |
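
For the adaptive approach suggested for algorithmic trading, one common pattern is additive-increase/multiplicative-decrease: halve your target rate after a 429 and creep back up on success. The class below is only a sketch of that pattern, not an official recommendation, and the names are hypothetical:

// Sketch: adjust a client-side target rate in response to 429s (AIMD-style).
class AdaptiveRateLimit {
  private target: number;

  constructor(private maxOrdersPerSecond = 100, private minOrdersPerSecond = 10) {
    this.target = Math.floor(maxOrdersPerSecond * 0.5); // start conservatively
  }

  // Call after every successful order: creep back up by 1 order/sec.
  onSuccess() {
    this.target = Math.min(this.maxOrdersPerSecond, this.target + 1);
  }

  // Call after a 429: halve the target rate.
  onRateLimited() {
    this.target = Math.max(this.minOrdersPerSecond, Math.floor(this.target / 2));
  }

  // Feed this into a client-side limiter such as OrderRateLimiter above.
  get ordersPerSecond(): number {
    return this.target;
  }
}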

Checking Rate Limits

Get Rate Limits Endpoint

Query your current rate limit status (currently returns mock data):

{
  "params": {
    "action": "getRateLimits",
    "subAccountId": "1867542890123456789"
  },
  "nonce": 1703635200000,
  "signature": { /* EIP-712 signature */ }
}

Response:

{
  "status": "ok",
  "response": {
    "requestsUsed": 45,
    "requestsCap": 1200
  },
  "request_id": "5ccf215d37e3ae6d",
  "timestamp": "2025-01-01T00:00:00Z"
}
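
A thin typed wrapper over this call might look like the sketch below. sendRequest is a placeholder for your own signed-request helper, and the field names mirror the mock response above:

// Placeholder for however you build and sign requests in your client.
declare function sendRequest(params: { action: string; subAccountId: string }): Promise<any>;

interface RateLimitStatus {
  requestsUsed: number;
  requestsCap: number;
}

async function getRemainingRequests(subAccountId: string): Promise<number> {
  const res = await sendRequest({ action: 'getRateLimits', subAccountId });
  const { requestsUsed, requestsCap } = res.response as RateLimitStatus;
  return requestsCap - requestsUsed; // remaining headroom reported by the endpoint
}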

Rate Limit Behavior

Graceful Degradation

The rate limiting system is designed to be non-blocking:

  • Rate Limiter Failures: If the rate limiter service fails, requests continue to be processed (failures are logged but don't block operations)
  • Redis Unavailable: System continues operating without distributed rate limiting
  • No Single Point of Failure: Rate limiting issues don't prevent trading operations

Memory Management

The system automatically manages rate limiter memory (a simplified sketch of the pattern follows the list):

  • Idle Timeout: 10 minutes of inactivity
  • Cleanup Interval: Every 5 minutes
  • Automatic Removal: Inactive subaccount rate limiters are cleaned up to prevent memory leaks
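
This is entirely server-side and requires nothing from clients. Purely as an illustration of the pattern, such a cleanup loop amounts to a periodic sweep over a map of per-subaccount limiters, using the timeouts listed above:

// Illustration only: periodic cleanup of idle per-subaccount limiters.
const IDLE_TIMEOUT_MS = 10 * 60 * 1000;    // 10 minutes of inactivity
const CLEANUP_INTERVAL_MS = 5 * 60 * 1000; // sweep every 5 minutes

const limiters = new Map<string, { lastUsed: number }>();

setInterval(() => {
  const now = Date.now();
  for (const [subAccountId, limiter] of limiters) {
    if (now - limiter.lastUsed > IDLE_TIMEOUT_MS) {
      limiters.delete(subAccountId); // drop idle limiters to bound memory use
    }
  }
}, CLEANUP_INTERVAL_MS);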

Troubleshooting

Common Issues

Issue: Frequent 429 Errors

Possible Causes:
  • Placing orders too rapidly
  • Multiple applications using the same subaccount
  • Retry logic causing rapid successive requests
Solutions:
  1. Implement client-side rate limiting (aim for 50-70 orders/sec)
  2. Add delays between order placements
  3. Use exponential backoff for retries
  4. Monitor your order placement rate

Issue: Rate Limits with Batch Orders

Problem: Batching 100+ orders in a single request

Solution: Split large batches into smaller groups to stay within limits:

function chunkOrders(orders: Order[], chunkSize: number = 50) {
  const chunks: Order[][] = [];
  for (let i = 0; i < orders.length; i += chunkSize) {
    chunks.push(orders.slice(i, i + chunkSize));
  }
  return chunks;
}
 
const orderBatches = chunkOrders(allOrders, 50);
for (const batch of orderBatches) {
  await placeOrders({ orders: batch });
  await new Promise(resolve => setTimeout(resolve, 1000)); // Wait between batches
}

Issue: WebSocket Connection Limits

Problem: Unable to establish new WebSocket connections

Possible Causes:
  • Exceeded 100 connections per IP
  • Previous connections not properly closed
Solutions:
  1. Reuse existing WebSocket connections (see the sketch below)
  2. Implement connection pooling
  3. Ensure proper connection cleanup on disconnect
  4. Use different IP addresses if needed
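
A minimal sketch of connection reuse (one shared socket per process) is shown below; the URL is a placeholder and getSocket is a hypothetical helper:

// Sketch: share one WebSocket per process instead of opening one per request.
let sharedSocket: WebSocket | null = null;

function getSocket(url = 'wss://example.invalid/ws'): WebSocket { // placeholder URL
  if (sharedSocket &&
      (sharedSocket.readyState === WebSocket.OPEN ||
       sharedSocket.readyState === WebSocket.CONNECTING)) {
    return sharedSocket;
  }
  sharedSocket = new WebSocket(url);
  sharedSocket.addEventListener('close', () => {
    sharedSocket = null; // allow a clean reconnect after disconnects
  });
  return sharedSocket;
}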

Future Enhancements

Planned improvements to the rate limiting system:

  • Real-time Rate Limit Headers: Response headers with current usage and limits
  • Dynamic Rate Limits: Account tier-based rate limit adjustments
  • Redis-Based Distribution: Full distributed rate limiting for multi-instance deployments
  • Enhanced Monitoring: Real-time rate limit status via getRateLimits endpoint
  • SLP Exceptions: Special rate limit accommodations for Synthetix Liquidity Providers

Related Documentation