Rate Limits
Understanding and working within API rate limits.
Rate Limit Overview
Each API token is limited to 60 requests per minute.
This applies to all endpoints and is designed to:
- Ensure fair usage across all stores
- Protect API stability
- Prevent abuse
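To stay within this limit proactively, requests can be throttled on the client side. The sketch below is illustrative only and assumes sequential use; the requestFn callback and the in-memory timestamp list are not part of the API:

```javascript
const LIMIT = 60;              // requests allowed per window
const WINDOW_MS = 60 * 1000;   // 60-second window
const requestTimes = [];       // timestamps (ms) of recent requests

async function throttled(requestFn) {
  const now = Date.now();

  // Drop timestamps that have aged out of the current window
  while (requestTimes.length && now - requestTimes[0] >= WINDOW_MS) {
    requestTimes.shift();
  }

  // If the window is full, wait until the oldest request expires
  if (requestTimes.length >= LIMIT) {
    const waitMs = WINDOW_MS - (now - requestTimes[0]);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }

  requestTimes.push(Date.now());
  return requestFn();
}
```

A call then looks like throttled(() => client.get('/products')), where client is your configured HTTP client.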
Rate Limit Headers
Every response includes rate limit information:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests per window |
| X-RateLimit-Remaining | Requests remaining in window |
| X-RateLimit-Reset | Unix timestamp when window resets |
Example Headers
```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 55
X-RateLimit-Reset: 1640000060
```

When Rate Limited
If you exceed the limit, you'll receive a 429 Too Many Requests response:
```json
{
  "success": false,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Retry after 45 seconds",
  "retry_after": 45
}
```

The retry_after field indicates seconds until you can retry.
Best Practices
1. Monitor Headers
Track your remaining requests to avoid hitting limits:
```javascript
function checkRateLimit(headers) {
  const remaining = parseInt(headers['x-ratelimit-remaining'], 10);
  const reset = parseInt(headers['x-ratelimit-reset'], 10);

  if (remaining < 10) {
    console.warn(`Low rate limit: ${remaining} requests remaining`);
  }

  return { remaining, reset };
}
```
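As a usage sketch, checkRateLimit can be called after every response so the client pauses before the window is exhausted. The axios-style response.headers object and the sleep helper below are assumptions for illustration (sleep is reused by the later examples):

```javascript
// Minimal promise-based delay helper
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function getWithRateLimitCheck(endpoint) {
  const response = await client.get(endpoint);
  const { remaining, reset } = checkRateLimit(response.headers);

  if (remaining === 0) {
    // Pause until the window resets (reset is a Unix timestamp in seconds)
    await sleep(Math.max(0, reset * 1000 - Date.now()));
  }

  return response;
}
```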
2. Implement Backoff
When rate limited, wait before retrying:
```javascript
async function requestWithBackoff(endpoint) {
  const response = await client.get(endpoint);

  if (response.status === 429) {
    const retryAfter = response.data.retry_after || 60;
    console.log(`Rate limited. Waiting ${retryAfter}s...`);
    await sleep(retryAfter * 1000);
    return requestWithBackoff(endpoint);
  }

  return response;
}
```

3. Batch Operations
Instead of many small requests, batch where possible:
```javascript
// Instead of this:
for (const id of productIds) {
  await client.get(`/products/${id}`); // 100 requests
}

// Do this:
const response = await client.get('/products'); // 1 request
const products = response.data.data;
```

4. Cache Responses
Cache data that doesn't change frequently:
```javascript
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getProductCached(id) {
  const key = `product:${id}`;
  const cached = cache.get(key);

  if (cached && Date.now() < cached.expiry) {
    return cached.data;
  }

  const response = await client.get(`/products/${id}`);
  cache.set(key, {
    data: response.data,
    expiry: Date.now() + CACHE_TTL
  });

  return response.data;
}
```
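Repeated lookups within the TTL are then served from memory; the product ID here is purely illustrative:

```javascript
const first = await getProductCached(42);  // fetches from the API
const second = await getProductCached(42); // returned from the in-memory cache
```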
5. Use Webhooks
Instead of polling for updates, use webhooks:
```javascript
// Instead of polling every minute:
setInterval(async () => {
  const orders = await client.get('/orders?status=pending');
  processOrders(orders);
}, 60000);

// Use webhooks - no rate limit concerns:
app.post('/webhooks', (req, res) => {
  if (req.body.event === 'order.paid') {
    processOrder(req.body.data);
  }
  res.sendStatus(200);
});
```

6. Spread Requests
If you need many requests, spread them over time:
```javascript
async function fetchAllProducts() {
  const products = [];
  let page = 1;

  while (true) {
    const response = await client.get(`/products?page=${page}&per_page=100`);
    products.push(...response.data.data);

    if (page >= response.data.meta.last_page) {
      break;
    }

    page++;
    await sleep(1000); // 1 second between requests
  }

  return products;
}
```

Increasing Limits
Need higher rate limits? Contact support to discuss options for high-volume integrations.
Testing
Use the rate limit headers to test your handling (see the sketch after this list):
- Make requests until X-RateLimit-Remaining is low
- Verify your code handles 429 responses correctly
- Confirm your backoff logic waits the appropriate amount of time
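A minimal test sketch covering these steps, assuming the client and the requestWithBackoff helper from the examples above and a cheap endpoint such as /products:

```javascript
async function testRateLimitHandling() {
  // 1. Make requests until X-RateLimit-Remaining is low
  let remaining = Infinity;
  while (remaining > 0) {
    const response = await client.get('/products?per_page=1');
    remaining = parseInt(response.headers['x-ratelimit-remaining'], 10);
  }

  // 2. The next request should hit a 429 and exercise the backoff path
  const start = Date.now();
  const response = await requestWithBackoff('/products?per_page=1');

  // 3. Confirm the backoff waited roughly the advertised retry_after
  const waitedSeconds = Math.round((Date.now() - start) / 1000);
  console.log(`Recovered with status ${response.status} after ~${waitedSeconds}s`);
}
```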
