Rate Limiting

Understand the rate limits in the Digital River APIs as well as how to handle and avoid reaching them

The Digital River APIs limit the number of calls you can make within a given time.

This rate limiting is crucial for maintaining system stability and ensuring that all of our clients experience efficient, secure, and reliable service. Rate limiting also helps us mitigate the damage caused by malicious actors and faulty integrations.

If you exceed the designated rate limit, your integration should temporarily stop making requests, because any further requests will fail until a certain amount of time has passed. You can usually avoid hitting this ceiling by following our rate-limiting best practices. But in the event the rate limit is breached, your integration should have logic in place to handle it effectively.

How we rate limit

The maximum number of requests you can make per unit of time depends on which secret (confidential) API key you use. Your integration can send up to 250 requests per second in live mode and up to 25 requests per second in test mode.

When a request exceeds these rate limits, we block that call from reaching our backend servers and respond with a 429 Too Many Requests status code and a standardized Retry-After header. This header's value is an integer that denotes the number of seconds to wait before making another request. You can use this value when implementing logic to handle rate limiting.

In the following response, the Retry-After header indicates that your integration should wait 120 seconds before sending any more requests.

```
...
Retry-After: 120
...
```
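
For example, here is a minimal sketch (in Python, using the third-party requests library) of how you might read that header from a 429 response. The endpoint path and the bearer-token authorization shown are assumptions for illustration only.

```python
import requests

# Hypothetical endpoint and placeholder secret key, for illustration only
url = "https://api.digitalriver.com/skus"
headers = {"Authorization": "Bearer <your-secret-api-key>"}

response = requests.get(url, headers=headers)

if response.status_code == 429:
    # Retry-After is an integer number of seconds to wait before the next request
    wait_seconds = int(response.headers.get("Retry-After", "1"))
    print(f"Rate limited; wait {wait_seconds} seconds before retrying")
```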

Causes of rate limiting

Some common reasons for exceeding rate limits include:

  • Performing analytics or batch updates on your SKU product catalog. You could attempt to reduce your request rate in these scenarios or employ a delta encoding strategy.

  • Your eCommerce store might be conducting a flash sale (in which deep product discounts are offered for a short period of time), which results in a sudden spike in traffic. In almost all cases, our standard rate limits are high enough so that legitimate traffic is never throttled. However, if you believe an upcoming event might cause you to hit your request ceiling, contact your Digital River representative and inquire about temporarily increasing your rate limit.

  • Making unnecessary requests to the APIs, such as calls that retrieve data but then never use it in your application.

Your integration should always have a mechanism for handling rate limiting. However, you can usually avoid triggering our restrictions by following some rate-limiting best practices.

Handling rate limiting

In addition to following our best practices, you should have a built-in retry mechanism to handle rate limiting.

The simplest approach is to build a delay into your code. Whenever you catch a 429 error, delay the execution of your next API call. To prevent further requests from being blocked, don't make any additional requests until the number of seconds specified in the Retry-After response header has elapsed.
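
A minimal sketch of this approach, assuming Python and the requests library (the request_with_delay helper is hypothetical, not part of any Digital River SDK):

```python
import time
import requests

def request_with_delay(method, url, **kwargs):
    """Send a request; if it is rate limited, wait for Retry-After seconds and retry once."""
    response = requests.request(method, url, **kwargs)
    if response.status_code == 429:
        # Honor the Retry-After header before sending the next request
        wait_seconds = int(response.headers.get("Retry-After", "1"))
        time.sleep(wait_seconds)
        response = requests.request(method, url, **kwargs)
    return response
```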

You could implement an exponential backoff algorithm if you want a more sophisticated solution. With this approach, you periodically retry a failed request after the waiting period has expired. The wait time between retries increases exponentially, up to a designated maximum backoff time, at which point the time between retries stops increasing. If you use this approach, we recommend you code some random behavior, or jitter, into your algorithm's wait times. This helps avoid the thundering herd problem, in which client requests become synchronized by a blocking event and retries are sent in synchronized waves.
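
One way to sketch exponential backoff with jitter in Python (the retry count, base delay, and maximum backoff below are illustrative values, not prescribed limits):

```python
import random
import time
import requests

MAX_RETRIES = 5      # illustrative values, not prescribed limits
BASE_DELAY = 1       # initial backoff, in seconds
MAX_BACKOFF = 60     # maximum backoff, in seconds

def request_with_backoff(method, url, **kwargs):
    """Retry 429 responses with exponentially increasing, jittered wait times."""
    for attempt in range(MAX_RETRIES):
        response = requests.request(method, url, **kwargs)
        if response.status_code != 429:
            return response
        # Never wait less than the server-provided Retry-After value
        retry_after = int(response.headers.get("Retry-After", "0"))
        # Exponential backoff, capped at MAX_BACKOFF
        backoff = min(BASE_DELAY * (2 ** attempt), MAX_BACKOFF)
        # Full jitter: randomize the wait to avoid synchronized retry waves
        wait_seconds = max(retry_after, random.uniform(0, backoff))
        time.sleep(wait_seconds)
    return response
```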

At an even more advanced level, you could implement a token bucket algorithm (sometimes known as a "leaky bucket") to manage traffic to the Digital River APIs.
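
As a sketch of the token bucket idea in Python (the class, its parameters, and the example rate are illustrative assumptions, not values Digital River requires):

```python
import threading
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum number of stored tokens
        self.tokens = capacity
        self.last_refill = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill tokens based on elapsed time, up to capacity
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last_refill) * self.rate)
                self.last_refill = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
            time.sleep(1.0 / self.rate)

# Example: stay comfortably under the 25 requests/second test-mode limit
bucket = TokenBucket(rate=20, capacity=20)
# bucket.acquire()  # call before each API request
```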

Rate limiting best practices

In addition to implementing logic to handle rate limiting, you should also take steps to minimize the probability of hitting your request ceiling. In most cases, you can accomplish this by adhering to some best practices:

  • Instead of frequently calling the APIs, your integration should make extensive use of webhook events.

  • Optimize your code to eliminate unnecessary API calls. For example, ensure that every request retrieves data that your application actually uses.

  • When updating product catalogs, implement delta encoding strategies to minimize the number of POST /skus/{id} and PUT /skus/{id} requests you make (see the sketch after this list).

  • Avoid making multiple concurrent requests.

  • When you're making a large number of POST, PUT, or DELETE requests, implement a delay between each.
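
For instance, a simple delta-encoding sketch in Python might hash each SKU's attributes and only send update requests for SKUs whose hash has changed since the last sync. The function name and data shapes below are assumptions for illustration:

```python
import hashlib
import json

def changed_skus(current_catalog, previous_hashes):
    """Return only the SKUs whose content changed since the last sync.

    current_catalog: dict mapping SKU id -> SKU attributes
    previous_hashes: dict mapping SKU id -> hash recorded at the last sync
    """
    updates = {}
    new_hashes = {}
    for sku_id, attributes in current_catalog.items():
        digest = hashlib.sha256(
            json.dumps(attributes, sort_keys=True).encode()
        ).hexdigest()
        new_hashes[sku_id] = digest
        if previous_hashes.get(sku_id) != digest:
            updates[sku_id] = attributes  # only these need an update request
    return updates, new_hashes
```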
