Rate limiting

Understand the rate limits in the Digital River APIs as well as how to handle and avoid reaching them



The Digital River APIs limit the number of calls you can make within a given time.

This is crucial for maintaining system stability and ensuring our clients experience efficient, secure, and reliable service. Rate limiting also helps us mitigate the damage caused by malicious actors and faulty integrations.

If you exceed the designated rate limit, your integration should temporarily stop making requests, because those requests will fail until a certain amount of time has passed. You can usually avoid hitting this ceiling by following our rate-limiting best practices. But if your API calls do breach the rate limit, your integration should have logic to handle rate limiting.

Our approach to rate-limiting

The maximum number of requests you can make per unit of time is determined by which secret (confidential) API key you use. Your integration can send up to 250 requests per second in live mode. In test mode, you can make up to 25 requests per second.

When a request exceeds these rate limits, we block that call from reaching our backend servers and respond with a 429 Too Many Requests status code and a standardized Retry-After header. The header's value is an integer denoting the number of seconds to wait before making another request. You can use this value when implementing logic to handle rate limiting.

In the following response, the Retry-After header indicates that your integration should wait 120 seconds before sending more requests.

...
Retry-After: 120
...
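For example, your integration might check for this status code and read the header before deciding what to do next. The following is a minimal sketch in Python using the requests library; the endpoint URL and API key are placeholders, not actual Digital River values.

import requests

# Placeholder endpoint and key for illustration only.
API_URL = "https://api.example.com/example-resource"
HEADERS = {"Authorization": "Bearer <your-secret-api-key>"}

response = requests.get(API_URL, headers=HEADERS)

if response.status_code == 429:
    # Retry-After tells you how many seconds to wait before the next request.
    wait_seconds = int(response.headers.get("Retry-After", "1"))
    print(f"Rate limited; wait {wait_seconds} seconds before retrying")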

Causes of rate limiting

Some common reasons for exceeding rate limits include:

  • Performing analytics or batch updates on your product catalog. In these scenarios, you could reduce your request rate or employ a delta encoding strategy.

  • Your eCommerce store might be conducting a flash sale (in which deep product discounts are offered for a short period), which results in a sudden spike in traffic. In almost all cases, our standard rate limits are high enough that legitimate traffic is never throttled. However, if you believe an upcoming event might cause you to hit your request ceiling, contact your Digital River representative and inquire about temporarily increasing your rate limit.

  • Making unnecessary API requests, such as calls that retrieve data your application never actually uses.

Your integration should always have a mechanism for handling rate limiting. However, you can usually avoid triggering our restrictions by following some best practices.

Handling rate limiting

The simplest, most straightforward approach is to build a delay into your code. Whenever you catch a 429 error, delay the execution of your next API call. To prevent your request from getting blocked, don't make any additional requests until the number of seconds specified in the Retry-After response header has elapsed.
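For example, here is a minimal sketch of that approach in Python; the requests library call and endpoint are illustrative stand-ins for however your integration issues its API calls.

import time
import requests

def request_with_delay(url, headers, max_attempts=3):
    """Send a GET request; on a 429 response, wait for the number of
    seconds given in the Retry-After header before trying again."""
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Pause for the period the server specifies, then retry.
        time.sleep(int(response.headers.get("Retry-After", "1")))
    return response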

Rate-limiting best practices

  • Optimize your code to eliminate unnecessary API calls. For example, ensure that every request retrieves data that your application actually uses.

  • Avoid making multiple concurrent requests.

  • When you're making a large number of POST, PUT, or DELETE requests, implement a delay between each.
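For instance, the following is a rough sketch of spacing out a batch of write requests in Python; the endpoint, paths, and delay value are illustrative assumptions, not prescribed values.

import time
import requests

API_BASE = "https://api.example.com"   # placeholder, not an actual Digital River endpoint
HEADERS = {"Authorization": "Bearer <your-secret-api-key>"}
DELAY_SECONDS = 0.1                    # pause between consecutive write requests

def apply_updates(updates):
    """Send a series of PUT requests, pausing briefly between each one."""
    for path, payload in updates:
        requests.put(f"{API_BASE}{path}", json=payload, headers=HEADERS)
        time.sleep(DELAY_SECONDS)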

In addition to following our best practices, you should have a built-in retry mechanism to handle rate limiting.

For a more sophisticated solution, you could implement an exponential backoff algorithm. With this approach, you periodically retry a failed request after the waiting period has expired. The wait time between each retry increases exponentially, up to a designated maximum backoff time, at which point the time between retries stops increasing. If you use this approach, we recommend coding some random behavior, or jitter, into your algorithm's wait times. This helps avoid the thundering herd problem, where client requests become synchronized by a blocking event and retries are sent in synchronized waves.
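As an illustration, the following is a rough Python sketch of exponential backoff with jitter; the send_request callable is a hypothetical stand-in for whatever function issues your API call, and the delay values are arbitrary examples.

import random
import time

def retry_with_backoff(send_request, max_retries=5, base_delay=1.0, max_backoff=60.0):
    """Retry a rate-limited request with exponentially increasing,
    jittered wait times, capped at max_backoff seconds."""
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Double the wait on each retry, but never exceed the maximum backoff.
        backoff = min(base_delay * (2 ** attempt), max_backoff)
        # Add jitter so synchronized clients don't retry in lockstep.
        time.sleep(backoff + random.uniform(0, base_delay))
    return response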

At an even more advanced level, you could implement a token bucket algorithm (sometimes known as a "leaky bucket") to manage traffic to the Digital River APIs.
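For illustration, the following is a simple token bucket sketch in Python; the rate and capacity values are arbitrary examples, not Digital River's actual limits.

import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill tokens based on the time elapsed since the last check.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

# Example: call bucket.acquire() before each API request to stay under 20 requests per second.
bucket = TokenBucket(rate=20, capacity=20)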

In addition to implementing logic to handle rate limiting, you should also take steps to minimize the probability of hitting your request ceiling. In most cases, you can accomplish this by adhering to some best practices:

Instead of frequently calling the APIs, your integration should extensively use webhook events.

When updating product catalogs, implement delta encoding strategies to minimize the number of POST /skus/{id} and PUT /skus/{id} requests you make.
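One way to do this is to keep a snapshot of the catalog you last synchronized and only send requests for SKUs that have actually changed, along the lines of the following Python sketch; the push_sku callable is a hypothetical wrapper around your PUT /skus/{id} call, and the catalogs are assumed to be dictionaries keyed by SKU ID.

def changed_skus(previous_catalog, current_catalog):
    """Return only the SKUs whose data differs from the last sync."""
    return {
        sku_id: data
        for sku_id, data in current_catalog.items()
        if previous_catalog.get(sku_id) != data
    }

def sync_catalog(previous_catalog, current_catalog, push_sku):
    """Push only the changed SKUs instead of re-sending the whole catalog."""
    for sku_id, data in changed_skus(previous_catalog, current_catalog).items():
        push_sku(sku_id, data)   # e.g., wraps a PUT /skus/{id} request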
