Rate Limiting

To ensure fair usage and API stability, Data Docked implements rate limiting on all endpoints.

Default Limit

| Limit | Scope |
| --- | --- |
| 15 requests per minute | Per endpoint, per user |

Endpoint-Specific Limits

Some endpoints have higher rate limits:
| Endpoint | Limit |
| --- | --- |
| get-vessel-location | 50 requests/minute |
| get-vessels-by-area | 50 requests/minute |
| All other endpoints | 15 requests/minute |
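Rather than reacting to 429s after the fact, a client can pace itself to stay under these limits. The sliding-window limiter below is a minimal sketch (class name and structure are our own, not part of the Data Docked API):

```python
import time
from collections import deque

class RateLimiter:
    """Client-side sliding-window limiter: blocks until a request slot is free."""

    def __init__(self, max_requests=15, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()  # monotonic times of recent requests

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            # Sleep until the oldest request leaves the window, then retry
            time.sleep(self.window_seconds - (now - self.timestamps[0]))
            return self.acquire()
        self.timestamps.append(time.monotonic())
```

Call `limiter.acquire()` before each request; it returns immediately while you are under the limit and sleeps just long enough when you are not.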

Rate Limit Exceeded

If you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
{
  "error": "Rate limit exceeded",
  "message": "You have exceeded the rate limit of 15 requests per minute"
}

Best Practices

Implement Exponential Backoff

When you receive a 429 response, wait before retrying:
import time
import requests

def make_request_with_backoff(url, headers, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            wait_time = 2 ** attempt  # 1, 2, 4, 8, 16 seconds
            time.sleep(wait_time)
            continue

        return response

    raise Exception("Max retries exceeded")
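When many clients hit the limit at the same moment, fixed exponential waits can make them all retry in lockstep. Adding random "full jitter" spreads retries out; the helper below is a sketch (the `base` and `cap` defaults are our assumptions, not documented values):

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Full-jitter delay: a random wait in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

In the retry loop above, replacing `wait_time = 2 ** attempt` with `wait_time = backoff_delay(attempt)` is enough to desynchronize retries.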

Use Bulk Endpoints

Instead of making many individual requests, use bulk endpoints when available:
# Instead of this (10 requests):
for imo in ["9247431", "9184419", "9465411", ...]:
    get_vessel_location(imo)

# Do this (1 request):
get_vessels_location_bulk("9247431,9184419,9465411,...")
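For large fleets it can help to split the IMO list into batches before calling the bulk endpoint. `chunked` below is a generic helper; the 100-IMO batch size in the comment is an assumption, not a documented limit:

```python
def chunked(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# e.g. with an assumed 100-IMO batch size:
#   for batch in chunked(imo_list, 100):
#       get_vessels_location_bulk(",".join(batch))
```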

Cache Responses

AIS data updates every few minutes. Cache responses to reduce unnecessary requests:
from functools import lru_cache
import time

@lru_cache(maxsize=1000)
def get_cached_location(imo, timestamp):
    # timestamp rounded to 5-minute intervals
    return get_vessel_location(imo)

# Use with 5-minute cache
current_interval = int(time.time() / 300) * 300
location = get_cached_location("9247431", current_interval)
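If LRU eviction alone is not enough (a popular entry never leaves an `lru_cache`, even between intervals), an explicit time-to-live cache expires entries by age. A minimal sketch, with the 300-second TTL matching the 5-minute interval used above:

```python
import time

class TTLCache:
    """Minimal time-based cache: entries expire after ttl seconds."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self.store = {}  # key -> (value, fetched_at)

    def get(self, key, fetch):
        """Return a fresh cached value, calling fetch(key) only on miss/expiry."""
        now = time.monotonic()
        hit = self.store.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]
        value = fetch(key)
        self.store[key] = (value, now)
        return value
```

Usage would look like `cache.get("9247431", get_vessel_location)`: repeated lookups within five minutes hit the cache instead of the API.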

Need Higher Limits?

Contact us at datadocked.com/contact to discuss enterprise rate limits.