

Rate Limiting

To ensure fair usage and API stability, Data Docked implements rate limiting on all endpoints.

Default Limit

| Limit | Scope |
| --- | --- |
| 15 requests per minute | Per endpoint, per user |

Endpoint-Specific Limits

| Endpoint | Rate Limit |
| --- | --- |
| get-vessel-location | 100 requests/min |
| my-credits | 50 requests/min |
| get-vessel-particulars | 50 requests/min |
| get-vessel-engine-data | 50 requests/min |
| get-vessel-management-data | 50 requests/min |
| vessels-by-name | 50 requests/min |
| get-vessels-by-area | 50 requests/min |
| get-vessel-info | 15 requests/min |
| get-vessel-weather | 15 requests/min |
| port-calls-by-vessel | 15 requests/min |
| vessel-mou | 15 requests/min |
| port-calls-by-port | 15 requests/min |
| get-vessels-location-bulk-search | 15 requests/min |
| get-vessel-historical-data | 15 requests/min |
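These limits can also be enforced client-side, so your application throttles itself before the API ever returns a 429. Below is a minimal sliding-window limiter sketch; the endpoint names and per-minute limits come from the table above (only a subset is shown), while the `RateLimiter` class itself is a hypothetical helper, not part of any Data Docked SDK:

```python
import time
from collections import deque

# A subset of the per-endpoint limits from the table above (requests/min).
ENDPOINT_LIMITS = {
    "get-vessel-location": 100,
    "my-credits": 50,
    "get-vessel-info": 15,
}

class RateLimiter:
    """Sliding-window throttle: allow at most `limit` calls per 60 seconds."""

    def __init__(self, limit):
        self.limit = limit
        self.calls = deque()  # monotonic timestamps of recent calls

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have left the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) >= self.limit:
            # Sleep until the oldest call in the window expires.
            time.sleep(60 - (now - self.calls[0]))
        self.calls.append(time.monotonic())

limiters = {ep: RateLimiter(limit) for ep, limit in ENDPOINT_LIMITS.items()}
```

Call `limiters["get-vessel-info"].acquire()` immediately before each request to that endpoint.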

Rate Limit Exceeded

If you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
```json
{
  "error": "Rate limit exceeded",
  "message": "You have exceeded the rate limit of 15 requests per minute"
}
```
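In Python, you can detect the 429 status and surface the server's message from the JSON body shown above before deciding whether to retry. A small sketch (the helper name is hypothetical; only the status code and body shape come from these docs):

```python
import requests

def check_rate_limited(response):
    """Return the server's rate-limit message if throttled, else None."""
    if response.status_code == 429:
        # Body shape per the docs: {"error": ..., "message": ...}
        body = response.json()
        return body.get("message", "Rate limit exceeded")
    return None
```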

Best Practices

Implement Exponential Backoff

When you receive a 429 response, wait before retrying:
```python
import time
import requests

def make_request_with_backoff(url, headers, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            wait_time = 2 ** attempt  # 1, 2, 4, 8, 16 seconds
            time.sleep(wait_time)
            continue

        return response

    raise Exception("Max retries exceeded")
```
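A common refinement is to add random jitter to the exponential delay, so that many clients throttled at the same moment don't all retry in lockstep. This variant is a sketch, not something prescribed by these docs:

```python
import random
import time
import requests

def backoff_delay(attempt):
    """Full jitter: a random delay in [0, 2**attempt] seconds."""
    return random.uniform(0, 2 ** attempt)

def make_request_with_jitter(url, headers, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            time.sleep(backoff_delay(attempt))
            continue
        return response
    raise Exception("Max retries exceeded")
```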

Use Bulk Endpoints

Instead of making many individual requests, use bulk endpoints when available:
```python
# Instead of this (10 requests):
for imo in ["9247431", "9184419", "9465411", ...]:
    get_vessel_location(imo)

# Do this (1 request):
get_vessels_location_bulk("9247431,9184419,9465411,...")
```
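If you track more vessels than fit in a single bulk call, a small helper can split a list of IMO numbers into the comma-separated strings the bulk endpoint expects. The batch size of 100 here is an assumption for illustration; check the bulk endpoint's actual per-call limit:

```python
def batch_imos(imos, batch_size=100):
    """Split a list of IMO numbers into comma-joined batches."""
    return [
        ",".join(imos[i:i + batch_size])
        for i in range(0, len(imos), batch_size)
    ]
```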

Cache Responses

AIS data updates every few minutes. Cache responses to reduce unnecessary requests:
```python
from functools import lru_cache
import time

@lru_cache(maxsize=1000)
def get_cached_location(imo, timestamp):
    # timestamp rounded to 5-minute intervals
    return get_vessel_location(imo)

# Use with 5-minute cache
current_interval = int(time.time() / 300) * 300
location = get_cached_location("9247431", current_interval)
```
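One caveat of `lru_cache` here: entries are evicted only by size, never by age, and failed lookups are cached alongside good ones. A dict-based cache with an explicit time-to-live avoids both. This is a sketch; `get_vessel_location` is the same assumed helper as above, injected here as a `fetch` parameter so the cache stays self-contained:

```python
import time

_cache = {}  # imo -> (expires_at, location)
TTL = 300  # seconds, matching the roughly 5-minute AIS update cadence

def get_location_ttl(imo, fetch=lambda imo: None):
    """Return a cached location for `imo`, refetching after TTL seconds."""
    now = time.time()
    entry = _cache.get(imo)
    if entry and entry[0] > now:
        return entry[1]  # still fresh
    location = fetch(imo)  # e.g. get_vessel_location
    _cache[imo] = (now + TTL, location)
    return location
```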

Need Higher Limits?

Contact us at datadocked.com/contact to discuss enterprise rate limits.