## Documentation Index

Fetch the complete documentation index at: https://docs.datadocked.com/llms.txt

Use this file to discover all available pages before exploring further.
# Rate Limiting
To ensure fair usage and API stability, Data Docked implements rate limiting on all endpoints.
## Default Limit

| Limit | Scope |
|---|---|
| 15 requests per minute | Per endpoint, per user |
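On the client side, the default limit can be respected with a small sliding-window throttle. This is an illustrative sketch, not an official SDK helper; the class name and the injectable clock are our own:

```python
import time
from collections import deque


class EndpointThrottle:
    """Sliding-window throttle: at most `limit` calls per `window` seconds."""

    def __init__(self, limit=15, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock  # injectable so the throttle is testable
        self.calls = deque()  # timestamps of recent calls

    def wait_time(self):
        """Seconds to wait before the next call is allowed (0.0 if allowed now)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            return 0.0
        return self.window - (now - self.calls[0])

    def record(self):
        """Mark that a call was just made."""
        self.calls.append(self.clock())
```

Before each request, sleep for `wait_time()` seconds (if nonzero) and then call `record()`.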
## Endpoint-Specific Limits

| Endpoint | Rate Limit |
|---|---|
| get-vessel-location | 100 requests/min |
| my-credits | 50 requests/min |
| get-vessel-particulars | 50 requests/min |
| get-vessel-engine-data | 50 requests/min |
| get-vessel-management-data | 50 requests/min |
| vessels-by-name | 50 requests/min |
| get-vessels-by-area | 50 requests/min |
| get-vessel-info | 15 requests/min |
| get-vessel-weather | 15 requests/min |
| port-calls-by-vessel | 15 requests/min |
| vessel-mou | 15 requests/min |
| port-calls-by-port | 15 requests/min |
| get-vessels-location-bulk-search | 15 requests/min |
| get-vessel-historical-data | 15 requests/min |
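In client code, the table above can be mirrored by a simple lookup that falls back to the 15 requests/min default. A sketch; the dict literal just restates the table and is not fetched from the API:

```python
# Per-minute request limits, restated from the endpoint table above.
# Anything not listed falls back to the 15 requests/min default.
ENDPOINT_LIMITS = {
    "get-vessel-location": 100,
    "my-credits": 50,
    "get-vessel-particulars": 50,
    "get-vessel-engine-data": 50,
    "get-vessel-management-data": 50,
    "vessels-by-name": 50,
    "get-vessels-by-area": 50,
}

DEFAULT_LIMIT = 15


def limit_for(endpoint):
    """Requests-per-minute budget for a given endpoint name."""
    return ENDPOINT_LIMITS.get(endpoint, DEFAULT_LIMIT)
```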
## Rate Limit Exceeded

If you exceed the rate limit, you'll receive a `429 Too Many Requests` response:

```json
{
  "error": "Rate limit exceeded",
  "message": "You have exceeded the rate limit of 15 requests per minute"
}
```
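One way to surface this condition distinctly in client code is to map the error body shown above to a dedicated exception. The exception class and helper below are our own sketch, not part of any official SDK:

```python
class RateLimitExceeded(Exception):
    """Raised when the API returns 429 Too Many Requests."""


def check_rate_limit(status_code, body):
    """Raise RateLimitExceeded for 429 responses, using the JSON error body.

    `body` is the parsed JSON dict; its "message" field (shown in the
    example response above) becomes the exception message.
    """
    if status_code == 429:
        raise RateLimitExceeded(
            body.get("message", body.get("error", "Rate limit exceeded"))
        )
```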
## Best Practices

### Implement Exponential Backoff
When you receive a 429 response, wait before retrying, doubling the delay on each attempt:

```python
import time

import requests


def make_request_with_backoff(url, headers, max_retries=5):
    """GET `url`, retrying with exponential backoff on 429 responses."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            wait_time = 2 ** attempt  # 1, 2, 4, 8, 16 seconds
            time.sleep(wait_time)
            continue
        return response
    raise Exception("Max retries exceeded")
```
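A common refinement, not shown above, is to add random jitter so that many clients retrying at the same moment don't synchronize their retries. A sketch of the "full jitter" variant; the function name and the 60-second cap are our own choices:

```python
import random


def backoff_delay(attempt, base=1.0, cap=60.0):
    """Full-jitter backoff: a uniform delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Replace `wait_time = 2 ** attempt` in the example above with `wait_time = backoff_delay(attempt)` to use it.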
### Use Bulk Endpoints

Instead of making many individual requests, use bulk endpoints when available:

```python
# Instead of this (10 requests):
for imo in ["9247431", "9184419", "9465411", ...]:
    get_vessel_location(imo)

# Do this (1 request):
get_vessels_location_bulk("9247431,9184419,9465411,...")
```
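If you have more IMOs than a single bulk call accepts, split them into batches. A sketch; the batch size of 100 is an assumption, so check the bulk endpoint's actual limit, and `fetch_bulk` stands in for a client function such as `get_vessels_location_bulk`:

```python
def batched(items, size):
    """Yield consecutive slices of `items`, each with at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def bulk_locations(imos, fetch_bulk, batch_size=100):
    """Fetch locations for many IMOs in as few bulk requests as possible.

    `fetch_bulk` is a callable taking a comma-separated IMO string.
    """
    results = []
    for batch in batched(imos, batch_size):
        results.append(fetch_bulk(",".join(batch)))
    return results
```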
### Cache Responses

AIS data updates every few minutes. Cache responses to reduce unnecessary requests:

```python
import time
from functools import lru_cache


@lru_cache(maxsize=1000)
def get_cached_location(imo, timestamp):
    # `timestamp` is rounded to a 5-minute interval, so repeated calls
    # within the same interval return the cached result instead of
    # hitting the API again.
    return get_vessel_location(imo)


# Use with a 5-minute cache window
current_interval = int(time.time() / 300) * 300
location = get_cached_location("9247431", current_interval)
```
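The interval arithmetic above can be factored into a tiny helper, which also makes the bucketing easy to test in isolation. The helper name is our own:

```python
def cache_bucket(ts, ttl=300):
    """Round a Unix timestamp down to the start of its `ttl`-second interval.

    All timestamps within one interval map to the same value, which is what
    makes them share a cache entry when passed to an lru_cache-decorated call.
    """
    return int(ts // ttl) * ttl
```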
## Need Higher Limits?

Contact us at datadocked.com/contact to discuss enterprise rate limits.