Managed Storage
Every Enrich.sh customer gets a dedicated, isolated R2 bucket — fully S3-compatible, with your own credentials. Your data never shares storage with other customers.
Your Own Storage Instance
When you enable warehouse storage, Enrich.sh provisions a private R2 bucket exclusively for your account. You receive full S3-compatible credentials that you can use with any tool, SDK, or warehouse.
┌─────────────────────────────────────────────────┐
│ Your Account                                    │
│ ┌─────────────────────────────────────────────┐ │
│ │ Dedicated R2 Bucket                         │ │
│ │ enrich-r2-{customer_id}                     │ │
│ │                                             │ │
│ │ ✅ Isolated — no other customer has access  │ │
│ │ ✅ S3-compatible — works with any S3 client │ │
│ │ ✅ Your credentials — full read access      │ │
│ │ ✅ $0 egress — unlimited reads at no cost   │ │
│ └─────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘
What You Get
| Property | Details |
|---|---|
| Bucket | One dedicated bucket per customer |
| Endpoint | S3-compatible HTTPS endpoint |
| Access Key ID | Scoped to your bucket only |
| Secret Access Key | Scoped to your bucket only |
| Region | auto (Cloudflare global network) |
| Encryption | At-rest encryption (Cloudflare R2 default) |
Enabling Your Storage
Dashboard
- Go to dashboard.enrich.sh → Integrations
- Click Enable Warehouse
- Your bucket and credentials are provisioned instantly
API
curl -X POST https://enrich.sh/warehouse/enable \
  -H "Authorization: Bearer sk_live_your_key"
Response:
{
  "bucket": "enrich-r2-cust-abc123",
  "endpoint": "https://abcdef123456.r2.cloudflarestorage.com",
  "access_key_id": "your_read_access_key",
  "secret_access_key": "your_read_secret_key",
  "region": "auto"
}
Retrieving Your Credentials
If your storage is already enabled, retrieve your credentials anytime:
curl https://enrich.sh/warehouse \
  -H "Authorization: Bearer sk_live_your_key"
Or go to Dashboard → Integrations to view your endpoint, bucket, access key, and secret key.
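Once you have the JSON response, the fields map directly onto an S3 client configuration. A minimal sketch of that mapping, using only the standard library — `warehouse_to_boto3_kwargs` is a hypothetical helper name, and the field names are taken from the sample response above (the actual boto3 call is left commented):

```python
import json

def warehouse_to_boto3_kwargs(response_body: str) -> dict:
    """Map the /warehouse JSON response to boto3 client keyword arguments.

    Field names follow the sample response shown above; adjust if your
    account's response differs.
    """
    creds = json.loads(response_body)
    return {
        "endpoint_url": creds["endpoint"],
        "aws_access_key_id": creds["access_key_id"],
        "aws_secret_access_key": creds["secret_access_key"],
        "region_name": creds["region"],
    }

# Using the sample response from above:
sample = '''{
  "bucket": "enrich-r2-cust-abc123",
  "endpoint": "https://abcdef123456.r2.cloudflarestorage.com",
  "access_key_id": "your_read_access_key",
  "secret_access_key": "your_read_secret_key",
  "region": "auto"
}'''
kwargs = warehouse_to_boto3_kwargs(sample)
# s3 = boto3.client("s3", **kwargs)   # requires boto3
```

The same kwargs work with any S3-compatible SDK that accepts an explicit endpoint URL.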
What You Can Do
Download Data
Parquet files are written to your bucket automatically as events flow in. You can download them using any S3-compatible tool:
AWS CLI:
aws s3 ls s3://enrich-r2-cust-abc123/events/ \
  --endpoint-url https://abcdef123456.r2.cloudflarestorage.com
aws s3 cp s3://enrich-r2-cust-abc123/events/2026/02/18/ ./local-backup/ \
  --recursive \
  --endpoint-url https://abcdef123456.r2.cloudflarestorage.com
Python (boto3):
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://abcdef123456.r2.cloudflarestorage.com',
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
    region_name='auto',
)

# List files
response = s3.list_objects_v2(
    Bucket='enrich-r2-cust-abc123',
    Prefix='events/2026/02/'
)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])

# Download a file
s3.download_file(
    'enrich-r2-cust-abc123',
    'events/2026/02/18/11/2026-02-18T11-36-37-431Z.parquet',
    'local-file.parquet'
)
Browse via Dashboard
The Enrich.sh dashboard includes a built-in Storage page where you can browse your files, preview data, and download individual Parquet files — no S3 credentials required.
Query Directly
Connect your warehouse or analytics tool to query your data in-place. See the Connect guide for setup instructions with DuckDB, ClickHouse, Snowflake, BigQuery, and Python.
File Layout
Your data is automatically organized in a time-partitioned structure:
enrich-r2-cust-abc123/
├── events/
│   └── 2026/
│       └── 02/
│           ├── 15/
│           │   ├── 09/
│           │   │   └── 2026-02-15T09-30-00-000Z.parquet
│           │   └── 10/
│           │       └── 2026-02-15T10-15-00-000Z.parquet
│           └── 16/
│               └── ...
├── clicks/
│   └── 2026/
│       └── ...
└── transactions/
    └── _dlq/    ← Dead Letter Queue (strict mode rejections)
        └── 2026/
            └── ...
See Parquet Files for details on the file format, data types, and how to read them.
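Because the layout is `{stream}/YYYY/MM/DD/HH/`, key prefixes and file timestamps can be computed mechanically. A small stdlib sketch — `hour_prefix` and `parse_file_timestamp` are hypothetical helper names, and the parser assumes file names encode a millisecond UTC timestamp exactly as in the examples above:

```python
from datetime import datetime, timezone

def hour_prefix(stream: str, ts: datetime) -> str:
    """Build the S3 key prefix for one hour of a stream,
    following the {stream}/YYYY/MM/DD/HH/ layout shown above."""
    return f"{stream}/{ts:%Y/%m/%d/%H}/"

def parse_file_timestamp(key: str) -> datetime:
    """Recover the flush timestamp from a Parquet file name such as
    events/2026/02/18/11/2026-02-18T11-36-37-431Z.parquet.
    File names use '-' where ISO 8601 uses ':' and '.', so parse explicitly."""
    name = key.rsplit("/", 1)[-1].removesuffix(".parquet")
    return datetime.strptime(name, "%Y-%m-%dT%H-%M-%S-%fZ").replace(tzinfo=timezone.utc)

prefix = hour_prefix("events", datetime(2026, 2, 15, 9, tzinfo=timezone.utc))
# prefix == "events/2026/02/15/09/"
```

Passing a prefix like this to `list_objects_v2` (as in the boto3 example above) restricts a listing to a single hour of a single stream.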
Security
| Property | Details |
|---|---|
| Bucket isolation | One bucket per customer — no shared storage |
| Credential scope | Read-only, scoped to your bucket only |
| Encryption | At-rest encryption (Cloudflare R2 default) |
| No egress fees | $0 to read your data — query as much as you want |
WARNING
Your credentials provide access to all data in your bucket across all streams. Treat them like any other secret.
Data Freshness
| Volume | Typical Flush Delay |
|---|---|
| High volume (>5k events/min) | ~2 minutes |
| Medium volume | ~5–12 minutes |
| Low volume (<50 events/min) | Up to 1 hour (max hold time) |
Data is available to query or download as soon as the Parquet file is flushed to your bucket.
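Since low-volume streams may hold a file for up to an hour, a simple way to pick up freshly flushed files is to re-list the current hour's prefix and diff against keys you have already processed. A minimal stdlib sketch — `new_keys` is a hypothetical helper, and the actual listing call (e.g. boto3's `list_objects_v2`) is omitted:

```python
def new_keys(previous: set[str], listing: list[str]) -> list[str]:
    """Return keys present in the latest listing but not yet seen,
    sorted so the time-partitioned names come back in flush order."""
    return sorted(k for k in listing if k not in previous)

seen: set[str] = set()
listing = [
    "events/2026/02/18/11/2026-02-18T11-36-37-431Z.parquet",
    "events/2026/02/18/11/2026-02-18T11-05-00-000Z.parquet",
]
fresh = new_keys(seen, listing)   # process these files...
seen.update(fresh)                # ...then mark them as seen
```

Polling at roughly the flush interval for your volume tier keeps the listing cheap, and reads are free regardless of how often you poll.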
Disabling Storage
curl -X POST https://enrich.sh/warehouse/disable \
  -H "Authorization: Bearer sk_live_your_key"
This revokes your S3 credentials and stops writing to your dedicated bucket. Data already stored is retained.
