Managed Storage

Every Enrich.sh customer gets a dedicated, isolated R2 bucket — fully S3-compatible, with your own credentials. Your data never shares storage with other customers.

Your Own Storage Instance

When you enable warehouse storage, Enrich.sh provisions a private R2 bucket exclusively for your account. You receive full S3-compatible credentials that you can use with any tool, SDK, or warehouse.

┌─────────────────────────────────────────────────────┐
│  Your Account                                       │
│  ┌───────────────────────────────────────────────┐  │
│  │  Dedicated R2 Bucket                          │  │
│  │  enrich-r2-{customer_id}                      │  │
│  │                                               │  │
│  │  ✅ Isolated — no other customer has access   │  │
│  │  ✅ S3-compatible — works with any S3 client  │  │
│  │  ✅ Your credentials — full read access        │  │
│  │  ✅ $0 egress — unlimited reads at no cost    │  │
│  └───────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────┘

What You Get

| Property          | Details                                    |
|-------------------|--------------------------------------------|
| Bucket            | One dedicated bucket per customer          |
| Endpoint          | S3-compatible HTTPS endpoint               |
| Access Key ID     | Scoped to your bucket only                 |
| Secret Access Key | Scoped to your bucket only                 |
| Region            | auto (Cloudflare global network)           |
| Encryption        | At-rest encryption (Cloudflare R2 default) |

Enabling Your Storage

Dashboard

  1. Go to dashboard.enrich.sh → Integrations
  2. Click Enable Warehouse
  3. Your bucket and credentials are provisioned instantly

API

bash
curl -X POST https://enrich.sh/warehouse/enable \
  -H "Authorization: Bearer sk_live_your_key"

Response:

json
{
  "bucket": "enrich-r2-cust-abc123",
  "endpoint": "https://abcdef123456.r2.cloudflarestorage.com",
  "access_key_id": "your_read_access_key",
  "secret_access_key": "your_read_secret_key",
  "region": "auto"
}

Retrieving Your Credentials

If your storage is already enabled, retrieve your credentials anytime:

bash
curl https://enrich.sh/warehouse \
  -H "Authorization: Bearer sk_live_your_key"

Or go to Dashboard → Integrations to view your endpoint, bucket, access key, and secret key.

What You Can Do

Download Data

Parquet files are written to your bucket automatically as events flow in. You can download them using any S3-compatible tool:

AWS CLI:

bash
aws s3 ls s3://enrich-r2-cust-abc123/events/ \
  --endpoint-url https://abcdef123456.r2.cloudflarestorage.com

aws s3 cp s3://enrich-r2-cust-abc123/events/2026/02/18/ ./local-backup/ \
  --recursive \
  --endpoint-url https://abcdef123456.r2.cloudflarestorage.com

Python (boto3):

python
import boto3

s3 = boto3.client('s3',
    endpoint_url='https://abcdef123456.r2.cloudflarestorage.com',
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
    region_name='auto'
)

# List files
response = s3.list_objects_v2(
    Bucket='enrich-r2-cust-abc123',
    Prefix='events/2026/02/'
)

for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])

# Download a file
s3.download_file(
    'enrich-r2-cust-abc123',
    'events/2026/02/18/11/2026-02-18T11-36-37-431Z.parquet',
    'local-file.parquet'
)

Browse via Dashboard

The Enrich.sh dashboard includes a built-in Storage page where you can browse your files, preview data, and download individual Parquet files — no S3 credentials required.

Query Directly

Connect your warehouse or analytics tool to query your data in-place. See the Connect guide for setup instructions with DuckDB, ClickHouse, Snowflake, BigQuery, and Python.

File Layout

Your data is automatically organized in a time-partitioned structure:

enrich-r2-cust-abc123/
├── events/
│   └── 2026/
│       └── 02/
│           ├── 15/
│           │   ├── 09/
│           │   │   └── 2026-02-15T09-30-00-000Z.parquet
│           │   └── 10/
│           │       └── 2026-02-15T10-15-00-000Z.parquet
│           └── 16/
│               └── ...
├── clicks/
│   └── 2026/
│       └── ...
└── transactions/
    └── _dlq/          ← Dead Letter Queue (strict mode rejections)
        └── 2026/
            └── ...

See Parquet Files for details on the file format, data types, and how to read them.

Security

| Property         | Details                                          |
|------------------|--------------------------------------------------|
| Bucket isolation | One bucket per customer — no shared storage      |
| Credential scope | Read-only; scoped to your bucket                 |
| Encryption       | At-rest encryption (Cloudflare R2 default)       |
| Egress fees      | $0 to read your data — query as much as you want |

WARNING

Your credentials provide access to all data in your bucket across all streams. Treat them like any other secret.

Data Freshness

| Volume                       | Typical Flush Delay          |
|------------------------------|------------------------------|
| High volume (>5k events/min) | ~2 minutes                   |
| Medium volume                | ~5–12 minutes                |
| Low volume (<50 events/min)  | Up to 1 hour (max hold time) |

Data is available to query or download as soon as the Parquet file is flushed to your bucket.

Disabling Storage

bash
curl -X POST https://enrich.sh/warehouse/disable \
  -H "Authorization: Bearer sk_live_your_key"

This revokes your S3 credentials and stops writing to your dedicated bucket. Data already stored is retained.