Events API
The events API is how your app pushes audit records into Recalled and reads them back.
Examples below show JSON payloads. For ready-to-run snippets in curl, Python, Go, PHP, Ruby, Java and Rust, see Use from any language.
Create an event
POST /v1/events
{
"action": "invoice.deleted",
"actor": {
"type": "user",
"id": "user_123",
"name": "Alice",
"email": "alice@example.com"
},
"organization": "org_abc",
"targets": [{ "type": "invoice", "id": "inv_42" }],
"metadata": { "reason": "duplicate" },
"occurred_at": "2026-04-14T09:12:45.000Z"
}
Required: action. Recommended: actor.id, organization. Everything else is optional metadata.
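As a sketch, the same request in Python using only the standard library. The base URL is taken from the receipt URLs later in this page; the Bearer-token auth header is an assumption — check your project settings for the real scheme:

```python
import json
import urllib.request

API_URL = "https://api.recalled.dev/v1/events"  # base host as seen in receipt URLs

def post_event(api_key: str, event: dict) -> dict:
    """POST one event. The Authorization: Bearer scheme is an assumption."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(event).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

event = {
    "action": "invoice.deleted",                    # the only required field
    "actor": {"type": "user", "id": "user_123"},    # recommended
    "organization": "org_abc",                      # recommended
    "targets": [{"type": "invoice", "id": "inv_42"}],
}
# post_event("your_api_key", event)
```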
The server computes a SHA-256 hash of the event chained to the previous event in the same project AND an HMAC-SHA256 signature over the canonical payload, keyed by a secret that lives outside the database. The chain detects reordering and gaps; the signature detects content rewrites. Call `GET /v1/events/verify` to audit both at once.
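The chain and signature can be reproduced client-side. A minimal sketch, assuming the canonical form is sorted-key compact JSON and digests are hex-encoded (the server's exact canonicalization may differ):

```python
import hashlib
import hmac
import json
from typing import Optional

def canonical(payload: dict) -> bytes:
    # Assumption: sorted keys, compact separators; the server's canonical form may differ.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def chain_hash(prev_hash: Optional[str], payload: dict) -> str:
    # hash = sha256(prev_hash || canonical_payload); prev_hash is None for the first event.
    h = hashlib.sha256()
    h.update((prev_hash or "").encode())
    h.update(canonical(payload))
    return h.hexdigest()

def sign(secret: bytes, payload: dict) -> str:
    # HMAC-SHA256 over the canonical payload, prefixed with the key version.
    return "v1:" + hmac.new(secret, canonical(payload), hashlib.sha256).hexdigest()

# Chain two events, then tamper with the second: the recomputed hash no longer matches.
e1 = {"action": "invoice.created"}
e2 = {"action": "invoice.deleted"}
h1 = chain_hash(None, e1)
h2 = chain_hash(h1, e2)
tampered = chain_hash(h1, {"action": "invoice.restored"})
```

Rewriting a stored event changes its recomputed hash and signature; deleting or reordering rows breaks the prev_hash links — which is exactly what the two mechanisms detect.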
Field reference
Every field you can send on POST /v1/events, with its type, required status and what it's for.
action, required
String, 1 to 255 chars. The verb-style name of what happened. This is the only mandatory field.
Recalled doesn't enforce a naming scheme but we recommend domain.subject.verb dot-separated, past tense:
- Good: user.logged_in, invoice.deleted, billing.subscription.updated, api_key.rotated
- Bad: click, error, something happened, User Login
Consistent naming pays off later: it's what powers exact match filtering (?action=user.delete), wildcard retention rules (user.*) and full-text search.
For the full naming convention, the standard verb list and a category-by-category catalogue of what to log, see What to log.
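For illustration, wildcard rules like user.* behave much like shell-style patterns; whether the server's wildcard semantics match Python's fnmatch exactly is an assumption:

```python
from fnmatch import fnmatch

def matches_rule(action: str, pattern: str) -> bool:
    # Shell-style matching: '*' matches any run of characters (including dots),
    # while '.' in the pattern is literal.
    return fnmatch(action, pattern)
```

Note that with these semantics billing.* also matches the deeper billing.subscription.updated, which is another reason consistent dot-separated prefixes pay off.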
organization, optional
String, max 128 chars. The tenant identifier in your own product, not a Recalled concept.
If your SaaS is multi-tenant, put your internal customer/tenant ID here (e.g. org_acme, tenant_42). Recalled uses it to:
- Filter events in dashboard and API (?organization=org_acme)
- Narrow an embed token so <RecalledFeed /> acts as a per-tenant drill-down inside your admin panel
- Route GDPR deletion by organization if needed
If your app is single-tenant or the event isn't tied to a specific customer (cron, system tasks), leave it empty.
actor, optional object
Who performed the action. All sub-fields are optional but actor.id is strongly recommended when the action is triggered by a human user.
| Sub-field | Type | Constraint | Purpose |
|---|---|---|---|
| actor.id | string | 1-255 chars | Stable user ID from your DB. Enables per-user filtering and GDPR deletion via DELETE /v1/actors/:id |
| actor.type | string | 1-64 chars | user, service, api_key, system, etc. Distinguishes human from automated actors |
| actor.name | string | max 255 chars | Display name, shown in dashboard and embed feed |
| actor.email | string | max 255 chars, valid email | Optional, shown in dashboard |
Leave actor out entirely for system events (cron, migration, startup tasks).
targets, optional array
List of resources the action operated on. Max 20 entries per event, and the serialized JSON of the whole array must stay under 4 KB. Each entry has:
| Sub-field | Type | Constraint | Purpose |
|---|---|---|---|
| type | string | 1-64 chars, required | Resource type (invoice, project, api_key) |
| id | string | 1-255 chars, required | Resource ID in your DB |
| name | string | max 255 chars, optional | Display name |
Example: a user moved two items to a folder:
{
"action": "folder.items.moved",
"actor": { "id": "user_1" },
"targets": [
{ "type": "item", "id": "item_a", "name": "Invoice Q1" },
{ "type": "item", "id": "item_b", "name": "Invoice Q2" },
{ "type": "folder", "id": "folder_archive", "name": "Archive" }
]
}
metadata, optional object
Free-form JSON. Put anything you want to remember about the context:
{
"metadata": {
"reason": "duplicate",
"source": "admin_panel",
"diff": { "before": "draft", "after": "paid" }
}
}
No schema is enforced, so it's flexible but not searchable by inner field. Serialized JSON must stay under 8 KB; typical events come in well under 1 KB. Beyond that, the API rejects the event with HTTP 413.
occurred_at, optional ISO 8601
When the action actually happened, as seen by your app. Format 2026-04-14T09:12:45.000Z.
If omitted, the server timestamps the event at ingest time. That's what you want for real-time logging. Only set it explicitly when replaying historical events or when there's a meaningful delay between the action and the API call.
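A small helper to produce exactly that format (UTC, millisecond precision, trailing Z) from a Python datetime:

```python
from datetime import datetime, timezone

def to_occurred_at(dt: datetime) -> str:
    """Format an aware datetime as 2026-04-14T09:12:45.000Z."""
    dt = dt.astimezone(timezone.utc)           # normalize to UTC
    millis = dt.microsecond // 1000            # truncate to milliseconds
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{millis:03d}Z"
```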
Per-event size limits
Each event's payload is capped at ingest. These caps apply to POST /v1/events only.
| Field | Limit |
|---|---|
| action | 255 chars |
| metadata | 8 KB serialized JSON |
| targets | 4 KB serialized JSON, 20 entries max |
| actor.id, actor.name, actor.email | 255 chars each |
A typical event weighs under 500 bytes total. The caps are roughly 20× the usual metadata size, generous enough to absorb a richly-tagged event without leaving the door open to a client that accidentally dumps a stack trace, a request body or an entire document into a single event.
When a payload exceeds a cap, the API returns:
HTTP/1.1 413 Payload Too Large
Content-Type: application/json
{
"error": {
"code": "EVENT_TOO_LARGE",
"message": "metadata is too large: 12453 bytes, limit is 8192",
"details": {
"field": "metadata",
"size": 12453,
"limit": 8192
}
}
}
If you keep hitting this in legitimate cases, you probably want to split the data: log a slim event referencing an external resource (S3 key, blob storage URL) instead of inlining the payload itself.
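One way to apply that advice: check the serialized size before sending and swap oversized metadata for a reference to external storage. The 8 KB figure is the cap documented above; the store callable and the reference shape here are hypothetical:

```python
import json

METADATA_LIMIT = 8 * 1024  # 8 KB serialized cap on metadata

def slim_metadata(metadata: dict, store) -> dict:
    """Return metadata unchanged if it fits, else persist it via store() and keep a reference."""
    raw = json.dumps(metadata).encode()
    if len(raw) <= METADATA_LIMIT:
        return metadata
    return {
        "truncated": True,
        "blob_ref": store(raw),        # e.g. an S3 key or blob storage URL
        "original_bytes": len(raw),
    }
```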
Fields the server fills in
You never send these; Recalled adds them on ingest:
| Field | Meaning |
|---|---|
| id | UUID assigned at ingest |
| project_id | Inferred from the API key |
| ip_address | IP of the ingest request |
| user_agent | User-Agent header of the ingest request |
| hash | SHA-256 of prev_hash concatenated with the canonical event payload. Chain evidence |
| prev_hash | hash of the previous event in this project; null for the very first one |
| signature | HMAC-SHA256 of the canonical payload, prefixed with the key version (e.g. v1:...). The server-side secret is never stored in the database |
| anonymized_at | ISO timestamp set when PII was scrubbed via GDPR erasure; null otherwise |
List events
GET /v1/events?limit=50&cursor=<iso>
Query params:
- limit, default 50, max 200
- cursor, ISO timestamp from the previous page's nextCursor
- organization, tenant filter
- actor_id, filter on a specific actor ID
- action, exact-match filter on a single action
- actions, comma-separated list of actions to include (e.g. user.login,user.logout). Max 50 entries.
- actions_exclude, comma-separated list of actions to exclude. Max 50 entries.
- ip_address, filter on a specific IP
- date_from, date_to, ISO bounds
Returns { data: Event[], nextCursor: string | null }.
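That cursor shape lends itself to a generator. In this sketch, fetch stands in for whatever HTTP call you make; it takes a cursor and returns the decoded { data, nextCursor } body:

```python
def iter_events(fetch):
    """Yield every event across pages.

    fetch(cursor) -> {"data": [...], "nextCursor": str | None}
    """
    cursor = None
    while True:
        page = fetch(cursor)
        yield from page["data"]
        cursor = page.get("nextCursor")
        if cursor is None:   # last page reached
            return
```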
Search
GET /v1/events/search?q=<term>
Full-text search across action, actor_name, actor_email, actor_id. Cursor-paginated like list.
Query params:
- q, required, the search term (1-255 chars)
- limit, cursor, pagination like list
- organization, actor_id, actions, actions_exclude, ip_address, date_from, date_to, same filter semantics as list, applied on top of the text search
Get one
GET /v1/events/:id
Returns a single event (same shape as list items), scoped to the project of the API key.
Export
GET /v1/exports?format=csv or format=json
Streams the filtered events as a downloadable file. Same filters as list.
Verify the chain
GET /v1/events/verify
Walks every event in the project in occurred-at order and checks:
- Chain link: each prev_hash equals the previous row's hash.
- Stored hash: recompute sha256(prev_hash || canonical_payload) and compare to hash.
- HMAC signature: recompute hmac-sha256(secret, canonical_payload) and compare to signature.
Optional query params ?from=<ISO> and ?to=<ISO> limit the check to a window.
The endpoint always returns HTTP 200; the payload tells you what happened:
{
"data": {
"ok": true,
"verified": 1284,
"anonymized": 3,
"unsigned": 0,
"gaps": [
{ "at": "2026-03-01T00:00:00.000Z", "reason": "plan_retention", "purged_count": 112 }
],
"failure": null
}
}
When something fails, ok is false and failure pinpoints the offender:
{
"data": {
"ok": false,
"verified": 842,
"anonymized": 0,
"unsigned": 0,
"gaps": [],
"failure": {
"event_id": "01HX...",
"reason": "signature_mismatch",
"at": "2026-04-12T14:07:13.000Z"
}
}
}
Failure reasons:
- hash_mismatch: a row's payload no longer matches its stored hash.
- signature_mismatch: a row's payload no longer matches its HMAC signature.
- chain_broken: a row's prev_hash points nowhere, and no retention_checkpoint explains the gap.
Anonymized rows are reported as anonymized (skipped safely). Rows predating the HMAC rollout are reported as unsigned; running the backfill script clears the count.
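The verify response lends itself to a small interpreter, for example in a nightly cron that alerts on failure. A sketch that condenses the payload into one line:

```python
def summarize_verify(resp: dict) -> str:
    """Condense a GET /v1/events/verify response body into a single summary line."""
    d = resp["data"]
    if d["ok"]:
        return (f"ok: {d['verified']} verified, {d['anonymized']} anonymized, "
                f"{d['unsigned']} unsigned, {len(d['gaps'])} gap(s)")
    f = d["failure"]
    return f"FAIL {f['reason']} at event {f['event_id']} ({f['at']})"
```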
Receipts: a portable, citable proof for one event
GET /v1/events/:id/receipt
Returns a single self-contained JSON receipt for one event, with two URLs you can hand out to anyone:
{
"data": {
"type": "recalled.receipt.v1",
"event_id": "01HX...",
"action": "file.deleted",
"actor": { "type": "agent", "id": "claude-sonnet-4.6" },
"target": { "type": "file", "id": "f_42" },
"occurred_at": "2026-05-02T17:42:00.000Z",
"hash": "...",
"prev_hash": "...",
"signature": "v1:...",
"verification_url": "https://api.recalled.dev/v1/receipts/01HX...",
"view_url": "https://recalled.dev/receipts/01HX..."
}
}
The view_url is a public webpage that confirms the event exists and the chain is intact, with no API key required. The verification_url is the raw JSON version of the same check. Use this when an AI agent needs to cite an action it took, or when you want to prove to a customer that an event happened without giving them dashboard access. See the Agent audit guide for the full pattern.
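For example, after fetching a receipt you might pull out the two hand-out URLs, rejecting anything that isn't the documented receipt type:

```python
def share_links(receipt: dict) -> dict:
    """Extract the shareable URLs from a recalled.receipt.v1 payload."""
    data = receipt["data"]
    if data.get("type") != "recalled.receipt.v1":
        raise ValueError(f"unexpected receipt type: {data.get('type')!r}")
    return {"view": data["view_url"], "verify": data["verification_url"]}
```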