# Recalled, Developer documentation

This file is the full Recalled documentation concatenated into a single markdown document. Drop it into an LLM prompt to get an agent that understands the Recalled API.

Generated on 2026-05-14.

---

<!-- Getting started / Overview -->

# Recalled

**Audit logs as a Service** for B2B and B2C products.

Recalled stores "who did what, when, from where" for every action your users take, and gives you a signed, searchable, exportable log you can show to your customers, your auditors and your legal team.

## Why

- **Compliance ready**: SOC 2, ISO 27001, GDPR ask for an audit trail. Recalled gives you one out of the box with EU hosting, AES-256 at rest and a cryptographic hash chain.
- **No main-DB pollution**: audit logs grow fast and slow down production queries. Recalled stores events off-site, indexed for search, with configurable retention.
- **Embeddable UI**: an internal admin widget (React component) for your support engineers, ops team, SRE and compliance reviewers. Drop it inside the back-office your team already uses to operate the product so they can browse "who did what" without leaving their workflow.

## How it works

1. Create a project in the dashboard.
2. Generate an API key.
3. Install the [npm SDK](/docs/sdk) or hit the [REST API from any language](/docs/any-language).
4. Send events: `client.events.create({ action, actor, targets, metadata })`.
5. Read them back via the dashboard, the API, or the embeddable component.

## Ship your first event in 2 minutes

```ts
import { Recalled } from "@recalled/sdk";

const client = new Recalled({
  apiKey: process.env.RECALLED_API_KEY!,
});

await client.events.create({
  action: "invoice.deleted",
  actor: { id: "user_123", email: "alice@example.com" },
  organization: "org_abc",
  targets: [{ type: "invoice", id: "inv_42" }],
  metadata: { reason: "duplicate" },
});
```

That's it. You're logging.

## Setup with an LLM

If you are integrating Recalled with the help of Claude, Cursor, ChatGPT or any other AI coding assistant, paste the prompt below into the assistant's context first. It gives the assistant the rules up front: what to log, what to skip, how to name actions, and what to put in metadata, so it stops asking you 50 follow-up questions and ships a clean integration on the first try.

```
You are integrating Recalled (audit logs as a service) into an existing app.

# Rules
1. Log state changes, not reads. A user reading a dashboard 50 times is not an audit event. A user changing their email is.
2. Log actions with consequences, not technical noise. Health checks, cache misses, 304 responses do not belong here. They belong in APM.
3. Log what tells a story. In 6 months, someone will ask "who did this and when". The answer must come from a single event with actor, target, reason, IP, time.

# What to log

Authentication: user.signed_up, user.logged_in, user.logged_out, user.login_failed, user.password_changed, user.password_reset_requested, user.password_reset_completed, user.email_changed, user.two_factor_enabled, user.two_factor_disabled, user.session_revoked, magic_link.sent, magic_link.consumed, oauth.linked, oauth.unlinked.

Authorisation: member.invited, member.joined, member.removed, member.role_changed, team.created, team.deleted, permission.granted, permission.revoked, api_key.created, api_key.revoked, sharing.granted, sharing.revoked, ownership.transferred.

Data lifecycle on every business object (invoice, project, document, etc.): <object>.created, <object>.updated, <object>.deleted, <object>.archived, <object>.restored, <object>.published, <object>.unpublished, <object>.duplicated, <object>.moved.

Money: subscription.created, subscription.updated, subscription.canceled, subscription.plan_changed, invoice.created, invoice.paid, invoice.failed, invoice.refunded, payment.succeeded, payment.failed, refund.issued, refund.completed, coupon.applied, coupon.expired, payment_method.added, payment_method.removed, payment_method.set_default.

Admin actions (always, no exception): admin.impersonation_started, admin.impersonation_ended, admin.user_unlocked, admin.user_locked, admin.feature_toggled, admin.data_overridden, admin.support_intervention.

Exports and imports: export.started, export.completed, export.failed, import.started, import.completed, import.failed, bulk_delete.requested, bulk_delete.completed, gdpr.access_request, gdpr.erasure_request.

Integrations: integration.connected, integration.disconnected, webhook.created, webhook.updated, webhook.deleted, webhook.delivery_failed (only after retries are exhausted).

Security: security.brute_force_detected, security.suspicious_login, security.rate_limit_exceeded (only when persistent), security.csp_violation_reported, security.api_key_leaked.

System and background jobs (only when meaningful): cron.<name>.completed, cron.<name>.failed (one per run, not per item), migration.applied, migration.rolled_back, backup.created, backup.restored.

# What to skip

- GET requests, page views, dashboard reads, scroll events
- Auto-saves and draft updates if saved every few seconds
- Token refresh, health checks, CSRF verifications
- Permission checks (every API request runs them)
- Each successful webhook delivery, each batch job iteration, each cache invalidation
- Heartbeats and liveness probes
- Anything bigger than ~2 KB in metadata

# Naming convention

Format: <domain>.<subject>.<verb_past_tense>, all lowercase, dot-separated, snake_case inside a segment if needed. Past tense verbs always (.created not .create, .deleted not .delete). Be consistent: do not mix .deleted and .removed for the same domain.

# Metadata

Always include when relevant: source (web, mobile, api, admin_panel, automation, import, webhook), reason (free text if user provided one), request_id (correlation id), result (success or failure).

For updates: changed_fields array of field names, plus before/after only for small diffs.

For money: amount_cents (integer, never float), currency, provider id (stripe_payment_intent_id or equivalent).

For failures: result: "failure", reason, code.

# Antipatterns

Never put in metadata: plaintext secrets, passwords, full tokens, full credit card numbers, full document bodies, file contents, blobs, PII you do not need, stack traces, SQL queries, anything bigger than ~2 KB.

# How to add the SDK call

Use @recalled/sdk:
  import { Recalled } from "@recalled/sdk";
  const client = new Recalled({ apiKey: process.env.RECALLED_API_KEY });
  client.events.emit({ action: "...", actor: {...}, organization: "...", targets: [...], metadata: {...} });

Use emit() (resilient, non-blocking) by default. Use create() (throws on failure) only when the audit log is part of the request's success condition.

Now go through the codebase, find the spots that match the catalogue above, and add the appropriate client.events.emit calls. Skip everything that does not match.
```

For the full opinionated guide on what to log, see [What to log](/docs/what-to-log). The MCP server also exposes this as a tool: `get_setup_guide` returns the same prompt so AI agents can read it on demand.

---

<!-- Getting started / REST API -->

# REST API

Recalled ships as a plain HTTPS + JSON API. Any language with an HTTP client can send events, not just Node. The [npm SDK](/docs/sdk) is a thin wrapper over these same endpoints.

## Base URL

```
https://api.recalled.dev/v1
```

## Required headers

Every request to `/v1/*` carries:

```
Authorization: Bearer rec_live_<prefix>_<secret>
Content-Type: application/json
```

`Content-Type` is only required on requests that have a body (`POST`, `PUT`, `PATCH`). See [Authentication](/docs/authentication) for the key format and scopes.

## Response envelope

Single-resource responses are wrapped in `data`:

```json
{
  "data": { "id": "evt_01HX...", "action": "invoice.deleted" }
}
```

List responses add a cursor:

```json
{
  "data": [{ "id": "evt_01HX..." }],
  "nextCursor": "2026-04-14T09:12:45.000Z"
}
```

Errors are always:

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "action is required",
    "details": {}
  }
}
```

Full error catalog: [Error codes](/docs/errors).

## Endpoints

| Method | Path | What it does |
|---|---|---|
| `POST` | `/v1/events` | Ingest a new event |
| `GET` | `/v1/events` | List events, cursor pagination + filters |
| `GET` | `/v1/events/search` | Full-text search |
| `GET` | `/v1/events/:id` | Read one event |
| `GET` | `/v1/events/verify` | Verify hash chain and signatures |
| `GET` | `/v1/exports` | Download CSV or JSON export |
| `DELETE` | `/v1/actors/:id` | GDPR erasure for one actor |
| `POST` | `/v1/embed/token` | Mint a short-lived embed token |

Each endpoint is documented in detail in [Events API](/docs/events), [GDPR](/docs/gdpr) and [Embeddable UI](/docs/embed).

## Try it from your terminal

```bash
curl -X POST https://api.recalled.dev/v1/events \
  -H "Authorization: Bearer $RECALLED_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "user.logged_in",
    "actor": { "id": "user_123" },
    "organization": "org_acme"
  }'
```

## Pagination

List and search endpoints use **cursor pagination** keyed on `occurred_at`. The response contains `nextCursor`; pass it back on the next call until it returns `null`.

```bash
curl "https://api.recalled.dev/v1/events?limit=50&cursor=2026-04-14T09:12:45.000Z" \
  -H "Authorization: Bearer $RECALLED_API_KEY"
```
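The loop is the same in any language: pass `nextCursor` back until it comes back `null`. A minimal sketch in TypeScript, with the HTTP call abstracted behind a `fetchPage` callback (a stand-in for whatever performs the `GET` in your stack) so only the paging logic shows:

```typescript
type Page<T> = { data: T[]; nextCursor: string | null };

// Follow nextCursor until it comes back null, collecting every page.
// fetchPage stands in for whatever performs GET /v1/events?cursor=... for you.
async function listAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    cursor = page.nextCursor ?? undefined;
  } while (cursor !== undefined);
  return all;
}
```

For large projects, prefer streaming each page into your own sink rather than accumulating everything in memory.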

## Rate limits

- `POST /v1/events`: 1200 requests per minute per API key
- Other endpoints: 1500 requests per minute per IP

Every response carries IETF `RateLimit-Limit`, `RateLimit-Remaining` and `RateLimit-Reset` headers. See [Rate limits](/docs/rate-limits).
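If you would rather react to the headers than back off blindly, `RateLimit-Reset` carries delta-seconds until the window resets. A small helper, assuming a `Headers`-like bag with a `get` method:

```typescript
// Turns the RateLimit-Reset header (delta-seconds until the window resets,
// per the IETF RateLimit headers draft) into a wait duration in milliseconds.
// Falls back to one second when the header is missing or malformed.
function resetDelayMs(headers: { get(name: string): string | null }): number {
  const reset = Number(headers.get("RateLimit-Reset"));
  return Number.isFinite(reset) && reset > 0 ? reset * 1000 : 1_000;
}
```

Pair it with a check on `RateLimit-Remaining` to slow down before you hit the limit instead of after.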

## Next

- [Use from any language](/docs/any-language), runnable snippets in Python, Go, PHP, Ruby, Java, Rust and more
- [Authentication](/docs/authentication), key format, scopes, embed tokens
- [Events API](/docs/events), every field and query param documented

---

<!-- Getting started / Use from any language -->

# Use from any language

The REST API is plain HTTPS + JSON. Any language with an HTTP client can send events. Below is the same `POST /v1/events` call, written idiomatically in a handful of languages. Drop in your API key, run, done.

Every example targets:

```
POST https://api.recalled.dev/v1/events
Authorization: Bearer $RECALLED_API_KEY
Content-Type: application/json
```

With the body:

```json
{
  "action": "invoice.deleted",
  "actor": { "id": "user_123", "email": "alice@example.com" },
  "organization": "org_acme",
  "targets": [{ "type": "invoice", "id": "inv_42" }],
  "metadata": { "reason": "duplicate" }
}
```

## curl

```bash
curl -X POST https://api.recalled.dev/v1/events \
  -H "Authorization: Bearer $RECALLED_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "invoice.deleted",
    "actor": { "id": "user_123", "email": "alice@example.com" },
    "organization": "org_acme",
    "targets": [{ "type": "invoice", "id": "inv_42" }],
    "metadata": { "reason": "duplicate" }
  }'
```

## Node.js (fetch, no SDK)

If you don't want the SDK and prefer raw `fetch`:

```js
const response = await fetch("https://api.recalled.dev/v1/events", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.RECALLED_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    action: "invoice.deleted",
    actor: { id: "user_123", email: "alice@example.com" },
    organization: "org_acme",
    targets: [{ type: "invoice", id: "inv_42" }],
    metadata: { reason: "duplicate" },
  }),
});

if (!response.ok) throw new Error(await response.text());
const { data: event } = await response.json();
```

## Python (requests)

```python
import os
import requests

response = requests.post(
    "https://api.recalled.dev/v1/events",
    headers={
        "Authorization": f"Bearer {os.environ['RECALLED_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "action": "invoice.deleted",
        "actor": {"id": "user_123", "email": "alice@example.com"},
        "organization": "org_acme",
        "targets": [{"type": "invoice", "id": "inv_42"}],
        "metadata": {"reason": "duplicate"},
    },
    timeout=10,
)
response.raise_for_status()
event = response.json()["data"]
```

## Go (net/http)

```go
package main

import (
    "bytes"
    "encoding/json"
    "net/http"
    "os"
    "time"
)

func main() {
    body, _ := json.Marshal(map[string]any{
        "action":       "invoice.deleted",
        "actor":        map[string]any{"id": "user_123", "email": "alice@example.com"},
        "organization": "org_acme",
        "targets":      []map[string]any{{"type": "invoice", "id": "inv_42"}},
        "metadata":     map[string]any{"reason": "duplicate"},
    })

    req, _ := http.NewRequest("POST", "https://api.recalled.dev/v1/events", bytes.NewReader(body))
    req.Header.Set("Authorization", "Bearer "+os.Getenv("RECALLED_API_KEY"))
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // Treat non-2xx as failure, matching the other snippets.
    if resp.StatusCode >= 300 {
        panic(resp.Status)
    }
}
```

## PHP (curl)

```php
<?php
$body = json_encode([
    "action" => "invoice.deleted",
    "actor" => ["id" => "user_123", "email" => "alice@example.com"],
    "organization" => "org_acme",
    "targets" => [["type" => "invoice", "id" => "inv_42"]],
    "metadata" => ["reason" => "duplicate"],
]);

$ch = curl_init("https://api.recalled.dev/v1/events");
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => $body,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        "Authorization: Bearer " . getenv("RECALLED_API_KEY"),
        "Content-Type: application/json",
    ],
]);
$response = curl_exec($ch);
curl_close($ch);
```

## Ruby (Net::HTTP)

```ruby
require "net/http"
require "json"
require "uri"

uri = URI("https://api.recalled.dev/v1/events")
request = Net::HTTP::Post.new(uri)
request["Authorization"] = "Bearer #{ENV['RECALLED_API_KEY']}"
request["Content-Type"] = "application/json"
request.body = {
  action: "invoice.deleted",
  actor: { id: "user_123", email: "alice@example.com" },
  organization: "org_acme",
  targets: [{ type: "invoice", id: "inv_42" }],
  metadata: { reason: "duplicate" },
}.to_json

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
  http.request(request)
end
```

## Java (java.net.http)

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

String body = """
{
  "action": "invoice.deleted",
  "actor": { "id": "user_123", "email": "alice@example.com" },
  "organization": "org_acme",
  "targets": [{ "type": "invoice", "id": "inv_42" }],
  "metadata": { "reason": "duplicate" }
}
""";

HttpRequest request = HttpRequest.newBuilder()
    .uri(URI.create("https://api.recalled.dev/v1/events"))
    .header("Authorization", "Bearer " + System.getenv("RECALLED_API_KEY"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(body))
    .build();

HttpResponse<String> response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString());
```

## Rust (reqwest)

```rust
use serde_json::json;

let client = reqwest::Client::new();
let response = client
    .post("https://api.recalled.dev/v1/events")
    .header(
        "Authorization",
        format!("Bearer {}", std::env::var("RECALLED_API_KEY")?),
    )
    .header("Content-Type", "application/json")
    .json(&json!({
        "action": "invoice.deleted",
        "actor": { "id": "user_123", "email": "alice@example.com" },
        "organization": "org_acme",
        "targets": [{ "type": "invoice", "id": "inv_42" }],
        "metadata": { "reason": "duplicate" }
    }))
    .send()
    .await?
    .error_for_status()?;
```

## Error handling

Every non-2xx response has the same shape:

```json
{
  "error": {
    "code": "PLAN_LIMIT_REACHED",
    "message": "Monthly event quota exceeded",
    "details": { "limit": 5000 }
  }
}
```

- `400`: `VALIDATION_ERROR`, body failed schema (fields listed in `details`)
- `401`: `UNAUTHORIZED`, `INVALID_API_KEY` or `REVOKED_API_KEY`, key missing, wrong or revoked
- `403`: `FORBIDDEN`, key is valid but the feature is plan-gated
- `429`: `RATE_LIMITED` (check `RateLimit-Reset`) or `PLAN_LIMIT_REACHED` (monthly quota)
- `5xx`: retry with backoff

Full list in [Error codes](/docs/errors).

## Retry strategy

Retry on `408`, `429`, `502`, `503`, `504` with exponential backoff (1s to 10min). Do **not** retry on `400`, `401`, `403`, `404`, those are permanent.

The [npm SDK](/docs/sdk) implements this for you, including a 24h in-memory queue for `emit()`. If you're on Node, use the SDK. Otherwise, wrap your HTTP client in a retry loop.
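That policy fits in a small wrapper. A sketch in TypeScript; `HttpError` is a hypothetical error type carrying the status code, and the delays follow the 1s-to-10min window quoted above:

```typescript
const RETRYABLE = new Set([408, 429, 502, 503, 504]);

class HttpError extends Error {
  constructor(public status: number) {
    super(`HTTP ${status}`);
  }
}

// Retries op on transient statuses with exponential backoff:
// 1s, 2s, 4s, ... capped at 10 minutes. Permanent errors rethrow immediately.
async function withRetry<T>(
  op: () => Promise<T>,
  { maxAttempts = 6, baseMs = 1_000, capMs = 600_000 } = {},
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      const transient = err instanceof HttpError && RETRYABLE.has(err.status);
      if (!transient || attempt >= maxAttempts) throw err;
      const delayMs = Math.min(baseMs * 2 ** (attempt - 1), capMs);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Add jitter to the delay in production so a fleet of clients does not retry in lockstep.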

---

<!-- Core concepts / Authentication -->

# Authentication

All `/v1/*` requests require a **Bearer API key** in the `Authorization` header.

```
Authorization: Bearer rec_live_aBcD1234EfGh5678...
```

## Key format

- `rec_live_<random>`, production key, sends real events.
- `rec_test_<random>`, test key, same behavior, easier to distinguish.

Generate keys from the dashboard (one per environment). Keys are hashed (SHA-256) on the server, **the full secret is shown only once at creation**. If you lose it, revoke and generate a new one.

## Scopes

Each key can be scoped to a subset of actions:

- `events:write`, ingest new events
- `events:read`, list, search, read
- `exports:read`, download CSV/JSON exports
- `actors:delete`, GDPR right-to-erasure
- `embed:write`, mint short-lived embed tokens

## Embed tokens

For the internal `<RecalledFeed />` admin widget, you mint a short-lived token server-side and pass it to the browser. The browser talks to `/v1/embed/events` with that token instead of the API key, so your API key never leaves the server. By default the token grants read access to the whole project (admin view); pass `organization` if you want to narrow a given widget instance to a single tenant.

```ts
const { token } = await client.embed.createToken({
  organization: "org_abc",
  ttlSeconds: 900,
});
// return `token` to the browser
```

---

<!-- Core concepts / Events API -->

# Events API

The events API is how your app pushes audit records into Recalled and reads them back.

Examples below show JSON payloads. For ready-to-run snippets in curl, Python, Go, PHP, Ruby, Java and Rust, see [Use from any language](/docs/any-language).

## Create an event

`POST /v1/events`

```json
{
  "action": "invoice.deleted",
  "actor": {
    "type": "user",
    "id": "user_123",
    "name": "Alice",
    "email": "alice@example.com"
  },
  "organization": "org_abc",
  "targets": [{ "type": "invoice", "id": "inv_42" }],
  "metadata": { "reason": "duplicate" },
  "occurred_at": "2026-04-14T09:12:45.000Z"
}
```

**Required**: `action`. **Recommended**: `actor.id`, `organization`. Everything else is optional.

The server computes a SHA-256 hash of the event chained to the previous event in the same project AND an HMAC-SHA256 signature over the canonical payload, keyed by a secret that lives outside the database. The chain detects reordering and gaps; the signature detects content rewrites. Call [`GET /v1/events/verify`](#verify-the-chain) to audit both at once.

## Field reference

Every field you can send on `POST /v1/events`, with its type, required status and what it's for.

### `action`, required

String, 1 to 255 chars. The verb-style name of what happened. **This is the only mandatory field.**

Recalled doesn't enforce a naming scheme but we recommend `domain.subject.verb` dot-separated, past tense:

- Good: `user.logged_in`, `invoice.deleted`, `billing.subscription.updated`, `api_key.rotated`
- Bad: `click`, `error`, `something happened`, `User Login`

Consistent naming pays off later: it's what powers exact-match filtering (`?action=user.deleted`), wildcard retention rules (`user.*`) and full-text search.

For the full naming convention, the standard verb list and a category-by-category catalogue of what to log, see [What to log](/docs/what-to-log).
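If you want to catch convention violations at the call site, a small guard covers the usual mistakes (uppercase, spaces, a missing domain segment). A sketch; the regex encodes the recommendation above, not anything the API enforces:

```typescript
// Lowercase dot-separated segments, snake_case inside a segment,
// at least two segments (domain.verb), 255-char cap per the API limit.
const ACTION_RE = /^[a-z0-9]+(?:_[a-z0-9]+)*(?:\.[a-z0-9]+(?:_[a-z0-9]+)*)+$/;

function isWellFormedAction(action: string): boolean {
  return action.length >= 1 && action.length <= 255 && ACTION_RE.test(action);
}
```

Run it in a unit test over your catalogue of action names rather than at every ingest call.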

### `organization`, optional

String, max 128 chars. The **tenant identifier in your own product**, not a Recalled concept.

If your SaaS is multi-tenant, put your internal customer/tenant ID here (e.g. `org_acme`, `tenant_42`). Recalled uses it to:

- Filter events in dashboard and API (`?organization=org_acme`)
- Narrow an embed token so `<RecalledFeed />` acts as a per-tenant drill-down inside your admin panel
- Route GDPR deletion by organization if needed

If your app is single-tenant or the event isn't tied to a specific customer (cron, system tasks), leave it empty.

### `actor`, optional object

Who performed the action. All sub-fields are optional but `actor.id` is strongly recommended when the action is triggered by a human user.

| Sub-field | Type | Constraint | Purpose |
|---|---|---|---|
| `actor.id` | string | 1-255 chars | Stable user ID from your DB. Enables per-user filtering and GDPR deletion via `DELETE /v1/actors/:id` |
| `actor.type` | string | 1-64 chars | `user`, `service`, `api_key`, `system`, etc. Distinguishes human vs automated |
| `actor.name` | string | max 255 chars | Display name, shown in dashboard and embed feed |
| `actor.email` | string | max 255 chars, valid email | Optional, shown in dashboard |

Leave `actor` out entirely for system events (cron, migration, startup tasks).

### `targets`, optional array

List of resources the action operated on. Max 20 entries per event, and the serialized JSON of the whole array must stay **under 4 KB**. Each entry has:

| Sub-field | Type | Constraint | Purpose |
|---|---|---|---|
| `type` | string | 1-64 chars, required | Resource type (`invoice`, `project`, `api_key`) |
| `id` | string | 1-255 chars, required | Resource ID in your DB |
| `name` | string | max 255 chars, optional | Display name |

Example, a user moved 2 items to a folder:

```json
{
  "action": "folder.items.moved",
  "actor": { "id": "user_1" },
  "targets": [
    { "type": "item", "id": "item_a", "name": "Invoice Q1" },
    { "type": "item", "id": "item_b", "name": "Invoice Q2" },
    { "type": "folder", "id": "folder_archive", "name": "Archive" }
  ]
}
```

### `metadata`, optional object

Free-form JSON. Put anything you want to remember about the context:

```json
{
  "metadata": {
    "reason": "duplicate",
    "source": "admin_panel",
    "diff": { "before": "draft", "after": "paid" }
  }
}
```

No schema is enforced, so it's flexible but not searchable by inner field. Serialized JSON must stay **under 8 KB**, typical events come in well under 1 KB. Beyond that, the API rejects the event with HTTP 413.

### `occurred_at`, optional ISO 8601

When the action actually happened, as seen by your app. Format `2026-04-14T09:12:45.000Z`.

**If omitted, the server timestamps the event at ingest time.** That's what you want for real-time logging. Only set it explicitly when replaying historical events or when there's a meaningful delay between the action and the API call.

## Per-event size limits

Each event's payload is capped at ingest. These caps apply to `POST /v1/events` only.

| Field | Limit |
|---|---|
| `action` | 255 chars |
| `metadata` | 8 KB serialized JSON |
| `targets` | 4 KB serialized JSON, 20 entries max |
| `actor.id`, `actor.name`, `actor.email` | 255 chars each |

A typical event weighs **under 500 bytes total**. The caps are roughly 20× the usual `metadata` size, generous enough to absorb a richly-tagged event without leaving the door open to a client that accidentally dumps a stack trace, a request body or an entire document into a single event.

When a payload exceeds a cap, the API returns:

```http
HTTP/1.1 413 Payload Too Large
Content-Type: application/json

{
  "error": {
    "code": "EVENT_TOO_LARGE",
    "message": "metadata is too large: 12453 bytes, limit is 8192",
    "details": {
      "field": "metadata",
      "size": 12453,
      "limit": 8192
    }
  }
}
```

If you keep hitting this in legitimate cases, you probably want to split the data: log a slim event referencing an external resource (S3 key, blob storage URL) instead of inlining the payload itself.
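A guard at the call site makes that split automatic. A sketch, assuming some blob store of your own; `store` is a placeholder for that layer:

```typescript
const METADATA_LIMIT_BYTES = 8 * 1024;

// Returns metadata unchanged when it fits the 8 KB cap, otherwise swaps it
// for a slim stub that references the externally stored payload.
function fitMetadata(
  metadata: Record<string, unknown>,
  store: (blob: string) => string, // your blob storage; returns a key or URL
): Record<string, unknown> {
  const blob = JSON.stringify(metadata);
  const size = new TextEncoder().encode(blob).length;
  if (size <= METADATA_LIMIT_BYTES) return metadata;
  return { payload_ref: store(blob), payload_bytes: size };
}
```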

## Fields the server fills in

You never send these, Recalled adds them on ingest:

| Field | Meaning |
|---|---|
| `id` | UUID assigned at ingest |
| `project_id` | Inferred from the API key |
| `ip_address` | IP of the ingest request |
| `user_agent` | User-Agent header of the ingest request |
| `hash` | SHA-256 of `prev_hash` concatenated with the canonical event payload. Chain evidence |
| `prev_hash` | `hash` of the previous event in this project, `null` for the very first one |
| `signature` | HMAC-SHA256 of the canonical payload, prefixed with the key version (e.g. `v1:...`). Server-side secret never stored in the database |
| `anonymized_at` | ISO timestamp set when PII was scrubbed via GDPR erasure. `null` otherwise |

## List events

`GET /v1/events?limit=50&cursor=<iso>`

Query params:
- `limit` (default 50, max 200)
- `cursor`, ISO timestamp from the previous page's `nextCursor`
- `organization`, tenant filter
- `actor_id`, filter on a specific actor id
- `action`, exact match filter on a single action
- `actions`, comma-separated list of actions to **include** (e.g. `user.login,user.logout`). Max 50 entries.
- `actions_exclude`, comma-separated list of actions to **exclude**. Max 50 entries.
- `ip_address`, filter on a specific IP
- `date_from`, `date_to`, ISO bounds

Returns `{ data: Event[], nextCursor: string | null }`.

## Search

`GET /v1/events/search?q=<term>`

Full-text search across `action`, `actor_name`, `actor_email`, `actor_id`. Cursor-paginated like list.

Query params:
- `q`, required, the search term (1-255 chars)
- `limit`, `cursor`, pagination like list
- `organization`, `actor_id`, `actions`, `actions_exclude`, `ip_address`, `date_from`, `date_to`, same filter semantics as list, applied on top of the text search

## Get one

`GET /v1/events/:id`

Returns a single event (same shape as list items), scoped to the project of the API key.

## Export

`GET /v1/exports?format=csv` or `format=json`

Streams the filtered events as a downloadable file. Same filters as list.

## Verify the chain

`GET /v1/events/verify`

Walks every event in the project in occurred-at order and checks:

- **Chain link**: each `prev_hash` equals the previous row's `hash`.
- **Stored hash**: recompute `sha256(prev_hash || canonical_payload)` and compare to `hash`.
- **HMAC signature**: recompute `hmac-sha256(secret, canonical_payload)` and compare to `signature`.
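The first two checks can be reproduced client-side over exported rows. A sketch using Node's crypto; `canonical_payload` stands in for the exact byte string the server canonicalizes, and the signature check is omitted because it needs the server-side HMAC secret:

```typescript
import { createHash } from "node:crypto";

type Row = { hash: string; prev_hash: string | null; canonical_payload: string };

// Walks rows in order, checking the chain link and the stored hash.
function verifyRows(rows: Row[]): { ok: boolean; failedAt?: number } {
  let prev: string | null = null;
  for (let i = 0; i < rows.length; i++) {
    const row = rows[i];
    if (row.prev_hash !== prev) return { ok: false, failedAt: i }; // chain link
    const recomputed = createHash("sha256")
      .update((row.prev_hash ?? "") + row.canonical_payload)
      .digest("hex");
    if (recomputed !== row.hash) return { ok: false, failedAt: i }; // stored hash
    prev = row.hash;
  }
  return { ok: true };
}
```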

Optional query params `?from=<ISO>` and `?to=<ISO>` limit the check to a window.

The endpoint always returns HTTP 200; the payload tells you what happened:

```json
{
  "data": {
    "ok": true,
    "verified": 1284,
    "anonymized": 3,
    "unsigned": 0,
    "gaps": [
      { "at": "2026-03-01T00:00:00.000Z", "reason": "plan_retention", "purged_count": 112 }
    ],
    "failure": null
  }
}
```

When something fails, `ok` is `false` and `failure` pinpoints the offender:

```json
{
  "data": {
    "ok": false,
    "verified": 842,
    "anonymized": 0,
    "unsigned": 0,
    "gaps": [],
    "failure": {
      "event_id": "01HX...",
      "reason": "signature_mismatch",
      "at": "2026-04-12T14:07:13.000Z"
    }
  }
}
```

Failure reasons:

- `hash_mismatch`: a row's payload no longer matches its stored `hash`.
- `signature_mismatch`: a row's payload no longer matches its HMAC `signature`.
- `chain_broken`: a row's `prev_hash` points nowhere, and no `retention_checkpoint` explains the gap.

Anonymized rows are reported as `anonymized` (skipped safely). Rows predating the HMAC rollout are reported as `unsigned`; running the backfill script clears the count.

## Receipts: a portable, citable proof for one event

`GET /v1/events/:id/receipt`

Returns a single self-contained JSON receipt for one event, with two URLs you can hand out to anyone:

```json
{
  "data": {
    "type": "recalled.receipt.v1",
    "event_id": "01HX...",
    "action": "file.deleted",
    "actor": { "type": "agent", "id": "claude-sonnet-4.6" },
    "target": { "type": "file", "id": "f_42" },
    "occurred_at": "2026-05-02T17:42:00.000Z",
    "hash": "...",
    "prev_hash": "...",
    "signature": "v1:...",
    "verification_url": "https://api.recalled.dev/v1/receipts/01HX...",
    "view_url": "https://recalled.dev/receipts/01HX..."
  }
}
```

The `view_url` is a public webpage that confirms the event exists and the chain is intact, with no API key required. The `verification_url` is the raw JSON version of the same check. Use this when an AI agent needs to cite an action it took, or when you want to prove to a customer that an event happened without giving them dashboard access. See the [Agent audit](/docs/agent-audit) guide for the full pattern.

---

<!-- Core concepts / Agent audit -->

# Agent audit

Recalled is built for **human and AI agent actions, side by side**. A growing share of the actions in a typical SaaS are now taken by **AI agents** (Claude, GPT, custom agents wired with tool calls), and the same audit log records them with the same hash chain, the same signatures and the same dashboard. Two small conventions make it the system of record for what your agents did, when, and on whose behalf.

## The pattern

The agent itself does not call Recalled. **Your backend** orchestrates the agent, runs the tool calls, and logs the resulting actions. From Recalled's point of view, an agent is just an actor with a different `actor.type`.

Two conventions:

1. `actor.type` is set to `"agent"` (or `"ai_agent"`, pick one and stick with it).
2. `metadata.triggered_by_user` carries the human user id who started the conversation that led to this action.

```ts
client.events.emit({
  action: "file.deleted",
  actor: {
    type: "agent",
    id: "claude-sonnet-4.6",
    name: "Support Triage Agent",
  },
  organization: "acme_corp",
  targets: [{ type: "file", id: "f_42", name: "old-report.pdf" }],
  metadata: {
    triggered_by_user: "user_123",
    conversation_id: "conv_xyz",
    tool_call_id: "call_abc",
    reasoning: "user asked to clean up old uploads",
    confidence: "high",
    model: "claude-sonnet-4.6",
    tokens_used: 1240,
    result: "success",
  },
});
```

## Three events per tool call

For full traceability, log three events around each agent tool call:

```ts
// 1. The agent decided to call a tool
client.events.emit({
  action: "agent.tool_called",
  actor: { type: "agent", id: "claude-sonnet-4.6", name: "Support Agent" },
  targets: [{ type: "tool", id: "delete_file" }],
  metadata: {
    triggered_by_user: "user_123",
    conversation_id: "conv_xyz",
    tool_args: { file_id: "f_42" },
  },
});

// 2. The action itself (same as a human action)
client.events.emit({
  action: "file.deleted",
  actor: { type: "agent", id: "claude-sonnet-4.6", name: "Support Agent" },
  targets: [{ type: "file", id: "f_42" }],
  metadata: { triggered_by_user: "user_123", on_behalf_of: "user_123" },
});

// 3. The tool returned to the agent
client.events.emit({
  action: "agent.tool_returned",
  actor: { type: "agent", id: "claude-sonnet-4.6", name: "Support Agent" },
  targets: [{ type: "tool_call", id: "call_abc" }],
  metadata: {
    triggered_by_user: "user_123",
    result: "success",
    duration_ms: 142,
  },
});
```

If the cardinality is a problem at scale, drop the `tool_called` and `tool_returned` events and keep only the action itself with rich metadata. The action event alone is enough for accountability.

## Filtering: humans vs agents

Once `actor.type=agent` is set consistently, you can filter the dashboard or the API to see only agent activity, only human activity, or everything for one user:

- All agent activity this month: filter on `actor.type = agent` (a dedicated `actor_type` query param is not exposed yet; today the API filters by `actor_id`, so post-filter on `actor.type` client-side if you need it)
- All actions a specific human triggered, including those carried out by agents on their behalf: filter by `metadata.triggered_by_user` (planned; today, fetch events with `list_events` and post-filter client-side)
- Audit trail for a specific agent: `?actor_id=claude-sonnet-4.6`

## Receipts: replayable evidence

When an agent says "I deleted that file for you", you want it to back the claim with proof. Recalled issues a **receipt** for any event:

```bash
curl https://api.recalled.dev/v1/events/$EVENT_ID/receipt \
  -H "Authorization: Bearer rec_live_..."
```

You get a JSON object the agent can paste back into its reply, with a `view_url` (public webpage) and a `verification_url` (no-auth JSON endpoint). The recipient can cryptographically verify that the event happened, when and in what order, and that it has not been tampered with, **without an API key**.
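For illustration, a receipt response might look like this (only `view_url` and `verification_url` are named above; the other fields and the exact URL shapes are illustrative):

```json
{
  "id": "evt_abc",
  "action": "file.deleted",
  "view_url": "https://recalled.dev/receipts/evt_abc",
  "verification_url": "https://api.recalled.dev/v1/receipts/evt_abc/verify"
}
```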

Inside Claude Desktop, Cursor or any MCP client connected to Recalled, the `get_event_receipt` tool returns the same object so the agent can cite it without going through HTTP itself.

## What auditors actually want to see

If a customer or auditor questions an agent action three months later, here is what makes it bulletproof:

1. The action itself is in the audit log with `actor.type=agent`.
2. The receipt verifies green: hash chain intact, HMAC signature valid.
3. `metadata.triggered_by_user` ties the action to the human who initiated it.
4. `metadata.reasoning`, `metadata.confidence`, `metadata.model` give context.
5. `metadata.conversation_id` lets you pull the whole conversation if needed.

## Public verification page

Every receipt has a public URL like `https://recalled.dev/receipts/<event-id>`. The page strips actor PII and metadata (the public viewer never sees who or what), and shows only:

- The action name
- The actor type (agent, user, service, etc.)
- The timestamp
- The hashes and signature
- A green or red banner indicating whether the cryptographic proof verifies

Hand this URL out instead of dashboard access. Anyone can confirm an event existed without accessing your project.

## Privacy

The public receipt page never exposes:

- Actor name, email, id, IP, user agent
- Event metadata
- Other events in the chain

It only proves that the specific event id, with that exact action verb and timestamp, was recorded by Recalled and has not been tampered with.

---

<!-- Core concepts / What to log -->

# What to log

The hardest part of integrating Recalled is not the SDK call, it is deciding **what is worth logging**. Log too little and your audit trail is useless. Log too much and you blow your quota, slow your app and drown the signal in noise.

This page is the opinionated guide. Follow these rules and you will have a clean, useful, compliant audit log without thinking about it.

## The 3 rules

**1. Log state changes, not reads.**
A user reading their dashboard 50 times a day is not an audit event. A user changing their email address once is. If the action does not mutate something in your system or in the user's account, it does not belong in Recalled.

**2. Log actions with consequences, not technical noise.**
A failed health check, a cache miss, a 304 response, a CDN purge: not audit events. They belong in your APM tool. Recalled is for actions a human or an auditor would care about in 6 months.

**3. Log what tells a story.**
Imagine someone in 6 months asking "who deleted this invoice and why". The answer must come from a single event: the actor, the target, the reason if any, the IP, the time. If your log entry does not let you reconstruct the story, it is incomplete.

## Decision flowchart

When you are about to add a `client.events.create()` call somewhere, ask:

1. **Does this change a piece of state in our system?** No → do not log.
2. **Would the legal, support or security team ever ask "who did this and when"?** No → do not log.
3. **Is the cardinality reasonable?** (fewer than 10 events per active user per session) Yes → log it. No → consider sampling or aggregating client-side.

If you answer yes to 1 and 2 and the cardinality is reasonable, log it. Otherwise skip it.
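The flowchart is mechanical enough to encode as a gate in your logging helper. A sketch (the function, its field names and the threshold are illustrative, not part of the SDK):

```typescript
// Hypothetical gate implementing the three questions above.
function shouldLog(check: {
  mutatesState: boolean;        // question 1: does this change state?
  teamWouldAsk: boolean;        // question 2: would legal/support/security care?
  eventsPerUserSession: number; // question 3: cardinality per active user session
}): "log" | "skip" | "sample_or_aggregate" {
  if (!check.mutatesState) return "skip";
  if (!check.teamWouldAsk) return "skip";
  if (check.eventsPerUserSession >= 10) return "sample_or_aggregate";
  return "log";
}
```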

## Catalogue by category

The standard list of what most B2B and B2C SaaS should log. Pick the categories that apply to your product, copy the action names, adapt to your domain. Each tab carries the events to log, the ones to skip, and a metadata hint.

### Authentication

Every action that changes who is logged in or how they authenticate.

**Log this:**
- user.signed_up (new account creation)
- user.logged_in (successful auth, every method)
- user.logged_out (explicit logout, including session revocation)
- user.login_failed (bad password, expired token, blocked IP). Critical for security.
- user.password_changed
- user.password_reset_requested and user.password_reset_completed (two distinct events)
- user.email_changed
- user.two_factor_enabled and user.two_factor_disabled
- user.session_revoked (admin force-logout, log out from all devices)
- magic_link.sent and magic_link.consumed
- oauth.linked and oauth.unlinked (Google, GitHub, etc.)

**Skip this:**
- Page views or user navigated to /dashboard
- Token refresh (every 5 minutes is noise, log only revocations)
- Auth health checks
- Successful CSRF token verifications

**Useful metadata:** auth_provider (email, google, magic_link), ip, user_agent, mfa_used (true/false), success (for failed attempts include the reason: bad_password, account_locked, mfa_required).
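For example, a failed login using those metadata fields might be shaped like this before being passed to `client.events.emit()` (values illustrative):

```typescript
// Sketch of a user.login_failed event payload using the metadata fields above.
const loginFailed = {
  action: "user.login_failed",
  actor: { id: "user_123", email: "alice@example.com" },
  metadata: {
    auth_provider: "email",
    ip: "203.0.113.7",
    mfa_used: false,
    result: "failure",
    reason: "bad_password",
  },
};
// client.events.emit(loginFailed);
```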

### Authorisation

Anything that changes who can do what.

**Log this:**
- member.invited, member.joined, member.removed
- member.role_changed (always include before/after in metadata)
- team.created, team.deleted
- permission.granted, permission.revoked
- api_key.created, api_key.revoked (tracking `last_used_at` on the key itself is fine; do not log every single use)
- sharing.granted, sharing.revoked (when a user shares a resource with another user or externally)
- ownership.transferred (especially for B2B)

**Skip this:**
- Permission checks (every API request runs them, that is APM territory)
- Read-only access grants if they auto-expire fast
- Internal RBAC computation steps

**Useful metadata:** role_before, role_after, scope, granted_by, expires_at.
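A `member.role_changed` event with the before/after pattern might look like this (values illustrative):

```typescript
// Sketch of a member.role_changed event payload with role_before/role_after.
const roleChanged = {
  action: "member.role_changed",
  actor: { id: "user_admin" },
  organization: "org_acme",
  targets: [{ type: "member", id: "user_456" }],
  metadata: {
    role_before: "viewer",
    role_after: "admin",
    granted_by: "user_admin",
  },
};
// client.events.emit(roleChanged);
```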

### Data lifecycle

The heart of audit logging. Every business object your users create, update, delete, share or move.

**Log this:**
- <object>.created (invoice.created, project.created, document.created, etc.)
- <object>.updated (with a changed_fields array in metadata, not the full diff if it is large)
- <object>.deleted (always, every soft-delete and hard-delete)
- <object>.archived and <object>.restored
- <object>.published, <object>.unpublished
- <object>.duplicated (by whom, source id in targets)
- <object>.moved (folder change, ownership change, etc.)

**Skip this:**
- Auto-saves and draft updates if your app saves every 10 seconds. Log the explicit save instead.
- Reads, opens, views (those go to product analytics, not audit)
- Internal denormalisation jobs that touch the same row

**Useful metadata:** changed_fields (array of field names), reason (if user provided one), source (manual, api, import, automation).

### Money

Every monetary state change. Be exhaustive here, accountants and your future self will thank you.

**Log this:**
- subscription.created, subscription.updated, subscription.canceled
- subscription.plan_changed (with from and to plan slugs in metadata)
- invoice.created, invoice.paid, invoice.failed, invoice.refunded
- payment.succeeded, payment.failed (include declined reason)
- refund.issued, refund.completed
- coupon.applied, coupon.expired
- payment_method.added, payment_method.removed, payment_method.set_default

**Skip this:**
- Stripe webhook ping itself (log the event the webhook represents instead)
- Currency conversion checks
- Pre-flight payment validations that did not result in a charge attempt

**Useful metadata:** amount_cents, currency, stripe_payment_intent_id, provider (stripe, paddle, etc.), reason for refunds.

### Admin actions

Anything an internal team member does on behalf of a user. Always log these, no exception. This is what auditors look at first.

**Log this:**
- admin.impersonation_started, admin.impersonation_ended
- admin.user_unlocked, admin.user_locked
- admin.feature_toggled (per user or globally)
- admin.data_overridden (when support manually fixes a value)
- admin.support_intervention (catch-all for one-off corrections)
- Anything done via your internal back-office tools that mutates customer data

**Useful metadata:** admin_id, reason (always require one in your back-office UI), ticket_id if you link to your helpdesk.
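For instance, an impersonation event carrying that metadata might be shaped like this (values illustrative; the `reason` should come from a required field in your back-office UI):

```typescript
// Sketch of an admin.impersonation_started event payload.
const impersonation = {
  action: "admin.impersonation_started",
  actor: { id: "admin_7", email: "support@yourco.example" },
  targets: [{ type: "user", id: "user_123" }],
  metadata: {
    admin_id: "admin_7",
    reason: "debugging billing issue",
    ticket_id: "ZD-4521",
  },
};
// client.events.emit(impersonation);
```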

### Exports & imports

Data leaving or entering the system in bulk.

**Log this:**
- export.started, export.completed, export.failed
- import.started, import.completed, import.failed
- bulk_delete.requested, bulk_delete.completed
- gdpr.access_request and gdpr.erasure_request

**Useful metadata:** format (csv, json), row_count, destination for exports, source for imports, size_bytes.

### Integrations

Events that involve third parties.

**Log this:**
- integration.connected, integration.disconnected
- webhook.created, webhook.updated, webhook.deleted
- webhook.delivery_failed (after the retry budget is exhausted, not every retry)
- oauth.token_refreshed only if it represents a meaningful event (e.g. forced re-auth), otherwise skip

**Skip this:**
- Every successful webhook delivery
- Healthchecks against integrations

### Security

Things your security team needs visibility on.

**Log this:**
- security.brute_force_detected (after the threshold is hit)
- security.suspicious_login (new country, new device, etc.)
- security.rate_limit_exceeded (only if it persisted, not every hit)
- security.csp_violation_reported
- security.api_key_leaked (if you have a detection pipeline)

**Skip this:**
- Every rate-limited request individually
- Every CSP violation (sample or aggregate hourly)

### System & jobs

Useful for operational debugging, but be careful with cardinality.

**Log this:**
- cron.<name>.completed and cron.<name>.failed (one per run, not per item processed)
- migration.applied and migration.rolled_back
- backup.created, backup.restored

**Skip this:**
- Each iteration of a loop, each row processed by a batch job
- Heartbeats, liveness probes
- Internal queue moves


## Naming conventions

Follow this schema for everything: `<domain>.<subject>.<verb_past_tense>` or `<domain>.<verb_past_tense>` when there is no separate subject.

Rules:

1. **All lowercase**, dot-separated, snake_case if needed inside a segment.
2. **Past tense verbs**: `.created` not `.create`, `.updated` not `.update`, `.deleted` not `.delete`.
3. **Domain first**, subject second, verb last: `invoice.line_item.added` is better than `added.invoice.line_item`.
4. **Be consistent**: pick a verb and stick with it. Do not mix `.deleted` and `.removed` for the same domain. Do not mix `.failed` and `.errored`.
5. **Keep it stable**: action names end up in your dashboards, retention rules and notification filters. Renaming them later breaks history.

Standard verbs to reach for, ranked by frequency:

| Verb | Use it for |
|------|------------|
| `.created` | New resource came into existence |
| `.updated` | Existing resource changed |
| `.deleted` | Resource removed (soft or hard) |
| `.archived` / `.restored` | Soft state changes |
| `.published` / `.unpublished` | Visibility toggles |
| `.shared` / `.unshared` | Access grants and revocations |
| `.signed_up` / `.logged_in` / `.logged_out` | Auth lifecycle |
| `.invited` / `.joined` / `.removed` | Membership |
| `.started` / `.completed` / `.failed` | Long-running async actions |
| `.requested` / `.granted` / `.revoked` | Permission flows |

## Metadata schema

`metadata` is a free-form JSON object. Keep it small (a few hundred bytes is plenty), structured, and useful in 6 months. The patterns below work across most action categories.

### Always include when relevant

- `source`: where the action came from. Values: `web`, `mobile`, `api`, `admin_panel`, `automation`, `import`, `webhook`.
- `reason`: free text reason if the user or admin provided one (`metadata.reason: "duplicate"`).
- `request_id`: your own correlation id for tracing back to logs.
- `result`: `success` or `failure`. Default to `success` so you can filter failures later.

### For updates

```json
{
  "metadata": {
    "changed_fields": ["status", "due_date"],
    "before": { "status": "draft" },
    "after": { "status": "paid" }
  }
}
```

Only include the changed fields, not the full object. If the diff is huge, just keep `changed_fields` and skip `before`/`after`.
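One way to build that payload is a small diff helper. A sketch (handles flat objects only; deep diffs are up to you):

```typescript
// Compute changed_fields plus a minimal before/after for an <object>.updated event.
function updateMetadata(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
) {
  const changed = Object.keys(after).filter((k) => before[k] !== after[k]);
  return {
    changed_fields: changed,
    before: Object.fromEntries(changed.map((k) => [k, before[k]])),
    after: Object.fromEntries(changed.map((k) => [k, after[k]])),
  };
}
```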

### For money

Always serialise amounts in **integer cents** (or smallest currency unit), never floats:

```json
{
  "metadata": {
    "amount_cents": 4200,
    "currency": "EUR",
    "stripe_payment_intent_id": "pi_..."
  }
}
```

### For failures

```json
{
  "metadata": {
    "result": "failure",
    "reason": "card_declined",
    "code": "insufficient_funds"
  }
}
```

## Antipatterns

Do not put any of these in metadata:

- **Plaintext secrets, passwords, full tokens, full credit card numbers.** If it should not be in your logs, it should not be in Recalled.
- **Full document bodies, file contents, blobs, images.** Recalled is not a document store. Put the resource id in `targets` and let the consumer fetch the body if they need it.
- **PII you do not need.** If you log an action on a user, you already have `actor.id`. You do not also need their full address, phone number, etc. Less PII = less GDPR exposure.
- **Server-side debug data.** Stack traces, SQL queries, internal IDs that mean nothing to a human auditor. Send those to your APM.
- **Anything bigger than ~2 KB.** Large `metadata` slows down list queries and inflates your storage bill.

## Volume guidance

Recalled bills by event count per month. The right ratio for most B2B SaaS:

- **1 to 5 events per active user per session** (login, a couple of actions, logout)
- **0 events** when a user is just reading
- **A handful per day** for system actions (cron, billing, backups)

If you find yourself logging more than 10 events per active user per day, you are probably over-logging reads or capturing technical noise. Re-read the 3 rules at the top.

## Worked example

A fictional B2B invoicing SaaS. Here is the **complete** list of events its team would push to Recalled. Notice how it stays focused.

```
# Auth (5)
user.signed_up
user.logged_in
user.logged_out
user.login_failed
user.password_changed

# Team (4)
member.invited
member.joined
member.removed
member.role_changed

# Invoices, the core business object (6)
invoice.created
invoice.updated
invoice.sent
invoice.paid
invoice.refunded
invoice.deleted

# Customers (3)
customer.created
customer.updated
customer.deleted

# Billing for the SaaS itself (4)
subscription.created
subscription.plan_changed
subscription.canceled
payment_method.added

# Admin (2)
admin.impersonation_started
admin.impersonation_ended

# Exports (2)
export.started
export.completed
```

That is **26 distinct actions**, more than enough for a clean audit trail. Your real list will be similar in size.

## Next step

Once you have your list, drop it in the [Events API](/docs/events) and start sending. The dashboard's [list_actions_summary tool](/docs/mcp) helps you see in real time which actions you are logging the most, so you can spot over-logging or gaps.

---

<!-- Core concepts / Embeddable UI -->

# Embeddable UI

The `@recalled/sdk/react` sub-export ships a `<RecalledFeed />` component designed to live **inside the admin panel, support console or back-office your own team already uses to operate the product**. Its audience is your support engineers, your ops and SRE, your compliance and audit reviewers, the internal people who need to answer "who did what, when" without context-switching to a second tool.

It is **not** a customer-facing component. The end users of your SaaS never see Recalled. This is not white-label either, there is nothing to resell to your customers. Think of it as an internal observability widget, in the same bucket as a Grafana panel or a Sentry issue feed embedded in your admin.

By default the widget has access to the **whole project**, every event emitted, across every tenant, which is what you want for an admin view. You can optionally narrow a given widget instance to a single tenant if you build a per-customer drill-down.

> **Plan requirement.** Minting embed tokens via `client.embed.createToken()` requires the **Pro** or **Scale** plan. Free accounts get a `FORBIDDEN` error on that endpoint. Reading events via an already-minted token is not plan-gated: if a token is valid, the widget works.

## Install

```bash
npm install @recalled/sdk
```

## Mint a token server-side

The widget is driven by a short-lived **embed token**. Mint it from your own backend so your API key never touches the browser, only the token does. Keep that backend route behind whatever admin auth you already use, so only authorized team members can reach it.

```ts
// your backend, e.g. inside a protected admin route
import { Recalled } from "@recalled/sdk";

const client = new Recalled({ apiKey: process.env.RECALLED_API_KEY! });

// Default admin view: every event in the project, across every tenant.
const { token } = await client.embed.createToken({ ttlSeconds: 900 });

// Optional: narrow this particular widget to a single tenant (drill-down).
const { token: scoped } = await client.embed.createToken({
  organization: "org_acme",
  ttlSeconds: 900,
});
// send the token to your admin page (props, fetch response, etc.)
```

## Render in React

Drop the component inside the admin page your team already uses day to day. Your API key stays on your admin backend and never reaches the browser bundle; the page only receives the short-lived embed token, so access is gated by your existing admin auth.

```tsx
import { RecalledFeed } from "@recalled/sdk/react";

export function AdminAuditLogPage({ token }: { token: string }) {
  return (
    <RecalledFeed
      embedToken={token}
      baseUrl="https://api.recalled.dev/v1"
      pageSize={50}
    />
  );
}
```

The component handles its own pagination and refresh. Style it through your own CSS: it uses plain class names with no hard-coded colors.

---

<!-- Core concepts / GDPR & retention -->

# GDPR & retention

## Right to erasure

`DELETE /v1/actors/:id`

Anonymizes all events for a given actor in the calling project. The rows are kept so the chain link to surrounding events stays intact, but `actor_name`, `actor_email`, `metadata`, `ip_address` and `user_agent` are nulled. `actor_id` is replaced with `[deleted]` and `anonymized_at` is stamped with the erasure time.

```bash
curl -X DELETE "https://api.recalled.dev/v1/actors/user_123" \
  -H "Authorization: Bearer $RECALLED_API_KEY"
```

Pass an optional `organization` query param to scope the erasure to a single tenant:

```bash
curl -X DELETE "https://api.recalled.dev/v1/actors/user_123?organization=org_acme" \
  -H "Authorization: Bearer $RECALLED_API_KEY"
```

Verification knows about this: rows with `anonymized_at` are skipped when recomputing the hash (the payload no longer matches the stored hash by design), while the chain link from the event BEFORE and the event AFTER is still enforced. So GDPR erasure satisfies Article 17 without creating a tampering blind spot.

## Retention

Each plan has a default retention:

- **Free**, 7 days
- **Starter**, 90 days
- **Pro**, 1 year (configurable per event with custom rules)
- **Scale**, unlimited (configurable per project)

On the **Pro** and **Scale** plans, you can define **retention rules per event pattern** from the project settings. `user.deleted` at 10 years, `user.logged_in` at 30 days, `*` at 1 year, whatever your compliance team asks. On Pro, anything not matched by a rule falls back to the 1-year plan retention. On Scale, anything not matched stays forever.

## Proof of purge

Every time a batch of events is deleted, a row is written in `retention_checkpoints` with the last deleted hash, the count, the date range, and the reason (`plan_retention`, `rule:<id>`, or `project_deleted` when a whole project is removed). The verify endpoint reads these checkpoints so chain gaps created by legitimate purges do not look like tampering.

---

<!-- Reference / Error codes -->

# Error codes

All errors return JSON in the shape:

```json
{
  "error": {
    "code": "ERROR_CODE",
    "message": "Human-readable message",
    "details": { "optional": "context" }
  }
}
```

| Code | HTTP | Meaning |
|---|---|---|
| `UNAUTHORIZED` | 401 | Missing or invalid credentials |
| `INVALID_API_KEY` | 401 | API key not recognized |
| `REVOKED_API_KEY` | 401 | API key was revoked |
| `FORBIDDEN` | 403 | Authenticated but not allowed |
| `NOT_FOUND` | 404 | Resource does not exist |
| `VALIDATION_ERROR` | 400 | Request body failed schema validation |
| `EVENT_TOO_LARGE` | 413 | A payload field (typically `metadata` or `targets`) exceeded its size cap. `details` carries the offending field, its size and the limit |
| `PLAN_LIMIT_REACHED` | 429 | Monthly event quota hit |
| `RATE_LIMITED` | 429 | Too many requests in the current window |
| `DATABASE_ERROR` | 500 | Underlying DB error |
| `INTERNAL_ERROR` | 500 | Unhandled server error |

---

<!-- Reference / Rate limits -->

# Rate limits

Two buckets apply to `/v1/*`:

## Per-key ingest limit

`POST /v1/events`: **1200 requests per minute** per API key (default). When exceeded, the server returns `429 RATE_LIMITED`.

This limit is generous because ingest is the hot path. Upgrade your plan or contact support if you regularly hit it.

## Global IP limit

All other endpoints: **1500 requests per minute** per IP. Same `429` response shape.

## Response headers

We return standard IETF `RateLimit` headers on every response:

```
RateLimit-Limit: 1200
RateLimit-Remaining: 973
RateLimit-Reset: 45
```

Use them to back off gracefully. The SDK does automatic exponential backoff on `429`.
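If you are calling the REST API without the SDK, a minimal way to honor these headers (a sketch; header-name casing may vary by HTTP client, most lowercase them):

```typescript
// Return how long to wait before the next request, from the RateLimit headers.
function backoffMs(headers: Record<string, string | undefined>): number {
  const remaining = Number(headers["ratelimit-remaining"] ?? "1");
  const resetSeconds = Number(headers["ratelimit-reset"] ?? "0");
  if (Number.isNaN(remaining) || remaining > 0) return 0;
  return Math.max(resetSeconds, 1) * 1000; // wait out the window, at least 1s
}
```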

---

<!-- Reference / Outbound webhooks -->

# Outbound webhooks

Recalled can POST every ingested event to a URL you control, signed with HMAC-SHA256. Use it to route events to your back-end, a data warehouse, Zapier, n8n, a SIEM, or any tool that speaks HTTPS.

> **Plan + role requirements.** Generic webhooks are available on **Pro** (1 webhook) and **Scale** (up to 10). Only the **project owner** can create, edit or delete them. Admins and viewers invited to the project cannot see or manage webhooks even if they can edit the rest of the settings.

## How it works

1. You configure a target URL in the project's notification settings.
2. On save, Recalled generates an HMAC signing secret and shows it **once**. Store it in your environment next to your API keys.
3. Every time an event matching your filter is ingested, Recalled sends a `POST` to your URL with a JSON body.
4. The request carries `X-Recalled-Timestamp` and `X-Recalled-Signature` headers you verify on your side using the secret.
5. 2xx = acknowledged. 5xx, 408, 429 and timeouts = transient, retried with backoff. Other 4xx = fatal, dropped.
6. After **7 consecutive failures** the webhook is auto-disabled so we stop hammering a broken endpoint. Fix your endpoint, then re-enable from the dashboard.

## Payload format

Every delivery is wrapped in an envelope so we can add new webhook event types later without breaking consumers:

```json
{
  "id": "whd_abc123xyz",
  "type": "event.created",
  "createdAt": "2026-04-15T14:23:00.123Z",
  "projectId": "proj_xyz",
  "event": {
    "id": "evt_abc",
    "action": "invoice.refunded",
    "actor": { "id": "user_1", "email": "alice@acme.co", "name": "Alice" },
    "organization": "org_acme",
    "targets": [{ "type": "invoice", "id": "inv_42" }],
    "metadata": { "amount": 4200, "currency": "eur" },
    "occurredAt": "2026-04-15T14:22:58.000Z",
    "hash": "sha256:..."
  }
}
```

## Request headers

| Header | Example | Purpose |
|---|---|---|
| `Content-Type` | `application/json` | Always JSON |
| `User-Agent` | `Recalled-Webhooks/1.0` | Stable UA so you can allowlist |
| `X-Recalled-Timestamp` | `1713183780` | Unix seconds when the delivery was signed |
| `X-Recalled-Signature` | `v1=9f86d081...` | HMAC-SHA256 of `${timestamp}.${rawBody}` |
| `X-Recalled-Event-Id` | `evt_abc` | Dedupe key for idempotence |
| `X-Recalled-Delivery-Id` | `whd_abc123xyz` | Unique per delivery attempt |

Any custom headers you set on the channel (e.g. `Authorization: Bearer ...` for a private endpoint) are merged with these. Recalled-reserved headers always win.

## Verify the signature (Node.js)

```ts
import crypto from "node:crypto";

const SECRET = process.env.RECALLED_WEBHOOK_SECRET!;
const MAX_SKEW_SECONDS = 5 * 60;

export function verifyRecalledSignature(req: {
  rawBody: string;
  headers: Record<string, string | undefined>;
}): boolean {
  const timestamp = req.headers["x-recalled-timestamp"];
  const signature = req.headers["x-recalled-signature"];
  if (!timestamp || !signature) return false;

  // Reject stale deliveries to stop a captured request from being replayed.
  const skew = Math.abs(Math.floor(Date.now() / 1000) - Number(timestamp));
  if (!Number.isFinite(skew) || skew > MAX_SKEW_SECONDS) return false;

  const expected =
    "v1=" +
    crypto
      .createHmac("sha256", SECRET)
      .update(`${timestamp}.${req.rawBody}`, "utf8")
      .digest("hex");

  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  if (a.length !== b.length) return false;
  return crypto.timingSafeEqual(a, b);
}
```

## Verify the signature (Python)

```python
import hmac, hashlib, os, time

SECRET = os.environ["RECALLED_WEBHOOK_SECRET"]
MAX_SKEW = 5 * 60

def verify(raw_body: bytes, headers: dict) -> bool:
    ts = headers.get("x-recalled-timestamp")
    sig = headers.get("x-recalled-signature")
    if not ts or not sig:
        return False
    try:
        skew = abs(int(time.time()) - int(ts))
    except ValueError:
        return False  # non-numeric timestamp header
    if skew > MAX_SKEW:
        return False
    msg = f"{ts}.{raw_body.decode('utf-8')}".encode("utf-8")
    expected = "v1=" + hmac.new(SECRET.encode("utf-8"), msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

## Retry policy

| Attempt | Delay from previous | Cumulative |
|---|---|---|
| 1 | immediate | 0 |
| 2 | 1 min | 1 min |
| 3 | 5 min | 6 min |
| 4 | 30 min | 36 min |
| 5 | 2 h | 2 h 36 |
| 6 | 6 h | 8 h 36 |
| 7 | 12 h | 20 h 36 |

**Transient** errors (408, 429, 5xx, network errors, timeouts) are retried. **Fatal** errors (any other 4xx, e.g. 401, 403, 404) drop immediately.

After **7 consecutive failures** across distinct events, the webhook flips to `enabled = false` and stops receiving deliveries until you manually re-enable it.

## Idempotence

Deliveries are **at-least-once**: the same event may reach you twice if a first attempt timed out on our side but actually succeeded on yours. Dedupe on `X-Recalled-Event-Id` or the `event.id` field inside the body. Both are stable per event.
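A minimal dedupe sketch (in-memory; in production back it with your database or cache, with a TTL):

```typescript
// Track delivered event ids; true means we already processed this event.
const seenEventIds = new Set<string>();

function isDuplicateDelivery(eventId: string): boolean {
  if (seenEventIds.has(eventId)) return true;
  seenEventIds.add(eventId);
  return false;
}
```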

## SSRF protection

At creation and before every delivery, Recalled refuses URLs that resolve to private IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, loopback, carrier-grade NAT, link-local, IPv6 ULA) or to blocked ports (SSH, SQL, Redis, cloud metadata endpoints). Redirects are not followed. Your target must be reachable from the public internet over HTTPS.

## Testing

Hit **Send test notification** in the dashboard. Recalled sends a `test.ping` event with synthetic fields so you can exercise your signature verification code without waiting for a real user action.

---

<!-- Reference / npm SDK -->

# npm SDK

```bash
npm install @recalled/sdk
```

## Client

```ts
import { Recalled } from "@recalled/sdk";

const client = new Recalled({
  apiKey: process.env.RECALLED_API_KEY!,
  // baseUrl defaults to https://api.recalled.dev/v1
});
```

## Events

```ts
// create (strict, throws on failure)
await client.events.create({
  action: "user.password_changed",
  actor: { id: "user_1" },
});

// list
const { data, nextCursor } = await client.events.list({
  limit: 50,
  organization: "org_abc",
});

// search
const { data } = await client.events.search({ q: "invoice" });

// get one
const event = await client.events.retrieve("evt_xxx");
```

## Resilient `emit()` (recommended)

`emit()` returns immediately and delivers the event in the background. If the API is unreachable, the SDK holds the event in an **in-memory queue** and retries automatically with exponential backoff for up to 24 hours. Your request path never sees an exception on a transient outage, which is what you want for audit logs 99% of the time.

```ts
client.events.emit({
  action: "invoice.paid",
  actor: { id: user.id, email: user.email },
  organization: user.organizationId,
  metadata: { amount, currency },
});
// returns immediately, no await needed
```

Delivery outcomes are observed via callbacks on the `Recalled` constructor:

```ts
const client = new Recalled({
  apiKey: process.env.RECALLED_API_KEY!,
  resilience: {
    onDelivered: (input, event) => {},
    onError:     (err, input) => {},
    onDrop:      (input, reason, err) => {
      // reason: "ttl_expired" | "fatal_error" | "queue_full"
    },
  },
});
```

### Flush before exit

Short-lived processes (CLI, cron, Lambda) must `await client.flush()` before exiting, otherwise pending events are lost when the process terminates.

```ts
client.events.emit({ action: "job.completed" });
await client.flush(); // waits up to 30s for the queue to drain
```

### Resilience options

| Option | Default | Description |
|---|---|---|
| `maxQueueSize` | `5000` | Events held in memory |
| `maxAgeMs` | `24h` | TTL before an event is dropped |
| `minBackoffMs` | `1000` | First retry delay |
| `maxBackoffMs` | `10min` | Backoff cap |

Pass `resilience: false` to disable buffering entirely (`emit()` becomes a raw fire-and-forget).

### `create` vs `emit`

| | `create()` | `emit()` |
|---|---|---|
| Return | `Promise<Event>` | `void` |
| Throws on failure | yes | no |
| Buffers on outage | no | yes |
| Blocks the request path | yes | no |
| Use when | failure must surface | failure must be invisible |

## Embed

```ts
const { token } = await client.embed.createToken({
  organization: "org_abc",
  ttlSeconds: 900,
});
```

## Actors (GDPR)

```ts
await client.actors.delete({ id: "user_123" });
```

## Exports

```ts
// Returns the raw export body as a string (CSV or JSON per your format).
const body = await client.exports.fetch({
  format: "csv",
  organization: "org_abc",
});
```

## Types

Every method is fully typed and the package ships its own `.d.ts`. Import types directly when you need them:

```ts
import type { Event, CreateEventInput } from "@recalled/sdk";
```

---

<!-- Reference / MCP server -->

# MCP server

Recalled ships an official **Model Context Protocol** server so AI agents (Claude Desktop, Cursor, ChatGPT, custom agents built on the official MCP SDKs) can read and act on your audit trail directly.

> **Plan requirement.** The MCP endpoint is available on **Starter, Pro and Scale**. Free accounts can still mint API keys for the SDK and REST API, but `/v1/mcp` returns a JSON-RPC error pointing at the upgrade page. Upgrade at [recalled.dev/dashboard/billing](/dashboard/billing).

The MCP endpoint reuses the API keys you already have. Same authentication, same scoping by project, no new infrastructure to set up. If you can call `/v1/events`, you can use the MCP, provided the project owner is on a paid plan.

## Endpoint

```
POST https://api.recalled.dev/v1/mcp
```

The endpoint speaks **Streamable HTTP** transport (MCP spec 2025-03-26). It runs in stateless mode: every JSON-RPC request gets a fresh server, isolated to the API key it was authenticated with. There is no session token to manage on your side.

## Authentication

Same Bearer token as the REST API:

```
Authorization: Bearer rec_live_<prefix>_<secret>
Content-Type: application/json
```

The MCP scopes everything to the project that owns the API key. An MCP session bound to project A can never read project B's events.

## Connect from Claude Desktop

Edit `claude_desktop_config.json` and add:

```json
{
  "mcpServers": {
    "recalled": {
      "url": "https://api.recalled.dev/v1/mcp",
      "headers": {
        "Authorization": "Bearer rec_live_xxx_yyy"
      }
    }
  }
}
```

Restart Claude Desktop. You should see Recalled appear in the tools list.

## Connect from Cursor

Open Cursor Settings, go to the MCP section, and add a new server:

```json
{
  "name": "recalled",
  "transport": "http",
  "url": "https://api.recalled.dev/v1/mcp",
  "headers": {
    "Authorization": "Bearer rec_live_xxx_yyy"
  }
}
```

## Tools available

| Tool | What it does |
|------|--------------|
| `get_project_info` | Identifies the project this session is bound to and lists the current API key's scopes. |
| `get_recent_events` | Most recent events newest first, capped at 100. |
| `search_events` | Free text search across action, actor name, actor email, actor id. Cursor pagination. |
| `list_events` | Structured filters (action, actor, organization, IP, date range). Cursor pagination. |
| `retrieve_event` | Fetch a single event by id with full details. |
| `get_event_receipt` | Return a portable, citable receipt for one event with public verification_url and view_url. |
| `list_actions_summary` | Top actions over a window of N days, with count and percent share. |
| `verify_chain` | Recompute every hash and HMAC signature, return integrity report. |
| `usage_summary` | Current month event count vs plan limit, percent used. |
| `delete_actor` | GDPR Article 17 erasure. Requires `confirm: true` to actually run. |
| `audit_actor_plan` | Returns a step by step audit plan for a specific actor. The assistant then executes it by calling the data tools. |
| `investigate_incident_plan` | Returns a step by step investigation plan for events around a given timestamp. |
| `compliance_check` | Returns a GDPR / SOC 2 / ISO 27001 readiness audit plan that the assistant runs by chaining the data tools. |
| `get_setup_guide` | Returns the opinionated setup prompt for adding Recalled to a codebase. EN or FR. |

## Resources available

The same data exposed by `get_project_info`, `usage_summary` and `get_recent_events` is also published as MCP resources, for clients that prefer the resource model over tool calls. Tools and resources stay in sync; pick whichever your client supports best.

| URI | What it exposes |
|-----|-----------------|
| `recalled://project/info` | Project metadata and the API key's id, name, prefix and scopes. |
| `recalled://usage/current` | Current month usage, plan limit, percent used. |
| `recalled://events/recent` | The 50 most recent events for situational awareness. |
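Clients that use the resource model fetch these URIs with the standard MCP `resources/read` method. A sketch of the request body for the usage resource:

```typescript
// resources/read is the standard MCP method for fetching a resource by URI.
const readUsage = {
  jsonrpc: "2.0" as const,
  id: 3,
  method: "resources/read",
  params: { uri: "recalled://usage/current" },
};
// POST to /v1/mcp with your Bearer key; the response carries the
// resource contents for the current month's usage.
```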

## Prompts available

Prompts are reusable recipes that combine tools and resources to answer common questions. They surface as quick actions in clients that support MCP prompts (Cursor, ChatGPT, custom agents). Claude Desktop currently ignores prompts and only sees tools, so each prompt is also exposed as a `*_plan` tool with the same content.

| Prompt | Tool equivalent | What it produces |
|--------|------------------|------------------|
| `/audit_actor` | `audit_actor_plan` | Audit one actor's activity over a chosen window. Inputs: `actor_id`, optional `days`. |
| `/investigate_incident` | `investigate_incident_plan` | Investigate events around a timestamp, propose a root cause narrative. Inputs: `at` (ISO), optional `window_minutes`, optional `focus`. |
| `/compliance_check` | `compliance_check` | GDPR / SOC 2 / ISO 27001 readiness assessment based on the data in the project. |

## Example session

Once connected, you can ask the assistant questions in plain language and it will pick the right tools.

```
You: who deleted invoices in the last 7 days?
Assistant: (calls list_events with action="invoice.deleted", date_from set to 7 days ago)
Assistant: 4 invoices were deleted by 2 distinct actors over the last 7 days. Most recent was 3 hours ago by user_42.
```

```
You: run a compliance check on this project.
Assistant: (runs /compliance_check, then calls verify_chain, list_actions_summary and get_project_info)
Assistant: GDPR green, SOC 2 amber (admin role changes not consistently logged), ISO 27001 amber. ...
```

## Pricing

The MCP endpoint is included on **Starter, Pro and Scale** (see the plan requirement above); Free accounts get a JSON-RPC error pointing at the upgrade page. Using the MCP does not consume your event quota: it reads existing events and exposes the same actions as the REST API. The actions a tool performs (e.g. ingestion via REST, GDPR erasure) are still billed and rate limited like any other call.

## Security model

- Bearer token only, in the `Authorization` header. Never in query strings.
- HTTPS only.
- No "token passthrough": the MCP server validates the key against your project before doing anything.
- Each request opens its own scoped server and closes it on response. No shared state between tenants.
- Destructive tools (`delete_actor`) require an explicit `confirm` argument so an agent cannot run them by accident.

## Troubleshooting

**"Invalid API key"**: the key was not found, was revoked, or the prefix does not match. Generate a new one in the dashboard.

**"Method Not Allowed"** on a GET request: the endpoint is POST only in stateless mode. MCP clients should send JSON-RPC over POST.

**Rate limited**: the MCP shares the global rate limit of `/v1/*`. If your agent loops aggressively, batch its calls or back off.
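If your agent does need to back off, a simple exponential schedule is usually enough. A sketch, with arbitrary base and cap values that you should tune to your rate-limit window:

```typescript
// Exponential backoff schedule: delays double each attempt up to a cap.
// baseMs and capMs are illustrative defaults, not values from the API.
function backoffDelaysMs(attempts: number, baseMs = 500, capMs = 30_000): number[] {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

// backoffDelaysMs(5) → [500, 1000, 2000, 4000, 8000]
```

Sleep for the next delay after each 429 and reset the schedule after a successful call.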

---
