Server Integration
If your product is a backend service that processes documents — a queue worker editing contracts overnight, a scheduled job that summarizes incoming reports, a document workflow that auto-improves files before they hit your storage — SuperDocs slots in as a synchronous API call, no UI required. This guide covers the pattern plus working snippets for the five most common backend stacks (Node, Python, Go, Ruby, .NET), and when to choose sync vs async vs polling.

Why this shape is different from a UI integration
In a typical SuperDocs UI integration, a user types in a chat panel, the chat opens an SSE stream, and the user reviews proposed edits in real time before approving. The whole architecture is shaped around interactive review. In a server-side integration, none of that applies. There’s no user typing, no chat panel, no SSE consumer to build, and (typically) no human approval. Your service has documents in a queue or storage, calls SuperDocs synchronously to get them edited, persists the results, and moves on. The integration is simpler — usually one synchronous HTTP call per document, plus a wrapper around it for retries, error handling, and persistence.

The pattern, in 4 steps
- Get the document’s HTML from wherever your service stores documents (S3, database, file, message queue payload).
- Call `POST /v1/chat` synchronously with `approval_mode: "approve_all"`. SuperDocs applies all proposed changes automatically and returns the final updated HTML.
- Persist the updated HTML back to wherever the document lives.
- Handle errors and retries per your service’s existing conventions.
When to use sync vs async vs polling
Three SuperDocs endpoints can drive a server-side integration. Pick based on document size and your service’s needs:

| Endpoint | When to use |
|---|---|
| `POST /v1/chat` (sync) | Default. Documents under ~50 pages, single-call edits, batch processing. Simplest pattern: one HTTP call in, one HTTP response out. |
| `POST /v1/chat/async` + `GET /v1/jobs/{job_id}` (polling) | Long-running edits where you want to fire-and-forget and check back. Documents over ~50 pages, complex multi-section rewrites, or workflows where you don’t want to block on a single HTTP call. |
| `POST /v1/chat/async` + SSE stream | Rare for server-side. Use only if you specifically want to react to intermediate progress events or proposed_change events one at a time. Most server integrations don’t need this. |
If in doubt, the synchronous `/v1/chat` endpoint is the right answer.
Node.js (TypeScript)
Python (FastAPI / Django / standalone)
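A minimal synchronous sketch of the 4-step pattern in Python, standard library only. The base URL, Bearer auth, and the `html`/`message` request fields are assumptions (the guide specifies only the endpoint, `approval_mode: "approve_all"`, and the `updated_html` result field); confirm names against the API reference:

```python
import json
import urllib.request

API_URL = "https://api.superdocs.app/v1/chat"  # assumed base URL
API_KEY = "YOUR_API_KEY"                       # assumed Bearer auth

def build_request_body(html: str, instruction: str) -> dict:
    # "html" and "message" field names are assumptions; approval_mode is per this guide.
    return {"html": html, "message": instruction, "approval_mode": "approve_all"}

def edit_document(html: str, instruction: str) -> str:
    """One blocking call: document HTML in, updated HTML out."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request_body(html, instruction)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["updated_html"]
```

In a FastAPI or Django worker, `edit_document` is the whole integration; everything else (loading the source HTML, persisting the result, retries) belongs to your service’s existing plumbing.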
Go (net/http)
Ruby (Rails service object)
.NET (C#, minimal API + service)
Persisting `updated_html` to storage
Where the result goes depends on your service’s existing storage. Common patterns:
- Database row. `UPDATE documents SET html = $1, updated_at = now() WHERE id = $2;`. Make sure your column type accommodates the document size: `text` in Postgres, `LONGTEXT` in MySQL.
- S3 / object storage. `s3.put_object(Bucket="...", Key=f"documents/{id}.html", Body=updated_html)`. Versioning at the bucket level gives you free history.
- Local file. `with open(f"output/{job_id}.html", "w") as f: f.write(updated_html)`. Fine for batch jobs that produce files.
- Message queue. Publish the updated HTML to a downstream queue for the next step in your pipeline.
One caveat: the `data-chunk-id` attributes that SuperDocs adds to block-level elements must round-trip if you plan to send the document back to SuperDocs again later (e.g., for further edits in the same session). Most string-based storage preserves them automatically; if you parse the HTML through a sanitizer or DOM library, ensure unknown attributes survive.
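To guard that round-trip, a small stdlib check (a sketch, not part of any SuperDocs SDK) can compare the `data-chunk-id` sets before and after sanitization and fail loudly if any were dropped:

```python
from html.parser import HTMLParser

class _ChunkIdCollector(HTMLParser):
    """Collects every data-chunk-id attribute value seen in the HTML."""

    def __init__(self):
        super().__init__()
        self.chunk_ids: set[str] = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "data-chunk-id" and value is not None:
                self.chunk_ids.add(value)

def chunk_ids(html: str) -> set[str]:
    parser = _ChunkIdCollector()
    parser.feed(html)
    return parser.chunk_ids

def assert_chunk_ids_survived(before: str, after: str) -> None:
    """Raise if any data-chunk-id present in `before` is missing from `after`."""
    missing = chunk_ids(before) - chunk_ids(after)
    if missing:
        raise ValueError(f"sanitizer dropped data-chunk-id attributes: {sorted(missing)}")
```

Call `assert_chunk_ids_survived(updated_html, sanitized_html)` just before persisting; catching a stripped attribute here is much cheaper than debugging a degraded edit session later.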
Error handling and retries
SuperDocs returns standard HTTP status codes. Worth handling:

- 401 / 403 — bad or missing API key. Not retryable; surface immediately.
- 429 — rate limit. Wait and retry with exponential backoff. Check the `Retry-After` header.
- 5xx — transient server error. Retry with backoff.
- 4xx (other) — bad request, malformed HTML, unsupported parameters. Read the response body for the actual error and fix the request shape; not retryable.
Prefer your job framework’s built-in retry machinery (ActiveJob’s `retry_on`, Celery’s `autoretry_for`, AWS SQS dead-letter queues, etc.) rather than building retry logic into the SuperDocs wrapper itself.
Auto-approve vs human-out-of-band
If your server-side workflow needs a human’s sign-off before applying SuperDocs’ edits — even though there’s no interactive UI — you have two patterns:

- Send a notification (Slack, email, SMS) with the proposed change. The notification includes a button or reply mechanism. When the human responds, your service POSTs to `/v1/chat/{session_id}/approve` to apply the change. Use `approval_mode: "ask_every_time"` and stream the proposed changes via SSE to your notification dispatcher.
- Queue proposed changes for batch human review. The human reviews a list of pending changes in your existing admin UI (or a CSV export, or a Slack digest). Same `ask_every_time` + approve POST pattern, just with a delayed human in the loop.
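The approve step in either pattern is one small POST. This sketch assumes a `change_id` body field, a JSON response, and Bearer auth, none of which this guide specifies; only the endpoint path comes from the guide:

```python
import json
import urllib.request

API = "https://api.superdocs.app"  # assumed base URL
API_KEY = "YOUR_API_KEY"           # assumed Bearer auth

def approve_path(session_id: str) -> str:
    # Endpoint path per this guide; session_id comes from the original chat call.
    return f"/v1/chat/{session_id}/approve"

def approve_change(session_id: str, change_id: str) -> dict:
    """POST the approval once a human has signed off out-of-band.

    The change_id body field is an assumption; check the API reference
    for how a specific proposed change is identified.
    """
    req = urllib.request.Request(
        API + approve_path(session_id),
        data=json.dumps({"change_id": change_id}).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Your notification handler (Slack interaction endpoint, email reply webhook, admin UI button) just needs to carry the `session_id` and change identifier through to this call.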
Stuck?
If your server stack isn’t covered here or your workflow needs a pattern not described above, email hello@superdocs.app or book a 15-minute integration call at cal.com/superdocs. We’ll talk through the pattern and add a snippet for the next person.

Related guides
- Async Jobs — when you want polling instead of sync `/v1/chat`.
- SSE Streaming — when you want streaming progress events from a server-side consumer.
- Agent Tool Integration — when your service is wrapping an AI agent that needs SuperDocs as a tool.
- Human-in-the-Loop — when you need human approval, even out-of-band.
- Integration Starter Prompt — paste this into your coding agent to wire all the above up automatically.

