Platform / Logs

Total Visibility.
Zero Instrumentation.

Stop flying blind or wrestling with complex observability stacks. Every endpoint you prompt into existence comes with comprehensive, real-time logging, latency tracking, and AI token monitoring right out of the box.

1. You Prompt

"Create a /api/process-payment endpoint. Verify the user session, charge the Stripe card, and generate a thank-you email using AI."

2. Init Auto-Captures the Trace
[200 OK] req_9x2b4f • 842ms
POST /api/process-payment
├─ Auth Middleware (Validated JWT) 2ms
├─ DB Query SELECT * FROM users... 14ms
├─ Stripe API (Charge Successful) 310ms
└─ AI Gen (Prompt: 120t, Comp: 84t) 516ms
Cost: $0.0012

Everything tracked, natively.

Init’s edge infrastructure automatically captures every layer of your API’s execution context without you writing a single line of logging code.

Live Streaming

Watch requests hit your API in real-time with sub-second latency tailing.

AI Token Tracking

Precise breakdown of prompt tokens, completion tokens, and exact USD costs per request.

Error Tracing

Instantly capture unhandled exceptions, syntax errors, and full stack traces.

Payload Inspection

Deep dive into exactly what JSON body was sent and what response was returned.

Metrics & Alerts

Monitor p99 latency spikes and error rate anomalies automatically.
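To make the p99 metric concrete, here is a minimal sketch of a nearest-rank percentile calculation — a hypothetical illustration, not Init's internal implementation:

```typescript
// Hypothetical sketch (not Init's implementation) of the p99 latency
// metric, using the nearest-rank percentile method.
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Nearest-rank: the smallest value covering at least p% of the samples.
  const rank = Math.ceil((p * sorted.length) / 100);
  return sorted[rank - 1];
}

// A window of 100 request durations: 99 healthy, one 2-second outlier.
const durations = Array.from({ length: 99 }, (_, i) => 10 + i); // 10..108 ms
durations.push(2000);
console.log(percentile(durations, 99)); // → 108 ms; the max (p100) is 2000
```

Because p99 excludes the single worst request in a window of 100, one isolated outlier barely moves it — a rising p99 signals a sustained slowdown, which is why it is the metric worth alerting on.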

Auto-Instrumentation

No more `console.log()` driven development.

Forget setting up Winston, configuring Datadog agents, or wrapping every third-party API call in a try/catch block just to see what happened. Init automatically instruments your entire endpoint flow at the compiler level.

logger.info("Starting db query...")
console.error("Failed:", e.message)
Init Execution Graph
100% Coverage
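For contrast, here is a hedged sketch of the wrapper-based timing that auto-instrumentation removes. `span`, `handler`, and the in-memory `trace` array are invented for illustration — this is the boilerplate you would otherwise write by hand, not Init's API:

```typescript
// Hypothetical sketch of manual span timing: every step must be wrapped
// by hand to show up in a trace. `span` and `trace` are invented here.
type Span = { name: string; durationMs: number };
const trace: Span[] = [];

async function span<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    // Record the step even if fn() threw, so failures still show up.
    trace.push({ name, durationMs: Date.now() - start });
  }
}

async function handler(): Promise<string> {
  const user = await span("db.query", async () => ({ id: "u_1" })); // stand-in for a real query
  const charge = await span("stripe.charge", async () => "ch_123"); // stand-in for a Stripe call
  return `${user.id}:${charge}`;
}

handler().then(() => console.log(trace.map((s) => s.name)));
// → ["db.query", "stripe.charge"]
```

Every wrapped call is a line of plumbing; compiler-level instrumentation means none of this lives in your handler code.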
Unit Economics

AI Observability built into the platform.

When you deploy generative APIs, standard latency tracking isn't enough. You need to know exactly how much each endpoint costs to run. Init automatically captures token usage and maps it to live pricing models, giving you precise unit economics for every request.

// Attached to every AI log trace
"ai_execution_metrics": {
  "model": "gpt-4o",
  "tokens": {
    "prompt": 3104,
    "completion": 842,
    "total": 3946
  },
  "cost_usd": 0.0284,
  "latency_ms": 1840
}
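As a sketch of how captured token counts map to a `cost_usd` figure, assuming a simple per-million-token pricing table — the rates below are illustrative placeholders, not live gpt-4o pricing, and `RATES_PER_MILLION` is invented for this example:

```typescript
// Hypothetical sketch of mapping token usage to USD cost. The rates
// are placeholders, not real model pricing.
interface TokenUsage { prompt: number; completion: number; }

const RATES_PER_MILLION: Record<string, { prompt: number; completion: number }> = {
  "gpt-4o": { prompt: 5.0, completion: 15.0 }, // placeholder USD rates
};

function costUsd(model: string, usage: TokenUsage): number {
  const rate = RATES_PER_MILLION[model];
  if (!rate) throw new Error(`no pricing entry for ${model}`);
  return (
    (usage.prompt / 1_000_000) * rate.prompt +
    (usage.completion / 1_000_000) * rate.completion
  );
}

// The trace above: 3104 prompt tokens + 842 completion tokens.
console.log(costUsd("gpt-4o", { prompt: 3104, completion: 842 }).toFixed(4));
```

Prompt and completion tokens are priced separately because most providers charge different rates for each, which is why the trace records them individually rather than as a single total.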
Advanced Filtering

Find the needle in the haystack.

When you have millions of requests, plain-text search doesn't cut it. Filter your API logs by HTTP status code, specific user IDs, unique request trace IDs, or execution duration to debug issues in seconds.

Trace IDs

Share exact error traces with your team via unique URLs.

Status Codes

Instantly isolate all 500s or 429 Rate Limit hits.

Time Ranges

Scrub through historical data to find when a regression started.

Metadata Tags

Filter by `user_id`, `tenant_id`, or `country` automatically.
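The filters above amount to structured predicates over log records. A minimal sketch of the idea, with an invented `LogRecord` shape and sample data rather than Init's actual schema:

```typescript
// Hypothetical sketch of structured log filtering. LogRecord and the
// sample data are invented for this example, not Init's schema.
interface LogRecord {
  traceId: string;
  method: string;
  path: string;
  status: number;
  durationMs: number;
  meta: Record<string, string>;
}

function filterLogs(
  logs: LogRecord[],
  where: Partial<{ status: number; minDurationMs: number; meta: Record<string, string> }>
): LogRecord[] {
  return logs.filter((log) => {
    if (where.status !== undefined && log.status !== where.status) return false;
    if (where.minDurationMs !== undefined && log.durationMs < where.minDurationMs) return false;
    if (where.meta) {
      // Every requested metadata tag must match exactly.
      for (const [k, v] of Object.entries(where.meta)) {
        if (log.meta[k] !== v) return false;
      }
    }
    return true;
  });
}

const logs: LogRecord[] = [
  { traceId: "req_1", method: "POST", path: "/api/webhooks", status: 400, durationMs: 12, meta: { tenant_id: "acme" } },
  { traceId: "req_2", method: "GET", path: "/api/users", status: 200, durationMs: 9, meta: { tenant_id: "acme" } },
];

// Isolate the failing webhook deliveries for one tenant.
console.log(filterLogs(logs, { status: 400, meta: { tenant_id: "acme" } }).map((l) => l.traceId));
// → ["req_1"]
```

Combining predicates (status AND tenant AND duration) is what turns millions of log lines into a handful of candidates.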

Actionable insights instantly.

See how deep visibility accelerates your debugging workflow.

Debugging Webhooks

"Stripe is reporting failed webhook deliveries. I open Init Logs, filter by POST /api/webhooks, and immediately see the malformed JSON payload causing the 400 error."
Payload Inspection · Req/Res Body

Optimizing AI Latency

"My support bot feels slow. I look at the execution graph and see the DB query takes 10ms, but the secondary Claude 3 call is taking 4 seconds. I know exactly what to optimize."
Performance Profiling · Graph Trace

Audit Trails

"A sensitive record was deleted. I filter the logs by DELETE /api/records/123 and see the exact user_id, timestamp, and IP address that authenticated the request."
Security Auditing · Metadata Auth

Stop guessing. Start knowing.

Join thousands of developers who trust Init to provide absolute visibility into their AI-native endpoints.