Read stdout, stderr, files, Docker logs, or a supervised local dev
stack. Inspect grouped failures in a local UI, and use
sentinel dev to stop, restart, and copy logs for a single
service. No SDK is required for log ingestion.
On macOS, Linux, and Git Bash on Windows, install with one command. On native
Windows, start from the latest release page. After the first install,
upgrades are just
sentinel update.
$ curl -fsSL https://github.com/aneesahammed/sentinel-dist/releases/latest/download/sentinel.sh | bash
Using Git Bash, MSYS2, or Cygwin? Run the same command above. Otherwise, download the latest .exe from the public release page and place it on your PATH.
Open public releases ↗
Three steps. No account. Basic ingestion needs no config;
.sentinel.yml is only for the dev supervisor.
Run a stack with sentinel dev, wrap one process with
sentinel run -- your_command, tail files, watch Docker,
or pipe stdin. Sentinel reads whatever your app already emits.
Multiline stack traces are grouped. UUIDs, IDs, and timestamps are stripped before hashing so the same error from different requests lands in the same bucket, every time.
Each new fingerprint is triaged — heuristically offline, or via a local or remote OpenAI-compatible endpoint when enabled. Mark issues as investigating, resolved, or suppressed. Regressions are auto-flagged.
Five workflows, one binary.
sentinel dev starts or attaches to services from
.sentinel.yml, tails their logs, shows a terminal UI,
tracks metrics, and lets you stop, restart, or copy logs for one
service.
Pipe from Docker, journald, your build system, CI output — any process that writes to stdout.
Glob patterns, seek-based offset tracking, and truncation handling for file-based logs.
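Seek-based tailing with truncation recovery can be sketched in a few lines. This is an illustrative helper, not Sentinel's actual implementation; `read_new_lines` is a hypothetical name:

```python
import os

def read_new_lines(path, offset):
    """Read lines appended since `offset`; returns (lines, new_offset).

    If the file shrank below the stored offset (log rotation or
    truncation), restart from the beginning instead of losing data.
    """
    size = os.path.getsize(path)
    if size < offset:          # file was truncated or rotated
        offset = 0
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    return data.decode().splitlines(), offset + len(data)
```

Polling this per file (one stored offset per glob match) gives tail-like behavior without holding file handles open.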
Captures stdout and stderr from a child process. Tags the session with your git branch and commit.
Auto-discover a Compose stack near your repo, or watch every running container with one command.
Everything you need. Nothing you don't.
SHA-256 over a normalized message and top-3 stack frames. UUIDs, IDs, timestamps, and paths are stripped first — so the same bug gets the same fingerprint across every run.
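A minimal sketch of the normalize-then-hash idea. The regexes below are illustrative; Sentinel's actual stripping rules may differ:

```python
import hashlib
import re

# Illustrative normalization patterns (UUIDs, timestamps, paths, numbers).
# Order matters: strip structured tokens before bare digits.
_PATTERNS = [
    (re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I), "<uuid>"),
    (re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\S*"), "<ts>"),
    (re.compile(r"(/[\w.-]+)+"), "<path>"),
    (re.compile(r"\b\d+\b"), "<n>"),
]

def fingerprint(message, frames):
    """SHA-256 over the normalized message plus the top-3 stack frames."""
    for pat, repl in _PATTERNS:
        message = pat.sub(repl, message)
    payload = "\n".join([message, *frames[:3]])
    return hashlib.sha256(payload.encode()).hexdigest()
```

Because request-specific values collapse to placeholders before hashing, the same bug seen from different requests produces the same digest.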
Structured stack trace parsing for Python, Go, Node.js, Java, Kotlin, .NET, and Rust. Multiline exceptions are grouped automatically — no config required.
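One way multiline grouping can work, sketched with a simple level-prefix heuristic: a line that doesn't start a new log record is treated as a continuation of the previous one. Real per-language parsers use structured rules; this is an assumption-laden simplification:

```python
import re

# A new record starts with a log level; anything else (indented frames,
# "Traceback", "Caused by", the final exception line) is a continuation.
_NEW_RECORD = re.compile(r"^(WARNING|INFO|WARN|ERROR|DEBUG|FATAL)\b")

def group_records(lines):
    """Merge multiline stack traces into single log records."""
    records = []
    for line in lines:
        if records and not _NEW_RECORD.match(line):
            records[-1] += "\n" + line
        else:
            records.append(line)
    return records
```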
The same error in your billing service and your auth service stays separate.
Use --project to scope ingestion; the dashboard has a
project filter built in.
Works with Ollama, LM Studio, or any OpenAI-compatible endpoint.
Falls back to heuristic triage automatically. Pass
--offline to skip the LLM entirely: zero latency, zero
calls.
When a resolved issue reappears after a 10-minute quiet period, it's automatically flagged as a regression. Status history tracks every transition.
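The quiet-period rule reduces to a timestamp comparison. A toy model of that check follows; Sentinel's real state machine also records the full status history:

```python
import time

QUIET_PERIOD = 600  # seconds; the 10-minute quiet period

def classify_occurrence(status, resolved_at, now=None):
    """Return the status a new occurrence should carry.

    A hit on a resolved fingerprint after the quiet period is a
    regression; a hit inside the window is treated as residual noise.
    """
    now = time.time() if now is None else now
    if status == "resolved" and now - resolved_at >= QUIET_PERIOD:
        return "regression"
    return status
```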
The 50 log lines preceding each error are captured and stored with the occurrence. You see what led up to it, not just the exception itself.
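Capturing preceding context is naturally a ring buffer: keep the last 50 lines, and snapshot them the moment an error arrives. A sketch of the idea (class and method names are hypothetical):

```python
from collections import deque

CONTEXT_LINES = 50

class ContextCapture:
    """Keep the last 50 log lines; snapshot them when an error arrives."""

    def __init__(self):
        self.buffer = deque(maxlen=CONTEXT_LINES)

    def feed(self, line, is_error=False):
        # Snapshot BEFORE appending, so the error line itself is excluded.
        context = list(self.buffer) if is_error else None
        self.buffer.append(line)
        return context
```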
Built-in OpenTelemetry HTTP receiver. Send spans from your existing OpenTelemetry setup, correlate traces with errors, and inspect the waterfall in the UI without running a separate collector.
Logs and state live in SQLite with WAL mode. API keys are encrypted with AES-256-GCM. Offline mode or a local LLM keeps analysis local; remote LLM mode sends triage requests only to your configured endpoint.
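The WAL half of that is a single pragma at connection time, which lets readers (the dashboard) proceed while a writer ingests. A sketch in Python's stdlib `sqlite3` (the key-encryption layer is not shown here):

```python
import sqlite3

def open_store(path):
    """Open a file-backed log store with WAL journaling enabled."""
    conn = sqlite3.connect(path)
    mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
    # WAL only applies to file-backed databases, not :memory:.
    assert mode == "wal", "WAL requires a file-backed database"
    return conn
```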
Single binary. No runtime, no daemon, no background services. Drop it in your PATH and it works.
One command to install. One binary to run your dev stack, wrap a process, tail files, or watch Docker logs.
sentinel dev