Open-source observability platform with ML-powered Sift investigations and an AI assistant that generates PromQL/LogQL queries from natural language. Adaptive Telemetry automatically drops high-cardinality data before indexing, cutting ingest costs. The open-core model lets you self-host Grafana OSS free or use managed Cloud tiers.
| Tier | Price | Includes |
|---|---|---|
| Free | Free | 10k active metrics, 50 GB logs/mo, 50 GB traces/mo, 14-day retention, 3 active AI users |
| Pro | $19/seat/mo | — |
| Enterprise | Contact sales | — |
Grafana Labs runs an open-core observability platform built on Loki for logs, Tempo for traces, Mimir for metrics, and the Grafana visualization layer. You can self-host every piece under AGPLv3, run on Grafana Cloud, or buy Grafana Enterprise for on-prem deployments with commercial support. The architectural argument is simple: keep your data in object storage, pay for queries instead of ingest, and avoid the cardinality explosions that make traditional TSDBs and indexed log stores expensive.
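The object-storage argument is concrete: a self-hosted Loki, for instance, can write its chunks straight to an S3 bucket, so there is no stateful indexed store that has to scale with ingest. A minimal sketch, not a production config — the bucket name and region are placeholders, and real deployments also need a schema config; verify field names against the Loki version you run:

```yaml
# Minimal sketch: self-hosted Loki backed by S3 object storage.
# bucketnames/region are placeholders; check keys against your Loki version.
auth_enabled: false
common:
  storage:
    s3:
      bucketnames: loki-chunks   # placeholder bucket
      region: us-east-1          # placeholder region
```

The same pattern holds for Tempo and Mimir: object storage for the data, compute only for queries.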
The 2025–2026 product motion centers on AI and cost control. Sift correlates logs, metrics, and traces during incidents to surface probable causes. The AI assistant generates working PromQL and LogQL from natural-language prompts, which is the single most useful AI feature in observability today for engineers who do not write PromQL daily. Adaptive Telemetry drops high-cardinality dimensions and low-signal metrics in the ingest path before they ever hit storage: a real, measurable cost lever rather than a marketing one.
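For a sense of scale, this is the kind of translation the assistant performs. The prompt and query below are illustrative, not captured from the product, and the `service` label is a placeholder for whatever your log pipeline attaches:

```logql
# Prompt: "error rate for the checkout service over the last 5 minutes"
# Illustrative generated LogQL (label names are placeholders):
sum(rate({service="checkout"} |= "error" [5m]))
```

The value is less the query itself than the label discovery: engineers who touch LogQL a few times a quarter rarely remember which selectors and filters their cluster actually exposes.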
High fit for platform engineers and SREs who have the engineering budget to operate the LGTM stack at scale, or who are happy to pay Grafana Cloud rates and skip the operational burden entirely. Watch out: self-hosting LGTM is a real commitment. Many teams that try either undersize their cluster and fall over during incidents, or migrate to Grafana Cloud after eighteen painful months.