Merged
17 changes: 9 additions & 8 deletions README.md
@@ -8,15 +8,15 @@
<a href="https://goreportcard.com/report/github.com/grafana/tempo"><img src="https://goreportcard.com/badge/github.com/grafana/tempo" alt="Go Report Card" /></a>
</p>

Grafana Tempo is an open source, easy-to-use, and high-scale distributed tracing backend. Tempo is cost-efficient, requiring only object storage to operate, and is deeply integrated with Grafana, Prometheus, and Loki.


## Business value of distributed tracing

Distributed tracing helps teams quickly pinpoint performance issues and understand the flow of requests across services.
The Traces Drilldown UI simplifies this process by offering a user-friendly interface to view and analyze trace data, making it easier to identify and resolve issues without needing to write complex queries.

Refer to [Use traces to find solutions](https://grafana.com/docs/tempo/latest/introduction/solutions-with-traces/) to learn more about how you can use distributed tracing to investigate and solve issues.

## Traces Drilldown UI: A better way to get value from your tracing data
We are excited to introduce the [Traces Drilldown](https://github.com/grafana/traces-drilldown) (formerly Explore Traces) app as part of the Grafana Explore suite. This app provides a queryless and intuitive experience for analyzing tracing data, allowing teams to quickly identify performance issues, latency bottlenecks, and errors without needing to write complex queries or use TraceQL.
@@ -30,18 +30,17 @@ Key Features:
![image](https://github.com/user-attachments/assets/991205df-1b27-489f-8ef0-1a05ee158996)

To learn more, see the following links:
- [Traces Drilldown repo](https://github.com/grafana/races-drilldown)
- [Traces Drilldown repo](https://github.com/grafana/traces-drilldown)
- [Traces Drilldown documentation](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/traces/)
- [Demo video](https://www.youtube.com/watch?v=a3uB1C2oHA4
)
- [Demo video](https://www.youtube.com/watch?v=a3uB1C2oHA4)

## TraceQL

Tempo implements [TraceQL](https://grafana.com/docs/tempo/latest/traceql/), a traces-first query language inspired by LogQL and PromQL, which enables targeted queries or rich UI-driven analyses.

### TraceQL metrics

[TraceQL metrics](https://grafana.com/docs/tempo/latest/traceql/metrics-queries/) is an experimental feature in Grafana Tempo that creates metrics from traces. Metric queries extend trace queries by applying a function to trace query results. This powerful feature allows for ad hoc aggregation of any existing TraceQL query by any dimension available in your traces, much in the same way that LogQL metric queries create metrics from logs.

Tempo is Jaeger, Zipkin, Kafka, OpenCensus, and OpenTelemetry compatible. It ingests batches in any of the mentioned formats, buffers them, and then writes them to Azure, GCS, S3, or local disk. As such, it is robust, cheap, and easy to operate!

@@ -57,6 +56,8 @@ Tempo is Jaeger, Zipkin, Kafka, OpenCensus, and OpenTelemetry compatible. It ing

To learn more about Tempo, consult the following documents & talks:

- [How to get started with Tempo with Joe Elliott (video)](https://www.youtube.com/watch?v=zDrA7Ly3ovU)
- [Grafana blog posts about Tempo](https://grafana.com/tags/tempo/)
- [New in Grafana Tempo 2.0: Apache Parquet as the default storage format, support for TraceQL][tempo_20_announce]
- [Get to know TraceQL: A powerful new query language for distributed tracing][traceql-post]

2 changes: 1 addition & 1 deletion docs/sources/tempo/api_docs/pushing-spans-with-http.md
@@ -109,7 +109,7 @@ The easiest way to get the trace is to execute a simple curl command to Tempo. T

### Use TraceQL to search for a trace

Alternatively, you can also use [TraceQL](../traceql) to search for the trace that was pushed.
Alternatively, you can also use [TraceQL](https://grafana.com/docs/tempo/<TEMPO_VERSION>/traceql/) to search for the trace that was pushed.
You can search by using the unique trace attributes that were set:

```bash
23 changes: 12 additions & 11 deletions docs/sources/tempo/configuration/grafana-agent/_index.md
@@ -3,7 +3,8 @@ title: Grafana Agent
description: Configure the Grafana Agent to work with Tempo
weight: 600
aliases:
- /docs/tempo/grafana-agent
- ../../grafana-agent # /docs/tempo/latest/grafana-agent
---

# Grafana Agent
@@ -35,22 +36,22 @@ leverages all the data that's processed in the pipeline.

Grafana Agent is available in two different variants:

* [Static mode](/docs/agent/latest/static): The original Grafana Agent.
* [Flow mode](/docs/agent/latest/flow): The new, component-based Grafana Agent.
* [Static mode](/docs/agent/<AGENT_VERSION>/static): The original Grafana Agent.
* [Flow mode](/docs/agent/<AGENT_VERSION>/flow): The new, component-based Grafana Agent.

Grafana Agent Flow configuration files are [written in River](/docs/agent/latest/flow/concepts/config-language/).
Static configuration files are [written in YAML](/docs/agent/latest/static/configuration/).
Grafana Agent Flow configuration files are [written in River](/docs/agent/<AGENT_VERSION>/flow/concepts/config-language/).
Static configuration files are [written in YAML](/docs/agent/<AGENT_VERSION>/static/configuration/).
Examples in this document are for Flow mode.

For more information, refer to the [Introduction to Grafana Agent](/docs/agent/latest/about/).
For more information, refer to the [Introduction to Grafana Agent](/docs/agent/<AGENT_VERSION>/about/).

## Architecture

The Grafana Agent can be configured to run a set of tracing pipelines to collect data from your applications and write it to Tempo.
Pipelines are built using OpenTelemetry,
and consist of `receivers`, `processors`, and `exporters`.
The architecture mirrors that of the OTel Collector's [design](https://github.com/open-telemetry/opentelemetry-collector/blob/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/design.md).
See the [configuration reference](/agent/latest/static/configuration/traces-config/) for all available configuration options.
See the [configuration reference](/agent/<AGENT_VERSION>/static/configuration/traces-config/) for all available configuration options.

<p align="center"><img src="https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/images/design-pipelines.png" alt="Tracing pipeline architecture"></p>

@@ -75,13 +76,13 @@ The Grafana Agent processes tracing data as it flows through the pipeline to mak

The Agent supports batching of traces.
Batching helps better compress the data, reduces the number of outgoing connections, and is a recommended best practice.
To configure it, refer to the `batch` block in the [configuration reference](/docs/agent/latest/configuration/traces-config).
To configure it, refer to the `batch` block in the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
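As an illustrative sketch only (the component label, thresholds, and the exporter target are assumptions, not taken from this document), a Flow-mode batch processor could look like:

```river
// Hypothetical sketch: batch spans before export to reduce outgoing
// connections and improve compression.
otelcol.processor.batch "default" {
  timeout         = "5s"   // flush at least every 5 seconds
  send_batch_size = 8192   // or once 8192 spans have accumulated

  output {
    // Forward batched traces to an assumed OTLP exporter component.
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```

The referenced `otelcol.exporter.otlp.default` component is assumed to be defined elsewhere in the configuration.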

#### Attributes manipulation

The Grafana Agent allows for general manipulation of attributes on spans that pass through this agent.
A common use may be to add an environment or cluster variable.
To configure it, refer to the `attributes` block in the [configuration reference](/docs/agent/latest/configuration/traces-config).
To configure it, refer to the `attributes` block in the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).

#### Attaching metadata with Prometheus Service Discovery

@@ -113,7 +114,7 @@ All of Prometheus' [various service discovery mechanisms](https://prometheus.io/
This means you can use the same `scrape_configs` between your metrics, logs, and traces to get the same set of labels,
and easily transition between your observability data when moving from your metrics, logs, and traces.

Refer to the `scrape_configs` block in the [configuration reference](/docs/agent/latest/configuration/traces-config).
Refer to the `scrape_configs` block in the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).

#### Trace discovery through automatic logging

@@ -156,4 +157,4 @@ Aside from endpoint and authentication, the exporter also provides mechanisms fo
and implements a queue buffering mechanism for transient failures, such as networking issues.

To see all available options,
refer to the `remote_write` block in the [Agent configuration reference](/docs/agent/latest/configuration/traces-config).
refer to the `remote_write` block in the [Agent configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
@@ -51,7 +51,7 @@ For more information, refer to [Migrate to Alloy](https://grafana.com/docs/tempo

To configure automatic logging, you need to select your preferred backend and the trace data to log.

To see all the available configuration options, refer to the [configuration reference](https://grafana.com/docs/agent/latest/configuration/traces-config).
To see all the available configuration options, refer to the [configuration reference](https://grafana.com/docs/agent/<AGENT_VERSION>/configuration/traces-config).

This simple example logs trace roots to `stdout` and is a good way to get started using automatic logging:
```yaml
@@ -47,7 +47,7 @@ traces:
enabled: true
```

To see all the available configuration options, refer to the [configuration reference](/docs/agent/latest/configuration/traces-config).
To see all the available configuration options, refer to the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).

Metrics are registered in the Agent's default registerer.
Therefore, they are exposed at `/metrics` in the Agent's server port (default `12345`).
@@ -18,9 +18,9 @@ Probabilistic sampling strategies are easy to implement,
but also run the risk of discarding relevant data that you'll later want.

Tail-based sampling works with Grafana Agent in Flow or static modes.
Flow mode configuration files are [written in River](/docs/agent/latest/flow/concepts/config-language).
Static mode configuration files are [written in YAML](/docs/agent/latest/static/configuration).
Examples in this document are for Flow mode. You can also use the [Static mode Kubernetes operator](/docs/agent/latest/operator).
Flow mode configuration files are [written in River](/docs/agent/<AGENT_VERSION>/flow/concepts/config-language).
Static mode configuration files are [written in YAML](/docs/agent/<AGENT_VERSION>/static/configuration).
Examples in this document are for Flow mode. You can also use the [Static mode Kubernetes operator](/docs/agent/<AGENT_VERSION>/operator).

## How tail-based sampling works

@@ -57,7 +57,7 @@ This overhead increases with the number of Agent instances that share the same t
To start using tail-based sampling, define a sampling policy.
If you're using a multi-instance deployment of the agent,
add load balancing and specify the resolving mechanism to find other Agents in the setup.
To see all the available configuration options, refer to the [configuration reference](/docs/agent/latest/configuration/traces-config/).
To see all the available configuration options, refer to the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config/).

{{< admonition type="note">}}
Grafana Alloy provides tooling to convert your Agent Static or Flow configuration files into a format that can be used by Alloy.
@@ -67,10 +67,10 @@ For more information, refer to [Migrate to Alloy](https://grafana.com/docs/tempo

### Example for Grafana Agent Flow

[Grafana Agent Flow](/docs/agent/latest/flow/) is a component-based revision of Grafana Agent with a focus on ease-of-use, debuggability, and ability to adapt to the needs of power users.
[Grafana Agent Flow](/docs/agent/<AGENT_VERSION>/flow/) is a component-based revision of Grafana Agent with a focus on ease-of-use, debuggability, and ability to adapt to the needs of power users.
Flow configuration files are written in River instead of YAML.

Grafana Agent Flow uses the [`otelcol.processor.tail_sampling component`](/docs/agent/latest/flow/reference/components/otelcol.processor.tail_sampling/)` for tail-based sampling.
Grafana Agent Flow uses the [`otelcol.processor.tail_sampling`](/docs/agent/<ALLOY_VERSION>/flow/reference/components/otelcol/otelcol.processor.tail_sampling/) component for tail-based sampling.

```river
otelcol.receiver.otlp "otlp_receiver" {
10 changes: 5 additions & 5 deletions docs/sources/tempo/configuration/grafana-alloy/_index.md
@@ -18,7 +18,7 @@ Alloy is flexible, and you can easily configure it to fit your needs in on-prem,
It's commonly used as a tracing pipeline, offloading traces from the
application and forwarding them to a storage backend.

Grafana Alloy configuration files are written in the [Alloy configuration syntax](https://grafana.com/docs/alloy/latest/concepts/configuration-syntax/).
Grafana Alloy configuration files are written in the [Alloy configuration syntax](https://grafana.com/docs/alloy/<ALLOY_VERSION>/get-started/configuration-syntax/).

For more information, refer to the [Introduction to Grafana Alloy](https://grafana.com/docs/alloy/latest/introduction).

@@ -52,13 +52,13 @@ Grafana Alloy processes tracing data as it flows through the pipeline to make th

Alloy supports batching of traces.
Batching helps better compress the data, reduces the number of outgoing connections, and is a recommended best practice.
To configure it, refer to the `otelcol.processor.batch` block in the [components reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/).
To configure it, refer to the `otelcol.processor.batch` block in the [components reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.batch/).

#### Attributes manipulation

Grafana Alloy allows for general manipulation of attributes on spans that pass through it.
A common use may be to add an environment or cluster variable.
There are several processors that can manipulate attributes, some examples include: the `otelcol.processor.attributes` block in the [component reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.attributes/) and the `otelcol.processor.transform` block [component reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.transform/)
There are several processors that can manipulate attributes. Examples include the `otelcol.processor.attributes` block in the [component reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.attributes/) and the `otelcol.processor.transform` block in the [component reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.transform/).
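As a minimal sketch (the component label, attribute key, and the downstream exporter target are assumptions for illustration), adding an environment attribute to every span could look like:

```river
// Hypothetical sketch: stamp each span with an "env" attribute.
otelcol.processor.attributes "default" {
  action {
    key    = "env"
    value  = "production"
    action = "insert"   // only adds the attribute if it is not already set
  }

  output {
    // Forward processed traces to an assumed OTLP exporter component.
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```

The `otelcol.exporter.otlp.default` component referenced in `output` is assumed to be defined elsewhere in the pipeline.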

#### Attaching metadata with Prometheus Service Discovery

@@ -97,7 +97,7 @@ otelcol.exporter.otlp "default" {
}
```

Refer to the `otelcol.processor.k8sattributes` block in the [components reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.k8sattributes/).
Refer to the `otelcol.processor.k8sattributes` block in the [components reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.k8sattributes/).

#### Trace discovery through automatic logging

@@ -138,4 +138,4 @@ Aside from endpoint and authentication, the exporter also provides mechanisms fo
and implements a queue buffering mechanism for transient failures, such as networking issues.

To see all available options,
refer to the `otelcol.exporter.otlp` block in the [Alloy configuration reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlp/) and the `otelcol.exporter.otlphttp` block in the [Alloy configuration reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/).
refer to the `otelcol.exporter.otlp` block in the [Alloy configuration reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.exporter.otlp/) and the `otelcol.exporter.otlphttp` block in the [Alloy configuration reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.exporter.otlphttp/).
@@ -25,7 +25,7 @@ pipeline. This allows for automatically building a mechanism for trace discovery
On top of that, you can also get metrics from traces using a logs source, and
allow quickly jumping from a log message to the trace view in Grafana.

While this approach is useful, it isn't as powerful as TraceQL.
If you are here because you know you want to log the
trace ID, to enable jumping from logs to traces, then read on.

Expand All @@ -47,7 +47,7 @@ This allows searching by those key-value pairs in Loki.
To configure automatic logging, you need to configure the `otelcol.connector.spanlogs` connector with
appropriate options.

To see all the available configuration options, refer to the `otelcol.connector.spanlogs` [components reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.connector.spanlogs/).
To see all the available configuration options, refer to the `otelcol.connector.spanlogs` [components reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.connector.spanlogs/).

This simple example logs trace roots before exporting them to the Grafana OTLP gateway,
and is a good way to get started using automatic logging:
@@ -63,7 +63,7 @@ otelcol.exporter.otlp "default" {
}
```

To see all the available configuration options, refer to the [component reference](https://grafana.com/docs/alloy/latest/reference/components/otelcol.connector.servicegraph/).
To see all the available configuration options, refer to the [component reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.connector.servicegraph/).
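As a sketch under stated assumptions (the component labels, the `http.method` dimension, and the Prometheus remote-write target are illustrative, not from this document), wiring the service-graph connector into a metrics pipeline could look like:

```river
// Hypothetical sketch: derive service-graph metrics from spans and
// forward them to a Prometheus exporter.
otelcol.connector.servicegraph "default" {
  dimensions = ["http.method"]   // extra label taken from span attributes

  output {
    metrics = [otelcol.exporter.prometheus.default.input]
  }
}

otelcol.exporter.prometheus "default" {
  // Assumes a prometheus.remote_write "default" component exists.
  forward_to = [prometheus.remote_write.default.receiver]
}
```

The connector consumes the trace stream and emits metrics, so it sits between your trace receivers and a metrics exporter in the pipeline.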

### Grafana

@@ -14,9 +14,9 @@ There are a number of ways to lower trace volume, including varying sampling str

Sampling is the process of determining which traces to store (in Tempo or Grafana Cloud Traces) and which to discard. Sampling comes in two different strategy types: head and tail sampling.

Sampling functionality exists in both [Grafana Alloy](https://grafana.com/docs/alloy/) and the OpenTelemetry Collector. Alloy can collect, process, and export telemetry signals, with configuration files written in [Alloy configuration syntax](https://grafana.com/docs/alloy/<ALLOY_VERSION>/concepts/configuration-syntax/).
Sampling functionality exists in both [Grafana Alloy](https://grafana.com/docs/alloy/) and the OpenTelemetry Collector. Alloy can collect, process, and export telemetry signals, with configuration files written in [Alloy configuration syntax](https://grafana.com/docs/alloy/<ALLOY_VERSION>/get-started/configuration-syntax/).

Refer to [Enable tail sampling](https://grafana.com/docs/tempo/<TEMPO_VERSION>/configuration/grafana-alloy/enable-tail-sampling/) for instructions on how to enable tail sampling.
Refer to [Enable tail sampling](https://grafana.com/docs/tempo/<TEMPO_VERSION>/configuration/grafana-alloy/tail-sampling/enable-tail-sampling/) for instructions.

## Head and tail sampling
