Before proceeding, please ensure you have [installed GreptimeDB](./installation/).

This guide will walk you through creating a metric table and a log table, highlighting the core features of GreptimeDB.

You’ll learn (10–15 minutes):

* Start and connect to GreptimeDB locally
* Create metrics and logs tables and insert sample data
* Query and aggregate data
* Compute p95 latency and ERROR counts in 5-second windows and align them
* Join metrics with logs to spot anomalous hosts and time periods
* Combine SQL and PromQL to query data

## Connect to GreptimeDB

GreptimeDB supports [multiple protocols](/user-guide/protocols/overview.md) for interacting with the database.
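
For example, with a local installation on the default ports, you can connect with any MySQL-compatible client (a sketch; 4002 is the default MySQL-protocol port, adjust it if you changed the configuration):

```bash
# Connect to a local GreptimeDB instance over the MySQL protocol
mysql -h 127.0.0.1 -P 4002
```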

Suppose you have an event table named `grpc_latencies` that stores the gRPC service latencies of your application servers.

The table schema is as follows:

```sql
-- Metrics: gRPC call latency in milliseconds
CREATE TABLE grpc_latencies (
  ts TIMESTAMP TIME INDEX,
  host STRING INVERTED INDEX,
  method_name STRING,
  latency DOUBLE,
  PRIMARY KEY (host, method_name)
);
```

- `ts`: The timestamp when the metric was collected. It is the time index column.
- `host`: The hostname of the application server, with an [inverted index](/user-guide/manage-data/data-index.md#inverted-index) enabled.
- `method_name`: The name of the RPC request method.
- `latency`: The latency of the RPC request, in milliseconds.

Additionally, there is a table `app_logs` for storing application logs:

```sql
-- Logs: application logs
CREATE TABLE app_logs (
  ts TIMESTAMP TIME INDEX,
  host STRING INVERTED INDEX,
  log_level STRING,
  log_msg STRING FULLTEXT INDEX,
  PRIMARY KEY (host, log_level)
) with('append_mode'='true');
```

- `log_level`: The log level of the log entry.
- `log_msg`: The log message, with a [fulltext index](/user-guide/manage-data/data-index.md#fulltext-index) enabled.

The table is created as [append only](/user-guide/deployments-administration/performance-tuning/design-table.md#when-to-use-append-only-tables) by setting `append_mode` to true, which is good for write performance. Other table options, such as data retention, are supported too.

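To make the later examples concrete, here is a sketch of inserting a couple of rows by hand (the values are illustrative and match the hosts and timestamps used in the queries below; the guide itself ingests a fuller dataset):

```sql
-- Illustrative sample rows for the two tables
INSERT INTO grpc_latencies (ts, host, method_name, latency) VALUES
  ('2024-07-11 20:00:06', 'host1', 'GetUser', 103.0),
  ('2024-07-11 20:00:06', 'host2', 'GetUser', 113.0);

INSERT INTO app_logs (ts, host, log_level, log_msg) VALUES
  ('2024-07-11 20:00:10', 'host1', 'ERROR', 'Connection timeout');
```
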
::::tip
We use SQL to ingest the data below, so we need to create the tables manually. However, GreptimeDB is [schemaless](/user-guide/ingest-data/overview.md#automatic-schema-generation) and can automatically generate schemas when using other ingestion methods.
::::

You can use [range queries](/reference/sql/range.md#range-query) to monitor latencies.

For example, to calculate the p95 latency of requests using a 5-second window:

```sql
SELECT
  ts,
  host,
  approx_percentile_cont(0.95) WITHIN GROUP (ORDER BY latency)
    RANGE '5s' AS p95_latency
FROM
  grpc_latencies
ALIGN '5s' FILL PREV
ORDER BY
  host, ts;
```

```sql
+---------------------+-------+-------------+
| ts                  | host  | p95_latency |
+---------------------+-------+-------------+
| 2024-07-11 20:00:05 | host1 |       104.5 |
| 2024-07-11 20:00:10 | host1 |        4200 |
| 2024-07-11 20:00:15 | host1 |        3500 |
| 2024-07-11 20:00:20 | host1 |        2500 |
| 2024-07-11 20:00:05 | host2 |         114 |
| 2024-07-11 20:00:10 | host2 |         111 |
| 2024-07-11 20:00:15 | host2 |         115 |
| 2024-07-11 20:00:20 | host2 |          95 |
+---------------------+-------+-------------+
8 rows in set (0.06 sec)
```

Range queries are very powerful for querying and aggregating data over time windows; please read the [manual](/reference/sql/range.md#range-query) to learn more.

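For instance, the aggregation window (`RANGE`) does not have to match the step (`ALIGN`). A sketch of a sliding window, computing the max latency over the last 10 seconds at every 5-second step:

```sql
-- 10s sliding window, evaluated every 5s
SELECT
  ts,
  host,
  max(latency) RANGE '10s' AS max_latency
FROM
  grpc_latencies
ALIGN '5s'
ORDER BY
  host, ts;
```
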
### Correlate Metrics and Logs

By combining the data from the two tables, you can quickly determine the time of failure and the corresponding logs. The following SQL query uses a `JOIN` to correlate the metrics and logs:

```sql
-- Align metrics and logs into 5s buckets, then join
WITH
  -- metrics: per-host p95 latency in 5s buckets
  metrics AS (
    SELECT
      ts,
      host,
      approx_percentile_cont(0.95) WITHIN GROUP (ORDER BY latency) RANGE '5s' AS p95_latency
    FROM grpc_latencies
    ALIGN '5s' FILL PREV
  ),
  -- logs: per-host ERROR counts in the same 5s buckets
  logs AS (
    SELECT
      ts,
      host,
      count(log_msg) RANGE '5s' AS num_errors
    FROM app_logs
    WHERE log_level = 'ERROR'
    ALIGN '5s'
  )
-- join the two CTEs on host and time bucket
SELECT
  metrics.ts,
  metrics.host,
  p95_latency,
  coalesce(num_errors, 0) AS num_errors
FROM metrics
LEFT JOIN logs ON metrics.host = logs.host AND metrics.ts = logs.ts
ORDER BY
  metrics.host, metrics.ts;
```

We can see that during the time window when the gRPC latencies increase, the number of error logs also increases significantly, which identifies `host1` as the source of the problem.
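
Once the anomalous host and time window are identified, you can drill into the raw log entries. A sketch (the starting timestamp is illustrative, taken from the window where latencies spiked):

```sql
-- Inspect host1's logs around the latency spike
SELECT ts, log_level, log_msg
FROM app_logs
WHERE host = 'host1'
  AND ts >= '2024-07-11 20:00:10'
ORDER BY ts;
```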

### Query data via PromQL

GreptimeDB supports [Prometheus Query Language and its APIs](/user-guide/query-data/promql.md), allowing you to query metrics using PromQL. For example, you can retrieve the p95 latency over the last 1 minute per host with this query:
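
A sketch of such a query (the exact expression in the guide may differ; here the `grpc_latencies` table is exposed as a PromQL metric whose value is its `latency` field):

```promql
quantile_over_time(0.95, grpc_latencies[1m])
```

The same expression can be evaluated from SQL with the `TQL EVAL` command, which takes a start time, an end time, and a step (the unix timestamps below correspond to the sample data's 2024-07-11 20:00:00–20:00:20 window):

```sql
-- Evaluate the PromQL expression over the sample time range at a 5s step
TQL EVAL (1720728000, 1720728020, '5s') quantile_over_time(0.95, grpc_latencies[1m]);
```
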
By using [TQL](/reference/sql/tql.md) commands, you can combine the power of SQL and PromQL, making correlation analysis and complex queries much easier.