
Commit 696e1e6

docs: msq not an extension anymore (#18725)
1 parent: f3d2dfb

8 files changed (+9 / -50 lines)


docs/data-management/automatic-compaction.md

Lines changed: 0 additions & 1 deletion
```diff
@@ -307,7 +307,6 @@ To stop the automatic compaction task, suspend or terminate the supervisor throu
 
 The MSQ task engine is available as a compaction engine if you configure auto-compaction to use compaction supervisors. To use the MSQ task engine for automatic compaction, make sure the following requirements are met:
 
-* [Load the MSQ task engine extension](../multi-stage-query/index.md#load-the-extension).
 * In your Overlord runtime properties, set the following properties:
   * `druid.supervisor.compaction.enabled` to `true` so that compaction tasks can be run as a supervisor task.
   * Optionally, set `druid.supervisor.compaction.engine` to `msq` to specify the MSQ task engine as the default compaction engine. If you don't do this, you'll need to set `spec.engine` to `msq` for each compaction supervisor spec where you want to use the MSQ task engine.
```
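
For orientation (an editor's sketch, not part of this commit), the two Overlord properties named above could be set as follows; treat it as a minimal illustration, since the rest of your Overlord runtime configuration is deployment-specific:

```properties
# Required so that compaction tasks can run as supervisor tasks.
druid.supervisor.compaction.enabled=true

# Optional: make the MSQ task engine the default compaction engine.
# If you omit this, set spec.engine to msq in each compaction supervisor spec instead.
druid.supervisor.compaction.engine=msq
```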

docs/multi-stage-query/concepts.md

Lines changed: 3 additions & 12 deletions
```diff
@@ -23,21 +23,12 @@ sidebar_label: "Key concepts"
 ~ under the License.
 -->
 
-:::info
-This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
-extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
-ingestion method is right for you.
-:::
+This page describes SQL-based batch ingestion using the [multi-stage query (MSQ) task engine](../multi-stage-query/index.md). Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which ingestion method is right for you.
 
 ## Multi-stage query task engine
 
-The `druid-multi-stage-query` extension adds a multi-stage query (MSQ) task engine that executes SQL statements as batch
-tasks in the indexing service, which execute on [Middle Managers](../design/architecture.md#druid-services).
-[INSERT](reference.md#insert) and [REPLACE](reference.md#replace) tasks publish
-[segments](../design/storage.md) just like [all other forms of batch
-ingestion](../ingestion/index.md#batch). Each query occupies at least two task slots while running: one controller task,
-and at least one worker task. As an experimental feature, the MSQ task engine also supports running SELECT queries as
-batch tasks. The behavior and result format of plain SELECT (without INSERT or REPLACE) is subject to change.
+The MSQ task engine executes SQL statements as batch tasks in the indexing service, which execute on [Middle Managers](../design/architecture.md#druid-services).
+[INSERT](reference.md#insert) and [REPLACE](reference.md#replace) tasks publish [segments](../design/storage.md) just like [all other forms of batch ingestion](../ingestion/index.md#batch). Each query occupies at least two task slots while running: one controller task, and at least one worker task. As an experimental feature, the MSQ task engine also supports running SELECT queries as batch tasks. The behavior and result format of plain SELECT (without INSERT or REPLACE) is subject to change.
 
 You can execute SQL statements using the MSQ task engine through the **Query** view in the [web
 console](../operations/web-console.md) or through the [`/druid/v2/sql/task` API](../api-reference/sql-ingestion-api.md).
```
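
For orientation (an editor's sketch, not part of this commit), the statement below is the kind of SQL the MSQ task engine runs as a batch task; the `wikipedia` and `wikipedia_hourly` datasource names and columns are hypothetical. Submitting it through the **Query** view or the `/druid/v2/sql/task` API launches one controller task plus at least one worker task, and the results are published as segments:

```sql
-- Rebuild a hypothetical "wikipedia_hourly" datasource from an existing
-- "wikipedia" datasource, rolling rows up to hourly granularity.
REPLACE INTO "wikipedia_hourly"
OVERWRITE ALL
SELECT
  TIME_FLOOR(__time, 'PT1H') AS __time,
  "page",
  SUM("added") AS "added"
FROM "wikipedia"
GROUP BY 1, 2
PARTITIONED BY DAY
```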

docs/multi-stage-query/examples.md

Lines changed: 2 additions & 5 deletions
```diff
@@ -23,11 +23,8 @@ sidebar_label: Examples
 ~ under the License.
 -->
 
-:::info
-This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
-extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
-ingestion method is right for you.
-:::
+
+This page describes SQL-based batch ingestion using the [multi-stage query task engine](../multi-stage-query/index.md) (MSQ task engine). Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which ingestion method is right for you.
 
 These example queries show you some of the things you can do when modifying queries for your use case. Copy the example queries into the **Query** view of the web console and run them to see what they do.
 
```

docs/multi-stage-query/index.md

Lines changed: 4 additions & 12 deletions
```diff
@@ -24,14 +24,9 @@ description: Introduces multi-stage query architecture and its task engine
 ~ under the License.
 -->
 
-:::info
-This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
-extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
-ingestion method is right for you.
-:::
+This page describes SQL-based batch ingestion using the [multi-stage query (MSQ) task engine](../multi-stage-query/index.md). Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which ingestion method is right for you.
 
-Apache Druid supports SQL-based ingestion using the bundled [`druid-multi-stage-query` extension](#load-the-extension).
-This extension adds a [multi-stage query task engine for SQL](concepts.md#multi-stage-query-task-engine) that allows running SQL
+Apache® Druid supports SQL-based ingestion using MSQ task engine that allows running SQL
 [INSERT](concepts.md#load-data-with-insert) and [REPLACE](concepts.md#overwrite-data-with-replace) statements as batch tasks. As an experimental feature,
 the task engine also supports running `SELECT` queries as batch tasks.
 
@@ -59,12 +54,9 @@ transformation: creating new tables based on queries of other tables.
 - **Shuffle**: Workers exchange data between themselves on a per-partition basis in a process called
   shuffling. During a shuffle, each output partition is sorted by a clustering key.
 
-## Load the extension
+## External resource types
 
-To add the extension to an existing cluster, add `druid-multi-stage-query` to `druid.extensions.loadlist` in your
-`common.runtime.properties` file.
-
-For more information about how to load an extension, see [Loading extensions](../configuration/extensions.md#loading-extensions).
+The MSQ task engine supports reading and writing data from external sources through the EXTERN function.
 
 To use [EXTERN](reference.md#extern-function), you need READ permission on the resource named "EXTERNAL" of the resource type
 "EXTERNAL". If you encounter a 403 error when trying to use `EXTERN`, verify that you have the correct permissions.
```

docs/multi-stage-query/known-issues.md

Lines changed: 0 additions & 6 deletions
```diff
@@ -23,12 +23,6 @@ sidebar_label: Known issues
 ~ under the License.
 -->
 
-:::info
-This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
-extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
-ingestion method is right for you.
-:::
-
 ## Multi-stage query task runtime
 
 - Fault tolerance is partially implemented. Workers get relaunched when they are killed unexpectedly. The controller does not get relaunched if it is killed unexpectedly.
```

docs/multi-stage-query/reference.md

Lines changed: 0 additions & 6 deletions
```diff
@@ -23,12 +23,6 @@ sidebar_label: Reference
 ~ under the License.
 -->
 
-:::info
-This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
-extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
-ingestion method is right for you.
-:::
-
 ## SQL reference
 
 This topic is a reference guide for the multi-stage query architecture in Apache Druid. For examples of real-world
```

docs/multi-stage-query/security.md

Lines changed: 0 additions & 6 deletions
```diff
@@ -23,12 +23,6 @@ sidebar_label: Security
 ~ under the License.
 -->
 
-:::info
-This page describes SQL-based batch ingestion using the [`druid-multi-stage-query`](../multi-stage-query/index.md)
-extension, new in Druid 24.0. Refer to the [ingestion methods](../ingestion/index.md#batch) table to determine which
-ingestion method is right for you.
-:::
-
 All authenticated users can use the multi-stage query task engine (MSQ task engine) through the UI and API if the
 extension is loaded. However, without additional permissions, users are not able to issue queries that read or write
 Druid datasources or external data. The permission needed depends on what the user is trying to do.
```
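
As an illustration (an editorial assumption, not part of this commit), with the `druid-basic-security` authorizer a role that ingests external data into a single datasource might be granted permissions shaped like the following; the `wikipedia` datasource name is hypothetical:

```json
[
  { "resource": { "type": "EXTERNAL",   "name": "EXTERNAL"  }, "action": "READ"  },
  { "resource": { "type": "DATASOURCE", "name": "wikipedia" }, "action": "WRITE" }
]
```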

docs/querying/query-from-deep-storage.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -26,8 +26,6 @@ Druid can query segments that are only stored in deep storage. Running a query f
 
 ## Prerequisites
 
-Query from deep storage requires the Multi-stage query (MSQ) task engine. Load the extension for it if you don't already have it enabled before you begin. See [enable MSQ](../multi-stage-query/index.md#load-the-extension) for more information.
-
 To be queryable, your datasource must meet one of the following conditions:
 
 - At least one segment from the datasource is loaded onto a Historical service for Druid to plan the query. This segment can be any segment from the datasource. You can verify that a datasource has at least one segment on a Historical service if it's visible in the Druid console.
```
