From a2799a10c22664a6f2fbcaeadc60ea44262678bc Mon Sep 17 00:00:00 2001
From: Steven Tan
Date: Mon, 9 Mar 2026 18:50:50 +0800
Subject: [PATCH] Fix 2 broken reference links in spark-declarative-pipelines
 skill
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- validation-checklist.md (doesn't exist) → 7-advanced-configuration.md
- 3-scd-patterns.md (wrong name) → 3-scd-query-patterns.md
---
 .../databricks-spark-declarative-pipelines/SKILL.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/databricks-skills/databricks-spark-declarative-pipelines/SKILL.md b/databricks-skills/databricks-spark-declarative-pipelines/SKILL.md
index 60afef0b..feea8afd 100644
--- a/databricks-skills/databricks-spark-declarative-pipelines/SKILL.md
+++ b/databricks-skills/databricks-spark-declarative-pipelines/SKILL.md
@@ -138,7 +138,7 @@ databricks bundle deploy --target prod
 **Using Python API?** → Read [5-python-api.md](5-python-api.md)
 **Migrating from DLT?** → Read [6-dlt-migration.md](6-dlt-migration.md)
 **Advanced configuration?** → Read [7-advanced-configuration.md](7-advanced-configuration.md)
- **Validating?** → Read [validation-checklist.md](validation-checklist.md)
+ **Validating?** → Read [7-advanced-configuration.md](7-advanced-configuration.md) (dry_run, development mode)
 
 2. Follow the instructions in the relevant guide
 
@@ -526,7 +526,7 @@ def enriched_orders():
 | **Streaming reads fail** | For file ingestion in a streaming table, you must use the `STREAM` keyword with `read_files`: `FROM STREAM read_files(...)`. For table streams use `FROM stream(table)`. See [read_files — Usage in streaming tables](https://docs.databricks.com/aws/en/sql/language-manual/functions/read_files#usage-in-streaming-tables). |
 | **Timeout during run** | Increase `timeout`, or use `wait_for_completion=False` and check status with `get_pipeline` |
 | **MV doesn't refresh** | Enable row tracking on source tables |
-| **SCD2: query column not found** | Lakeflow uses `__START_AT` and `__END_AT` (double underscore), not `START_AT`/`END_AT`. Use `WHERE __END_AT IS NULL` for current rows. See [3-scd-patterns.md](3-scd-patterns.md). |
+| **SCD2: query column not found** | Lakeflow uses `__START_AT` and `__END_AT` (double underscore), not `START_AT`/`END_AT`. Use `WHERE __END_AT IS NULL` for current rows. See [3-scd-query-patterns.md](3-scd-query-patterns.md). |
 | **AUTO CDC parse error at APPLY/SEQUENCE** | Put `APPLY AS DELETE WHEN` **before** `SEQUENCE BY`. Only list columns in `COLUMNS * EXCEPT (...)` that exist in the source (omit `_rescued_data` unless bronze uses rescue data). Omit `TRACK HISTORY ON *` if it causes "end of input" errors; default is equivalent. See [2-streaming-patterns.md](2-streaming-patterns.md). |
 | **"Cannot create streaming table from batch query"** | In a streaming table query, use `FROM STREAM read_files(...)` so `read_files` leverages Auto Loader; `FROM read_files(...)` alone is batch. See [1-ingestion-patterns.md](1-ingestion-patterns.md) and [read_files — Usage in streaming tables](https://docs.databricks.com/aws/en/sql/language-manual/functions/read_files#usage-in-streaming-tables). |
 