Releases: vzakharchenko/forge-sql-orm
2.1.21
🚀 What’s New
🛡️ Safer Development WebTriggers
Development WebTriggers now strictly verify the environment and refuse to run in Production. Note: remember to remove these triggers from your production manifest.
Available Developer WebTriggers:
- fetchSchemaWebTrigger. Generates a DDL script of the current Forge-SQL database, allowing schema export to any MySQL-compatible DB for debugging.
- dropSchemaMigrations. Performs a complete cleanup by dropping all tables and sequences.
- dropTableSchemaMigrations. Drops all tables while preserving sequences.
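For reference, a dev-only manifest might wire one of these triggers roughly as follows (a sketch; the module keys and handler names here are illustrative assumptions, not the library's exact exports):

```yaml
# Illustrative dev-only wiring. Remove before shipping to production.
modules:
  webtrigger:
    - key: fetch-schema-dev
      function: fetchSchema
  function:
    - key: fetchSchema
      handler: index.fetchSchemaWebTrigger
```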
✨ Updated Examples
• All examples have been updated to align with RoA eligibility.
• Development triggers in examples are now disabled by default.
🐛 CLI Bug Fixes
• Fixed various bugs in forge-sql-orm-cli to improve stability.
🤖 Improved Rovo Integration
- Added support for single quotes (') in context variables.
- Ensures correct syntax handling when wrapping variables for AI context.
📦 Dependency Updates
• Dependencies updated for better stability.
2.1.18
✨ What’s New / Updated
🟢 Node.js 24 Examples
• All examples have been updated to the node24.x runtime
• Aligns sample code with the latest Forge runtime
🛠 forge-sql-orm-cli: migration:create
• Migration generation no longer relies on drizzle-orm-cli
• Table metadata is now fetched directly from the database (SHOW TABLES)
• More predictable and reliable migration creation
📦 Dependency Updates
• Project dependencies have been updated
• Improves compatibility, stability, and security
2.1.17
✨ What’s New
✅ Node.js 24 Support
• Full compatibility with Node.js 24
• No breaking changes for existing Node 20/22 users
• Ready for upcoming Forge runtime updates
⚡ Non-Blocking Query Degradation Analysis
• Query analysis now runs asynchronously
• Resolver responses are not blocked by performance diagnostics
• Uses Forge event queue + consumer model
🔁 Automatic Fallback
• If async queue push fails or times out, analysis falls back to synchronous execution
• Guarantees analysis is never skipped
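The fallback above can be pictured as a small wrapper; `enqueueAnalysis` and `analyzeSync` below are illustrative stand-ins, not forge-sql-orm internals:

```typescript
// Sketch: try the async queue first; on push failure or timeout, run the
// analysis synchronously so it is never skipped. Names are illustrative.
type Analyze = (jobId: string) => Promise<void>;

async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("queue push timed out")), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer); // avoid a dangling rejection after the race settles
  }
}

async function scheduleAnalysis(
  jobId: string,
  enqueueAnalysis: Analyze, // e.g. push to the Forge event queue
  analyzeSync: Analyze,     // e.g. run the diagnostics inline
): Promise<"async" | "sync"> {
  try {
    await withTimeout(enqueueAnalysis(jobId), 500);
    return "async"; // a consumer picks the job up; logs share the Job ID
  } catch {
    await analyzeSync(jobId); // guaranteed fallback
    return "sync";
  }
}
```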
🔗 Log Correlation with Job IDs
• Resolver and consumer logs include a shared Job ID
• Makes tracing async analysis straightforward in production logs
🎯 Why It Matters
• Faster resolvers in production
• Safer performance diagnostics
• Scales observability without impacting user experience
2.1.16
🔧 Improved Migration Engine & CLI Stability
This release focuses on making schema evolution more predictable and strengthening the CLI, especially around complex Drizzle-based diff generation and safe NOT NULL transitions.
✨ What’s New
- Refined forge-sql-orm-cli Behavior
1.1 Removed Dynamic Imports During Diff Generation
The CLI no longer relies on dynamic imports when resolving schema diffs between the reference database and Drizzle models.
1.2 Safe NOT NULL Field Introduction (3-Step Migration)
When adding a new field that is NOT NULL in the reference schema, the CLI now generates a deterministic 3-migration sequence:
• Migration 1 — create the field as nullable
• Migration 2 — populate the field with a default value (if the default is not defined in schema, it must be provided manually)
• Migration 3 — convert the field to NOT NULL
This avoids runtime failures and ensures smooth transitions in production schemas.
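Sketched for a hypothetical `users.status` column (the table, column, and default value are illustrative), the three generated migrations are conceptually:

```sql
-- Migration 1: introduce the column as nullable
ALTER TABLE users ADD COLUMN status VARCHAR(32) NULL;

-- Migration 2: backfill existing rows (if the default is not defined in the
-- schema, supply it manually here)
UPDATE users SET status = 'active' WHERE status IS NULL;

-- Migration 3: tighten the column to NOT NULL
ALTER TABLE users MODIFY COLUMN status VARCHAR(32) NOT NULL;
```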
1.3 Converting Nullable → NOT NULL (2-Step Migration)
When updating an existing column from nullable to not null, the CLI now generates:
• Migration 1 — update all existing rows with a default value where the field is NULL (if the default is not defined in the schema, it must be provided manually)
• Migration 2 — apply the NOT NULL constraint
This prevents TiDB errors and enforces predictable schema tightening.
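Conceptually, for a hypothetical nullable `users.status` column (the names and default value are illustrative):

```sql
-- Migration 1: backfill NULL rows (if the default is not defined in the
-- schema, supply it manually here)
UPDATE users SET status = 'active' WHERE status IS NULL;

-- Migration 2: apply the NOT NULL constraint
ALTER TABLE users MODIFY COLUMN status VARCHAR(32) NOT NULL;
```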
⸻
- Improved Table Extraction Logic (Regex as Fallback)
The mechanism for extracting table names from SQL queries has been redesigned:
• The new parser avoids regex entirely for the primary extraction logic
• Regex is used only as a fallback for unusual or nested SQL structures
This improvement is essential for the caching layer:
by reliably detecting which tables a query touches, the ORM can precisely determine which cache entries should be invalidated — without over-invalidating or missing dependencies.
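To illustrate the idea (a simplified sketch, not the library's actual parser), a manual token walk can recover table names for common statements without regex:

```typescript
// Sketch: regex-free token walk that collects identifiers following
// FROM / JOIN / UPDATE / INTO. Illustrative only; the real parser also
// handles nested and unusual SQL via a regex fallback.
function extractTables(sql: string): string[] {
  const separators = " \t\n\r(),;";
  const tokens: string[] = [];
  let current = "";
  for (const ch of sql) {
    if (separators.includes(ch)) {
      if (current) tokens.push(current);
      current = "";
    } else {
      current += ch;
    }
  }
  if (current) tokens.push(current);

  const keywords = new Set(["FROM", "JOIN", "UPDATE", "INTO"]);
  const tables = new Set<string>();
  for (let i = 0; i + 1 < tokens.length; i++) {
    if (keywords.has(tokens[i].toUpperCase())) {
      const next = tokens[i + 1];
      // A subquery like "FROM (SELECT ..." leaves SELECT as the next token.
      if (next.toUpperCase() !== "SELECT") {
        tables.add(next.split("`").join("")); // strip backtick quoting
      }
    }
  }
  return [...tables];
}
```

Knowing exactly which tables a query touches is what lets the cache layer invalidate only the entries that depend on them.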
2.1.15
🔍 Enhanced Observability Layer
This release introduces a fully updated SQL observability system with deterministic diagnostics, safer failure analysis, and improved tooling for resolver-level profiling.
✨ What’s New
- Deterministic TopSlowest Mode (New Default)
The default mode no longer depends on CLUSTER_STATEMENTS_SUMMARY.
• Logs exact SQL digests executed inside a resolver
• Prints slowest queries with optional EXPLAIN ANALYZE
• Stable diagnostics for long-running workflows
```typescript
{
  topQueries: 2,
  showSlowestPlans: true,
}
```
- Improved SummaryTable Mode (Optional)
```typescript
summaryTableWindowTime: 15000 // 15s
```
Now treated as an advanced diagnostic mode.
Automatically falls back to TopSlowest if resolver execution exceeds the window.
- Updated Diagnostics API
```typescript
executeWithMetadata(fn, onStats, {
  mode: "TopSlowest" | "SummaryTable",
  summaryTableWindowTime: 15000,
  topQueries: 1,
  showSlowestPlans: true,
});
```
Gives full control over plan printing, thresholds, and sampling.
- Post-Mortem Analysis for Timeout & OOM
On Timeout or OOM errors, forge-sql-orm now performs an immediate post-mortem lookup:
• Captures the actual failing SQL
• Captures the real TiDB execution plan (no re-execution)
• Fully Forge-safe (metadata only)
Useful for diagnosing severe SQL failures without triggering them again.
- Deprecated Method Temporarily Restored
The deprecated trigger:
topSlowestStatementLastHourTrigger(...)
was restored for backward compatibility.
• Still marked @deprecated
• Internally wraps slowQuerySchedulerTrigger
• Will be removed in a future version
• Not equivalent to the new executeWithMetadata system
• Provided only to support existing apps during migration
Developers are encouraged to migrate to the new observability system:
• TopSlowest mode (default) for deterministic profiling
• SummaryTable mode for advanced use cases
• Automatic fallbacks and reliable post-mortem diagnostics
2.1.14
🔒 Rovo Integration - Secure Natural-Language Analytics
Forge SQL ORM now includes Rovo — a secure pattern for enabling AI-powered natural-language analytics with comprehensive security validations.
- Secure Dynamic SQL Execution
• Enables safe execution of user-generated SQL queries for AI analytics features.
• Multiple layers of security validations prevent SQL injection and unauthorized data access.
• AST-based SQL normalization ensures queries are properly parsed and validated before execution.
• Example:
```typescript
const rovo = forgeSQL.rovo();
const settings = await rovo
  .rovoSettingBuilder(usersTable, accountId)
  .useRLS()
  .addRlsColumn(usersTable.id)
  .addRlsWherePart((alias) => `${alias}.id = '${accountId}'`)
  .finish()
  .build();

const result = await rovo.dynamicIsolatedQuery(
  "SELECT id, name FROM users WHERE status = 'active'",
  settings
);
```
- Comprehensive Security Validations
• Query Type Restriction: Only SELECT queries are allowed — blocks INSERT, UPDATE, DELETE, and other data modification operations.
• Single Table Isolation: Queries are restricted to a single table to prevent cross-table data access.
• JOIN Detection: Automatically detects and blocks JOIN operations using EXPLAIN analysis.
• Subquery Blocking: Prevents scalar subqueries in SELECT columns that could access other tables.
• Window Function Blocking: Blocks window functions (e.g., COUNT(*) OVER()) for security.
• Execution Plan Validation: Verifies that only the expected table is accessed in the query execution plan.
• Post-Execution Validation: Ensures all result fields come from the correct table and required security fields are present.
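As a rough illustration of the first two checks only (query-type restriction and a naive single-table check), and not Rovo's actual implementation, which uses AST normalization and EXPLAIN-based plan validation:

```typescript
// Sketch: naive substring-based guards, for illustration only. The real
// validation layer parses the SQL AST and inspects the execution plan,
// so it does not suffer from the false positives this sketch would.
function validateSelectOnly(sql: string, allowedTable: string): void {
  const trimmed = sql.trim();
  const upper = trimmed.toUpperCase();
  if (!upper.startsWith("SELECT")) {
    throw new Error("Only SELECT queries are allowed");
  }
  for (const blocked of ["INSERT", "UPDATE", "DELETE", " JOIN ", " OVER("]) {
    if (upper.includes(blocked)) {
      throw new Error(`Blocked construct: ${blocked.trim()}`);
    }
  }
  // Naive single-table isolation: the token after FROM must be the allowed table.
  const fromIdx = upper.indexOf(" FROM ");
  if (fromIdx === -1) throw new Error("Missing FROM clause");
  const tableToken = trimmed.slice(fromIdx + 6).trim().split(/\s+/)[0];
  if (tableToken.toLowerCase() !== allowedTable.toLowerCase()) {
    throw new Error(`Query must target only ${allowedTable}`);
  }
}
```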
- Row-Level Security (RLS) Support
• Built-in RLS support for data isolation based on user context.
• Conditional RLS activation based on user roles or permissions.
• Type-safe column references using Drizzle ORM table objects.
• Automatic query wrapping with RLS filtering when enabled.
• Example:
```typescript
const settings = await rovo
  .rovoSettingBuilder(securityNotesTable, accountId)
  .useRLS()
  .addRlsCondition(async () => {
    const userService = getUserService();
    return !(await userService.isAdmin()); // Only apply RLS for non-admin users
  })
  .addRlsColumn(securityNotesTable.createdBy)
  .addRlsWherePart((alias) => `${alias}.createdBy = '${accountId}'`)
  .finish()
  .build();
```
- Type-Safe Configuration
• Uses Drizzle ORM table objects for type-safe column references.
• Supports both raw table names and Drizzle table objects.
• Context parameter substitution for dynamic query values.
• Optional query logging for debugging (logRawSqlQuery option).
⚡ Designed to enable secure AI-powered analytics features where users can query data using natural language, with comprehensive security validations built directly into Forge SQL ORM.
📖 Real-World Example: See Forge-Secure-Notes-for-Jira for a complete implementation of Rovo AI agent with secure natural-language analytics.
2.1.13
What's Changed
- Bump @vitest/coverage-v8 from 4.0.7 to 4.0.8 by @dependabot[bot] in #1028
Full Changelog: 2.1.12...2.1.13
2.1.12
🚀 What’s New
✨ Enhanced Observability Layer
Forge SQL ORM now provides end-to-end observability — from resolver-level profiling to automatic diagnostics and scheduled slow-query monitoring.
- Resolver-Level Observability
• Added built-in profiling across all SQL queries executed inside a resolver.
• Automatically aggregates totalDbExecutionTime and totalResponseSize for every invocation.
• Emits performance warnings when exceeding baseline thresholds and can print execution plans for analysis.
• Example:
```typescript
resolver.define("fetch", async (req: Request) => {
  try {
    return await forgeSQL.executeWithMetadata(
      async () => {
        const users = await forgeSQL.selectFrom(demoUsers);
        const orders = await forgeSQL
          .selectFrom(demoOrders)
          .where(eq(demoOrders.userId, demoUsers.id));
        return { users, orders };
      },
      async (totalDbExecutionTime, totalResponseSize, printQueriesWithPlan) => {
        const threshold = 500; // ms baseline
        if (totalDbExecutionTime > threshold * 1.5) {
          console.warn(`[Performance Warning fetch] Resolver exceeded DB time: ${totalDbExecutionTime} ms`);
          await printQueriesWithPlan(); // log or capture diagnostics
        } else if (totalDbExecutionTime > threshold) {
          console.debug(`[Performance Debug fetch] High DB time: ${totalDbExecutionTime} ms`);
        }
      }
    );
  } catch (e) {
    const error = e?.cause?.debug?.sqlMessage ?? e?.cause;
    console.error(error, e);
    throw error;
  }
});
```
- Automatic Error Analysis
• Automatically detects and diagnoses queries that fail with
“Your query has been cancelled due to exceeding the allowed memory limit for a single SQL query.”
or
“The provided query took more than 5000 milliseconds to execute.”
• Retrieves execution plans for failed queries directly from TiDB metadata tables for instant analysis.
• Works entirely within Forge SQL ORM — no external tools required.
- Slow Query Scheduler Trigger
• Introduces an hourly scheduler trigger that automatically collects and logs slow queries.
• Leverages INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY to track execution time, memory usage, and query plans.
• Example:
```typescript
import ForgeSQL, { slowQuerySchedulerTrigger } from "forge-sql-orm";

const forgeSQL = new ForgeSQL();

export const slowQueryTrigger = () =>
  slowQuerySchedulerTrigger(forgeSQL, { hours: 1, timeout: 3000 });
```
Configure in manifest.yml:
```yaml
scheduledTrigger:
  - key: slow-query-trigger
    function: slowQueryTrigger
    interval: hour
function:
  - key: slowQueryTrigger
    handler: index.slowQueryTrigger
```
⚡ Designed to provide full resolver-level profiling, automatic failure diagnostics, and periodic slow-query monitoring — all built directly into Forge SQL ORM.
2.1.9
🚀 What’s New
✨ New Execution APIs
I added three new execution helpers to make working with Forge SQL more powerful and transparent:
- executeWithMetadata(query, onMetadata)
• Runs any SQL/ORM query and provides full execution metadata.
• Exposes totalDbExecutionTime, totalResponseSize, and raw ForgeSQLMetadata.
• Perfect for monitoring query performance, logging, or building custom dashboards.
• Example:
```typescript
const result = await forgeSQL.executeWithMetadata(
  () => forgeSQL.select().from(users).where(eq(users.id, 1)),
  (dbTime, size, meta) => {
    console.log(`DB time: ${dbTime}ms, size: ${size} bytes`, meta);
  }
);
```
- executeDDL(sql)
• Execute DDL statements like CREATE, ALTER, DROP, TRUNCATE.
• Ensures proper context handling for schema modifications in Forge SQL.
• Example:
```typescript
await forgeSQL.executeDDL(`
  CREATE TABLE users (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE
  )
`);
```
- executeDDLActions(actions)
• Run a series of queries wrapped in a DDL operation context.
• Useful for treating normal SQL as DDL for monitoring / tracking purposes.
• Example:
```typescript
const slowQueries = await forgeSQL.executeDDLActions(async () => {
  return await forgeSQL.execute(`
    SELECT * FROM INFORMATION_SCHEMA.STATEMENTS_SUMMARY
    WHERE AVG_LATENCY > 1000000
  `);
});
```
⚡ Designed to make query execution more transparent, schema management safer, and monitoring easier inside Forge apps.
2.1.5
🚀 What’s New
✨ Memory & Latency Monitoring Trigger
I added a built-in scheduled trigger to help surface problematic queries automatically:
• topSlowestStatementLastHourTrigger(warnThresholdMs, warnMemMb?)
• Checks the slowest query or the most memory-intensive query from the last hour.
• Outputs compact JSON and writes structured warnings to the Forge Developer Console logs for easy inspection.
• Provides digest, preview of SQL text, latency (ms), memory (MB), and an execution plan snapshot.
• Skips noise (e.g., empty digest and USE/SET/SHOW statements).
• Designed for tenant-isolated Forge apps, helping stay within the Forge SQL ~16 MiB per-query limit.
📦 Cache Example Update
• Cache Example now shows advanced caching combined with topSlowestStatementLastHourTrigger.
• Demonstrates how caching + monitoring can both optimize query performance and detect high-latency or high-memory queries early.