Expose builtin cache API #33
base: main
```diff
@@ -101,11 +101,15 @@ extension OPA.Engine {
     /// - Parameters:
     ///   - input: The input data to evaluate the query against.
     ///   - tracer: (optional) The tracer to use for this evaluation.
+    ///   - cache: (optional) Per-evaluation builtin cache shared across all builtin invocations within a single top-down policy evaluation.
+    ///     You can provide a custom cache instance to reuse results between evaluations.
+    ///     By default, a new, empty cache is created for each evaluation.
     ///   - strictBuiltins: (optional) Whether to run in strict builtin evaluation mode.
     ///     In strict mode, builtin errors abort evaluation, rather than returning undefined.
     public func evaluate(
         input: AST.RegoValue = .undefined,
         tracer: OPA.Trace.QueryTracer? = nil,
+        cache: BuiltinsCache = .init(),
```
**Contributor:**

Given that this is a per-evaluation cache, and the scope of this function is a new evaluation, what is the use case for allowing the caller to pass in an existing cache, which would likely violate our constraints around non-concurrent access?
**Contributor (Author):**

Great question! I completely agree that in most cases the caller shouldn't pass in an existing cache. That said, we've encountered scenarios where it's useful to reuse a warmed cache across multiple sequential evaluations (never concurrently). This can significantly reduce overhead for deterministic builtins that, for example, perform costly network calls whose results are stable for a short period of time. To support that optimization, I exposed the cache parameter on `evaluate`. I'll extend the documentation to make it clear that the cache must never be shared across concurrent evaluations, only reused sequentially under controlled conditions.
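Below is a minimal sketch of the sequential-reuse pattern described above. It assumes an existing `engine` (an `OPA.Engine`) and two hypothetical inputs; only `evaluate`, its `cache` parameter, and `BuiltinsCache` come from this PR, everything else is illustrative:

```swift
// Create the cache up front so it can outlive a single evaluation.
let warmCache = BuiltinsCache()

// The first evaluation warms the cache (e.g. with the results of
// deterministic, network-backed builtins).
let first = try await engine.evaluate(input: firstInput, cache: warmCache)

// Awaiting the first evaluation before starting the next one keeps the
// reuse strictly sequential; the cache must never be shared across
// concurrent evaluations.
let second = try await engine.evaluate(input: secondInput, cache: warmCache)
```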
**Contributor:**

I think this mixes two separate types of caches in a problematic way. There's a separate concept of an inter-query cache, which spans evaluations, where we need to give consideration to synchronization, TTL/expiration, etc. I propose we keep these two layers of cache separate and take on designing the inter-query cache in a separate PR.
**Contributor (Author):**

In general, agreed. In the long run we should absolutely separate the two cache types, with clear semantics (per-evaluation vs. inter-query, including TTL and synchronization) and a proper API. That said, the current approach simply offers an opt-in escape hatch for advanced users to reuse a warmed cache sequentially, never concurrently. It's clearly and extensively documented and leaves both the choice and the responsibility with the developer. A full inter-query cache will definitely need a proper design pass covering synchronization and TTL management, but until then, this lightweight, documented option offers a practical middle ground, giving developers flexibility without impacting default behavior. Also worth noting: the `tracer` parameter already sets a precedent for passing per-evaluation state into `evaluate`.
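To make the two layers concrete, here is a purely hypothetical sketch of what a separate inter-query cache might need to address, namely synchronization and TTL-based expiration; none of these types exist in this codebase, and all names are invented for illustration:

```swift
import Foundation

// Hypothetical inter-query cache: synchronized and TTL-bounded, in
// contrast to the per-evaluation cache, which assumes non-concurrent
// access and has no expiration.
final class InterQueryCache<Value>: @unchecked Sendable {
    private struct Entry {
        let value: Value
        let expiresAt: Date
    }

    private var storage: [String: Entry] = [:]
    private let lock = NSLock()
    private let ttl: TimeInterval

    init(ttl: TimeInterval) {
        self.ttl = ttl
    }

    // Returns a cached value only if it has not expired; stale entries
    // are evicted on access.
    func value(forKey key: String) -> Value? {
        lock.lock()
        defer { lock.unlock() }
        guard let entry = storage[key], entry.expiresAt > Date() else {
            storage[key] = nil
            return nil
        }
        return entry.value
    }

    // Stores a value with a fresh TTL; the lock makes it safe to share
    // the cache across concurrent evaluations.
    func insert(_ value: Value, forKey key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = Entry(value: value, expiresAt: Date().addingTimeInterval(ttl))
    }
}
```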
**Contributor:**

That precedent seems like a bug to me - the QueryTracer should absolutely be …
```diff
         strictBuiltins: Bool = false
     ) async throws -> ResultSet {
         let ctx = EvaluationContext(
```

```diff
@@ -114,6 +118,7 @@ extension OPA.Engine {
             store: self.store,
             builtins: self.builtinRegistry,
             tracer: tracer,
+            cache: cache,
             strictBuiltins: strictBuiltins
         )
```
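After the change, call sites look like this (a sketch assuming an existing `engine` and `input`); omitting `cache` preserves the previous behavior, because the default value creates a fresh, empty cache per evaluation:

```swift
// Default: a new, empty BuiltinsCache is created for this evaluation.
let results = try await engine.evaluate(input: input)

// Opt-in: supply an explicit cache instance to control its lifetime.
let cache = BuiltinsCache()
let cachedResults = try await engine.evaluate(input: input, cache: cache)
```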