
Gate chunking on RL convergence (EMA of |delta_Q|)#577

Draft
kimjune01 wants to merge 4 commits into SoarGroup:development from kimjune01:rl-convergence-gate

Conversation


@kimjune01 kimjune01 commented Mar 25, 2026

Summary

Experimental / demonstration only. This PR implements a convergence gate for RL-chunking composition as described in Diagnosis: Soar and Prescription: Soar. It is intended to illustrate the approach and invite discussion, not for immediate merge.

  • Tracks an exponential moving average of |ΔQ| per RL production rule
  • When all RL rules on a slot converge (EMA below threshold), forces greedy selection instead of stochastic exploration
  • Deterministic selection enables chunking to compile the converged policy into a production rule
  • Three new parameters: chunk-gate (on/off), chunk-gate-threshold (default 0.01), chunk-gate-ema-decay (default 0.95)
  • Off by default — zero behavior change until explicitly enabled

Motivation

Laird (2022) §4 identifies the RL–chunking composition gap: "RL uses stochastic selection, chunking requires deterministic results, so the two cannot compose." The planned fix is gating chunking on RL convergence. This patch implements that gate.

The same EMA that triggers compilation can later trigger invalidation (detecting divergence after a period of stability), but that is left for a follow-up.

Changes

| File | Change |
| --- | --- |
| production.h | Add rl_ema_delta_q field to production struct |
| production.cpp, rete.cpp, reinforcement_learning.cpp | Initialize new field to 1.0 (unconverged) at all creation sites |
| reinforcement_learning.h | Declare 3 new params + rl_slot_converged() |
| reinforcement_learning.cpp | Update EMA in rl_perform_update(), add param init, implement rl_slot_converged() |
| decide.cpp | In run_preference_semantics(), check convergence before exploration policy; if converged, select greedily |

Tests

All pass locally (M4 Pro, macOS, 16s clean build):

  • testRLConvergenceGate — agent with chunk-gate ON, 50 decisions, completes successfully
  • testRLConvergenceGateOff — same agent with chunk-gate OFF, regression test
  • testRLConvergenceGateParams — verifies new params accepted by command parser
  • Existing RL/chunking tests (Chunk_RL_Proposal, RL_Variablization, testPreferenceSemantics, testLearn) all pass

Reference

"Introduction to the Soar Cognitive Architecture" (Laird, 2022, arXiv:2205.03854), §4, p.10.

See also: #578 — Episodic-to-semantic consolidation and eviction (prescription steps 1 & 2)

Gate chunking on RL convergence by tracking an exponential moving
average (EMA) of |delta_Q| per production rule.  When all RL rules
contributing numeric-indifferent preferences to a slot have converged
(EMA below threshold), the decision is made greedily instead of
stochastically. This makes the decision deterministic, which enables
chunking to compile the converged policy into a production rule.

New parameters (all under rl):
  chunk-gate           on/off (default off) — enable convergence gating
  chunk-gate-threshold double  (default 0.01) — EMA below this = converged
  chunk-gate-ema-decay double  (default 0.95) — EMA smoothing factor

When chunk-gate is off, behavior is identical to the existing codebase.

Motivation: Laird (2022) §4 identifies the RL–chunking composition gap
as a known limitation. RL uses stochastic exploration, chunking requires
deterministic results, so the two cannot compose. The planned fix is to
gate chunking on RL convergence. This patch implements that gate.

Reference: "Introduction to the Soar Cognitive Architecture"
(Laird, 2022, arXiv:2205.03854), §4, p.10.
Three FullTests covering the chunk-gate feature:

1. testRLConvergenceGate: agent with RL learning and chunk-gate ON
   (fast EMA decay=0.5, threshold=0.1). Two operators with consistent
   reward. Verifies 50 decisions complete without crash/hang.

2. testRLConvergenceGateOff: same agent with chunk-gate OFF.
   Regression test — identical decision count confirms no behavior
   change when feature is disabled.

3. testRLConvergenceGateParams: verifies the three new parameters
   (chunk-gate, chunk-gate-threshold, chunk-gate-ema-decay) are
   accepted by the command parser with valid values.

All three tests pass. Existing RL/chunking tests (Chunk_RL_Proposal,
RL_Variablization, testPreferenceSemantics, testLearn) also pass,
confirming zero regression.
The two test agents were nearly identical — only the chunk-gate
params differed. Now there's one shared agent file; the C++ tests
set rl params via ExecuteCommandLine before sourcing.
