
Performance Issue: O(n) Linear Search in AutocompleteLruCache for LRU Eviction #9303

@continue-staging

Description


Problem

The AutocompleteLruCache class in core/autocomplete/util/AutocompleteLruCache.ts has a performance bottleneck in its LRU eviction mechanism. When the cache reaches capacity and needs to evict the oldest entry, it performs an O(n) linear search through all cache entries to find the one with the oldest timestamp.

Current Implementation (Lines 146-157)

if (this.cache.size > AutocompleteLruCache.capacity) {
  let oldestKey: string | null = null;
  let oldestTime = Infinity;

  for (const [key, entry] of this.cache.entries()) {
    if (entry.timestamp < oldestTime) {
      oldestTime = entry.timestamp;
      oldestKey = key;
    }
  }

  if (oldestKey) {
    this.cache.delete(oldestKey);
    this.dirty.add(oldestKey);
  }
}

Performance Impact

  • Time Complexity: O(n) for each eviction operation
  • Cache Size: Default capacity is 1000 entries
  • Frequency: Occurs on every put() operation when cache is full
  • Scaling: Performance degrades linearly with cache size

For autocomplete functionality that may be called frequently during typing, this can introduce noticeable latency, especially as the cache grows toward its capacity limit.

Proposed Solution

Implement a more efficient LRU eviction strategy using one of these approaches:

Option 1: Doubly Linked List + HashMap

Maintain a doubly linked list to track access order alongside the existing Map:

  • Get/Put Operations: O(1)
  • LRU Eviction: O(1) - remove from tail
  • Memory Overhead: ~2-3x current usage (acceptable for autocomplete cache)
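Below is a minimal sketch of how such an index could sit alongside the existing Map. The CacheNode and LruList names and their methods are illustrative, not existing code:

// Minimal sketch of an O(1) LRU index kept alongside the existing Map.
// CacheNode and LruList are illustrative names, not existing code.
class CacheNode {
  prev: CacheNode | null = null;
  next: CacheNode | null = null;
  constructor(public key: string) {}
}

class LruList {
  private head: CacheNode | null = null; // most recently used
  private tail: CacheNode | null = null; // least recently used
  private nodes = new Map<string, CacheNode>();

  // Mark a key as most recently used (call on every get/put): O(1).
  touch(key: string): void {
    let node = this.nodes.get(key);
    if (node) {
      this.unlink(node);
    } else {
      node = new CacheNode(key);
      this.nodes.set(key, node);
    }
    this.pushFront(node);
  }

  // Remove and return the least recently used key: O(1).
  evictOldest(): string | null {
    const node = this.tail;
    if (!node) return null;
    this.unlink(node);
    this.nodes.delete(node.key);
    return node.key;
  }

  private pushFront(node: CacheNode): void {
    node.prev = null;
    node.next = this.head;
    if (this.head) this.head.prev = node;
    this.head = node;
    if (!this.tail) this.tail = node;
  }

  private unlink(node: CacheNode): void {
    if (node.prev) node.prev.next = node.next;
    else this.head = node.next;
    if (node.next) node.next.prev = node.prev;
    else this.tail = node.prev;
    node.prev = null;
    node.next = null;
  }
}

Worth noting: a JavaScript Map already iterates in insertion order, so a lower-code alternative with the same O(1) behavior is to delete and re-insert a key on every access and evict this.cache.keys().next().value.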

Option 2: Timestamp-Based Min-Heap

Use a min-heap to track entries by timestamp:

  • Eviction: O(log n) - better than current O(n)
  • Memory Overhead: Minimal additional space
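A sketch of the heap variant follows. Because timestamps are refreshed on access, the heap relies on lazy deletion: every operation that updates a timestamp also pushes a fresh heap entry, and stale entries are skipped when they surface at the top. TimestampHeap and evictViaHeap are illustrative names:

// Sketch: binary min-heap of { timestamp, key } pairs with lazy deletion.
type HeapEntry = { timestamp: number; key: string };

class TimestampHeap {
  private items: HeapEntry[] = [];

  push(entry: HeapEntry): void {
    this.items.push(entry);
    // Sift the new entry up until the heap property holds: O(log n).
    for (let i = this.items.length - 1; i > 0; ) {
      const parent = (i - 1) >> 1;
      if (this.items[parent].timestamp <= this.items[i].timestamp) break;
      [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
      i = parent;
    }
  }

  pop(): HeapEntry | undefined {
    const top = this.items[0];
    const last = this.items.pop();
    if (last !== undefined && this.items.length > 0) {
      this.items[0] = last;
      // Sift the moved entry down: O(log n).
      for (let i = 0; ; ) {
        const left = 2 * i + 1;
        const right = 2 * i + 2;
        let smallest = i;
        if (left < this.items.length &&
            this.items[left].timestamp < this.items[smallest].timestamp) smallest = left;
        if (right < this.items.length &&
            this.items[right].timestamp < this.items[smallest].timestamp) smallest = right;
        if (smallest === i) break;
        [this.items[smallest], this.items[i]] = [this.items[i], this.items[smallest]];
        i = smallest;
      }
    }
    return top;
  }

  get size(): number {
    return this.items.length;
  }
}

// Eviction: pop until an entry still matches the live cache timestamp.
// The caller is responsible for any dirty-set bookkeeping.
function evictViaHeap(
  heap: TimestampHeap,
  cache: Map<string, { timestamp: number }>,
): string | null {
  while (heap.size > 0) {
    const candidate = heap.pop()!;
    const live = cache.get(candidate.key);
    if (live !== undefined && live.timestamp === candidate.timestamp) {
      cache.delete(candidate.key);
      return candidate.key;
    }
    // Stale entry (key was refreshed or already deleted); skip it.
  }
  return null;
}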

Option 3: Periodic Cleanup Strategy

Instead of finding exact LRU on every eviction, periodically sort and bulk-evict:

  • Eviction: O(1) most of the time, O(n log n) during cleanup
  • Trade-off: Slightly less precise LRU semantics for better average performance
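A sketch of this strategy, using an illustrative slack of 100 entries on top of the default capacity of 1000:

// Sketch of periodic bulk eviction: let the cache overfill by a small
// slack, then sort by timestamp once and drop the oldest entries in bulk.
// The SLACK value and function name are illustrative.
const CAPACITY = 1000;
const SLACK = 100; // allow up to CAPACITY + SLACK entries before cleaning up

function maybeBulkEvict(
  cache: Map<string, { timestamp: number }>,
  dirty: Set<string>,
): void {
  if (cache.size <= CAPACITY + SLACK) {
    return; // O(1) in the common case: nothing to do
  }
  // O(n log n) cleanup, amortized over the SLACK puts that preceded it.
  const byAge = [...cache.entries()].sort(
    (a, b) => a[1].timestamp - b[1].timestamp,
  );
  const excess = cache.size - CAPACITY;
  for (const [key] of byAge.slice(0, excess)) {
    cache.delete(key);
    dirty.add(key); // mirrors the existing this.dirty.add(oldestKey) bookkeeping
  }
}

Marking each evicted key in the dirty set mirrors the current implementation, so the SQLite persistence behavior mentioned below would be preserved.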

Implementation Plan

  1. Benchmark Current Performance

    • Add timing metrics to eviction operations
    • Measure performance with cache at various fill levels
  2. Implement Doubly Linked List Solution (Recommended)

    • Create internal CacheNode class with prev/next pointers
    • Maintain head/tail pointers for O(1) access to oldest/newest
    • Update all cache operations to maintain list invariants
  3. Add Performance Tests (see the test sketch after this list)

    • Test eviction performance with full cache (1000 entries)
    • Verify O(1) eviction time regardless of cache size
    • Add stress tests for high-frequency put operations
  4. Backward Compatibility

    • Ensure all existing functionality remains unchanged
    • Maintain same public API
    • Preserve SQLite persistence behavior
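As a starting point for steps 1 and 3, here is a micro-benchmark sketch, assuming a Jest-style runner (the existing AutocompleteLruCache.test.ts suggests one is configured). The put stand-in is hypothetical and should be wired to the real cache API:

// Sketch of a performance regression test for eviction cost.
// The `put` stand-in below is hypothetical; replace it with the real
// AutocompleteLruCache put operation once the API is confirmed.
describe("LRU eviction performance", () => {
  it("keeps per-put latency flat once the cache is full", async () => {
    const put = async (key: string, value: string): Promise<void> => {
      // TODO: call the real AutocompleteLruCache put here.
    };

    // Fill the cache to its default capacity of 1000 entries.
    for (let i = 0; i < 1000; i++) {
      await put(`prefix-${i}`, `completion-${i}`);
    }

    // Every put from here on triggers an eviction; time a batch of them.
    const start = performance.now();
    for (let i = 1000; i < 2000; i++) {
      await put(`prefix-${i}`, `completion-${i}`);
    }
    const perPutMs = (performance.now() - start) / 1000;

    // Threshold is illustrative; tune it once real numbers are collected.
    expect(perPutMs).toBeLessThan(1);
  });
});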

Files to Modify

  • core/autocomplete/util/AutocompleteLruCache.ts - Main implementation
  • core/autocomplete/util/AutocompleteLruCache.test.ts - Add performance tests

Expected Benefits

  • Improved Autocomplete Responsiveness: Eliminate O(n) eviction delays
  • Better Scaling: Performance remains constant as cache grows
  • Reduced Typing Latency: Faster cache operations during active coding

Priority

Medium - This affects autocomplete performance but has a workaround (using a smaller cache size). It should be addressed to improve the user experience as autocomplete usage scales.

Labels

area:autocomplete, javascript