Atomic GetOrAdd

Use a cache builder to create a cache with atomic GetOrAdd.

To mitigate cache stampedes, caches can be configured with atomic GetOrAdd. When multiple threads attempt to insert a value for the same key, the first caller invokes valueFactory and stores the value; subsequent callers block until the new value has been generated.

Atomic GetOrAdd is enabled by creating a cache using a cache builder.
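
As a minimal sketch using the ConcurrentLruBuilder API (the key type, value type, and capacity below are illustrative):

```csharp
using BitFaster.Caching;
using BitFaster.Caching.Lru;

// Build a bounded LRU cache where GetOrAdd is atomic: for a given key,
// only one caller runs the value factory; the rest wait for its result.
ICache<int, string> cache = new ConcurrentLruBuilder<int, string>()
    .WithCapacity(128) // illustrative capacity
    .WithAtomicGetOrAdd()
    .Build();

string value = cache.GetOrAdd(1, key => key.ToString());
```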

Default cache behavior matches ConcurrentDictionary

By default, both ConcurrentLru and ConcurrentLfu behave the same as ConcurrentDictionary. Modifications to the internal hash map are protected by locks. However, the valueFactory delegate is called outside the locks to avoid the problems that can arise from executing unknown code under a lock.

Since another thread can insert a value for the same key while valueFactory is running, the fact that valueFactory executed does not guarantee that the value it produced was inserted into the dictionary and returned. When GetOrAdd is called simultaneously on different threads, valueFactory may be invoked multiple times, but only one key/value pair is added to the dictionary: the first value to be successfully inserted wins, and values produced by the other callers are discarded.

Under load, this can result in a cascading failure known as a cache stampede.
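
The sketch below is a hypothetical demonstration of this default, non-atomic behavior (the key, capacity, and delay are illustrative): several threads call GetOrAdd for the same key, and the factory may run more than once even though the cache ends up holding a single value.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using BitFaster.Caching.Lru;

// Default ConcurrentLru: valueFactory runs outside the lock, so concurrent
// callers for the same key may each invoke it.
var cache = new ConcurrentLru<int, string>(128);
int factoryCalls = 0;

Parallel.For(0, 8, _ =>
{
    cache.GetOrAdd(1, key =>
    {
        Interlocked.Increment(ref factoryCalls);
        Thread.Sleep(100); // simulate an expensive value computation
        return key.ToString();
    });
});

// factoryCalls can be greater than 1; the cache still holds one value for key 1.
Console.WriteLine($"factory calls: {factoryCalls}");
```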

Under the hood

For sync caches, atomic GetOrAdd is functionally equivalent to caching a Lazy&lt;T&gt;, except that exceptions are not cached. For async caches, subsequent callers await the first caller's task. If the valueFactory throws an exception, all callers concurrently awaiting the task will see the same exception. Once the valueFactory has failed, a subsequent call to GetOrAddAsync will start fresh.
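
As a sketch of the async case (assuming the builder API shown above; the key, capacity, and delay are illustrative), concurrent awaiters share the first caller's task:

```csharp
using System.Linq;
using System.Threading.Tasks;
using BitFaster.Caching;
using BitFaster.Caching.Lru;

// Async cache with atomic GetOrAdd: the first caller starts the task,
// and subsequent callers await that same task.
IAsyncCache<int, string> cache = new ConcurrentLruBuilder<int, string>()
    .WithCapacity(128)
    .WithAtomicGetOrAdd()
    .AsAsyncCache()
    .Build();

var results = await Task.WhenAll(Enumerable.Range(0, 8).Select(async _ =>
    await cache.GetOrAddAsync(1, async key =>
    {
        await Task.Delay(100); // simulate an expensive async fetch
        return key.ToString();
    })));

// If the factory throws instead, every awaiting caller observes the same
// exception, and the next GetOrAddAsync call starts a fresh attempt.
```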

The valueFactory delegate is not cached internally, and values created atomically do not hold references to the factory delegate. This avoids unexpectedly rooting object graphs and leaking memory.
