The Problem

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:
LRUCache(int capacity): Initialize the cache with positive size capacity.
int get(int key): Return the value of the key if it exists, otherwise return -1. This marks the key as recently used.
void put(int key, int value): Update the value of the key if it exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity, evict the least recently used key.

Both get and put must run in O(1) average time complexity.
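Before building anything by hand, the required behavior can be pinned down with a small sketch. The block below uses java.util.LinkedHashMap in access order, which can evict its eldest entry automatically; it illustrates the get/put/evict semantics only, not the from-scratch O(1) design this article builds.

```kotlin
import java.util.LinkedHashMap

// Behavioral sketch only: LinkedHashMap with accessOrder = true keeps entries
// ordered from least to most recently accessed, and removeEldestEntry lets us
// evict once the size exceeds the capacity.
fun main() {
    val capacity = 2
    val cache = object : LinkedHashMap<Int, Int>(capacity, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<Int, Int>) =
            size > capacity
    }
    cache[1] = 1            // put(1, 1)
    cache[2] = 2            // put(2, 2)
    check(cache[1] == 1)    // get(1): marks key 1 as recently used
    cache[3] = 3            // put(3, 3): evicts key 2, the least recently used
    check(cache[2] == null) // the LRUCache API would return -1 here
    check(cache[3] == 3)
    println("LRU semantics confirmed")
}
```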

Code
class Node(var key: Int, var value: Int) {
    var prev: Node? = null
    var next: Node? = null
}

class LRUCache(private val cap: Int) {
    private val cache = HashMap<Int, Node>() // key -> node
    private val head = Node(0, 0) // dummy
    private val tail = Node(0, 0) // dummy

    init { head.next = tail; tail.prev = head }

    // Unlink a node from the list in O(1) by rewiring its neighbors.
    private fun remove(node: Node) {
        node.prev?.next = node.next
        node.next?.prev = node.prev
    }

    // Insert a node right after the head sentinel (most recently used).
    private fun addToFront(node: Node) {
        node.prev = head
        node.next = head.next
        head.next?.prev = node
        head.next = node
    }

    fun get(key: Int): Int {
        val node = cache[key] ?: return -1
        remove(node)
        addToFront(node) // move to front
        return node.value
    }

    fun put(key: Int, value: Int) {
        cache[key]?.let { remove(it) }
        val node = Node(key, value)
        addToFront(node)
        cache[key] = node
        if (cache.size > cap) {
            val lru = tail.prev!!
            remove(lru)
            cache.remove(lru.key)
        }
    }
}

How It Works

Key Insight

The LRU Cache requires two properties: O(1) lookup and O(1) insertion/deletion/reordering. No single data structure provides both. A hash map gives O(1) lookup, and a doubly linked list gives O(1) insert/remove at any position (given a reference to the node). By combining them, storing node references in the hash map, we get O(1) for every operation.
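The combination can be seen in miniature: the hash map hands back a node reference, and the doubly linked list unlinks that node with no traversal. In this minimal sketch, the Node shape mirrors the one in the Code section, and unlink is a free-standing stand-in for the class's private helper:

```kotlin
class Node(val key: Int, var value: Int) {
    var prev: Node? = null
    var next: Node? = null
}

// O(1) unlink: only the two neighbors' pointers change.
fun unlink(node: Node) {
    node.prev?.next = node.next
    node.next?.prev = node.prev
}

fun main() {
    // Build the list 1 <-> 2 <-> 3 and index the nodes in a hash map.
    val a = Node(1, 10); val b = Node(2, 20); val c = Node(3, 30)
    a.next = b; b.prev = a; b.next = c; c.prev = b
    val byKey = hashMapOf(1 to a, 2 to b, 3 to c)

    unlink(byKey.getValue(2)) // O(1) map lookup, then O(1) removal
    check(a.next === c && c.prev === a)
    println("node 2 unlinked without traversal")
}
```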

The Algorithm

  • Data structures: A hash map cache maps keys to linked-list nodes. A doubly linked list maintains the usage order, with the most recently used node next to the head and the least recently used next to the tail.
  • Dummy nodes: Head and tail sentinel nodes simplify edge cases; we never insert before head or after tail, so no null checks on the ends are needed.
  • get(key): Look up the key in the hash map. If found, remove the node from its current position, re-insert it right after head (marking it as most recently used), and return its value. If not found, return -1.
  • put(key, value): If the key exists, remove its old node. Create a new node, insert it after head, and store it in the hash map. If the cache now exceeds capacity, remove the node just before tail (the LRU node) and delete its entry from the hash map.
  • remove(node): Unlink a node from the doubly linked list in O(1) by updating its neighbors' pointers.
  • addToFront(node): Insert a node right after head in O(1); this position represents "most recently used".
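The steps above can be exercised end to end. This sketch repeats the class from the Code section so the trace runs on its own; the operation sequence is the classic capacity-2 example.

```kotlin
class Node(var key: Int, var value: Int) {
    var prev: Node? = null
    var next: Node? = null
}

class LRUCache(private val cap: Int) {
    private val cache = HashMap<Int, Node>() // key -> node
    private val head = Node(0, 0) // dummy sentinel
    private val tail = Node(0, 0) // dummy sentinel

    init { head.next = tail; tail.prev = head }

    private fun remove(node: Node) {
        node.prev?.next = node.next
        node.next?.prev = node.prev
    }

    private fun addToFront(node: Node) {
        node.prev = head
        node.next = head.next
        head.next?.prev = node
        head.next = node
    }

    fun get(key: Int): Int {
        val node = cache[key] ?: return -1
        remove(node); addToFront(node)
        return node.value
    }

    fun put(key: Int, value: Int) {
        cache[key]?.let { remove(it) }
        val node = Node(key, value)
        addToFront(node)
        cache[key] = node
        if (cache.size > cap) {
            val lru = tail.prev!!
            remove(lru)
            cache.remove(lru.key)
        }
    }
}

fun main() {
    val lru = LRUCache(2)
    lru.put(1, 1)
    lru.put(2, 2)
    check(lru.get(1) == 1)   // key 1 becomes most recently used
    lru.put(3, 3)            // evicts key 2 (the LRU)
    check(lru.get(2) == -1)
    lru.put(4, 4)            // evicts key 1 (3 and 4 are fresher)
    check(lru.get(1) == -1)
    check(lru.get(3) == 3 && lru.get(4) == 4)
    println("trace matches the expected eviction order")
}
```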

Complexity

Time (get) O(1)
Time (put) O(1)
Space O(capacity)

Every operation involves a constant number of hash map lookups/insertions and linked list pointer updates. The space is bounded by the cache capacity: we store at most capacity nodes plus two dummy sentinels.