Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

- LRUCache(int capacity) initializes the LRU cache with positive size capacity.
- int get(int key) returns the value of the key if the key exists in the cache, otherwise returns -1.
- void put(int key, int value) updates the value of the key if the key exists. Otherwise, adds the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.
Example:
Input:
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output:
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation:
LRUCache lruCache = new LRUCache(2);
lruCache.put(1, 1); // cache is {1=1}
lruCache.put(2, 2); // cache is {1=1, 2=2}
lruCache.get(1); // returns 1
lruCache.put(3, 3); // evicts key 2, cache is {1=1, 3=3}
lruCache.get(2); // returns -1 (not found)
lruCache.put(4, 4); // evicts key 1, cache is {4=4, 3=3}
lruCache.get(1); // returns -1 (not found)
lruCache.get(3); // returns 3
lruCache.get(4); // returns 4
Solution

Approach: HashMap + Doubly Linked List
An LRU Cache needs to support O(1) time complexity for both get and put operations. We can achieve this by combining a HashMap with a Doubly Linked List.

- HashMap: Maps the key to the corresponding node in the doubly linked list for O(1) access.
- Doubly Linked List: Keeps track of the order in which keys are accessed. The most recently accessed key is moved to the head, and the least recently used key is at the tail.
Whenever we access or update a key, we move that key’s node to the head of the list to indicate that it is the most recently used. When the cache exceeds the capacity, we remove the least recently used key, which is always at the tail of the list.
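For comparison only (this is not the approach implemented below), Python's collections.OrderedDict already pairs a hash table with a doubly linked list internally, so the same idea can be sketched compactly with it. This is a minimal sketch; the class name LRUCacheSketch is just an illustrative placeholder.

from collections import OrderedDict

class LRUCacheSketch:
    """Illustrative only: OrderedDict keeps keys in recency order for us."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)           # mark key as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)       # refresh recency before updating
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict the least recently used key

In an interview you are usually expected to build the HashMap + doubly linked list machinery yourself, which is what the solution below does.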
Implementation
Python
class DLinkedNode:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        self.cache = {}
        self.capacity = capacity
        self.size = 0
        self.head = DLinkedNode()
        self.tail = DLinkedNode()
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, key: int) -> int:
        node = self.cache.get(key)
        if not node:
            return -1
        # Move the accessed node to the head
        self._move_to_head(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        node = self.cache.get(key)
        if not node:
            # If the key is not in the cache, create a new node
            new_node = DLinkedNode(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)
            self.size += 1
            if self.size > self.capacity:
                # Pop the tail
                tail = self._pop_tail()
                del self.cache[tail.key]
                self.size -= 1
        else:
            # Update the value and move the node to the head
            node.value = value
            self._move_to_head(node)

    def _add_node(self, node):
        """Add node right after head."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node):
        """Remove an existing node from the linked list."""
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _move_to_head(self, node):
        """Move the given node to the head (mark as recently used)."""
        self._remove_node(node)
        self._add_node(node)

    def _pop_tail(self):
        """Pop the least recently used node (the tail)."""
        node = self.tail.prev
        self._remove_node(node)
        return node
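As a quick sanity check (not part of the original solution), the example sequence from the problem statement can be replayed against this class; every assertion below should pass.

cache = LRUCache(2)
cache.put(1, 1)            # cache is {1=1}
cache.put(2, 2)            # cache is {1=1, 2=2}
assert cache.get(1) == 1   # key 1 becomes the most recently used
cache.put(3, 3)            # evicts key 2
assert cache.get(2) == -1  # key 2 is gone
cache.put(4, 4)            # evicts key 1
assert cache.get(1) == -1
assert cache.get(3) == 3
assert cache.get(4) == 4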
Explanation

Node Structure

class DLinkedNode:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

A doubly linked list node stores a key and a value, along with pointers to the previous and next nodes in the list.
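As a tiny, purely illustrative example (not part of the solution), two nodes can be linked by hand to show how the prev/next pointers work:

a = DLinkedNode(1, 100)
b = DLinkedNode(2, 200)
a.next, b.prev = b, a      # link the nodes as a <-> b
assert a.next.value == 200
assert b.prev.key == 1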
Initialize LRUCache

self.cache = {}
self.capacity = capacity
self.size = 0
self.head = DLinkedNode()
self.tail = DLinkedNode()
self.head.next = self.tail
self.tail.prev = self.head

We use a HashMap (cache) to store key-node pairs for O(1) access. The doubly linked list is initialized with dummy head and tail nodes.
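To make the sentinel setup concrete, here is a small illustrative check that pokes at the internals of the implementation above: an empty cache is just head <-> tail, which is why _add_node and _remove_node never need to special-case an empty list.

cache = LRUCache(2)
assert cache.head.next is cache.tail   # empty list: head points straight at tail
assert cache.tail.prev is cache.head
assert cache.cache == {} and cache.size == 0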
Get Operation

node = self.cache.get(key)
if not node:
    return -1
self._move_to_head(node)
return node.value

If the key exists in the cache, move its node to the head of the linked list (indicating it was recently used) and return its value; otherwise return -1.
Put Operation

node = self.cache.get(key)
if not node:
    new_node = DLinkedNode(key, value)
    self.cache[key] = new_node
    self._add_node(new_node)
    self.size += 1
    if self.size > self.capacity:
        tail = self._pop_tail()
        del self.cache[tail.key]
        self.size -= 1
else:
    node.value = value
    self._move_to_head(node)

If the key doesn't exist, add it to the cache and the linked list; if the size then exceeds the capacity, remove the least recently used node (at the tail). If the key already exists, update its value and move it to the head.
Helper Functions

- _add_node: adds a node to the head of the list.
- _remove_node: removes a node from the list.
- _move_to_head: moves a node to the head of the list.
- _pop_tail: removes and returns the least recently used node (at the tail).
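The following illustrative snippet (again poking at internals, not part of the solution) shows how these helpers keep the list ordered from most to least recently used:

cache = LRUCache(3)
cache.put(1, 1)
cache.put(2, 2)
cache.put(3, 3)
cache.get(1)               # _move_to_head makes key 1 the most recently used

order = []
node = cache.head.next     # walk from head (MRU) to tail (LRU)
while node is not cache.tail:
    order.append(node.key)
    node = node.next
print(order)               # [1, 3, 2]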
Complexity Analysis

- Time Complexity: O(1) for both get and put, because each operation performs a constant-time update to the HashMap and the doubly linked list.
- Space Complexity: O(capacity), where capacity is the maximum number of items the cache can hold.
Key Concept

- Doubly Linked List and HashMap Combination: the doubly linked list maintains the order of access for the cache, and the HashMap provides constant-time access to the cache elements.
Summary

The combination of a HashMap and a Doubly Linked List allows us to achieve O(1) time complexity for both get and put operations in an LRU Cache.
Tips

Practice problems that involve implementing custom data structures, as they require a deep understanding of how to optimize for time and space complexity.
Solution Template for Similar Questions

When designing data structures that need to maintain an order of use (like an LRU Cache), combining a HashMap with a linked list or a similar structure can help achieve O(1) operations.
5 More Similar Questions
- LeetCode 460. LFU Cache
- LeetCode 432. All O`one Data Structure
- LeetCode 1352. Product of the Last K Numbers
- LeetCode 710. Random Pick with Blacklist
- LeetCode 641. Design Circular Deque
Recommended Resources

- Practice problems that involve designing and implementing custom data structures to deepen your understanding of combinations like HashMap + Linked List.
- LRU Cache – Twitch Interview Question – Leetcode 146 by NeetCode