An LRU Cache plays an important role in storing data efficiently. As most people are aware, a cache is a portion of computer memory that holds frequently accessed data. Such data is stored in the cache only temporarily: the cache's capacity is limited, and its management policy must ensure that old data is evicted so that new data can be stored.
An LRU Cache does this by eliminating the least recently used data to free memory space for new data. This is the essence of the LRU cache replacement mechanism.
In this tutorial, we will look at the function of an LRU Cache, its working model, the data structures it relies on, and its other components. We will also walk through implementations of the LRU mechanism in Python and Java to provide a comprehensive understanding of this crucial concept in computer systems.
LRU stands for Least Recently Used. An LRU Cache arranges the data items stored temporarily in cache memory in order of how recently they were used. The LRU mechanism lets you quickly identify which data has gone the longest without being accessed and can therefore be discarded.
LRU caching is one of the most popular caching techniques because, once the cache reaches its maximum capacity, it automatically removes the least recently used data to make space for new data.
The structure of an LRU Cache resembles a series of memory blocks, connected like a string, that hold data. Whenever a user requests data that is not present in the cache memory, the LRU mechanism fetches the data from the disk and returns it to the user.
As this process continues, the data that was just fetched and returned becomes the most recently used data, so it is positioned at the front of the cache list.
If the requested data is already in the cache, the LRU mechanism returns it to the user immediately rather than following the time-consuming process of fetching it from the disk first.
The objective of implementing an LRU caching scheme is to create a data structure that behaves like a Least Recently Used (LRU) Cache. We can implement an LRU Cache class that supports the following operations:
- get(key): return the value stored under key if it is present in the cache, otherwise return -1. Accessing an item makes it the most recently used.
- put(key, value): insert or update the value stored under key. If the cache is already at capacity, evict the least recently used item first.
Both operations should also update the usage order, so the cache always knows which item is the least recently used.
Let's understand it with the help of an LRU Cache implementation example:
Let's say we have five items that are named A1, A2, A3, A4, and A5 in the main memory, and suppose the size of our cache memory is 3.
At the start of the process, the cache memory is empty and all the items are stored in the main memory. So if we want to retrieve A1, we would get the value of A1 from the main memory and then store it in the cache memory.
In the next step, we would want to get the value of A2 so we retrieve the value of A2 from the main memory. Now, A2 is the most recently used item so it will be placed at the top of the cache memory list. Automatically, A1 will move down in the list and will no longer be the most recently used item.
Next, we would want to get the value of A3 so the same process will continue and A3 will become the most recently used item in the list. Let's say we would want to get the value of A2 again. Now we can easily get the value of A2 from our cache list rather than retrieving it from the main memory again. So, A2 will be placed at the top of the list again as it is the most recently used item now.
Now let's say we want to get the value of A4, so we have to fetch it from the main memory. The question is: where will it be stored in our cache? Our cache memory is already full, so to store A4 we have to get rid of an item in the list. In this case, we remove A1, as it is the least recently used item, placed at the bottom of the list.
Since the maximum limit of cache memory is 3, we have to eliminate the least recently used element from the list so that we can make space for the new items.
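We can trace this example in code. Below is a minimal sketch using Python's collections.OrderedDict; the access helper is a hypothetical function written for this illustration, not part of any library:

from collections import OrderedDict

capacity = 3
cache = OrderedDict()  # most recently used items are kept at the end

def access(item):
    # Fetch an item, updating its recency and evicting the LRU entry if the cache is full
    if item in cache:
        cache.move_to_end(item)  # cache hit: mark as most recently used
    else:
        if len(cache) >= capacity:
            evicted, _ = cache.popitem(last=False)  # evict the least recently used item
            print(f"evicting {evicted}")
        cache[item] = True  # simulate fetching the item from main memory

for item in ["A1", "A2", "A3", "A2", "A4"]:
    access(item)
print(list(cache))  # ['A3', 'A2', 'A4'] -- A1 was evicted, exactly as in the example above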
The size of cache memory is usually much smaller than that of main memory, so fitting everything from main memory into the cache is impossible. The LRU cache is one of the most convenient ways of handling this: its goal is to keep only the n most recently accessed items, where n is the size of the cache memory.
An LRU cache cannot be implemented without the proper data structures. The LRU Cache implementation scheme uses the following two data structures:
- A doubly linked list, which keeps items ordered from most recently used (at the front) to least recently used (at the back), so that moving an item to the front and evicting from the back both take constant time.
- A hash map, which maps each key to its node in the linked list, so that any item can be looked up in constant time.
Code:
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # Hash map for quick lookup
        self.head = Node(None, None)  # Dummy head node
        self.tail = Node(None, None)  # Dummy tail node
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_node(self, node):
        # Add a node right after the dummy head
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node):
        # Unlink a node from the doubly linked list
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _move_to_front(self, node):
        # Move a node to the front of the linked list (most recently used position)
        self._remove_node(node)
        self._add_node(node)

    def get(self, key):
        if key in self.cache:
            node = self.cache[key]
            self._move_to_front(node)
            return node.value
        return -1

    def put(self, key, value):
        if key in self.cache:
            node = self.cache[key]
            node.value = value
            self._move_to_front(node)
        else:
            if len(self.cache) >= self.capacity:
                # Remove the least recently used node (tail.prev)
                del self.cache[self.tail.prev.key]
                self._remove_node(self.tail.prev)
            new_node = Node(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)

# Create an LRU cache with a capacity of 3
lru_cache = LRUCache(3)

# Add items to the cache
lru_cache.put(1, 'one')
lru_cache.put(2, 'two')
lru_cache.put(3, 'three')

# Access items from the cache
print(lru_cache.get(2))  # Output: 'two'

# Adding a new item will remove the least recently used item ('one')
lru_cache.put(4, 'four')

# Trying to access the removed item will return -1
print(lru_cache.get(1))  # Output: -1
In this implementation, the LRUCache class uses a doubly linked list to maintain the order of item usage. The cache dictionary acts as a hash map to provide quick access to the nodes. When an item is accessed or added, the relevant node is moved to the front of the linked list. When the cache becomes full, the least recently used item is removed from the linked list and the hash map.
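As an aside, Python's standard library already ships a built-in LRU cache in the form of the functools.lru_cache decorator, which memoizes function calls using exactly this eviction policy. A small sketch (fetch is a toy function written for this illustration):

from functools import lru_cache

@lru_cache(maxsize=3)  # keep at most 3 results, evicting the least recently used
def fetch(key):
    print(f"fetching {key} from main memory")
    return key * 2

fetch(1)  # miss: computed and cached
fetch(2)  # miss
fetch(1)  # hit: served from the cache, no "fetching" message is printed
print(fetch.cache_info())  # CacheInfo(hits=1, misses=2, maxsize=3, currsize=2)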
We can also implement an LRU cache in Java using LinkedHashMap, which provides a built-in way to create a map with a specified order. Here's an example:
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}

public class Main {
    public static void main(String[] args) {
        LRUCache<Integer, String> lruCache = new LRUCache<>(3);
        lruCache.put(1, "one");
        lruCache.put(2, "two");
        lruCache.put(3, "three");

        System.out.println(lruCache.get(2)); // Output: "two"

        lruCache.put(4, "four");
        System.out.println(lruCache.get(1)); // Output: null (removed due to capacity)
    }
}
In this example, the LRUCache class extends LinkedHashMap, where the ordering mode is set to "access-order" (by passing true as the third parameter to the LinkedHashMap constructor). The removeEldestEntry method is overridden to control when the oldest entry should be removed from the cache based on the capacity.
LinkedHashMap automatically maintains the order of insertion and access. When an element is accessed (through get or put), it's moved to the end of the list, making it the most recently accessed item.
Remember that LinkedHashMap isn't thread-safe by default. If you need thread safety, you might need to consider synchronization mechanisms or use a thread-safe map like ConcurrentHashMap.
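The same caveat applies to the hand-rolled Python implementation shown earlier. As a rough sketch, one way to add thread safety is to guard every operation with a lock (the ThreadSafeLRUCache wrapper below is a hypothetical helper, assuming the LRUCache class defined above):

import threading

class ThreadSafeLRUCache:
    def __init__(self, capacity):
        self._cache = LRUCache(capacity)  # the Python LRUCache class from this tutorial
        self._lock = threading.Lock()

    def get(self, key):
        # Serialize access so concurrent threads cannot corrupt the linked list
        with self._lock:
            return self._cache.get(key)

    def put(self, key, value):
        with self._lock:
            self._cache.put(key, value)

A single lock keeps the linked-list and hash-map updates consistent, at the cost of serializing all cache operations; finer-grained schemes are possible but considerably more involved.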
In this tutorial, we learned what an LRU Cache is and how it works, including its code implementation, data structures, and major components. LRU Cache implementation is a crucial subject in the tech industry, and it will remain indispensable across many roles and applications.
Consider earning a Computer Science certification from upGrad if you want to become an expert in this area. upGrad courses can help you master coding and secure executive positions in the tech sector.
The time complexity of both the put and get operations in an LRU Cache is O(1). The LRU mechanism can easily be implemented using a doubly linked list and a hash map.
An LRU Cache works by ordering the items in the cache list from most recently used to least recently used. Each time you access an item in the list, the LRU Cache mechanism moves it to the top of the list, since it is now the most recently used item.
An LRU Cache implementation in Java removes the least recently used item from the cache list to make space for new items. It works through the put and get operations.