When the CPU needs data from main memory, it provides the memory address of that data.
The cache controller decodes the memory address to determine whether the corresponding block is in the cache: each cache line stores a tag alongside its data, and the tag field of the address is compared against the stored tag. On a match (a cache hit), the offset field selects the required data within that block, so the CPU picks up the data directly from the cache. Otherwise (a cache miss), the CPU carries out a READ operation from main memory. Remember, the latency to access main memory is significantly longer than that of the cache.
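The lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not a real controller: the address widths (6 offset bits, 10 index bits) and the function names are assumptions chosen for the example.

```python
# Sketch of a direct-mapped cache lookup (illustrative sizes, not from the lecture):
# 64-byte blocks -> 6 offset bits; 1024 cache lines -> 10 index bits; the rest is the tag.
OFFSET_BITS = 6
INDEX_BITS = 10
NUM_LINES = 1 << INDEX_BITS

# Each cache line keeps a valid bit and a tag ALONGSIDE the data block --
# this stored tag is what lets the controller decide hit/miss without
# touching main memory.
valid = [False] * NUM_LINES
tags = [0] * NUM_LINES

def split_address(addr):
    """Decode an address into its (tag, index, offset) fields."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

def is_hit(addr):
    """Hit iff the line selected by the index holds a valid, matching tag."""
    tag, index, _ = split_address(addr)
    return valid[index] and tags[index] == tag

def fill(addr):
    """On a miss, the fetched block's tag is recorded in the selected line."""
    tag, index, _ = split_address(addr)
    valid[index] = True
    tags[index] = tag
```

For example, after `fill(0x12345678)`, a lookup of any address in that 64-byte block hits, while an address with the same index bits but a different tag misses.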
***
Hi prof,
I have this doubt that I would like to clarify –
You mentioned in the lecture that, in direct mapping, a memory address is divided into three parts – tag, block and offset – and that by looking at the tag, the processor can determine whether the memory block under consideration is present in the cache. Now let us consider the case where the required memory block is present in the cache, so that the processor can get its contents directly from the cache without needing to access main memory.
I assume that when we need to access some particular data or instruction, we do so by giving its memory address. However, according to my understanding, a cache block holds just the content of a memory location, not the address of the memory location it came from. So how can the processor directly determine whether the content of a particular memory block is present in the cache, without actually accessing memory and comparing the cache's content with the content of the required memory block?
Please correct me if I am wrong.
Thanks,
Regards,
MUNDHRA SHREYAS SUDHIR