VIPT Cache
A cache that indexes its sets using virtual address bits but tags its lines with physical addresses — "Virtually Indexed, Physically Tagged".
Detailed Explanation
A VIPT cache exists to overlap address translation with cache access. Because the index bits come straight from the virtual address, the set lookup can start the moment the CPU computes the effective address — in parallel with the TLB translating the virtual page number. By the time the indexed set returns its candidate tags, the TLB has produced the physical frame number, and the cache compares the stored physical tags against it to decide hit or miss. This hides a stage of latency that a purely physically-indexed cache would incur serially, since such a cache must wait for translation before it can even select a set.
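The lookup sequence can be sketched in a toy model. This is a simplified illustration, not any real CPU's logic: the page table, the addresses, and the `vipt_lookup` helper are all hypothetical, and the two steps that run in parallel in hardware run sequentially here.

```python
# Toy VIPT lookup: set selection uses only virtual-address bits; the hit
# check compares physical tags derived from a (stubbed) TLB translation.

PAGE_BITS  = 12                    # 4 KiB pages
LINE_BITS  = 6                     # 64-byte cache lines
INDEX_BITS = 6                     # 64 sets -> index fits inside the page offset
NUM_SETS   = 1 << INDEX_BITS

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0x00041: 0x1A2B3}

def tlb_translate(vaddr):
    """Return the physical frame number for vaddr's virtual page."""
    return page_table[vaddr >> PAGE_BITS]

def vipt_lookup(cache, vaddr):
    # 1) Index with VIRTUAL bits -- needs no translation, starts immediately.
    set_index = (vaddr >> LINE_BITS) & (NUM_SETS - 1)
    # 2) In hardware this overlaps with step 1; here it runs after it.
    pfn = tlb_translate(vaddr)
    # 3) Rebuild the physical address and compare PHYSICAL tags.
    paddr = (pfn << PAGE_BITS) | (vaddr & ((1 << PAGE_BITS) - 1))
    phys_tag = paddr >> (LINE_BITS + INDEX_BITS)
    return any(tag == phys_tag for tag in cache.get(set_index, []))
```

Note that because `LINE_BITS + INDEX_BITS` equals `PAGE_BITS` here, the physical tag is exactly the frame number — a direct consequence of keeping the index inside the page offset.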
The catch is the *aliasing* constraint. If the index bits extend above the page offset, two different virtual addresses that map to the same physical page can land in different sets — the same physical line then lives in two places in the cache, and a write through one alias is invisible through the other. Most designs sidestep this by keeping `index_bits + offset_bits ≤ log2(page_size)`, which caps the size of a single way at the page size. That is why L1 caches in real CPUs are usually small and highly associative: associativity grows total capacity without growing the way, so the cache stays VIPT with no software-visible aliasing.
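The arithmetic behind that constraint is quick to check. The parameters below are illustrative, though a 32 KiB, 8-way L1 with 64-byte lines is a common real-world configuration:

```python
# Back-of-the-envelope check of the VIPT aliasing constraint:
# index_bits + offset_bits <= log2(page_size), i.e. way size <= page size.
import math

page_size  = 4096          # 4 KiB pages -> 12 page-offset bits
line_size  = 64            # -> 6 block-offset bits
cache_size = 32 * 1024     # 32 KiB L1
ways       = 8

sets        = cache_size // (line_size * ways)   # 64 sets
index_bits  = int(math.log2(sets))               # 6
offset_bits = int(math.log2(line_size))          # 6
way_size    = cache_size // ways                 # 4096 bytes: one full page

# 6 + 6 = 12 = log2(4096): the index never rises above the page offset,
# so every alias of a physical line indexes the same set.
assert index_bits + offset_bits <= int(math.log2(page_size))
assert way_size <= page_size
```

Doubling the cache to 64 KiB at the same associativity would push the way to 8 KiB and break the constraint — which is why capacity increases at L1 tend to come as extra ways rather than extra sets.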
Industry Context
Nearly every modern CPU's L1 data cache — Arm, x86, RISC-V application-class — is VIPT for exactly this latency reason. L2 and beyond are typically physically indexed, physically tagged (PIPT): translation has already completed by the time an access reaches them, so there is no latency to hide and no aliasing constraint to satisfy.
