4. Wait for the L2 cache DLL to achieve phase lock. This can be timed by setting the
decrementer for a time period equal to 640 L2 cache clocks, or by performing an L2
cache global invalidate.
5. Perform an L2 cache global invalidate. The global invalidate could be performed
before enabling the DLL, or in parallel with waiting for the DLL to stabilize. Refer
to Section 3.7.3.7, “L2 Cache Global Invalidation,” for more information about L2
cache global invalidation. Note that a global invalidate always takes much longer
than it takes for the DLL to stabilize.
6. Once the DLL has stabilized, the L2 cache global invalidate has completed, and the
other L2 cache configuration bits have been set, enable the L2 cache for normal
operation by setting the L2CR[L2E] bit to 1. A code sketch of this sequence appears
below.
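
The following is a minimal sketch of steps 4 through 6, assuming a bare-metal PowerPC
environment and GNU-style inline assembly. The helper names (mfl2cr, mtl2cr,
l2_cache_enable) are invented for the example, and the L2CR bit masks shown should be
checked against the L2CR register description in this chapter before use; Section 3.7.3.7
gives the full global invalidate procedure.

    #define SPR_L2CR   1017           /* L2 cache control register (L2CR)      */
    #define L2CR_L2E   0x80000000u    /* L2 enable                              */
    #define L2CR_L2I   0x00200000u    /* L2 global invalidate                   */
    #define L2CR_L2IP  0x00000001u    /* L2 global invalidate in progress       */

    static inline unsigned long mfl2cr(void)
    {
        unsigned long val;
        __asm__ volatile ("mfspr %0, %1" : "=r" (val) : "i" (SPR_L2CR));
        return val;
    }

    static inline void mtl2cr(unsigned long val)
    {
        __asm__ volatile ("sync");
        __asm__ volatile ("mtspr %0, %1" : : "i" (SPR_L2CR), "r" (val));
        __asm__ volatile ("sync");
    }

    /* l2cr_config: the L2 configuration value already set up by the earlier
     * steps (clock ratio, SRAM type, size, DLL settings), with L2E cleared. */
    void l2_cache_enable(unsigned long l2cr_config)
    {
        unsigned long l2cr = l2cr_config & ~L2CR_L2E;

        /* Steps 4 and 5: start the global invalidate.  It always takes much
         * longer than the 640 L2 clocks the DLL needs to achieve phase lock,
         * so waiting for it also covers the DLL stabilization time. */
        mtl2cr(l2cr | L2CR_L2I);
        while (mfl2cr() & L2CR_L2IP)
            ;                          /* wait for the invalidate to finish */
        mtl2cr(l2cr);                  /* clear the invalidate request */

        /* Step 6: enable the L2 cache for normal operation. */
        mtl2cr(l2cr | L2CR_L2E);
    }

If the decrementer is used instead to time the 640 L2 cache clock wait, the global
invalidate and the DLL wait can proceed in parallel, as noted in step 5.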
3.7.5 L2 Cache Operation
The MPC7400's L2 cache is a combined instruction and data cache that receives memory
requests from both L1 instruction and data caches independently. The L1 requests are
generally the result of instruction fetch misses, data load or store misses, L1 data cache
castouts, write-through operations, or cache management instructions. Each L1 request
generates an address lookup in the L2 cache tags. If a hit occurs, the instructions or data are
forwarded to the appropriate L1 cache. A miss in the L2 cache tags causes the L1 request
to be forwarded to the system bus interface. The L2 cache also services snoop requests from
the system bus.
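
As a rough illustration of this flow, the following C fragment models the tag lookup and
routing decision in software. It is purely conceptual: the direct-mapped tag array, block
size, and function names are invented for the example and do not correspond to the
MPC7400's actual tag organization or to any software-visible interface.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_SHIFT 5            /* model a 32-byte cache block */
    #define NUM_SETS    1024         /* arbitrary model size        */

    /* One tag entry per set: a hit means the block is resident in L2. */
    static struct { uint32_t tag; bool valid; } l2_tags[NUM_SETS];

    static bool l2_tag_lookup(uint32_t addr)
    {
        uint32_t block = addr >> BLOCK_SHIFT;
        uint32_t set   = block % NUM_SETS;
        return l2_tags[set].valid && l2_tags[set].tag == block;
    }

    /* Route an L1 miss: an L2 hit returns the block from the L2 SRAMs to the
     * requesting L1 cache; an L2 miss forwards the request to the system bus
     * interface. */
    static void service_l1_request(uint32_t addr)
    {
        if (l2_tag_lookup(addr))
            printf("0x%08x: L2 hit - forward to requesting L1 cache\n",
                   (unsigned)addr);
        else
            printf("0x%08x: L2 miss - forward to system bus interface\n",
                   (unsigned)addr);
    }

    int main(void)
    {
        l2_tags[0].tag = 0;  l2_tags[0].valid = true;  /* pretend block 0 is cached */
        service_l1_request(0x00000000);                /* hit  */
        service_l1_request(0x00010000);                /* miss */
        return 0;
    }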
Generally, the L2 cache operates according to the following rules:
• In case of multiple pending requests to the L2 cache, snoop requests have the highest
priority. The next priority is a data cache reload, unless there is an address conflict
with an L1 data cache castout, in which case the L1 castout has higher priority.
This ensures that reads and writes to the same cache block are kept in order. The
lowest priorities are instruction fetches from the L1 instruction cache and L2
instruction reloads. (A code sketch of this arbitration follows this list.)
• All requests to the L2 cache that are marked caching-inhibited bypass the L2 cache
(even if they would have normally hit), and do not cause any L2 tag state changes.
• Requests to the L2 cache that are marked caching-allowed (even if the respective L1
cache is locked) are serviced by the L2 cache. Caching-allowed burst requests are
serviced in their entirety. Caching-allowed single-beat requests are allowed to hit
and update in case of a store hit, but do not cause allocation or deallocation. Note
that these comments apply only if the cache disabling conditions of Section 3.7.3.1,
“Enabling and Disabling the L2 Cache,” are met.
• Burst read and single-beat read requests from the L1 instruction or data caches that
hit in the L2 cache are forwarded data from the L2 SRAMs.
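
The priority ordering in the first bullet can be restated in code. The sketch below is
only a conceptual model under assumed names (l2_pending, l2_select_next, and so on); the
real arbitration is performed in hardware and is not software-visible, and the relative
priority of a non-conflicting castout is an assumption of the model rather than something
stated above.

    #include <stdbool.h>
    #include <stddef.h>

    enum l2_req_type {
        REQ_SNOOP,            /* snoop from the system bus        */
        REQ_DCACHE_RELOAD,    /* L1 data cache reload             */
        REQ_DCACHE_CASTOUT,   /* L1 data cache castout            */
        REQ_INSTRUCTION       /* L1 instruction fetch / L2 reload */
    };

    struct l2_pending {
        enum l2_req_type type;
        unsigned long    block_addr;   /* cache-block-aligned address */
        bool             valid;
    };

    /* Choose the next pending request: snoops first; then the data cache
     * reload, unless a castout to the same cache block is pending, in which
     * case the castout goes first so that reads and writes to that block stay
     * in order; instruction traffic comes last. */
    const struct l2_pending *
    l2_select_next(const struct l2_pending *req, size_t n)
    {
        const struct l2_pending *reload = NULL, *castout = NULL, *instr = NULL;

        for (size_t i = 0; i < n; i++) {
            if (!req[i].valid)
                continue;
            switch (req[i].type) {
            case REQ_SNOOP:          return &req[i];    /* highest priority */
            case REQ_DCACHE_RELOAD:  reload  = &req[i]; break;
            case REQ_DCACHE_CASTOUT: castout = &req[i]; break;
            case REQ_INSTRUCTION:    instr   = &req[i]; break;
            }
        }

        if (reload && castout && reload->block_addr == castout->block_addr)
            return castout;     /* address conflict: castout first */
        if (reload)
            return reload;
        if (castout)
            return castout;     /* model assumption: ahead of instruction traffic */
        return instr;           /* lowest priority (may be NULL) */
    }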