A Channel Request Buffer (RequestBuffer)
Function Description
- The Request Buffer buffers A channel requests that need to be temporarily blocked, while allowing A channel requests that already meet the release conditions or do not need to be blocked to enter the main pipeline first.
- The Request Buffer prevents A requests that need to be blocked from clogging the pipeline entrance, thus avoiding impact on subsequent requests and improving cache processing efficiency.
- If a newly arrived Acquire has the same address as a prefetch request being processed in an MSHR entry, the two can be merged: the Acquire's information is passed directly to the corresponding MSHR entry, so that once the prefetch completes, the same MSHR can also reply to the L1 Acquire. This accelerates the Acquire processing flow and reduces the occupancy of both the ReqBuf and the MSHR.
Feature 1: Request Merging
When an Acquire request received by the RequestBuffer has the same address as a prefetch request in an MSHR entry, the RequestBuffer sends a merge task (aMergeTask) to the corresponding MSHR entry. That MSHR entry is marked with mergeA and its relevant fields are updated.
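Conceptually, the merge check compares the incoming Acquire's set and tag against every valid MSHR entry that is currently handling a prefetch, and the matching (at most one) entry receives the aMergeTask. The following Chisel-style sketch uses assumed signal names and a 16-entry MSHR; it is an illustration, not the literal CoupledL2 code:

```scala
import chisel3._
import chisel3.util._

class MergeCheck(nrMSHR: Int = 16, setBits: Int = 8, tagBits: Int = 20) extends Module {
  val io = IO(new Bundle {
    // Incoming A-channel request (illustrative field selection)
    val aValid     = Input(Bool())
    val aIsAcquire = Input(Bool())
    val aSet       = Input(UInt(setBits.W))
    val aTag       = Input(UInt(tagBits.W))
    // Flattened view of MSHR state (assumed names)
    val mshrValid      = Input(Vec(nrMSHR, Bool()))
    val mshrIsPrefetch = Input(Vec(nrMSHR, Bool()))
    val mshrSet        = Input(Vec(nrMSHR, UInt(setBits.W)))
    val mshrTag        = Input(Vec(nrMSHR, UInt(tagBits.W)))
    // Merge decision: which MSHR entry (one-hot) should receive the aMergeTask
    val mergeA  = Output(Bool())
    val mergeOH = Output(Vec(nrMSHR, Bool()))
  })

  // An Acquire can merge with an MSHR entry iff that entry is a valid
  // prefetch to the same address (same set and tag).
  val hitVec = VecInit((0 until nrMSHR).map { i =>
    io.aValid && io.aIsAcquire &&
      io.mshrValid(i) && io.mshrIsPrefetch(i) &&
      io.mshrSet(i) === io.aSet && io.mshrTag(i) === io.aTag
  })

  io.mergeOH := hitVec
  io.mergeA  := hitVec.asUInt.orR // at most one MSHR can hold a given address
}
```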
Feature 2: Request Reception Conditions
Conditions under which a request at the RequestBuffer entrance is allowed to be received:
- The RequestBuffer is not full
- The RequestBuffer is full, but the Acquire request can be merged with a preceding prefetch request
- The RequestBuffer is full, but the request is a prefetch request and an Acquire/Prefetch request to the same address is already being processed by an MSHR entry
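These three cases collapse into a single ready expression at the buffer entrance. A minimal sketch with assumed signal names (full, canMergeA, isPrefetchA, dupInMSHR):

```scala
import chisel3._

object ReqBufAccept {
  // Whether the RequestBuffer accepts the incoming request, following the
  // three cases listed above. All signal names are illustrative assumptions.
  def apply(full: Bool, canMergeA: Bool, isPrefetchA: Bool, dupInMSHR: Bool): Bool =
    !full ||                    // case 1: a free entry exists
    canMergeA ||                // case 2: Acquire merges into a prefetch MSHR entry
    (isPrefetchA && dupInMSHR)  // case 3: prefetch already covered by an in-flight MSHR request
}
```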
Feature 3: RequestBuffer Allocation
A request is allocated a RequestBuffer entry when all of the following hold:
- The RequestBuffer is not full
- The request cannot flow directly into the pipeline (i.e., it has an address conflict with the MainPipe or an MSHR entry), or chosenQ is also ready to dispatch
- The request cannot be merged into an existing MSHR entry
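Read as a conjunction, the allocation rule can be sketched as below; canFlow (the request could bypass the buffer straight into the pipeline) and the other names are assumptions for illustration:

```scala
import chisel3._

object ReqBufAlloc {
  // Whether an accepted request occupies a RequestBuffer entry instead of
  // flowing straight into the pipeline. Illustrative signal names only.
  def apply(full: Bool, canFlow: Bool, chosenQReady: Bool, canMergeA: Bool): Bool =
    !full &&                      // a free entry exists
    (!canFlow || chosenQReady) && // cannot bypass, or chosenQ would also dispatch this cycle
    !canMergeA                    // merged requests never occupy an entry
}
```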
Feature 4: Fields in a RequestBuffer Entry
- rdy: whether the entry is ready to be dispatched (dequeued)
- task: the information of the request itself
- waitMP: which MainPipe pipeline stages are blocking it
- waitMS: which MSHR entries are blocking it
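The four fields map naturally onto a per-entry bundle plus a valid bit for occupancy. A minimal Chisel sketch, assuming 16 MSHR entries and a stand-in TaskBundle; the real task type in CoupledL2 carries far more information:

```scala
import chisel3._

// Placeholder for the request's own information; the actual design carries a
// much richer task bundle (opcode, source, full address fields, ...).
class TaskBundle(setBits: Int, tagBits: Int) extends Bundle {
  val set = UInt(setBits.W)
  val tag = UInt(tagBits.W)
}

class ReqBufEntry(nrMSHR: Int = 16, setBits: Int = 8, tagBits: Int = 20) extends Bundle {
  val valid  = Bool()                           // entry occupied
  val rdy    = Bool()                           // ready to be dispatched to the RequestArbiter
  val task   = new TaskBundle(setBits, tagBits) // the buffered request itself
  val waitMP = UInt(4.W)                        // MainPipe stages blocking this request
  val waitMS = UInt(nrMSHR.W)                   // MSHR entries blocking it (one bit per MSHR)
}
```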
Feature 5: How the RequestBuffer Updates and Dispatches
- waitMP (4 bits): Since the MainPipe is a non-blocking pipeline, waitMP shifts right by one bit each cycle, and every cycle it also checks for a new address-conflicting request at s1. Bit mapping:
  - [3] s1, same-set conflict
  - [2] s2, same-set conflict
  - [1] s3, same-set conflict
  - [0] reserved
- waitMS (16 bits): One bit per MSHR entry (one-hot style encoding). One cycle before an MSHR entry is released, the corresponding bit in waitMS is cleared; meanwhile, whenever a new MSHR entry is allocated, the buffered request checks for an address conflict (same set and tag) and, if there is one, sets the corresponding bit in waitMS.
- noFreeWay: Since requests to the same set may trigger replacement, when (number of MSHR entries for the same set) + (number of same-set requests at pipeline stages s2/s3) >= number of L2 ways, every way of that set may end up being replaced; in that case the RequestBuffer entry is blocked from entering the pipeline. Condition: s2 + s3 + MSHR >= ways(L2).
- rdy condition: rdy is high, indicating the entry can be dispatched to the pipeline and enter the RequestArbiter, when all of the following are met (see the combined sketch after this list):
  - waitMP and waitMS are both fully cleared
  - noFreeWay is low
  - There is no set conflict with the A/B channel requests about to enter the pipeline at stage s1
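Putting the waitMP shift, the waitMS set/clear, the noFreeWay check, and the rdy condition together, a per-entry control sketch might look like the following. All port names, the conflict signals, and the PopCount-based way accounting are illustrative assumptions rather than the actual CoupledL2 implementation:

```scala
import chisel3._
import chisel3.util._

class ReqBufEntryCtrl(nrMSHR: Int = 16, nrWays: Int = 8) extends Module {
  val io = IO(new Bundle {
    // Conflict information sampled every cycle (assumed names)
    val s1SameSetConflict = Input(Bool())              // new same-set request observed at MainPipe s1
    val mshrAllocConflict = Input(Vec(nrMSHR, Bool())) // newly allocated MSHR conflicts on set+tag
    val mshrWillFree      = Input(Vec(nrMSHR, Bool())) // MSHR entry will be released next cycle
    // Inputs for the noFreeWay check
    val sameSetMSHRCnt = Input(UInt(log2Ceil(nrMSHR + 1).W)) // MSHRs holding the same set
    val sameSetAtS2    = Input(Bool())
    val sameSetAtS3    = Input(Bool())
    // A/B channel request entering s1 this cycle with the same set
    val s1EntranceConflict = Input(Bool())
    // Entry state
    val rdy    = Output(Bool())
    val waitMP = Output(UInt(4.W))
    val waitMS = Output(UInt(nrMSHR.W))
  })

  // waitMP: [3] s1, [2] s2, [1] s3, [0] reserved.
  // Shift right every cycle (the blocking request advances one stage),
  // and OR a newly observed s1 conflict into bit 3.
  val waitMP = RegInit(0.U(4.W))
  waitMP := (waitMP >> 1) | Cat(io.s1SameSetConflict, 0.U(3.W))

  // waitMS: one bit per MSHR. Set a bit when a newly allocated MSHR conflicts
  // on set+tag, clear it one cycle before that MSHR is released.
  val waitMS = RegInit(0.U(nrMSHR.W))
  waitMS := (waitMS | io.mshrAllocConflict.asUInt) & ~io.mshrWillFree.asUInt

  // noFreeWay: same-set occupancy at s2/s3 plus same-set MSHRs may cover
  // every way of the set, so entering the pipeline now could not allocate a way.
  val sameSetInPipe = PopCount(Seq(io.sameSetAtS2, io.sameSetAtS3))
  val noFreeWay = (io.sameSetMSHRCnt +& sameSetInPipe) >= nrWays.U

  // rdy: nothing left to wait for, a way may be free, and no same-set
  // A/B request is entering the pipeline at s1 this cycle.
  io.rdy    := !waitMP.orR && !waitMS.orR && !noFreeWay && !io.s1EntranceConflict
  io.waitMP := waitMP
  io.waitMS := waitMS
}
```

Because the MainPipe never stalls, a conflicting request observed at s1 is at s2 one cycle later and at s3 the cycle after that, so a plain right shift is enough to track it and waitMP needs no explicit clear logic.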
Overall Block Diagram