
Memory Management Unit Overview

Terminology Explanation

Memory Management Unit Terminology Explanation
Abbreviation Full Name Description
MMU Memory Management Unit Memory Management Unit
TLB Translation Lookaside Buffer Cache for page tables
ITLB Instruction TLB Instruction page table cache
DTLB Data TLB Data page table cache
L1 TLB Level 1 TLB Level 1 TLB
L2 TLB Level 2 TLB Level 2 TLB
SV39 Page-Based 39-bit Virtual-Memory System A paging mechanism defined in the RISC-V manual
PGD Page Global Directory Page Global Directory
PMD Page Mid-level Directory Page Mid-level Directory
PTE Page Table Entry Page Table Entry
PTW Page Table Walk Page Table Walk process
PMP Physical Memory Protection Physical Memory Protection
PMA Physical Memory Attributes Physical Memory Attributes
ASID Address Space IDentifier Address Space Identifier
CSR Control and Status Register Control and Status Register
VPN Virtual Page Number Virtual Page Number
PPN Physical Page Number Physical Page Number
PLRU Pseudo-Least Recently Used An approximation of the Least Recently Used algorithm
VMID Virtual Machine Identifier Virtual Machine Identifier
GVPN Guest Virtual Page Number Virtual page number for second-stage translation (Guest Physical Address)
VS-Stage Virtual Supervisor Stage First-Stage Translation
G-Stage Guest Stage Second-Stage Translation
SV39x4 A Variation on Page-Based 39-bit Virtual-Memory System SV39 with two extended address bits, root page table is 16KB
HPTW Hypervisor Page Table Walker Page Table Walker responsible for second-stage translation
GPA Guest Physical Address Guest Physical Address

Design Specifications

The overall design specifications for the MMU module are as follows:

  1. Support converting virtual addresses to physical addresses
  2. Support Sv39 paging mechanism
  3. Support accessing page tables in memory
  4. Support dynamic and static PMP checks
  5. Support dynamic and static PMA checks
  6. Support ASID
  7. Support Sfence.vma
  8. Support software updates of A/D bits
  9. Support H extension's two-stage address translation
  10. Support Sv39x4 paging mechanism
  11. Support VMID
  12. Support hfence.vvma and hfence.gvma

Functional Description

XiangShan's MMU consists of the L1 TLB, Repeater, L2 TLB, PMP, and PMA modules. The L2 TLB is further divided into five parts: Page Cache, Page Table Walker, Last Level Page Table Walker, Miss Queue, and Prefetcher. All memory reads and writes within the core, including frontend instruction fetch and backend memory access, require address translation by the MMU. Frontend instruction fetch and backend memory access use the ITLB and DTLB respectively for address translation, and both accesses are non-blocking. The TLB returns whether a request missed; the request source then reschedules and resends the TLB query until it hits. For missed Load requests, the KunmingLake architecture supports TLB Hint: when the L2 TLB refills a page table entry into the L1 TLB, it can precisely wake up the Load instruction that was blocked by the TLB miss on that virtual address. When an L1 TLB (ITLB or DTLB) misses, it accesses the L2 TLB; if the L2 TLB also misses, the page tables in memory are accessed via the Page Table Walker.

The Repeater is a request buffer between the L1 TLB and the L2 TLB. Because there is a significant physical distance between the two, the Repeater adds a cycle of delay in between. Since both the ITLB and DTLB support multiple outstanding requests, the Repeater also performs an MSHR-like function and filters out duplicate requests. The MMU supports permission checks on physical address accesses, divided into PMP and PMA. PMP and PMA checks are queried in parallel, and violating either one makes the access illegal. All physical address accesses within the core require these permission checks, including the checks performed after translation in the ITLB and DTLB and before the Page Table Walker accesses memory.
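
To illustrate the MSHR-like duplicate filtering mentioned above, here is a minimal C sketch (not the actual Chisel implementation; the entry count and function names are hypothetical): a miss is forwarded to the L2 TLB only if no in-flight entry already tracks the same virtual page number.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FILTER_ENTRIES 8          /* hypothetical number of outstanding slots */

typedef struct {
    bool     valid;
    uint64_t vpn;                 /* virtual page number being fetched from the L2 TLB */
} filter_entry_t;

static filter_entry_t filter[FILTER_ENTRIES];

/* Returns true if the request should be sent to the L2 TLB,
 * false if it duplicates an in-flight request and is merged. */
static bool filter_enqueue(uint64_t vpn)
{
    for (int i = 0; i < FILTER_ENTRIES; i++)
        if (filter[i].valid && filter[i].vpn == vpn)
            return false;          /* duplicate: wait for the existing refill */

    for (int i = 0; i < FILTER_ENTRIES; i++)
        if (!filter[i].valid) {
            filter[i].valid = true;
            filter[i].vpn   = vpn;
            return true;           /* new miss: forward to the L2 TLB */
        }
    return false;                  /* filter full: request must be replayed later */
}

int main(void)
{
    printf("%d\n", filter_enqueue(0x12345));  /* 1: forwarded */
    printf("%d\n", filter_enqueue(0x12345));  /* 0: merged with in-flight miss */
    return 0;
}
```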

With the addition of the H extension, a Hypervisor Page Table Walker module was added within the L2TLB, primarily responsible for the second stage translation, and modifications were made to the L2TLB architecture.

Support Sv39 paging mechanism to translate virtual addresses to physical addresses

To achieve process isolation, each process has its own address space and uses virtual addresses. The MMU translates virtual addresses into physical addresses and uses the translated physical addresses to access memory. The XiangShan KunmingLake architecture supports the Sv39 paging mechanism (refer to the RISC-V privileged architecture manual), with a 39-bit virtual address. The lower 12 bits are the page offset, and the upper 27 bits are divided into three 9-bit sections, forming a three-level page table. The physical address length in the KunmingLake architecture is 36 bits. The structures of the Sv39 virtual and physical addresses are shown in the two figures below. Walking the page table requires three memory accesses, so page table entries are cached in the TLB.

Virtual Address Structure of XiangShan Processor

Physical Address Structure of XiangShan Processor
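
As a concrete illustration of the Sv39 layout described above, the following C snippet extracts the three 9-bit VPN fields and the 12-bit page offset from a 39-bit virtual address (field widths follow the RISC-V privileged specification; the helper names are ours):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_OFFSET_BITS 12
#define VPN_FIELD_BITS    9

static inline uint64_t sv39_vpn(uint64_t va, int level)     /* level = 2, 1, or 0 */
{
    return (va >> (PAGE_OFFSET_BITS + level * VPN_FIELD_BITS)) & 0x1FF;
}

static inline uint64_t page_offset(uint64_t va)
{
    return va & ((1ULL << PAGE_OFFSET_BITS) - 1);
}

int main(void)
{
    uint64_t va = 0x12345678ABCULL & ((1ULL << 39) - 1);     /* keep the low 39 bits */
    printf("vpn2=%llx vpn1=%llx vpn0=%llx off=%llx\n",
           (unsigned long long)sv39_vpn(va, 2),
           (unsigned long long)sv39_vpn(va, 1),
           (unsigned long long)sv39_vpn(va, 0),
           (unsigned long long)page_offset(va));
    return 0;
}
```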

During address translation, frontend instruction fetch uses ITLB, and backend memory access uses DTLB. If ITLB or DTLB miss, they send requests to L2 TLB via the Repeater. In the current design, both frontend instruction fetch and backend memory access use non-blocking access for the TLB. If a request misses, the miss information is returned, and the request source schedules a resend of the TLB query request until a hit occurs.

Meanwhile, memory access has 2 Load pipelines, 2 Store pipelines, an SMS prefetcher, and an L1 Load stream & stride prefetcher. To handle numerous requests, the two Load pipelines and the L1 Load stream & stride prefetcher use the Load DTLB, the two Store pipelines use the Store DTLB, and prefetch requests use the Prefetch DTLB, totaling 3 DTLBs.

To avoid duplicate entries in the TLBs, ITLB repeater and DTLB repeater receive requests from ITLB and DTLB respectively and need to filter out duplicate requests before sending them to L2 TLB. If L2 TLB misses, the Hardware Page Table Walker is used to access the page table content in memory. After obtaining the page table content, it is returned to the Repeater and finally to ITLB and DTLB. (See Overall Design)

Support two-stage address translation for virtualization

With the addition of the H extension, in non-virtualization mode and when virtualization memory access instructions are not executed, the address translation process is largely the same as without the H extension. In virtualization mode or when executing virtualization memory access instructions, two-stage translation (VS-stage and G-stage) is enabled based on vsatp and hgatp. The VS-stage is responsible for converting Guest Virtual Addresses to Guest Physical Addresses, and the G-stage is responsible for converting Guest Physical Addresses to Host Physical Addresses. The first stage of translation is basically the same as non-virtualized translation. The second stage translation is performed in the PTW and LLPTW modules. The query logic is: first look up in the Page Cache. If found, it is returned to PTW or LLPTW. If not found, it enters HPTW for translation. HPTW returns the result and fills the Page Cache.

In the G-stage, the paging mechanism is called Sv39x4: the address being translated (the guest physical address) is 41 bits, and the root page table becomes 16KB.

Virtual Address Structure (Guest Physical Address) of XiangShan Processor Sv39x4

In two-stage address translation, the addresses produced by the first-stage translation (including the page table addresses computed during the walk) are all Guest Physical Addresses. A second-stage translation is required to obtain the true physical address before memory can be accessed to read the page table. The logical translation process is shown in the two figures below.

Sv39 - Sv39x4 Two-Stage Address Translation Process

Sv48 - Sv48x4 Two-Stage Address Translation Process
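
The nesting of the two stages can be sketched in C as follows. This is a conceptual model, not XiangShan's hardware: read_pte and g_stage_translate are placeholder stubs standing in for the PTW memory port and the HPTW, and superpage offsets and detailed fault reporting are omitted. The point it illustrates is that every address produced by the VS-stage walk is a guest physical address and passes through the G-stage before memory is read.

```c
#include <stdint.h>

#define LEVELS 3
#define PTE_V(pte)    ((pte) & 0x1)
#define PTE_LEAF(pte) ((pte) & 0xE)                       /* any of R/W/X set */
#define PTE_PPN(pte)  (((pte) >> 10) & ((1ULL << 44) - 1))

/* Placeholders standing in for the PTW memory port and the HPTW (G-stage). */
static uint64_t read_pte(uint64_t host_pa)      { (void)host_pa; return 0; }
static uint64_t g_stage_translate(uint64_t gpa) { return gpa; }   /* identity stub */

/* VS-stage walk for gva: every intermediate address is a guest physical
 * address and is passed through the G-stage before memory is accessed. */
uint64_t two_stage_translate(uint64_t vsatp_ppn, uint64_t gva)
{
    uint64_t table_gpa = vsatp_ppn << 12;                 /* root table address (a GPA) */

    for (int level = LEVELS - 1; level >= 0; level--) {
        uint64_t idx      = (gva >> (12 + 9 * level)) & 0x1FF;
        uint64_t entry_pa = g_stage_translate(table_gpa + idx * 8);
        uint64_t pte      = read_pte(entry_pa);

        if (!PTE_V(pte))
            return (uint64_t)-1;                          /* guest page fault (simplified) */
        if (PTE_LEAF(pte)) {
            uint64_t gpa = (PTE_PPN(pte) << 12) | (gva & 0xFFF);
            return g_stage_translate(gpa);                /* final G-stage translation */
        }
        table_gpa = PTE_PPN(pte) << 12;                   /* next-level table (again a GPA) */
    }
    return (uint64_t)-1;
}
```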

Support accessing page table content in memory

When L1 TLB sends a request to L2 TLB, it will first access the Page Cache. For requests without two-stage translation, if a leaf node is hit, it is returned directly to L1 TLB. Otherwise, depending on the level of the page table hit in the Page Cache and the availability of the Page Table Walker and Last Level Page Table Walker, it enters the Page Table Walker, Last Level Page Table Walker, or Miss Queue (see Section 5.3). For requests from the Miss Queue and Prefetcher, they are arbitrated together with requests from L1 TLB by an Arbiter (Arbiter 3to1) and re-access the Page Cache. In another scenario, the Page Cache receives a two-stage address translation request. If both stages of translation are enabled, and the first stage page table hits, the request is sent to PTW for second stage translation. Otherwise, depending on the level of the first stage page table hit and the availability of PTW and LLPTW, it is sent to PTW, LLPTW, or Miss Queue. If only the first stage translation is enabled, the handling is similar to non-two-stage requests: depending on the hit level and availability of PTW and LLPTW, it is sent to PTW, LLPTW, or Miss Queue. If only the second stage translation is enabled, and the entry is found, it is returned to L1TLB. Otherwise, it is sent to PTW for second stage translation. Additionally, the Page Cache may receive requests with isHptwReq valid, indicating that the request is for second stage translation. If this type of request hits in the Page Cache, it is sent to hptw_resp_arb. If it misses, it is sent to HPTW for lookup, and HPTW sends the lookup result to hptw_resp_arb.
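
The routing decision described above can be condensed into the following sketch for the non-two-stage case. The signal names and the level encoding are illustrative assumptions, not the RTL's:

```c
#include <stdbool.h>

typedef enum { TO_L1_TLB, TO_PTW, TO_LLPTW, TO_MISS_QUEUE } ptw_dest_t;

/* missing_upper_levels is an assumed encoding: 0 means only the leaf level
 * is missing, larger values mean upper levels of the walk are also missing. */
ptw_dest_t route_after_page_cache(bool leaf_hit, int missing_upper_levels,
                                  bool ptw_ready, bool llptw_ready)
{
    if (leaf_hit)
        return TO_L1_TLB;                        /* leaf PTE found: answer the L1 TLB directly */
    if (missing_upper_levels == 0)               /* intermediate levels hit: only the last level left */
        return llptw_ready ? TO_LLPTW : TO_MISS_QUEUE;
    return ptw_ready ? TO_PTW : TO_MISS_QUEUE;   /* upper levels missing: full walk needed */
}
```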

Both Page Table Walker and Last Level Page Table Walker can perform second stage address translation. In PTW and LLPTW, if it is a two-stage address translation request, the address obtained from the PTE each time in PTW or LLPTW is a Guest Physical Address. Before memory access, a second stage address translation is performed to obtain the true physical address, as described in the PTW and LLPTW module introductions.

Both Page Table Walker and Last Level Page Table Walker can send requests to memory to access page table content in memory. Before accessing page table content using a physical address, the physical address needs to be checked by the PMP and PMA modules (see Sections 3.2.3 and 5.4). If an access fault occurs, no request will be sent to memory. Requests from the Page Table Walker and Last Level Page Table Walker, after arbitration (Memory Arbiter 2to1), send requests to the L2 Cache via the TileLink bus. In addition to sending the physical address, the L2 TLB also needs to indicate the source of the request via an ID. The memory access width of the L2 Cache is 512 bits, so 8 page table entries are returned each time. The page table entries returned from each memory access are refilled into the Page Cache.
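
Since each L2 Cache access returns 512 bits, i.e. eight 64-bit page table entries, the entry actually requested can be selected by the low three bits of its index within the aligned group, while all eight entries are refilled into the Page Cache. A small illustration (function and parameter names are ours):

```c
#include <stdint.h>

#define PTES_PER_BEAT 8   /* 512-bit beat / 64-bit PTE */

uint64_t select_pte(const uint64_t beat[PTES_PER_BEAT], uint64_t vpn)
{
    return beat[vpn & (PTES_PER_BEAT - 1)];   /* low 3 bits pick one of the 8 PTEs */
}
```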

KunmingLake's MMU implements a page table compression mechanism that compresses consecutive page table entries. Specifically, for page table entries whose virtual page numbers share the same high bits, if the high bits of their physical page numbers and their page attributes are also identical, these entries can be compressed and stored as a single entry, increasing the effective capacity of the TLB. Therefore, when the L2 TLB hits a 4KB page, it returns up to 8 consecutive page table entries (see the description of the L2 TLB in Section 5.2). With the H extension, page table compression is disabled for virtualization-related entries in the L1 TLB, which are handled as single page table entries; the L2 TLB still applies the compression mechanism to virtualization-related entries.
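
A compressibility check matching the description above might look as follows: eight page table entries sharing the same high VPN bits are compressible when their PPN high bits and page attributes agree. The PTE field positions follow the RISC-V format, and the group size of 8 matches the 512-bit refill width; this is a sketch, not the RTL's storage format.

```c
#include <stdbool.h>
#include <stdint.h>

#define GROUP          8
#define PTE_ATTR(pte)  ((pte) & 0x3FF)                 /* V/R/W/X/U/G/A/D + RSW */
#define PTE_PPN(pte)   (((pte) >> 10) & ((1ULL << 44) - 1))

bool compressible(const uint64_t pte[GROUP])
{
    uint64_t ppn_hi = PTE_PPN(pte[0]) >> 3;            /* drop the low 3 PPN bits */
    uint64_t attr   = PTE_ATTR(pte[0]);

    for (int i = 1; i < GROUP; i++)
        if ((PTE_PPN(pte[i]) >> 3) != ppn_hi || PTE_ATTR(pte[i]) != attr)
            return false;                              /* cannot be merged into one entry */
    return true;
}
```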

Support permission checks for physical address access

XiangShan supports PMP and PMA checks. PMP and PMA checks are queried in parallel. If one of the permissions is violated, it is an illegal operation. The specific implementation of PMP and PMA is divided into four parts: CSR Unit, Frontend, Memblock, and L2 TLB. In the KunmingLake architecture, there are 16 PMP and 16 PMA entries. For the address space of PMP and PMA registers and the description of configuration registers, see Section 5.4.

The CSR Unit is responsible for responding to CSR instructions such as CSRRW that read and write these PMP and PMA registers. The Frontend, Memblock, and L2 TLB hold backup copies of these PMP and PMA registers and are responsible for address checking. Consistency of the copies is maintained by forwarding the CSR write signals to them. Since the L1 TLB has a small area, the backup PMP and PMA registers are stored in the Frontend and Memblock, providing checks for the ITLB and DTLB respectively. The L2 TLB has a larger area, so backup PMP and PMA registers are stored directly in the L2 TLB.

After ITLB and DTLB query results are obtained, and before L2 TLB performs memory access using the physical address, PMP and PMA checks are required. According to the manual, PMP and PMA checks should be dynamic checks, meaning they should be performed using the translated physical address after TLB translation. For timing considerations, the PMP & PMA check results for DTLB can be queried in advance and stored in the TLB entry during refill (static check). Specifically, when L2 TLB page table entries are refilled into DTLB, the refilled entries are simultaneously sent to PMP and PMA for permission checks. The obtained attribute bits (including R, W, X, C, Atomic, whose specific meanings are described in Section 5.4) are also stored in DTLB, allowing these check results to be returned directly to MemBlock without needing to check again. To implement static checks, the granularity of PMP and PMA needs to be increased to 4KB.

It should be noted that PMP & PMA checks are currently not the timing bottleneck in KunmingLake, so static checks are not used: all checks are performed dynamically, i.e., after the TLB produces the physical address. The KunmingLake V1 code contains only dynamic checks, not static checks. However, for compatibility, the granularity of PMP and PMA remains 4KB.
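
For reference, a dynamic check conceptually matches the translated physical address against the PMP entries in priority order and compares the access type with the entry's R/W/X bits. The sketch below covers only the NAPOT and TOR address modes from the privileged specification and ignores the lock bit and the machine-mode default rule; it is not XiangShan's PMPChecker implementation. PMA checks follow the same pattern with fixed attribute tables.

```c
#include <stdbool.h>
#include <stdint.h>

#define PMP_ENTRIES 16
#define A_TOR   1
#define A_NA4   2
#define A_NAPOT 3

typedef struct {
    uint8_t  cfg;        /* L | reserved | A[1:0] | X | W | R, as in pmpcfg */
    uint64_t addr;       /* pmpaddr: physical address >> 2 */
} pmp_entry_t;

static bool napot_match(uint64_t pmpaddr, uint64_t pa)
{
    /* trailing ones in pmpaddr encode the NAPOT region size */
    uint64_t mask = pmpaddr ^ (pmpaddr + 1);
    return ((pa >> 2) | mask) == (pmpaddr | mask);
}

/* Returns true if the access is allowed (simplified). r/w/x select the access type. */
bool pmp_check(const pmp_entry_t pmp[PMP_ENTRIES], uint64_t pa,
               bool r, bool w, bool x)
{
    uint64_t prev_top = 0;
    for (int i = 0; i < PMP_ENTRIES; i++) {
        int a = (pmp[i].cfg >> 3) & 0x3;
        bool match = false;
        if (a == A_TOR)
            match = (pa >= (prev_top << 2)) && (pa < (pmp[i].addr << 2));
        else if (a == A_NA4)
            match = (pa >> 2) == pmp[i].addr;
        else if (a == A_NAPOT)
            match = napot_match(pmp[i].addr, pa);
        prev_top = pmp[i].addr;
        if (match)
            return (!r || (pmp[i].cfg & 0x1)) &&
                   (!w || (pmp[i].cfg & 0x2)) &&
                   (!x || (pmp[i].cfg & 0x4));
    }
    return true;   /* no entry matched (simplified default) */
}
```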

Support memory management fence instructions

KunmingLake V2R2 supports memory management fence instructions such as SFENCE.VMA, HFENCE.VVMA, and HFENCE.GVMA.

When an Sfence.vma instruction is executed, it first writes all contents of the Store Buffer back to the DCache, then sends a flush signal to the various parts of the MMU. The flush signal is unidirectional, lasts only one cycle, and has no return signal. The Sfence.vma instruction finally flushes the entire pipeline and restarts execution from instruction fetch. The Sfence.vma instruction cancels all inflight requests, including those in the Repeater and Filter as well as inflight requests in the L1 TLB and L2 TLB. It also invalidates cached page tables in the L1 TLB and L2 TLB based on address and ASID. The parameters of the Sfence.vma instruction are shown in the figure below.

Sfence.vma Instruction Format
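
The invalidation condition can be viewed as a filter over TLB entries: rs1, when valid, restricts the flush to one virtual address, and rs2, when valid, restricts it to one ASID, with global pages exempt from ASID filtering per the privileged specification. A simplified sketch (entry fields are illustrative and superpage matching is omitted):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     valid;
    bool     global;      /* G bit from the PTE */
    uint16_t asid;
    uint64_t vpn;
} tlb_entry_t;

void sfence_vma(tlb_entry_t *tlb, int n,
                bool rs1_valid, uint64_t va,
                bool rs2_valid, uint16_t asid)
{
    uint64_t vpn = va >> 12;
    for (int i = 0; i < n; i++) {
        if (!tlb[i].valid)
            continue;
        bool addr_match = !rs1_valid || (tlb[i].vpn == vpn);
        bool asid_match = !rs2_valid || tlb[i].global || (tlb[i].asid == asid);
        if (addr_match && asid_match)
            tlb[i].valid = false;          /* invalidate the cached translation */
    }
}
```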

Additionally, the XiangShan KunmingLake architecture supports the Svinval extension. The format of the Svinval.vma instruction is shown in the figure below, where the meanings of rs1 and rs2 are the same as for the Sfence.vma instruction. In the KunmingLake architecture, the TLB's internal handling of Svinval.vma and Sfence.vma instructions is completely identical; the TLB only accepts the incoming sfence_valid signal and the corresponding rs1 and rs2 parameters.

Svinval.vma Instruction Format

The Hfence instructions include Hfence.vvma and Hfence.gvma. Their execution effect is similar to Sfence.vma: they first write all contents of the Store Buffer back to the DCache, then send a flush signal to the various parts of the MMU. The flush signal is unidirectional, lasts only one cycle, and has no return signal. These instructions finally flush the entire pipeline and restart execution from instruction fetch. They cancel all inflight requests, including those in the Repeater and Filter as well as inflight requests in the L1 TLB and L2 TLB. Hfence.vvma invalidates page tables related to VSATP in the L1 TLB and L2 TLB based on address, ASID, and VMID; Hfence.gvma invalidates page tables related to HGATP in the L1 TLB and L2 TLB based on address and VMID.

Hfence Instruction Format

Furthermore, since the KunmingLake architecture supports the Svinval extension, there are corresponding hinval.vvma and hinval.gvma instructions. These two instructions correspond to the two hfence instructions respectively.

Hinval Instruction Format

Support ASID and VMID

The XiangShan KunmingLake architecture supports a 16-bit ASID (Address Space Identifier), stored in the SATP register. The format of the SATP register is shown in the table below.

SATP Register Format
Bits Field Description
[63:60] MODE Indicates the address translation mode. When this field is 0, it is Bare mode, and address translation or protection is not enabled. When this field is 8, it indicates Sv39 address translation mode. If this field has other values, an illegal instruction fault is reported.
[59:44] ASID Address Space Identifier. The length of ASID is parameterizable. For the Sv39 address translation mode used in the XiangShan KunmingLake architecture, the maximum ASID length is 16.
[43:0] PPN Indicates the physical page number of the root page table, obtained by right-shifting the physical address by 12 bits.

Note that in virtualization mode, SATP is replaced by the VSATP register, and the PPN within it is the Guest Physical Page Number of the Guest root page table, not the true physical address. A second stage translation is required to obtain the true physical address.

The XiangShan KunmingLake architecture supports a 14-bit VMID (Virtual Machine Identifier), stored in the HGATP register. The format of the HGATP register is shown in the table below.

HGATP Register Format
Bits Field Description
[63:60] MODE Indicates the address translation mode. When this field is 0, it is Bare mode, and address translation or protection is not enabled. When this field is 8, it indicates Sv39x4 address translation mode. If this field has other values, an illegal instruction fault is reported.
[57:44] VMID Virtual Machine Identifier. For the Sv39x4 address translation mode used in the XiangShan KunmingLake architecture, the maximum VMID length is 14.
[43:0] PPN Indicates the physical page number of the root page table for second stage translation, obtained by right-shifting the physical address by 12 bits.
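
Based on the two tables above, the fields of satp and hgatp can be extracted as follows (the macros are illustrative helpers, not part of the design):

```c
#include <stdint.h>

#define SATP_MODE(x)   (((x) >> 60) & 0xF)      /* 0 = Bare, 8 = Sv39   */
#define SATP_ASID(x)   (((x) >> 44) & 0xFFFF)   /* 16-bit ASID          */
#define SATP_PPN(x)    ((x) & ((1ULL << 44) - 1))

#define HGATP_MODE(x)  (((x) >> 60) & 0xF)      /* 0 = Bare, 8 = Sv39x4 */
#define HGATP_VMID(x)  (((x) >> 44) & 0x3FFF)   /* 14-bit VMID          */
#define HGATP_PPN(x)   ((x) & ((1ULL << 44) - 1))
```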

Support software updates of A/D bits

XiangShan supports software management of the A/D bits in page tables. The A bit indicates that the page has been read, written, or instruction-fetched since the last clear of the A bit. The D bit indicates that the page has been written to since the last clear of the D bit. The manual allows for both software and hardware methods to update the A/D bits. XiangShan chooses the software method, meaning when either of the following two situations is detected, a page fault is reported, and software updates the page table.

  • Accessing a page, but the A bit of that page's page table entry is 0.
  • Writing to a page, but the D bit of that page's page table entry is 0.

It should be noted that the current XiangShan KunmingLake architecture does not support hardware updates of A/D bits.
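
Under this software-managed scheme, the trap condition reduces to a simple check on the A and D bits of the leaf PTE; a minimal sketch (the function name is ours):

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_A (1ULL << 6)   /* accessed bit */
#define PTE_D (1ULL << 7)   /* dirty bit    */

/* Returns true if the access must trap with a page fault so that
 * software can update the A/D bits. */
bool needs_ad_update_fault(uint64_t pte, bool is_store)
{
    if ((pte & PTE_A) == 0)
        return true;                    /* accessed bit not yet set */
    if (is_store && (pte & PTE_D) == 0)
        return true;                    /* dirty bit not yet set on a write */
    return false;
}
```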

Support exception handling mechanism

When PMP or PMA checks report an access fault, or when a page fault or guest page fault occurs, the TLB module returns an exception based on the source of the PTW request. ITLB returns exceptions to Frontend, and DTLB returns exceptions to Memblock. The types of exceptions the TLB module may return to Frontend and Memblock are shown in Table 3.3, where Memblock can be further refined into LoadUnit, AtomicsUnit, and StoreUnit. The TLB module is only responsible for returning access faults, page faults, or guest page faults to Frontend or Memblock. Subsequent handling is performed by Frontend or Memblock. For a summary and explanation of exception handling, see Exception Handling Mechanism.

Exception Types Returned by TLB
Type Destination Description
pf_instr Frontend Indicates instruction page fault
af_instr Frontend Indicates instruction access fault
gpf_instr Frontend Indicates instruction guest page fault
pf_ld LoadUnit or AtomicsUnit Indicates load page fault
af_ld LoadUnit or AtomicsUnit Indicates load access fault
gpf_ld LoadUnit or AtomicsUnit Indicates load guest page fault
pf_st StoreUnit or AtomicsUnit Indicates store page fault
af_st StoreUnit or AtomicsUnit Indicates store access fault
gpf_st StoreUnit or AtomicsUnit Indicates store guest page fault

Exception Handling Mechanism

Exceptions that may be generated by the MMU module include guest page faults, page faults, access faults, and ECC check errors in the L2 TLB Page Cache. The ITLB, DTLB, and L2 TLB can all generate guest page faults, page faults, and access faults. Exceptions generated by the ITLB and DTLB are delivered, according to the request source, to the module that issued the translation request: the ITLB delivers them to the Icache or IFU, and the DTLB delivers them to the LoadUnits, StoreUnits, or AtomicsUnit.

If L2 TLB generates a guest page fault, page fault, or access fault, the L2 TLB does not handle the generated exception directly. Instead, it returns this information to L1 TLB. After L1 TLB detects a guest page fault, page fault, or access fault during a query, it generates different types of exceptions based on the request command and delivers them to the respective modules for handling based on the request source.

The Page Cache in the L2 TLB supports ECC checking. If an ECC check reports an error, it does not raise an exception; instead, it returns a miss for that request. At the same time, the Page Cache invalidates the entry that caused the ECC error, and the PTW request is reissued to perform a new Page Walk.

In other words, the MMU module itself only handles ECC check errors in the L2 TLB Page Cache; the page faults, guest page faults, and access faults it generates are all delivered to the frontend or backend pipeline for handling.

The possible exceptions and the MMU module's handling process are shown in the table below:

Possible MMU Exceptions and Handling Process
Module Possible Exceptions Handling Process
ITLB
Generates inst page fault Delivered to Icache or IFU for handling based on request source
Generates inst guest page fault Delivered to Icache or IFU for handling based on request source
Generates inst access fault Delivered to Icache or IFU for handling based on request source
DTLB
Generates load page fault Delivered to LoadUnits for handling
Generates load guest page fault Delivered to LoadUnits for handling
Generates store page fault Delivered to StoreUnits or AtomicsUnit for handling based on request source
Generates store guest page fault Delivered to StoreUnits or AtomicsUnit for handling based on request source
Generates load access fault Delivered to LoadUnits for handling
Generates store access fault Delivered to StoreUnits or AtomicsUnit for handling based on request source
L2 TLB
Generates guest page fault Delivered to L1 TLB, L1 TLB delivers for handling based on request source
Generates page fault Delivered to L1 TLB, L1 TLB delivers for handling based on request source
Generates access fault Delivered to L1 TLB, L1 TLB delivers for handling based on request source
ECC verification error Invalidates the current entry, returns miss result and restarts Page Walk

Overall Design

The overall MMU architecture is shown in the figure below.

Overall Block Diagram of MMU Module

ITLB receives PTW requests from Frontend, and DTLB receives PTW requests from Memblock. PTW requests from Frontend include 3 requests from ICache and 1 request from IFU. PTW requests from Memblock include 2 requests from LoadUnit (AtomicsUnit occupies 1 request channel of LoadUnit), 1 request from L1 Load stream & stride prefetcher, 2 requests from StoreUnit, and 1 request from SMSPrefetcher. ITLB and DTLB connect to L2 TLB via Repeaters, both using non-blocking access. These Repeaters, in addition to their buffering function, add filtering of duplicate requests, preventing duplicate requests from being sent from L1 TLB to L2 TLB and avoiding duplicate entries in L1 TLB.

Requests from ITLB and DTLB, after arbitration (Arbiter 2to1), will first access the Page Cache. For requests without two-stage address translation, if a leaf node is hit, it is returned directly to L1 TLB. If it misses, depending on the level of the page table hit in the Page Cache and the availability of the Page Table Walker and Last Level Page Table Walker, it enters the Page Table Walker, Last Level Page Table Walker, or Miss Queue (see Section 5.3). Requests from the Miss Queue and Prefetcher are arbitrated (Arbiter 3to1) together with requests from L1 TLB and re-access the Page Cache. In another scenario, the Page Cache receives a two-stage address translation request. If both stages are enabled and the first stage page table hits, it is sent to PTW for second stage translation. Otherwise, depending on the level of the first stage page table hit and the availability of PTW and LLPTW, it is sent to PTW, LLPTW, or Miss Queue. If only the first stage translation is enabled, the handling is similar to non-two-stage requests: depending on the hit level and availability of PTW and LLPTW, it is sent to PTW, LLPTW, or Miss Queue. If only the second stage translation is enabled, if found, it is returned to L1TLB; otherwise, it is sent to PTW for second stage translation. Furthermore, the Page Cache receives requests with isHptwReq valid, indicating a request for second stage translation. If this type of request hits in the Page Cache, it is sent to hptw_resp_arb. If it misses, it is sent to HPTW for lookup, and HPTW sends the lookup result to hptw_resp_arb.

Both Page Table Walker and Last Level Page Table Walker can perform second stage address translation. In PTW and LLPTW, if it is a two-stage address translation request, the address obtained from the PTE each time in PTW or LLPTW is a Guest Physical Address. Before memory access, a second stage address translation is performed to obtain the true physical address. See the PTW and LLPTW module introductions.

Both Page Table Walker and Last Level Page Table Walker can send requests to memory to access page table content in memory. Before accessing page table content using a physical address, the physical address needs to be checked by the PMP and PMA modules. If an access fault occurs, no request will be sent to memory. Requests from the Page Table Walker and Last Level Page Table Walker, after arbitration (Memory Arbiter 2to1), send requests to the L2 Cache via the TileLink bus. In addition to sending the physical address, the L2 TLB also needs to indicate the source of the request via an ID. The memory access width of the L2 Cache is 512 bits, so 8 page table entries are returned each time. The page table entries returned from each memory access are refilled into the Page Cache.

After the ITLB and DTLB query results are obtained, and before the L2 TLB performs a Page Table Walk, PMP and PMA checks are required. Because the L1 TLB has a small area, backup PMP and PMA registers are not stored inside the L1 TLB but in the Frontend and Memblock, providing checks for the ITLB and DTLB respectively. The L2 TLB has a larger area, and backup PMP and PMA registers are stored directly in the L2 TLB.

Interface List

The interface list for the MMU module's components and their upper-level modules is shown in the table below.

MMU IO Interface List
Upper Module Module Name Instance Name Description
Frontend
TLB itlb ITLB, introduced in Section 5.1
PMP pmp Distributed PMP registers, introduced in Section 5.4
PMPChecker PMPChecker PMP Checker, introduced in Section 5.4
PMPChecker PMPChecker_1 PMP Checker, introduced in Section 5.4
PMPChecker PMPChecker_2 PMP Checker, introduced in Section 5.4
PMPChecker PMPChecker_3 PMP Checker, introduced in Section 5.4
PTWFilter itlbRepeater1 Repeater 1 connecting ITLB and L2 TLB, introduced in Section 5.2
PTWRepeaterNB itlbRepeater2 Repeater 2 connecting ITLB and L2 TLB, introduced in Section 5.2
MemBlock
TLBNonBlock dtlb_ld_tlb_ld Load DTLB, introduced in Section 5.1
TLBNonBlock_1 dtlb_ld_tlb_st Store DTLB, introduced in Section 5.1
TLBNonBlock_2 dtlb_prefetch_tlb_prefetch Prefetch DTLB, introduced in Section 5.1
PTWNewFilter dtlbRepeater Repeater 1 connecting DTLB and L2 TLB, introduced in Section 5.2
PTWRepeaterNB itlbRepeater3 Repeater 3 connecting ITLB and L2 TLB, introduced in Section 5.2
PMP_2 pmp Distributed PMP registers, introduced in Section 5.4
PMPChecker_8 PMPChecker PMP Checker, introduced in Section 5.4
PMPChecker_8 PMPChecker_1 PMP Checker, introduced in Section 5.4
PMPChecker_8 PMPChecker_2 PMP Checker, introduced in Section 5.4
PMPChecker_8 PMPChecker_3 PMP Checker, introduced in Section 5.4
PMPChecker_8 PMPChecker_4 PMP Checker, introduced in Section 5.4
PMPChecker_8 PMPChecker_5 PMP Checker, introduced in Section 5.4
L2TLBWrapper ptw L2 TLB, introduced in Section 5.3
TLBuffer_20 ptw_to_l2_buffer Buffer between L2 TLB and L2 Cache, introduced in Section 5.3

L2 TLB Module's Parts and L2 TLB Interface List

L2 TLB IO Interface List
Upper Module Module Name Instance Name Description
L2TLBWrapper
L2TLB ptw L2 TLB, introduced in Section 5.3
L2TLB
PMP pmp Distributed PMP registers, introduced in Section 5.4
PMPChecker PMPChecker PMP Checker, introduced in Section 5.4
PMPChecker PMPChecker_1 PMP Checker, introduced in Section 5.4
L2TlbMissQueue missQueue L2 TLB Miss Queue, introduced in Section 5.3.11
PtwCache cache L2 TLB Page Table Cache, introduced in Section 5.3.7
PTW ptw L2 TLB Page Table Walker, introduced in Section 5.3.8
LLPTW llptw L2 TLB Last Level Page Table Walker, introduced in Section 5.3.9
HPTW hptw L2 TLB Hypervisor Page Table Walker, introduced in Section 5.3.10
L2TlbPrefetch prefetch L2 TLB Prefetcher, introduced in Section 5.3.12

See the interface list document for details. Some arbiters are omitted from the interface list.

Interface Timing

The overall MMU interface with the external environment involves the interface between L1 TLB and Frontend/Memblock, as well as the interface between L2 TLB and memory (L2 Cache).

For the interface timing between L1 TLB and Frontend/Memblock, see ITLB and Frontend Interface Timing and DTLB and Memblock Interface Timing.

The interface timing between L2 TLB and L2 Cache follows the TileLink bus protocol.