HSX_EVENTS(3CPC) CPU Performance Counters Library Functions HSX_EVENTS(3CPC)

hsx_events - processor model specific performance counter events

This manual page describes events specific to the following Intel CPU models and is derived from Intel's perfmon data. For more information, please consult the Intel Software Developer's Manual or Intel's perfmon website.

CPU models described by this document:

The following events are supported:
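Not every event described below is necessarily exposed on a given system. The event names that the kernel's performance counter back-end actually advertises can be enumerated at run time with libcpc(3LIB). The following minimal sketch uses only the documented cpc_open(3CPC) and cpc_walk_events_all(3CPC) interfaces; it is an illustration, not part of Intel's data, and prints the advertised event names so they can be matched against the descriptions below.

     /*
      * Hedged sketch: list the performance counter events advertised
      * by the running system.  Compile with -lcpc.
      */
     #include <libcpc.h>
     #include <stdio.h>

     static void
     print_event(void *arg, const char *event)
     {
             (void) printf("%s\n", event);
     }

     int
     main(void)
     {
             cpc_t *cpc = cpc_open(CPC_VER_CURRENT);

             if (cpc == NULL) {
                     perror("cpc_open");
                     return (1);
             }
             cpc_walk_events_all(cpc, NULL, print_event);
             (void) cpc_close(cpc);
             return (0);
     }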

This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store. The penalty for blocked store forwarding is that the load must wait for the store to write its value to the cache before it can be issued.
The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.
Speculative cache-line split load uops dispatched to L1D.
Speculative cache-line split store-address uops dispatched to L1D.
Aliasing occurs when a load is issued after a store and their memory addresses are offset by 4K. This event counts the number of loads that aliased with a preceding store, resulting in an extended address check in the pipeline which can have a performance impact.
Misses in all TLB levels that cause a page walk of any page size.
Completed page walks due to demand load misses that caused 4K page walks in any TLB levels.
Completed page walks due to demand load misses that caused 2M/4M page walks in any TLB levels.
Load miss in all TLB levels causes a page walk that completes. (1G)
Completed page walks in any TLB of any page size due to demand load misses.
This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load misses.
This event counts load operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.
This event counts load operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.
Number of cache load STLB hits. No page walk.
DTLB demand load misses with low part of linear-to-physical address translation missed.
This event counts the number of cycles spent waiting for a recovery after an event such as a processor nuke, JEClear, assist, HLE/RTM abort, etc.
Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).
This event counts the number of uops issued by the Front-end of the pipeline to the Back-end. This event is counted at the allocation stage and will count both retired and non-retired uops.
Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread.
Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for all threads.
Number of flags-merge uops allocated. Such uops add delay.
Number of slow LEA or similar uops allocated. Such a uop has 3 sources (for example, 2 sources + immediate) regardless of whether it is the result of an LEA instruction or not.
Number of multiply packed/scalar single precision uops allocated.
Any uop executed by the Divider. (This includes all divide uops, sqrt, ...)
Demand data read requests that missed L2, no rejects.

The following errata may apply to this: HSD78, HSM80

Counts the number of store RFO requests that miss the L2 cache.
Number of instruction fetches that missed the L2 cache.
Demand requests that miss L2 cache.

The following errata may apply to this: HSD78, HSM80

Counts all L2 HW prefetcher requests that missed L2.
All requests that missed L2.

The following errata may apply to this: HSD78, HSM80

Counts the number of demand Data Read requests, initiated by load instructions, that hit the L2 cache.

The following errata may apply to this: HSD78, HSM80

Counts the number of store RFO requests that hit the L2 cache.
Number of instruction fetches that hit the L2 cache.
Counts all L2 HW prefetcher requests that hit L2.
Counts any demand and L1 HW prefetch data load requests to L2.

The following errata may apply to this: HSD78, HSM80

Counts all L2 store RFO requests.
Counts all L2 code requests.
Demand requests to L2 cache.

The following errata may apply to this: HSD78, HSM80

Counts all L2 HW prefetcher requests.
All requests to L2 cache.

The following errata may apply to this: HSD78, HSM80

Not rejected writebacks that hit L2 cache.
This event counts each cache miss condition for references to the last level cache.
This event counts requests originating from the core that reference a cache line in the last level cache.
Counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.
Core cycles when at least one thread on the physical core is not in halt state.
Increments at the frequency of XCLK (100 MHz) when not halted.
Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate).
Reference cycles when the thread is unhalted. (counts at 100 MHz rate)
Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate).
Count XClk pulses when this thread is unhalted and the other thread is halted.
Count XClk pulses when this thread is unhalted and the other thread is halted.
Increments the number of outstanding L1D misses every cycle. Set Cmask = 1 and Edge = 1 to count occurrences.
Cycles with L1D load Misses outstanding.
Cycles with L1D load Misses outstanding from any thread on physical core.
Number of times a request needed a fill buffer (FB) entry but no entry was available for it; that is, FB unavailability was the dominant reason for blocking the request. A request includes cacheable and uncacheable demand loads, stores, and SW prefetches. Hardware prefetches are excluded.
Cycles a demand request was blocked due to Fill Buffer unavailability.
Miss in all TLB levels causes a page walk of any page size (4K/2M/4M/1G).
Completed page walks due to store misses in one or more TLB levels of 4K page structure.
Completed page walks due to store misses in one or more TLB levels of 2M/4M page structure.
Store misses in all DTLB levels that cause completed page walks. (1G)
Completed page walks due to store miss in any TLB levels of any page size (4K/2M/4M/1G).
This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB store misses.
This event counts store operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.
This event counts store operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.
Store operations that miss the first TLB level but hit the second and do not cause page walks.
DTLB store misses with low part of linear-to-physical address translation missed.
Non-SW-prefetch load dispatches that hit fill buffer allocated for S/W prefetch.
Non-SW-prefetch load dispatches that hit fill buffer allocated for H/W prefetch.
Cycle count for an Extended Page table walk.
This event counts when new data lines are brought into the L1 Data cache, which cause other lines to be evicted from the cache.
Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address.
Number of times a transactional abort was signaled due to a data capacity limitation for transactional writes.
Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer.
Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero.
Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer.
Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.
Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero.
Number of integer move elimination candidate uops that were eliminated.
Number of SIMD move elimination candidate uops that were eliminated.
Number of integer move elimination candidate uops that were not eliminated.
Number of SIMD move elimination candidate uops that were not eliminated.
Unhalted core cycles when the thread is in ring 0.
Number of intervals between processor halts while thread is in ring 0.
Unhalted core cycles when the thread is not in ring 0.
Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.
Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region.
Counts the number of times an instruction execution caused the supported transactional nesting count to be exceeded.
Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.
Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.
This event counts cycles when the Reservation Station ( RS ) is empty for the thread. The RS is a structure that buffers allocated micro-ops from the Front-end. If there are many cycles when the RS is empty, it may represent an underflow of instructions delivered from the Front-end.
Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.
Offcore outstanding demand data read transactions in SQ to uncore. Set Cmask=1 to count cycles.

The following errata may apply to this: HSD78, HSD62, HSD61, HSM63, HSM80

Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore.

The following errata may apply to this: HSD78, HSD62, HSD61, HSM63, HSM80

Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.

The following errata may apply to this: HSD78, HSD62, HSD61, HSM63, HSM80

Offcore outstanding Demand code Read transactions in SQ to uncore. Set Cmask=1 to count cycles.

The following errata may apply to this: HSD62, HSD61, HSM63

Offcore outstanding RFO store transactions in SQ to uncore. Set Cmask=1 to count cycles.

The following errata may apply to this: HSD62, HSD61, HSM63

Offcore outstanding demand RFO transactions in SuperQueue (SQ), queue to uncore, every cycle.

The following errata may apply to this: HSD62, HSD61, HSM63

Offcore outstanding cacheable data read transactions in SQ to uncore. Set Cmask=1 to count cycles.

The following errata may apply to this: HSD62, HSD61, HSM63

Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.

The following errata may apply to this: HSD62, HSD61, HSM63

Cycles in which the L1D and L2 are locked, due to a UC lock or split lock.
Cycles in which the L1D is locked.
Counts cycles the IDQ is empty.

The following errata may apply to this: HSD135

Increments each cycle by the number of uops delivered to the IDQ from the MITE path. Set Cmask = 1 to count cycles.
Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path.
Increments each cycle by the number of uops delivered to the IDQ from the DSB path. Set Cmask = 1 to count cycles.
Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path.
Increments each cycle by the number of uops delivered to the IDQ while the Microcode Sequencer (MS) is busy, initiated by the DSB. Set Cmask = 1 to count cycles. Add Edge = 1 to count the number of deliveries.
Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy.
Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy.
Counts cycles in which the DSB delivers four uops. Set Cmask = 4.
Counts cycles in which the DSB delivers at least one uop. Set Cmask = 1.
Increments each cycle by the number of uops delivered to the IDQ while the Microcode Sequencer (MS) is busy, initiated by MITE. Set Cmask = 1 to count cycles.
Counts cycles in which MITE delivers four uops. Set Cmask = 4.
Counts cycles in which MITE delivers at least one uop. Set Cmask = 1.
This event counts uops delivered by the Front-end with the assistance of the microcode sequencer. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance.
This event counts cycles during which the microcode sequencer assisted the Front-end in delivering uops. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance.
Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.
Number of uops delivered to IDQ from any path.
Number of Instruction Cache, Streaming Buffer and Victim Cache reads, both cacheable and noncacheable, including UC fetches.
This event counts Instruction Cache (ICACHE) misses.
Cycles where a code fetch is stalled due to L1 instruction-cache miss.
Cycles where a code fetch is stalled due to L1 instruction-cache miss.
Misses in ITLB that cause a page walk of any page size.
Completed page walks due to misses in ITLB 4K page entries.
Completed page walks due to misses in ITLB 2M/4M page entries.
Completed page walks due to misses in ITLB 1G page entries.
Completed page walks in ITLB of any page size.
This event counts cycles when the page miss handler (PMH) is servicing page walks caused by ITLB misses.
ITLB misses that hit STLB (4K).
ITLB misses that hit STLB (2M).
ITLB misses that hit STLB. No page walk.
This event counts cycles where the decoder is stalled on an instruction with a length changing prefix (LCP).
Stall cycles due to the IQ being full.
Not taken macro-conditional branches.
Taken speculative and retired macro-conditional branches.
Taken speculative and retired macro-conditional branch instructions excluding calls and indirects.
Taken speculative and retired indirect branches excluding calls and returns.
Taken speculative and retired indirect branches with return mnemonic.
Taken speculative and retired direct near calls.
Taken speculative and retired indirect calls.
Speculative and retired macro-conditional branches.
Speculative and retired macro-unconditional branches excluding calls and indirects.
Speculative and retired indirect branches excluding calls and returns.
Speculative and retired indirect return branches.
Speculative and retired direct near calls.
Counts all near executed branches (not necessarily retired).
Not taken speculative and retired mispredicted macro conditional branches.
Taken speculative and retired mispredicted macro conditional branches.
Taken speculative and retired mispredicted indirect branches excluding calls and returns.
Taken speculative and retired mispredicted indirect branches with return mnemonic.
Taken speculative and retired mispredicted indirect calls.
Speculative and retired mispredicted macro conditional branches.
Mispredicted indirect branches excluding calls and returns.
Counts all near executed branches (not necessarily retired).
This event counts the number of undelivered (unallocated) uops from the Front-end to the Resource Allocation Table (RAT) while the Back-end of the processor is not stalled. The Front-end can allocate up to 4 uops per cycle so this event can increment 0-4 times per cycle depending on the number of unallocated uops. This event is counted on a per-core basis.

The following errata may apply to this: HSD135

This event counts the number of cycles during which the Front-end allocated exactly zero uops to the Resource Allocation Table (RAT) while the Back-end of the processor is not stalled. This event is counted on a per-core basis.

The following errata may apply to this: HSD135

Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled.

The following errata may apply to this: HSD135

Cycles with less than 2 uops delivered by the front end.

The following errata may apply to this: HSD135

Cycles with less than 3 uops delivered by the front end.

The following errata may apply to this: HSD135

Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.

The following errata may apply to this: HSD135

Cycles which a uop is dispatched on port 0 in this thread.
Cycles per core when uops are executed in port 0.
Cycles per thread when uops are executed in port 0.
Cycles which a uop is dispatched on port 1 in this thread.
Cycles per core when uops are executed in port 1.
Cycles per thread when uops are executed in port 1.
Cycles which a uop is dispatched on port 2 in this thread.
Cycles per core when uops are dispatched to port 2.
Cycles per thread when uops are executed in port 2.
Cycles which a uop is dispatched on port 3 in this thread.
Cycles per core when uops are dispatched to port 3.
Cycles per thread when uops are executed in port 3.
Cycles which a uop is dispatched on port 4 in this thread.
Cycles per core when uops are executed in port 4.
Cycles per thread when uops are executed in port 4.
Cycles which a uop is dispatched on port 5 in this thread.
Cycles per core when uops are executed in port 5.
Cycles per thread when uops are executed in port 5.
Cycles which a uop is dispatched on port 6 in this thread.
Cycles per core when uops are executed in port 6.
Cycles per thread when uops are executed in port 6.
Cycles which a uop is dispatched on port 7 in this thread.
Cycles per core when uops are dispatched to port 7.
Cycles per thread when uops are executed in port 7.
Cycles allocation is stalled due to resource related reason.

The following errata may apply to this: HSD135

Cycles stalled due to no eligible RS entry available.
This event counts cycles during which no instructions were allocated because no Store Buffers (SB) were available.
Cycles stalled due to re-order buffer full.
Cycles with pending L2 miss loads. Set Cmask=2 to count cycles.

The following errata may apply to this: HSD78, HSM63, HSM80

Cycles with pending memory loads. Set Cmask=2 to count cycles.
This event counts cycles during which no instructions were executed in the execution stage of the pipeline.
Number of loads missed L2.

The following errata may apply to this: HSM63, HSM80

This event counts cycles during which no instructions were executed in the execution stage of the pipeline and there were memory instructions pending (waiting for data).
Cycles with pending L1 data cache miss loads. Set Cmask=8 to count cycles.
Execution stalls due to L1 data cache miss loads. Set Cmask=0CH.
Number of uops delivered by the LSD.
Cycles in which uops were delivered by the LSD but did not come from the decoder.
Cycles in which 4 uops were delivered by the LSD but did not come from the decoder.
Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.
Counts the number of ITLB flushes, includes 4k/2M/4M pages.
Demand data read requests sent to uncore.

The following errata may apply to this: HSD78, HSM80

Demand code read requests sent to uncore.
Demand RFO read requests sent to uncore, including regular RFOs, locks, ItoM.
Data read requests sent to uncore (demand and prefetch).
Counts number of cycles no uops were dispatched to be executed on this thread.

The following errata may apply to this: HSD144, HSD30, HSM31

This event counts the cycles where at least one uop was executed. It is counted per thread.

The following errata may apply to this: HSD144, HSD30, HSM31

This event counts the cycles where at least two uops were executed. It is counted per thread.

The following errata may apply to this: HSD144, HSD30, HSM31

This event counts the cycles where at least three uops were executed. It is counted per thread.

The following errata may apply to this: HSD144, HSD30, HSM31

Cycles where at least 4 uops were executed per-thread.

The following errata may apply to this: HSD144, HSD30, HSM31

Counts total number of uops to be executed per-core each cycle.

The following errata may apply to this: HSD30, HSM31

Cycles at least 1 micro-op is executed from any thread on physical core.

The following errata may apply to this: HSD30, HSM31

Cycles at least 2 micro-ops are executed from any thread on physical core.

The following errata may apply to this: HSD30, HSM31

Cycles at least 3 micro-ops are executed from any thread on physical core.

The following errata may apply to this: HSD30, HSM31

Cycles at least 4 micro-ops are executed from any thread on physical core.

The following errata may apply to this: HSD30, HSM31

Cycles with no micro-ops executed from any thread on physical core.

The following errata may apply to this: HSD30, HSM31

Offcore requests buffer cannot take more entries for this thread core.
Number of DTLB page walker loads that hit in the L1+FB.
Number of DTLB page walker loads that hit in the L2.
Number of DTLB page walker loads that hit in the L3.

The following errata may apply to this: HSD25

Number of DTLB page walker loads from memory.

The following errata may apply to this: HSD25

Number of ITLB page walker loads that hit in the L1+FB.
Number of ITLB page walker loads that hit in the L2.
Number of ITLB page walker loads that hit in the L3.

The following errata may apply to this: HSD25

Number of ITLB page walker loads from memory.

The following errata may apply to this: HSD25

Counts the number of Extended Page Table walks from the DTLB that hit in the L1 and FB.
Counts the number of Extended Page Table walks from the DTLB that hit in the L2.
Counts the number of Extended Page Table walks from the DTLB that hit in the L3.
Counts the number of Extended Page Table walks from the DTLB that hit in memory.
Counts the number of Extended Page Table walks from the ITLB that hit in the L1 and FB.
Counts the number of Extended Page Table walks from the ITLB that hit in the L2.
Counts the number of Extended Page Table walks from the ITLB that hit in the L3.
Counts the number of Extended Page Table walks from the ITLB that hit in memory.
DTLB flush attempts of the thread-specific entries.
Count number of STLB flush attempts.
Number of instructions at retirement.

The following errata may apply to this: HSD11, HSD140

Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution.

The following errata may apply to this: HSD140

This is a non-precise version (that is, does not use PEBS) of the event that counts FP operations retired. For X87 FP operations that have no exceptions, counting also includes flows that have several X87 uops, or flows that use X87 uops in the exception handling.
Number of transitions from AVX-256 to legacy SSE when penalty applicable.

The following errata may apply to this: HSD56, HSM57

Number of transitions from SSE to AVX-256 when penalty applicable.

The following errata may apply to this: HSD56, HSM57

Number of microcode assists invoked by HW upon uop writeback.
Counts the number of micro-ops retired. Use Cmask=1 and invert to count active cycles or stalled cycles; see the example following the event descriptions.
Cycles without actually retired uops.
Cycles with less than 10 actually retired uops.
Cycles without actually retired uops.
This event counts the number of retirement slots used each cycle. There are potentially 4 slots that can be used each cycle - meaning, 4 uops or 4 instructions could retire each cycle.
Cycles there was a Nuke. Accounts for both thread-specific and All Thread Nukes.
This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.
This event is incremented when self-modifying code (SMC) is detected, which causes a machine clear. Machine clears can have a significant performance impact if they are happening frequently.
This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.
Branch instructions at retirement.
Counts the number of conditional branch instructions retired.
Direct and indirect near call instructions retired.
Direct and indirect macro near call instructions retired (captured in ring 3).
All (macro) branch instructions retired.
Counts the number of near return instructions retired.
Counts the number of not taken branch instructions retired.
Number of near taken branches retired.
Number of far branches retired.
Mispredicted branch instructions at retirement.
Mispredicted conditional branch instructions retired.
This event counts all mispredicted branch instructions retired. This is a precise event.
Number of near branch instructions retired that were taken but mispredicted.
Note that a whole rep string only counts AVX_INST.ALL once.
Number of times an HLE execution started.
Number of times an HLE execution successfully committed.
Number of times an HLE execution aborted for any reason (multiple categories may count as one).
Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).
Number of times an HLE execution aborted due to uncommon conditions.
Number of times an HLE execution aborted due to HLE-unfriendly instructions.
Number of times an HLE execution aborted due to incompatible memory type.

The following errata may apply to this: HSD65

Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts).
Number of times an RTM execution started.
Number of times an RTM execution successfully committed.
Number of times an RTM execution aborted for any reason (multiple categories may count as one).
Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).
Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).
Number of times an RTM execution aborted due to HLE-unfriendly instructions.
Number of times an RTM execution aborted due to incompatible memory type.

The following errata may apply to this: HSD65

Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).
Number of X87 FP assists due to output values.
Number of X87 FP assists due to input values.
Number of SIMD FP assists due to output values.
Number of SIMD FP assists due to input values.
Cycles with any input/output SSE* or FP assists.
Count cases of saving new LBR records by hardware.
Retired load uops that miss the STLB.

The following errata may apply to this: HSD29, HSM30

Retired store uops that miss the STLB.

The following errata may apply to this: HSD29, HSM30

Retired load uops with locked access.

The following errata may apply to this: HSD76, HSD29, HSM30

Retired load uops that split across a cacheline boundary.

The following errata may apply to this: HSD29, HSM30

Retired store uops that split across a cacheline boundary.

The following errata may apply to this: HSD29, HSM30

All retired load uops.

The following errata may apply to this: HSD29, HSM30

All retired store uops.

The following errata may apply to this: HSD29, HSM30

Retired load uops with L1 cache hits as data sources.

The following errata may apply to this: HSD29, HSM30

Retired load uops with L2 cache hits as data sources.

The following errata may apply to this: HSD76, HSD29, HSM30

Retired load uops with L3 cache hits as data sources.

The following errata may apply to this: HSD74, HSD29, HSD25, HSM26, HSM30

Retired load uops missed L1 cache as data sources.

The following errata may apply to this: HSM30

Retired load uops missed L2. Unknown data source excluded.

The following errata may apply to this: HSD29, HSM30

Retired load uops missed L3. Excludes unknown data source.

The following errata may apply to this: HSD74, HSD29, HSD25, HSM26, HSM30

Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.

The following errata may apply to this: HSM30

Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.

The following errata may apply to this: HSD29, HSD25, HSM26, HSM30

Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.

The following errata may apply to this: HSD29, HSD25, HSM26, HSM30

Retired load uops which data sources were HitM responses from shared L3.

The following errata may apply to this: HSD29, HSD25, HSM26, HSM30

Retired load uops which data sources were hits in L3 without snoops required.

The following errata may apply to this: HSD74, HSD29, HSD25, HSM26, HSM30

This event counts retired load uops where the data came from local DRAM. This does not include hardware prefetches.

The following errata may apply to this: HSD74, HSD29, HSD25, HSM30

Retired load uop whose Data Source was: remote DRAM, either Snoop not needed or Snoop Miss (RspI).

The following errata may apply to this: HSD29, HSM30

Retired load uop whose Data Source was: Remote cache HITM.

The following errata may apply to this: HSM30

Retired load uop whose Data Source was: forwarded from remote cache.

The following errata may apply to this: HSM30

Number of front end re-steers due to BPU misprediction.
Demand data read requests that access L2 cache.
RFO requests that access L2 cache.
L2 cache accesses when fetching instructions.
Any MLC or L3 HW prefetch accessing L2, including rejects.
L1D writebacks that access L2 cache.
L2 fill requests that access L2 cache.
L2 writebacks that access L2 cache.
Transactions accessing L2 pipe.
L2 cache lines in I state filling L2.
L2 cache lines in S state filling L2.
L2 cache lines in E state filling L2.
This event counts the number of L2 cache lines brought into the L2 cache. Lines are filled into the L2 cache when there was an L2 miss.
Clean L2 cache lines evicted by demand.
Dirty L2 cache lines evicted by demand.
tbd
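Several of the descriptions above refer to counter programming details such as "Set Cmask = 1 to count cycles", "Add Edge = 1", or "invert". With libcpc(3LIB) these are expressed as attributes passed to cpc_set_add_request(3CPC). The sketch below is illustrative only and is not part of Intel's or the system's documentation: the event name uops_executed.core and the attribute name cmask are assumptions and must be replaced by names the running platform actually advertises; only documented libcpc interfaces are used.

     /*
      * Hedged sketch: count an event for the calling LWP, passing a
      * "cmask" attribute as several descriptions above suggest.
      * The event name "uops_executed.core" and the attribute name
      * "cmask" are assumptions; substitute names supported on the
      * running platform.  Compile with -lcpc.
      */
     #include <libcpc.h>
     #include <stdio.h>

     int
     main(void)
     {
             cpc_t *cpc;
             cpc_set_t *set;
             cpc_buf_t *buf;
             cpc_attr_t attr = { "cmask", 1 };
             uint64_t count;
             int idx;

             if ((cpc = cpc_open(CPC_VER_CURRENT)) == NULL) {
                     perror("cpc_open");
                     return (1);
             }
             set = cpc_set_create(cpc);
             idx = cpc_set_add_request(cpc, set, "uops_executed.core",
                 0, CPC_COUNT_USER, 1, &attr);
             if (idx == -1) {
                     perror("cpc_set_add_request");
                     return (1);
             }
             buf = cpc_buf_create(cpc, set);
             if (cpc_bind_curlwp(cpc, set, 0) == -1) {
                     perror("cpc_bind_curlwp");
                     return (1);
             }

             /* ... code to be measured runs here ... */

             (void) cpc_set_sample(cpc, set, buf);
             (void) cpc_buf_get(cpc, buf, idx, &count);
             (void) printf("count: %llu\n", (unsigned long long)count);
             (void) cpc_unbind(cpc, set);
             (void) cpc_close(cpc);
             return (0);
     }

The same mechanism applies to other attributes mentioned above (for example edge or inv); whether a particular attribute or event name is accepted depends on the platform's performance counter back-end.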

cpc(3CPC)

https://download.01.org/perfmon/index/

June 18, 2018 OmniOS