This manual page describes events specific to the following Intel
CPU models and is derived from Intel's perfmon data. For more information,
please consult the Intel Software Developer's Manual or Intel's perfmon
website.
- ld_blocks.store_forward
- This event counts how many times a load operation received the true
Block-on-Store blocking code, which prevents store forwarding. This
includes cases in which:
- preceding store conflicts with the load (incomplete overlap);
- store forwarding is impossible due to u-arch limitations;
- preceding lock RMW operations are not forwarded;
- store has the no-forward bit set (uncacheable/page-split/masked stores);
- all-blocking stores are used (mostly, fences and port I/O); and others.
The most common case is a load blocked due to its address range
overlapping with a preceding smaller uncompleted store. Note: This event
does not take into account cases of out-of-SW-control (for example,
SbTailHit), unknown physical STA, and cases of blocking loads on store due
to being non-WB memory type or a lock. These cases are covered by other
events. See the table of not supported store forwards in the Optimization
Guide.
- ld_blocks.no_sr
- This event counts the number of times that split load operations are
temporarily blocked because all resources for handling the split accesses
are in use.
- misalign_mem_ref.loads
- This event counts speculative cache-line split load uops dispatched to the
L1 cache.
- misalign_mem_ref.stores
- This event counts speculative cache line split store-address (STA) uops
dispatched to the L1 cache.
- ld_blocks_partial.address_alias
- This event counts false dependencies in the MOB when the loose-net check
finds a partial match and the dependency is resolved by the Enhanced Loose
Net mechanism. This may not result in high performance penalties. Loose-net
checks can fail when loads and stores are 4K aliased.
- dtlb_load_misses.miss_causes_a_walk
- This event counts load misses in all DTLB levels that cause page walks of
any page size (4K/2M/4M/1G).
The following errata may apply to this: BDM69
- dtlb_load_misses.walk_completed_4k
- This event counts load misses in all DTLB levels that cause a completed
page walk (4K page size). The page walk can end with or without a fault.
The following errata may apply to this: BDM69
- dtlb_load_misses.walk_completed_2m_4m
- This event counts load misses in all DTLB levels that cause a completed
page walk (2M and 4M page sizes). The page walk can end with or without a
fault.
The following errata may apply to this: BDM69
- dtlb_load_misses.walk_completed_1g
- This event counts load misses in all DTLB levels that cause a completed
page walk (1G page size). The page walk can end with or without a fault.
The following errata may apply to this: BDM69
- dtlb_load_misses.walk_completed
- This event counts demand load misses in all translation lookaside buffer
(TLB) levels that cause a completed page walk of any page size.
The following errata may apply to this: BDM69
- dtlb_load_misses.walk_duration
- This event counts the number of cycles while PMH is busy with the page
walk.
The following errata may apply to this: BDM69
- dtlb_load_misses.stlb_hit_4k
- Load misses that miss the DTLB and hit the STLB (4K).
- dtlb_load_misses.stlb_hit_2m
- Load misses that miss the DTLB and hit the STLB (2M).
- dtlb_load_misses.stlb_hit
- Load operations that miss the first DTLB level but hit the second and do
not cause page walks.
- int_misc.recovery_cycles
- Cycles checkpoints in Resource Allocation Table (RAT) are recovering from
JEClear or machine clear.
- int_misc.recovery_cycles_any
- Core cycles the allocator was stalled due to recovery from earlier clear
event for any thread running on the physical core (e.g. misprediction or
memory nuke).
- int_misc.rat_stall_cycles
- This event counts the number of cycles during which Resource Allocation
Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for
the current thread. This also includes the cycles during which the
Allocator is serving another thread.
- uops_issued.any
- This event counts the number of Uops issued by the Resource Allocation
Table (RAT) to the reservation station (RS).
- uops_issued.stall_cycles
- This event counts cycles during which the Resource Allocation Table (RAT)
does not issue any Uops to the reservation station (RS) for the current
thread.
- uops_issued.flags_merge
- Number of flags-merge uops allocated. Such uops are considered
performance sensitive; they are added by the GSR u-arch.
- uops_issued.slow_lea
- Number of slow LEA uops allocated. A uop is generally considered a slow
LEA if it has three sources (for example, two sources plus an immediate),
regardless of whether or not it results from an LEA instruction.
- uops_issued.single_mul
- Number of Multiply packed/scalar single precision uops allocated.
- arith.fpu_div_active
- This event counts the number of the divide operations executed. Uses
edge-detect and a cmask value of 1 on ARITH.FPU_DIV_ACTIVE to get the
number of the divide operations executed.
- l2_rqsts.demand_data_rd_miss
- This event counts the number of demand Data Read requests that miss L2
cache. Only non-rejected loads are counted.
- l2_rqsts.rfo_miss
- RFO requests that miss L2 cache.
- l2_rqsts.code_rd_miss
- L2 cache misses when fetching instructions.
- l2_rqsts.all_demand_miss
- Demand requests that miss L2 cache.
- l2_rqsts.l2_pf_miss
- This event counts the number of requests from the L2 hardware prefetchers
that miss L2 cache.
- l2_rqsts.miss
- All requests that miss L2 cache.
- l2_rqsts.demand_data_rd_hit
- This event counts the number of demand Data Read requests that hit L2
cache. Only non-rejected loads are counted.
- l2_rqsts.rfo_hit
- RFO requests that hit L2 cache.
- l2_rqsts.code_rd_hit
- L2 cache hits when fetching instructions, code reads.
- l2_rqsts.l2_pf_hit
- This event counts the number of requests from the L2 hardware prefetchers
that hit L2 cache.
- l2_rqsts.all_demand_data_rd
- This event counts the number of demand Data Read requests (including
requests from L1D hardware prefetchers). These loads may hit or miss L2
cache. Only non-rejected loads are counted.
- l2_rqsts.all_rfo
- This event counts the total number of RFO (read for ownership) requests to
L2 cache. L2 RFO requests include both L1D demand RFO misses as well as
L1D RFO prefetches.
- l2_rqsts.all_code_rd
- This event counts the total number of L2 code requests.
- l2_rqsts.all_demand_references
- Demand requests to L2 cache.
- l2_rqsts.all_pf
- This event counts the total number of requests from the L2 hardware
prefetchers.
- l2_rqsts.references
- All L2 requests.
- l2_demand_rqsts.wb_hit
- This event counts the number of WB requests that hit L2 cache.
- longest_lat_cache.miss
- This event counts core-originated cacheable demand requests that miss the
last level cache (LLC). Demand requests include loads, RFOs, and hardware
prefetches from L1D, and instruction fetches from IFU.
- longest_lat_cache.reference
- This event counts core-originated cacheable demand requests that refer to
the last level cache (LLC). Demand requests include loads, RFOs, and
hardware prefetches from L1D, and instruction fetches from IFU.
- cpu_clk_unhalted.thread_p
- This is an architectural event that counts the number of thread cycles
while the thread is not in a halt state. The thread enters the halt state
when it is running the HLT instruction. The core frequency may change from
time to time due to power or thermal throttling. For this reason, this
event may have a changing ratio with regards to wall clock time.
- cpu_clk_unhalted.thread_p_any
- Core cycles when at least one thread on the physical core is not in halt
state.
- cpu_clk_thread_unhalted.ref_xclk
- This is a fixed-frequency event programmed to general counters. It counts
when the core is unhalted at 100 MHz.
- cpu_clk_thread_unhalted.ref_xclk_any
- Reference cycles when at least one thread on the physical core is
unhalted (counts at 100 MHz rate).
- cpu_clk_unhalted.ref_xclk
- Reference cycles when the thread is unhalted (counts at 100 MHz
rate).
- cpu_clk_unhalted.ref_xclk_any
- Reference cycles when at least one thread on the physical core is
unhalted (counts at 100 MHz rate).
- cpu_clk_thread_unhalted.one_thread_active
- Count XClk pulses when this thread is unhalted and the other thread is
halted.
- cpu_clk_unhalted.one_thread_active
- Count XClk pulses when this thread is unhalted and the other thread is
halted.
- l1d_pend_miss.pending
- This event counts the duration of L1D misses outstanding, that is, for
each cycle, the number of Fill Buffers (FB) outstanding that are required
by demand reads. An FB is either held by demand loads, or held by
non-demand loads and hit at least once by demand. The valid outstanding
interval runs until the FB is deallocated: from FB allocation, if the FB is
allocated by demand; or from the demand hit on the FB, if it is allocated
by a hardware or software prefetch. Note: in the L1D, a demand read
comprises cacheable or noncacheable demand loads, including ones causing
cache-line splits, and reads due to page walks resulting from any request
type.
- l1d_pend_miss.pending_cycles
- This event counts the duration of L1D misses outstanding, in cycles.
- l1d_pend_miss.pending_cycles_any
- Cycles with L1D load Misses outstanding from any thread on physical
core.
- l1d_pend_miss.fb_full
- Cycles a demand request was blocked due to Fill Buffer
unavailability.
- dtlb_store_misses.miss_causes_a_walk
- This event counts store misses in all DTLB levels that cause page walks of
any page size (4K/2M/4M/1G).
The following errata may apply to this: BDM69
- dtlb_store_misses.walk_completed_4k
- This event counts store misses in all DTLB levels that cause a completed
page walk (4K page size). The page walk can end with or without a fault.
The following errata may apply to this: BDM69
- dtlb_store_misses.walk_completed_2m_4m
- This event counts store misses in all DTLB levels that cause a completed
page walk (2M and 4M page sizes). The page walk can end with or without a
fault.
The following errata may apply to this: BDM69
- dtlb_store_misses.walk_completed_1g
- This event counts store misses in all DTLB levels that cause a completed
page walk (1G page size). The page walk can end with or without a fault.
The following errata may apply to this: BDM69
- dtlb_store_misses.walk_completed
- Store misses in all DTLB levels that cause completed page walks.
The following errata may apply to this: BDM69
- dtlb_store_misses.walk_duration
- This event counts the number of cycles while PMH is busy with the page
walk.
The following errata may apply to this: BDM69
- dtlb_store_misses.stlb_hit_4k
- Store misses that miss the DTLB and hit the STLB (4K).
- dtlb_store_misses.stlb_hit_2m
- Store misses that miss the DTLB and hit the STLB (2M).
- dtlb_store_misses.stlb_hit
- Store operations that miss the first TLB level but hit the second and do
not cause page walks.
- load_hit_pre.sw_pf
- This event counts all non-software-prefetch load dispatches that hit a
fill buffer (FB) allocated for a software prefetch. It can also be
incremented by some lock instructions, so it should only be used with
profiling, so that the locks can be excluded by inspecting the assembly
near the sampled instructions.
- load_hit_pre.hw_pf
- This event counts all non-software-prefetch load dispatches that hit a
fill buffer (FB) allocated for a hardware prefetch.
- ept.walk_cycles
- This event counts cycles for an extended page table walk. The extended
page directory cache differs from standard TLB caches in the operating
system that uses it: virtual machine operating systems use the extended
page directory cache, while guest operating systems use the standard TLB
caches.
- l1d.replacement
- This event counts L1D data line replacements including opportunistic
replacements, and replacements that require stall-for-replace or
block-for-replace.
- tx_mem.abort_conflict
- Number of times a TSX line had a cache conflict.
- tx_mem.abort_capacity_write
- Number of times a TSX Abort was triggered due to an evicted line caused by
a transaction overflow.
- tx_mem.abort_hle_store_to_elided_lock
- Number of times a TSX Abort was triggered due to a non-release/commit
store to lock.
- tx_mem.abort_hle_elision_buffer_not_empty
- Number of times a TSX Abort was triggered due to commit but Lock Buffer
not empty.
- tx_mem.abort_hle_elision_buffer_mismatch
- Number of times a TSX Abort was triggered due to release/commit but data
and address mismatch.
- tx_mem.abort_hle_elision_buffer_unsupported_alignment
- Number of times a TSX Abort was triggered due to attempting an unsupported
alignment from Lock Buffer.
- tx_mem.hle_elision_buffer_full
- Number of times we could not allocate Lock Buffer.
- move_elimination.int_eliminated
- Number of integer Move Elimination candidate uops that were
eliminated.
- move_elimination.simd_eliminated
- Number of SIMD Move Elimination candidate uops that were eliminated.
- move_elimination.int_not_eliminated
- Number of integer Move Elimination candidate uops that were not
eliminated.
- move_elimination.simd_not_eliminated
- Number of SIMD Move Elimination candidate uops that were not
eliminated.
- cpl_cycles.ring0
- This event counts the unhalted core cycles during which the thread is in
the ring 0 privileged mode.
- cpl_cycles.ring0_trans
- This event counts transitions from ring 1, 2, or 3 to
ring 0.
- cpl_cycles.ring123
- This event counts unhalted core cycles during which the thread is in rings
1, 2, or 3.
- tx_exec.misc1
- Counts the number of times a class of instructions that may cause a
transactional abort was executed. Since this is the count of execution, it
may not always cause a transactional abort.
- tx_exec.misc2
- Unfriendly TSX abort triggered by a vzeroupper instruction.
- tx_exec.misc3
- Unfriendly TSX abort triggered by a nest count that is too deep.
- tx_exec.misc4
- RTM region detected inside HLE.
- tx_exec.misc5
- Counts the number of times an HLE XACQUIRE instruction was executed inside
an RTM transactional region.
- rs_events.empty_cycles
- This event counts cycles during which the reservation station (RS) is
empty for the thread. Note: in single-thread mode, the inactive thread
should drive 0. This is usually caused by severely costly branch
mispredictions, or by allocator/front-end issues.
- rs_events.empty_end
- Counts the ends of periods in which the Reservation Station (RS) was
empty. This can be useful for precisely locating front-end latency-bound
issues.
- offcore_requests_outstanding.demand_data_rd
- This event counts the number of offcore outstanding Demand Data Read
transactions in the super queue (SQ) every cycle. A transaction is
considered to be in the Offcore outstanding state between L2 miss and
transaction completion sent to requestor. See the corresponding Umask
under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted
from the promotion point.
The following errata may apply to this: BDM76
- offcore_requests_outstanding.cycles_with_demand_data_rd
- This event counts cycles when offcore outstanding Demand Data Read
transactions are present in the super queue (SQ). A transaction is
considered to be in the Offcore outstanding state between L2 miss and
transaction completion sent to requestor (SQ de-allocation).
The following errata may apply to this: BDM76
- offcore_requests_outstanding.demand_data_rd_ge_6
- Cycles with at least 6 offcore outstanding Demand Data Read transactions
in uncore queue.
The following errata may apply to this: BDM76
- offcore_requests_outstanding.demand_code_rd
- This event counts the number of offcore outstanding Code Read
transactions in the super queue every cycle. The offcore outstanding state
of the transaction lasts from the L2 miss until transaction completion is
sent to the requestor (SQ deallocation). See the corresponding Umask under
OFFCORE_REQUESTS.
The following errata may apply to this: BDM76
- offcore_requests_outstanding.demand_rfo
- This event counts the number of offcore outstanding RFO (store)
transactions in the super queue (SQ) every cycle. A transaction is
considered to be in the Offcore outstanding state between L2 miss and
transaction completion sent to requestor (SQ de-allocation). See
corresponding Umask under OFFCORE_REQUESTS.
The following errata may apply to this: BDM76
- offcore_requests_outstanding.cycles_with_demand_rfo
- This event counts the number of offcore outstanding demand RFO
transactions in the super queue every cycle. The offcore outstanding state
of the transaction lasts from the L2 miss until transaction completion is
sent to the requestor (SQ deallocation). See the corresponding Umask under
OFFCORE_REQUESTS.
The following errata may apply to this: BDM76
- offcore_requests_outstanding.all_data_rd
- This event counts the number of offcore outstanding cacheable Core Data
Read transactions in the super queue every cycle. A transaction is
considered to be in the Offcore outstanding state between L2 miss and
transaction completion sent to requestor (SQ de-allocation). See
corresponding Umask under OFFCORE_REQUESTS.
The following errata may apply to this: BDM76
- offcore_requests_outstanding.cycles_with_data_rd
- This event counts cycles when offcore outstanding cacheable Core Data Read
transactions are present in the super queue. A transaction is considered
to be in the Offcore outstanding state between L2 miss and transaction
completion sent to requestor (SQ de-allocation). See corresponding Umask
under OFFCORE_REQUESTS.
The following errata may apply to this: BDM76
- lock_cycles.split_lock_uc_lock_duration
- This event counts cycles in which the L1 and L2 are locked due to a UC
lock or split lock. A lock is asserted in the case of a locked memory
access: noncacheable memory, a locked operation that spans two cache lines,
or a page walk from a noncacheable page table. L1D and L2 locks carry a
very high performance penalty, and it is highly recommended to avoid such
accesses.
- lock_cycles.cache_lock_duration
- This event counts the number of cycles when the L1D is locked. It is a
superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).
- idq.empty
- This counts the number of cycles that the instruction decoder queue is
empty and can indicate that the application may be bound in the front end.
It does not determine whether there are uops being delivered to the Alloc
stage since uops can be delivered by bypass skipping the Instruction
Decode Queue (IDQ) when it is empty.
- idq.mite_uops
- This event counts the number of uops delivered to Instruction Decode Queue
(IDQ) from the MITE path. Counting includes uops that may bypass the IDQ.
This also means that uops are not being delivered from the Decode Stream
Buffer (DSB).
- idq.mite_cycles
- This event counts cycles during which uops are being delivered to
Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops
that may bypass the IDQ.
- idq.dsb_uops
- This event counts the number of uops delivered to Instruction Decode Queue
(IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops
that may bypass the IDQ.
- idq.dsb_cycles
- This event counts cycles during which uops are being delivered to
Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.
Counting includes uops that may bypass the IDQ.
- idq.ms_dsb_uops
- This event counts the number of uops initiated by Decode Stream Buffer
(DSB) that are being delivered to Instruction Decode Queue (IDQ) while the
Microcode Sequencer (MS) is busy. Counting includes uops that may bypass
the IDQ.
- idq.ms_dsb_cycles
- This event counts cycles during which uops initiated by Decode Stream
Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while
the Microcode Sequencer (MS) is busy. Counting includes uops that may
bypass the IDQ.
- idq.ms_dsb_occur
- This event counts the number of deliveries to Instruction Decode Queue
(IDQ) initiated by Decode Stream Buffer (DSB) while the Microcode
Sequencer (MS) is busy. Counting includes uops that may bypass the
IDQ.
- idq.all_dsb_cycles_4_uops
- This event counts the number of cycles 4 uops were delivered to
Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.
Counting includes uops that may bypass the IDQ.
- idq.all_dsb_cycles_any_uops
- This event counts the number of cycles uops were delivered to Instruction
Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting
includes uops that may bypass the IDQ.
- idq.ms_mite_uops
- This event counts the number of uops initiated by MITE and delivered to
Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy.
Counting includes uops that may bypass the IDQ.
- idq.all_mite_cycles_4_uops
- This event counts the number of cycles 4 uops were delivered to
Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops
that may bypass the IDQ. This also means that uops are not being delivered
from the Decode Stream Buffer (DSB).
- idq.all_mite_cycles_any_uops
- This event counts the number of cycles uops were delivered to Instruction
Decode Queue (IDQ) from the MITE path. Counting includes uops that may
bypass the IDQ. This also means that uops are not being delivered from the
Decode Stream Buffer (DSB).
- idq.ms_uops
- This event counts the total number of uops delivered to Instruction Decode
Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes
uops that may bypass the IDQ. Uops may be initiated by the Decode Stream
Buffer (DSB) or MITE.
- idq.ms_cycles
- This event counts cycles during which uops are being delivered to
Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy.
Counting includes uops that may bypass the IDQ. Uops may be initiated by
the Decode Stream Buffer (DSB) or MITE.
- idq.ms_switches
- Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode
pipeline) to the Microcode Sequencer.
- idq.mite_all_uops
- This event counts the number of uops delivered to Instruction Decode Queue
(IDQ) from the MITE path. Counting includes uops that may bypass the IDQ.
This also means that uops are not being delivered from the Decode Stream
Buffer (DSB).
- icache.hit
- This event counts the number of both cacheable and noncacheable
Instruction Cache, Streaming Buffer and Victim Cache Reads including UC
fetches.
- icache.misses
- This event counts the number of instruction cache, streaming buffer and
victim cache misses. Counting includes UC accesses.
- icache.ifdata_stall
- This event counts cycles during which the demand fetch waits for data
(wfdM104H) from L2 or iSB (opportunistic hit).
- itlb_misses.miss_causes_a_walk
- This event counts instruction fetch misses in all ITLB levels that cause
page walks of any page size (4K/2M/4M/1G).
The following errata may apply to this: BDM69
- itlb_misses.walk_completed_4k
- This event counts instruction fetch misses in all ITLB levels that cause
a completed page walk (4K page size). The page walk can end with or without
a fault.
The following errata may apply to this: BDM69
- itlb_misses.walk_completed_2m_4m
- This event counts instruction fetch misses in all ITLB levels that cause
a completed page walk (2M and 4M page sizes). The page walk can end with or
without a fault.
The following errata may apply to this: BDM69
- itlb_misses.walk_completed_1g
- This event counts instruction fetch misses in all ITLB levels that cause
a completed page walk (1G page size). The page walk can end with or without
a fault.
The following errata may apply to this: BDM69
- itlb_misses.walk_completed
- Misses in all ITLB levels that cause completed page walks.
The following errata may apply to this: BDM69
- itlb_misses.walk_duration
- This event counts the number of cycles while PMH is busy with the page
walk.
The following errata may apply to this: BDM69
- itlb_misses.stlb_hit_4k
- Code misses that miss the ITLB and hit the STLB (4K).
- itlb_misses.stlb_hit_2m
- Code misses that miss the ITLB and hit the STLB (2M).
- itlb_misses.stlb_hit
- Operations that miss the first ITLB level but hit the second and do not
cause any page walks.
- ild_stall.lcp
- This event counts stalls that occurred due to a length-changing prefix
(66H, 67H, or REX.W when it changes the length of the decoded instruction).
Occurrence counting is proportional to the number of such prefixes in a
16-byte line. Each LCP in a 16-byte chunk incurs a three-cycle
penalty.
- br_inst_exec.nontaken_conditional
- This event counts not taken macro-conditional branch instructions.
- br_inst_exec.taken_conditional
- This event counts taken speculative and retired macro-conditional branch
instructions.
- br_inst_exec.taken_direct_jump
- This event counts taken speculative and retired macro-unconditional
branch instructions, excluding calls and indirect branches.
- br_inst_exec.taken_indirect_jump_non_call_ret
- This event counts taken speculative and retired indirect branches
excluding calls and return branches.
- br_inst_exec.taken_indirect_near_return
- This event counts taken speculative and retired indirect branches that
have a return mnemonic.
- br_inst_exec.taken_direct_near_call
- This event counts taken speculative and retired direct near calls.
- br_inst_exec.taken_indirect_near_call
- This event counts taken speculative and retired indirect calls including
both register and memory indirect.
- br_inst_exec.all_conditional
- This event counts both taken and not taken speculative and retired
macro-conditional branch instructions.
- br_inst_exec.all_direct_jmp
- This event counts both taken and not taken speculative and retired
macro-unconditional branch instructions, excluding calls and
indirects.
- br_inst_exec.all_indirect_jump_non_call_ret
- This event counts both taken and not taken speculative and retired
indirect branches excluding calls and return branches.
- br_inst_exec.all_indirect_near_return
- This event counts both taken and not taken speculative and retired
indirect branches that have a return mnemonic.
- br_inst_exec.all_direct_near_call
- This event counts both taken and not taken speculative and retired direct
near calls.
- br_inst_exec.all_branches
- This event counts both taken and not taken speculative and retired branch
instructions.
- br_misp_exec.nontaken_conditional
- This event counts not taken speculative and retired mispredicted macro
conditional branch instructions.
- br_misp_exec.taken_conditional
- This event counts taken speculative and retired mispredicted macro
conditional branch instructions.
- br_misp_exec.taken_indirect_jump_non_call_ret
- This event counts taken speculative and retired mispredicted indirect
branches excluding calls and returns.
- br_misp_exec.taken_return_near
- This event counts taken speculative and retired mispredicted indirect
branches that have a return mnemonic.
- br_misp_exec.taken_indirect_near_call
- Taken speculative and retired mispredicted indirect calls.
- br_misp_exec.all_conditional
- This event counts both taken and not taken speculative and retired
mispredicted macro conditional branch instructions.
- br_misp_exec.all_indirect_jump_non_call_ret
- This event counts both taken and not taken mispredicted indirect branches
excluding calls and returns.
- br_misp_exec.all_branches
- This event counts both taken and not taken speculative and retired
mispredicted branch instructions.
- idq_uops_not_delivered.core
- This event counts the number of uops not delivered to the Resource
Allocation Table (RAT) per thread, adding (4 - x) each cycle when the RAT
is not stalled and the Instruction Decode Queue (IDQ) delivers x uops to
the RAT (where x belongs to {0,1,2,3}). Counting does not cover cases when:
a. the IDQ-to-RAT pipe serves the other thread;
b. the RAT is stalled for the thread (including uop drops and clear-BE
conditions);
c. the IDQ delivers four uops.
- idq_uops_not_delivered.cycles_0_uops_deliv.core
- This event counts, on a per-thread basis, cycles when no uops are
delivered to the Resource Allocation Table (RAT), that is, when
IDQ_UOPS_NOT_DELIVERED.CORE = 4.
- idq_uops_not_delivered.cycles_le_1_uop_deliv.core
- This event counts, on a per-thread basis, cycles when fewer than one uop
is delivered to the Resource Allocation Table (RAT), that is, when
IDQ_UOPS_NOT_DELIVERED.CORE >= 3.
- idq_uops_not_delivered.cycles_le_2_uop_deliv.core
- Cycles with less than 2 uops delivered by the front end.
- idq_uops_not_delivered.cycles_le_3_uop_deliv.core
- Cycles with less than 3 uops delivered by the front end.
- idq_uops_not_delivered.cycles_fe_was_ok
- Counts cycles during which the front end delivered four uops or the
Resource Allocation Table (RAT) was stalling the front end.
- uop_dispatches_cancelled.simd_prf
- This event counts the number of micro-operations cancelled after they were
dispatched from the scheduler to the execution units when the total number
of physical register read ports across all dispatch ports exceeds the read
bandwidth of the physical register file. The SIMD_PRF subevent applies to
the following instructions: VDPPS, DPPS, VPCMPESTRI, PCMPESTRI,
VPCMPESTRM, PCMPESTRM, VFMADD*, VFMADDSUB*, VFMSUB*, VFMSUBADD*, VFNMADD*,
VFNMSUB*. See the Broadwell Optimization Guide for more information.
- uops_dispatched_port.port_0
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 0.
- uops_executed_port.port_0_core
- Cycles per core when uops are executed on port 0.
- uops_executed_port.port_0
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 0.
- uops_dispatched_port.port_1
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 1.
- uops_executed_port.port_1_core
- Cycles per core when uops are executed on port 1.
- uops_executed_port.port_1
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 1.
- uops_dispatched_port.port_2
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 2.
- uops_executed_port.port_2_core
- Cycles per core when uops are dispatched to port 2.
- uops_executed_port.port_2
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 2.
- uops_dispatched_port.port_3
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 3.
- uops_executed_port.port_3_core
- Cycles per core when uops are dispatched to port 3.
- uops_executed_port.port_3
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 3.
- uops_dispatched_port.port_4
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 4.
- uops_executed_port.port_4_core
- Cycles per core when uops are executed on port 4.
- uops_executed_port.port_4
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 4.
- uops_dispatched_port.port_5
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 5.
- uops_executed_port.port_5_core
- Cycles per core when uops are executed on port 5.
- uops_executed_port.port_5
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 5.
- uops_dispatched_port.port_6
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 6.
- uops_executed_port.port_6_core
- Cycles per core when uops are executed on port 6.
- uops_executed_port.port_6
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 6.
- uops_dispatched_port.port_7
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 7.
- uops_executed_port.port_7_core
- Cycles per core when uops are dispatched to port 7.
- uops_executed_port.port_7
- This event counts, on the per-thread basis, cycles during which uops are
dispatched from the Reservation Station (RS) to port 7.
- resource_stalls.any
- This event counts resource-related stall cycles. Reasons for stalls can be
as follows:
- *any* u-arch structure got full (LB, SB, RS, ROB, BOB, LM, Physical
Register Reclaim Table (PRRT), or Physical History Table (PHT) slots)
- *any* u-arch structure got empty (like INT/SIMD FreeLists)
- FPU control word (FPCW), MXCSR and others. This counts cycles that the
pipeline backend blocked uop delivery from the front end.
- resource_stalls.rs
- This event counts stall cycles caused by the absence of eligible entries
  in the reservation station (RS). This may result from RS overflow, or from
  RS deallocation because of the RS array write-port allocation scheme (each
  RS entry has two write ports instead of four; as a result, empty entries
  cannot be used even though the RS is not really full). This counts cycles
  that the pipeline backend blocked uop delivery from the front end.
- resource_stalls.sb
- This event counts stall cycles caused by the store buffer (SB) overflow
(excluding draining from synch). This counts cycles that the pipeline
backend blocked uop delivery from the front end.
- resource_stalls.rob
- This event counts ROB full stall cycles. This counts cycles that the
pipeline backend blocked uop delivery from the front end.
- cycle_activity.cycles_l2_pending
- Counts number of cycles the CPU has at least one pending demand* load
request missing the L2 cache.
- cycle_activity.cycles_l2_miss
- Cycles while L2 cache miss demand load is outstanding.
- cycle_activity.cycles_ldm_pending
- Counts number of cycles the CPU has at least one pending demand load
  request (that is, cycles with a non-completed load waiting for its data
  from the memory subsystem).
- cycle_activity.cycles_mem_any
- Cycles while memory subsystem has an outstanding load.
- cycle_activity.cycles_no_execute
- Counts number of cycles nothing is executed on any execution port.
- cycle_activity.stalls_total
- Total execution stalls.
- cycle_activity.stalls_l2_pending
- Counts number of cycles nothing is executed on any execution port, while
  there was at least one pending demand* load request missing the L2 cache
  (as a footprint). * Also includes L1 HW prefetch requests that may or may
  not be required by demands.
- cycle_activity.stalls_l2_miss
- Execution stalls while L2 cache miss demand load is outstanding.
- cycle_activity.stalls_ldm_pending
- Counts number of cycles nothing is executed on any execution port, while
there was at least one pending demand load request.
- cycle_activity.stalls_mem_any
- Execution stalls while memory subsystem has an outstanding load.
- cycle_activity.cycles_l1d_pending
- Counts number of cycles the CPU has at least one pending demand load
request missing the L1 data cache.
- cycle_activity.cycles_l1d_miss
- Cycles while L1 cache miss demand load is outstanding.
- cycle_activity.stalls_l1d_pending
- Counts number of cycles nothing is executed on any execution port, while
there was at least one pending demand load request missing the L1 data
cache.
- cycle_activity.stalls_l1d_miss
- Execution stalls while L1 cache miss demand load is outstanding.
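The stall counters above are typically combined into ratios rather than read in isolation. A minimal Python sketch with hypothetical counts (the numbers and the derived metric are illustrative, not an official Intel formula):

```python
# Hypothetical counts, e.g. gathered with:
#   perf stat -e cycles,cycle_activity.stalls_total,cycle_activity.stalls_mem_any ./app
counts = {
    "cycles": 1_000_000_000,
    "cycle_activity.stalls_total": 400_000_000,
    "cycle_activity.stalls_mem_any": 300_000_000,
}

def stall_fractions(c):
    """Return (fraction of cycles stalled, share of stalls with a load outstanding)."""
    stalled = c["cycle_activity.stalls_total"] / c["cycles"]
    mem_share = c["cycle_activity.stalls_mem_any"] / c["cycle_activity.stalls_total"]
    return stalled, mem_share

stalled, mem_share = stall_fractions(counts)
print(f"{stalled:.0%} of cycles stalled; {mem_share:.0%} of stalls had a load outstanding")
```

A high memory share of stalls suggests looking further at the l1d/l2 pending variants to locate the level of the hierarchy involved.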
- lsd.uops
- Number of Uops delivered by the LSD.
- lsd.cycles_4_uops
- Cycles in which 4 uops were delivered by the LSD and none came from the
  decoder.
- lsd.cycles_active
- Cycles in which uops were delivered by the LSD and none came from the
  decoder.
- dsb2mite_switches.penalty_cycles
- This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty
cycles. These cycles do not include uops routed through because of the
switch itself, for example, when Instruction Decode Queue (IDQ)
pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full.
  DSB-to-MITE switch true penalty cycles happen after the merge mux (MM)
receives Decode Stream Buffer (DSB) Sync-indication until receiving the
first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to
merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths.
Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode
Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer
(DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six
cycles in which no uops are delivered to the IDQ. Most often, such
switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost
  0-2 cycles.
- itlb.itlb_flush
- This event counts the number of flushes of the big or small ITLB pages.
  Counting includes both TLB Flush (covering all sets) and TLB Set Clear
(set-specific).
- offcore_requests.demand_data_rd
- This event counts the Demand Data Read requests sent to uncore. Use it in
conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency
in the uncore.
- offcore_requests.demand_code_rd
- This event counts both cacheable and non-cacheable code read requests.
- offcore_requests.demand_rfo
- This event counts the demand RFO (read for ownership) requests including
regular RFOs, locks, ItoM.
- offcore_requests.all_data_rd
- This event counts the demand and prefetch data reads. All Core Data Reads
include cacheable Demands and L2 prefetchers (not L3 prefetchers).
  Counting also covers reads due to page walks resulting from any request
  type.
- uops_executed.thread
- Number of uops to be executed per-thread each cycle.
- uops_executed.stall_cycles
- This event counts cycles during which no uops were dispatched from the
Reservation Station (RS) per thread.
- uops_executed.cycles_ge_1_uop_exec
- Cycles where at least 1 uop was executed per-thread.
- uops_executed.cycles_ge_2_uops_exec
- Cycles where at least 2 uops were executed per-thread.
- uops_executed.cycles_ge_3_uops_exec
- Cycles where at least 3 uops were executed per-thread.
- uops_executed.cycles_ge_4_uops_exec
- Cycles where at least 4 uops were executed per-thread.
- uops_executed.core
- Number of uops executed from any thread.
- uops_executed.core_cycles_ge_1
- Cycles where at least 1 micro-op is executed from any thread on the
  physical core.
- uops_executed.core_cycles_ge_2
- Cycles where at least 2 micro-ops are executed from any thread on the
  physical core.
- uops_executed.core_cycles_ge_3
- Cycles where at least 3 micro-ops are executed from any thread on the
  physical core.
- uops_executed.core_cycles_ge_4
- Cycles where at least 4 micro-ops are executed from any thread on the
  physical core.
- uops_executed.core_cycles_none
- Cycles with no micro-ops executed from any thread on physical core.
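The per-thread count and the ge_N threshold variants allow a simple executed-uops ILP estimate. A hedged sketch with made-up counts (`exec_ilp` is an illustrative helper name, not a defined metric):

```python
def exec_ilp(c):
    """Average uops executed per cycle in which at least one uop executed."""
    return c["uops_executed.thread"] / c["uops_executed.cycles_ge_1_uop_exec"]

# Hypothetical counts from a perf stat run:
counts = {
    "uops_executed.thread": 2_400_000_000,
    "uops_executed.cycles_ge_1_uop_exec": 800_000_000,
}
print(f"~{exec_ilp(counts):.1f} uops executed per active cycle")
```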
- offcore_requests_buffer.sq_full
- This event counts the number of cases when the offcore requests buffer
cannot take more entries for the core. This can happen when the superqueue
  does not contain eligible entries, or when the L1D writeback pending FIFO
  is full. Note: The writeback pending FIFO has six entries.
- page_walker_loads.dtlb_l1
- Number of DTLB page walker hits in the L1+FB.
The following errata may apply to this: BDM69, BDM98
- page_walker_loads.dtlb_l2
- Number of DTLB page walker hits in the L2.
The following errata may apply to this: BDM69, BDM98
- page_walker_loads.dtlb_l3
- Number of DTLB page walker hits in the L3 + XSNP.
The following errata may apply to this: BDM69, BDM98
- page_walker_loads.dtlb_memory
- Number of DTLB page walker hits in Memory.
The following errata may apply to this: BDM69, BDM98
- page_walker_loads.itlb_l1
- Number of ITLB page walker hits in the L1+FB.
The following errata may apply to this: BDM69, BDM98
- page_walker_loads.itlb_l2
- Number of ITLB page walker hits in the L2.
The following errata may apply to this: BDM69, BDM98
- page_walker_loads.itlb_l3
- Number of ITLB page walker hits in the L3 + XSNP.
The following errata may apply to this: BDM69, BDM98
- tlb_flush.dtlb_thread
- This event counts the number of DTLB flush attempts of the thread-specific
entries.
- tlb_flush.stlb_any
- This event counts the number of any STLB flush attempts (such as entire,
VPID, PCID, InvPage, CR3 write, and so on).
- inst_retired.any_p
- This event counts the number of instructions (EOMs) retired. Counting
covers macro-fused instructions individually (that is, increments by two).
The following errata may apply to this: BDM61
- inst_retired.prec_dist
- This is a precise version (that is, uses PEBS) of the event that counts
instructions retired.
The following errata may apply to this: BDM11, BDM55
- inst_retired.x87
- This event counts FP operations retired. For X87 FP operations that have
  no exceptions, counting also includes flows that have several X87 uops, or
  flows that use X87 uops in the exception handling.
- other_assists.avx_to_sse
- This event counts the number of transitions from AVX-256 to legacy SSE
when penalty is applicable.
The following errata may apply to this: BDM30
- other_assists.sse_to_avx
- This event counts the number of transitions from legacy SSE to AVX-256
when penalty is applicable.
The following errata may apply to this: BDM30
- other_assists.any_wb_assist
- Number of times any microcode assist is invoked by HW upon uop
writeback.
- uops_retired.all
- This is a precise version (that is, uses PEBS) of the event that counts
all actually retired uops. Counting increments by two for micro-fused
uops, and by one for macro-fused and other uops. Maximal increment value
for one cycle is eight.
- uops_retired.stall_cycles
- This event counts cycles without actually retired uops.
- uops_retired.total_cycles
- Number of cycles, counted using an always-true condition (uops_ret < 16)
  applied to the non-PEBS uops retired event.
- uops_retired.retire_slots
- This is a precise version (that is, uses PEBS) of the event that counts
the number of retirement slots used.
- machine_clears.cycles
- This event counts both thread-specific (TS) and all-thread (AT)
nukes.
- machine_clears.count
- Number of machine clears (nukes) of any type.
- machine_clears.memory_ordering
- This event counts the number of memory ordering Machine Clears detected.
Memory Ordering Machine Clears can result from one of the following: 1.
memory disambiguation, 2. external snoop, or 3. cross SMT-HW-thread snoop
(stores) hitting load buffer.
- machine_clears.smc
- This event counts self-modifying code (SMC) detected, which causes a
machine clear.
- machine_clears.maskmov
- Maskmov false fault - counts the number of times microcode passes through
  the Maskmov flow due to the instruction's mask being 0, while the flow
  completed without raising a fault.
- br_inst_retired.all_branches
- This event counts all (macro) branch instructions retired.
- br_inst_retired.conditional
- This is a precise version (that is, uses PEBS) of the event that counts
conditional branch instructions retired.
- br_inst_retired.near_call
- This is a precise version (that is, uses PEBS) of the event that counts
both direct and indirect near call instructions retired.
- br_inst_retired.near_call_r3
- This is a precise version (that is, uses PEBS) of the event that counts
both direct and indirect macro near call instructions retired (captured in
ring 3).
- br_inst_retired.all_branches_pebs
- This is a precise version of BR_INST_RETIRED.ALL_BRANCHES that counts all
(macro) branch instructions retired.
The following errata may apply to this: BDW98
- br_inst_retired.near_return
- This is a precise version (that is, uses PEBS) of the event that counts
return instructions retired.
- br_inst_retired.not_taken
- This event counts not taken branch instructions retired.
- br_inst_retired.near_taken
- This is a precise version (that is, uses PEBS) of the event that counts
taken branch instructions retired.
- br_inst_retired.far_branch
- This event counts far branch instructions retired.
The following errata may apply to this: BDW98
- br_misp_retired.all_branches
- This event counts all mispredicted macro branch instructions retired.
- br_misp_retired.conditional
- This is a precise version (that is, uses PEBS) of the event that counts
mispredicted conditional branch instructions retired.
- br_misp_retired.all_branches_pebs
- This is a precise version of BR_MISP_RETIRED.ALL_BRANCHES that counts all
mispredicted macro branch instructions retired.
- br_misp_retired.ret
- This is a precise version (that is, uses PEBS) of the event that counts
mispredicted return instructions retired.
- br_misp_retired.near_taken
- Number of near branch instructions retired that were mispredicted and
taken. (Precise Event - PEBS).
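Dividing the mispredicted count by the retired-branch count gives the usual misprediction rate. A minimal sketch on hypothetical counts:

```python
def mispredict_rate(c):
    """Fraction of retired macro branch instructions that were mispredicted."""
    return c["br_misp_retired.all_branches"] / c["br_inst_retired.all_branches"]

counts = {
    "br_inst_retired.all_branches": 500_000_000,
    "br_misp_retired.all_branches": 5_000_000,
}
print(f"branch mispredict rate: {mispredict_rate(counts):.2%}")  # prints 1.00%
```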
- fp_arith_inst_retired.scalar_double
- Number of SSE/AVX computational scalar double precision floating-point
instructions retired. Each count represents 1 computation. Applies to SSE*
and AVX* scalar double precision floating-point instructions: ADD SUB MUL
DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as
they perform multiple calculations per element.
- fp_arith_inst_retired.scalar_single
- Number of SSE/AVX computational scalar single precision floating-point
instructions retired. Each count represents 1 computation. Applies to SSE*
and AVX* scalar single precision floating-point instructions: ADD SUB MUL
DIV MIN MAX RCP RSQRT SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count
twice as they perform multiple calculations per element.
- fp_arith_inst_retired.scalar
- Number of SSE/AVX computational scalar floating-point instructions
retired. Applies to SSE* and AVX* scalar, double and single precision
floating-point: ADD SUB MUL DIV MIN MAX RSQRT RCP SQRT FM(N)ADD/SUB.
FM(N)ADD/SUB instructions count twice as they perform multiple
calculations per element.
- fp_arith_inst_retired.128b_packed_double
- Number of SSE/AVX computational 128-bit packed double precision
floating-point instructions retired. Each count represents 2 computations.
Applies to SSE* and AVX* packed double precision floating-point
instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and
FM(N)ADD/SUB instructions count twice as they perform multiple
calculations per element.
- fp_arith_inst_retired.128b_packed_single
- Number of SSE/AVX computational 128-bit packed single precision
floating-point instructions retired. Each count represents 4 computations.
Applies to SSE* and AVX* packed single precision floating-point
instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP
and FM(N)ADD/SUB instructions count twice as they perform multiple
calculations per element.
- fp_arith_inst_retired.256b_packed_double
- Number of SSE/AVX computational 256-bit packed double precision
floating-point instructions retired. Each count represents 4 computations.
Applies to SSE* and AVX* packed double precision floating-point
instructions: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and
FM(N)ADD/SUB instructions count twice as they perform multiple
calculations per element.
- fp_arith_inst_retired.double
- Number of SSE/AVX computational double precision floating-point
  instructions retired. Applies to SSE* and AVX* scalar, double and single
precision floating-point: ADD SUB MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB.
DPP and FM(N)ADD/SUB instructions count twice as they perform multiple
  calculations per element.
- fp_arith_inst_retired.256b_packed_single
- Number of SSE/AVX computational 256-bit packed single precision
floating-point instructions retired. Each count represents 8 computations.
Applies to SSE* and AVX* packed single precision floating-point
instructions: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP FM(N)ADD/SUB. DPP
and FM(N)ADD/SUB instructions count twice as they perform multiple
calculations per element.
- fp_arith_inst_retired.single
- Number of SSE/AVX computational single precision floating-point
  instructions retired. Applies to SSE* and AVX* scalar, double and single
precision floating-point: ADD SUB MUL DIV MIN MAX RCP RSQRT SQRT DPP
FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they
  perform multiple calculations per element.
- fp_arith_inst_retired.packed
- Number of SSE/AVX computational packed floating-point instructions
retired. Applies to SSE* and AVX*, packed, double and single precision
floating-point: ADD SUB MUL DIV MIN MAX RSQRT RCP SQRT DPP FM(N)ADD/SUB.
DPP and FM(N)ADD/SUB instructions count twice as they perform multiple
calculations per element.
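Because each of these events counts an instruction once regardless of vector width, estimating retired FLOPs means weighting each event by its "Each count represents N computations" factor from the descriptions above. A sketch (the helper name and sample counts are illustrative):

```python
# FLOPs contributed per count of each event, per the descriptions above.
FLOPS_PER_COUNT = {
    "fp_arith_inst_retired.scalar_single": 1,
    "fp_arith_inst_retired.scalar_double": 1,
    "fp_arith_inst_retired.128b_packed_single": 4,
    "fp_arith_inst_retired.128b_packed_double": 2,
    "fp_arith_inst_retired.256b_packed_single": 8,
    "fp_arith_inst_retired.256b_packed_double": 4,
}

def retired_flops(counts):
    """Weighted sum of FP events; FMA already counts twice within the events."""
    return sum(FLOPS_PER_COUNT[e] * n for e, n in counts.items() if e in FLOPS_PER_COUNT)

demo = {
    "fp_arith_inst_retired.256b_packed_double": 1_000_000,  # 4 FLOPs per count
    "fp_arith_inst_retired.scalar_double": 500_000,         # 1 FLOP per count
}
print(retired_flops(demo))  # prints 4500000
```

Dividing the result by elapsed seconds gives an approximate FLOP/s figure for the measured interval.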
- hle_retired.start
- Number of times we entered an HLE region; does not count nested
  transactions.
- hle_retired.commit
- Number of times HLE commit succeeded.
- hle_retired.aborted
- Number of times HLE abort was triggered (PEBS).
- hle_retired.aborted_misc1
- Number of times an HLE abort was attributed to a Memory condition (See
TSX_Memory event for additional details).
- hle_retired.aborted_misc2
- Number of times the TSX watchdog signaled an HLE abort.
- hle_retired.aborted_misc3
- Number of times a disallowed operation caused an HLE abort.
- hle_retired.aborted_misc4
- Number of times HLE caused a fault.
- hle_retired.aborted_misc5
- Number of times HLE aborted and was not due to the abort conditions in
subevents 3-6.
- rtm_retired.start
- Number of times we entered an RTM region; does not count nested
  transactions.
- rtm_retired.commit
- Number of times RTM commit succeeded.
- rtm_retired.aborted
- Number of times RTM abort was triggered (PEBS).
- rtm_retired.aborted_misc1
- Number of times an RTM abort was attributed to a Memory condition (See
TSX_Memory event for additional details).
- rtm_retired.aborted_misc2
- Number of times the TSX watchdog signaled an RTM abort.
- rtm_retired.aborted_misc3
- Number of times a disallowed operation caused an RTM abort.
- rtm_retired.aborted_misc4
- Number of times RTM caused a fault.
- rtm_retired.aborted_misc5
- Number of times RTM aborted and was not due to the abort conditions in
subevents 3-6.
- fp_assist.x87_output
- This event counts the number of x87 floating point (FP) micro-code assists
(numeric overflow/underflow, inexact result) when the output value
(destination register) is invalid.
- fp_assist.x87_input
- This event counts x87 floating point (FP) micro-code assists (invalid
operation, denormal operand, SNaN operand) when the input value (one of
the source operands to an FP instruction) is invalid.
- fp_assist.simd_output
- This event counts the number of SSE* floating point (FP) micro-code assists
(numeric overflow/underflow) when the output value (destination register)
is invalid. Counting covers only cases involving penalties that require
micro-code assist intervention.
- fp_assist.simd_input
- This event counts any input SSE* FP assist - invalid operation, denormal
operand, dividing by zero, SNaN operand. Counting includes only cases
involving penalties that required micro-code assist intervention.
- fp_assist.any
- This event counts cycles with any input and output SSE or x87 FP assist.
If an input and output assist are detected on the same cycle the event
increments by 1.
- rob_misc_events.lbr_inserts
- This event counts cases of saving new LBR records by hardware. This
assumes proper enabling of LBRs and takes into account LBR filtering done
by the LBR_SELECT register.
- mem_uops_retired.stlb_miss_loads
- This is a precise version (that is, uses PEBS) of the event that counts
load uops with true STLB miss retired to the architected path. True STLB
miss is an uop triggering page walk that gets completed without blocks,
and later gets retired. This page walk can end up with or without a
fault.
- mem_uops_retired.stlb_miss_stores
- This is a precise version (that is, uses PEBS) of the event that counts
  store uops with true STLB miss retired to the architected path. True STLB miss
is an uop triggering page walk that gets completed without blocks, and
later gets retired. This page walk can end up with or without a
fault.
- mem_uops_retired.lock_loads
- This is a precise version (that is, uses PEBS) of the event that counts
load uops with locked access retired to the architected path.
The following errata may apply to this: BDM35
- mem_uops_retired.split_loads
- This is a precise version (that is, uses PEBS) of the event that counts
  line-split load uops retired to the architected path. A line split is
  across a 64-byte cache line, which includes a page split (4K).
- mem_uops_retired.split_stores
- This is a precise version (that is, uses PEBS) of the event that counts
  line-split store uops retired to the architected path. A line split is
  across a 64-byte cache line, which includes a page split (4K).
- mem_uops_retired.all_loads
- This is a precise version (that is, uses PEBS) of the event that counts
load uops retired to the architected path with a filter on bits 0 and 1
  applied. Note: This event counts AVX-256bit load/store double-pump memory
uops as a single uop at retirement. This event also counts SW
prefetches.
- mem_uops_retired.all_stores
- This is a precise version (that is, uses PEBS) of the event that counts
store uops retired to the architected path with a filter on bits 0 and 1
  applied. Note: This event counts AVX-256bit load/store double-pump memory
uops as a single uop at retirement.
- mem_load_uops_retired.l1_hit
- This is a precise version (that is, uses PEBS) of the event that counts
  retired load uops which data sources were hits in the nearest-level (L1)
cache. Note: Only two data-sources of L1/FB are applicable for AVX-256bit
even though the corresponding AVX load could be serviced by a deeper level
in the memory hierarchy. Data source is reported for the Low-half load.
This event also counts SW prefetches independent of the actual data
source.
- mem_load_uops_retired.l2_hit
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were hits in the mid-level (L2)
cache.
The following errata may apply to this: BDM35
- mem_load_uops_retired.l3_hit
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were data hits in the last-level (L3)
cache without snoops required.
The following errata may apply to this: BDM100
- mem_load_uops_retired.l1_miss
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were misses in the nearest-level (L1)
cache. Counting excludes unknown and UC data source.
- mem_load_uops_retired.l2_miss
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were misses in the mid-level (L2)
cache. Counting excludes unknown and UC data source.
- mem_load_uops_retired.l3_miss
- Miss in last-level (L3) cache. Excludes Unknown data-source. (Precise
Event - PEBS).
The following errata may apply to this: BDM100, BDE70
- mem_load_uops_retired.hit_lfb
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were load uops missed L1 but hit a
fill buffer due to a preceding miss to the same cache line with the data
not ready. Note: Only two data-sources of L1/FB are applicable for
AVX-256bit even though the corresponding AVX load could be serviced by a
deeper level in the memory hierarchy. Data source is reported for the
Low-half load.
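A common derived view of the load-source events above is a per-level breakdown of retired loads. A minimal sketch with hypothetical counts, assuming L1 hit, fill-buffer hit, and L1 miss roughly partition the retired loads of interest:

```python
def l1_load_breakdown(c):
    """Fractions of retired load uops by data source: (L1 hit, fill buffer, L1 miss)."""
    h = c["mem_load_uops_retired.l1_hit"]
    f = c["mem_load_uops_retired.hit_lfb"]
    m = c["mem_load_uops_retired.l1_miss"]
    total = h + f + m
    return h / total, f / total, m / total

counts = {
    "mem_load_uops_retired.l1_hit": 900,
    "mem_load_uops_retired.hit_lfb": 60,
    "mem_load_uops_retired.l1_miss": 40,
}
print(l1_load_breakdown(counts))  # prints (0.9, 0.06, 0.04)
```

Whether fill-buffer hits are treated as "near hits" or grouped with misses is a reporting choice; they missed L1 but were absorbed by an in-flight fill for the same line.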
- mem_load_uops_l3_hit_retired.xsnp_miss
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were L3 Hit and a cross-core snoop
missed in the on-pkg core cache.
The following errata may apply to this: BDM100
- mem_load_uops_l3_hit_retired.xsnp_hit
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were L3 hit and a cross-core snoop
hit in the on-pkg core cache.
The following errata may apply to this: BDM100
- mem_load_uops_l3_hit_retired.xsnp_hitm
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were HitM responses from a core on
same socket (shared L3).
The following errata may apply to this: BDM100
- mem_load_uops_l3_hit_retired.xsnp_none
- This is a precise version (that is, uses PEBS) of the event that counts
retired load uops which data sources were hits in the last-level (L3)
cache without snoops required.
The following errata may apply to this: BDM100
- mem_load_uops_l3_miss_retired.local_dram
- This event counts retired load uops where the data came from local DRAM.
This does not include hardware prefetches. This is a precise event.
The following errata may apply to this: BDE70, BDM100
- mem_load_uops_l3_miss_retired.remote_dram
- Retired load uop whose Data Source was: remote DRAM either Snoop not
needed or Snoop Miss (RspI) (Precise Event)
The following errata may apply to this: BDE70
- mem_load_uops_l3_miss_retired.remote_hitm
- Retired load uop whose Data Source was: Remote cache HITM (Precise Event)
The following errata may apply to this: BDE70
- mem_load_uops_l3_miss_retired.remote_fwd
- Retired load uop whose Data Source was: forwarded from remote cache
(Precise Event)
The following errata may apply to this: BDE70
- baclears.any
- Counts the total number of times the front end is resteered, mainly when
  the BPU cannot provide a correct prediction and this is corrected by other
  branch handling mechanisms at the front end.
- l2_trans.demand_data_rd
- This event counts Demand Data Read requests that access L2 cache,
including rejects.
- l2_trans.rfo
- This event counts Read for Ownership (RFO) requests that access L2
cache.
- l2_trans.code_rd
- This event counts the number of L2 cache accesses when fetching
instructions.
- l2_trans.all_pf
- This event counts L2 or L3 HW prefetches that access L2 cache including
rejects.
- l2_trans.l1d_wb
- This event counts L1D writebacks that access L2 cache.
- l2_trans.l2_fill
- This event counts L2 fill requests that access L2 cache.
- l2_trans.l2_wb
- This event counts L2 writebacks that access L2 cache.
- l2_trans.all_requests
- This event counts transactions that access the L2 pipe including snoops,
pagewalks, and so on.
- l2_lines_in.i
- This event counts the number of L2 cache lines in the Invalid (I) state
filling the L2. Counting does not cover rejects.
- l2_lines_in.s
- This event counts the number of L2 cache lines in the Shared state filling
the L2. Counting does not cover rejects.
- l2_lines_in.e
- This event counts the number of L2 cache lines in the Exclusive state
filling the L2. Counting does not cover rejects.
- l2_lines_in.all
- This event counts the number of L2 cache lines filling the L2. Counting
does not cover rejects.
- l2_lines_out.demand_clean
- Clean L2 cache lines evicted by demand.
- sq_misc.split_lock
- This event counts the number of split locks in the super queue.