Ten Year Anniversary of Core 2 Duo and Conroe: Moore’s Law is Dead, Long Live Moore’s Law
by Ian Cutress on July 27, 2016 10:30 AM EST
Core: Decoding, and Two Goes Into One
The role of the decoder is to decipher the incoming instruction (opcode, addresses) and translate the 1-15 byte variable length x86 instruction into fixed-length, RISC-like instructions that are easier to schedule and execute: micro-ops. The Core microarchitecture has four decoders – three simple and one complex. A simple decoder translates an instruction into a single micro-op, while the complex decoder can convert one instruction into up to four micro-ops (instructions that need more than that are handed to a microcode sequencer). It’s worth noting that simple decoders use less power and take up less die area than complex decoders. This style of pre-fetch and decode occurs in all modern x86 designs; by comparison, AMD’s K8 design has three complex decoders.
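To make that arrangement concrete, here is a minimal sketch in Python – my own simplified model, not Intel’s actual steering logic – of how a window of fetched instructions might be split between the one complex and three simple decoders in a single cycle:

```python
# Simplified, hypothetical model of Core's decode stage as described above:
# one complex decoder (handles instructions of up to 4 micro-ops) and
# three simple decoders (instructions that map to a single micro-op).
# Instructions needing more than 4 micro-ops fall back to the microcode sequencer.

def decode_one_cycle(window):
    """window: list of micro-op counts, one per fetched x86 instruction,
    in program order. Returns how many instructions decode this cycle."""
    complex_free = 1   # one complex decoder
    simple_free = 3    # three simple decoders
    decoded = 0
    for uops in window:
        if uops > 4:
            break                  # microcode sequencer takes over; decode stalls
        if uops == 1 and simple_free:
            simple_free -= 1       # single micro-op: any simple decoder will do
        elif complex_free:
            complex_free -= 1      # multi micro-op: needs the complex decoder
        else:
            break                  # no suitable decoder left this cycle
        decoded += 1
    return decoded

print(decode_one_cycle([3, 1, 1, 1]))  # -> 4: one complex + three simple
print(decode_one_cycle([2, 2, 1, 1]))  # -> 1: second multi-uop op must wait a cycle
```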
The Core design came with two techniques to assist this part of the core. The first is macro-op fusion. When two common x86 instructions (or macro-ops) can be decoded together, they are combined to increase throughput, allowing one micro-op to hold two instructions. The upshot is that the four decoders can decode up to five instructions in one cycle.
According to Intel at the time, around 20% of the macro-ops in a typical x86 program can be fused in this way. With two instructions held in one micro-op, there is more decode bandwidth available for other instructions further down the pipe, and less space is taken up in the various buffers and the out-of-order (OoO) queue. Fusing 1-in-10 instructions with a neighbour means ten instructions occupy only nine micro-op slots, which works out to roughly an 11% uptick in performance for Core. It’s worth noting that macro-op fusion (and macro-op caches) has since become an integral part of Intel’s microarchitectures (and other x86 microarchitectures) as a result.
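As a quick sanity check on those figures – using only the numbers quoted above, nothing further from Intel – the arithmetic works out as follows:

```python
# Rough arithmetic behind the macro-op fusion claims above.
# If ~20% of instructions participate in fusion, then roughly 1 in 10
# instructions "disappears" into the micro-op of its partner.

fused_fraction = 0.10                    # 1 in 10 instructions rides along with another
slots_needed = 1.0 - fused_fraction      # average decode slots per instruction
speedup = 1.0 / slots_needed - 1.0
print(f"~{speedup:.1%} more instructions through the same decoders")  # ~11.1%

# Peak case: four decoders, one of which decodes a fused macro-op pair,
# so five x86 instructions enter the pipeline in a single cycle.
decoders = 4
fused_pairs_this_cycle = 1
print(decoders + fused_pairs_this_cycle)  # 5
```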
The second technique is a specific fusion of instructions related to memory operands rather than registers. An instruction that adds a register to a value in memory (a read-modify-write such as ADD [mem], EAX) would, under RISC rules, typically require three micro-ops:
Pseudo-code | Instructions |
read contents of memory to register2 | MOV EBX, [mem] |
add register1 to register2 | ADD EBX, EAX |
store result of register2 back to memory | MOV [mem], EBX |
However, since Banias (and later Yonah), and subsequently in Core, the first two of these micro-ops can be fused. This is called micro-op fusion. The pre-decode stage recognizes that these micro-ops can be kept together, using smarter but larger circuitry, without lowering the clock frequency. Again, op fusion helps in more ways than one – more throughput, less pressure on buffers, higher efficiency and better performance. Beyond this simple example of adding to a memory operand, micro-op fusion plays heavily into SSE/SSE2 operations as well, and this is primarily where Core had an advantage over AMD’s K8.
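As a rough illustration of the bookkeeping win (hypothetical micro-op names of my own, not Intel’s internal encoding), the read-modify-write sequence from the table above occupies three tracked micro-ops without fusion and only two with it:

```python
# Hypothetical micro-op breakdown of "add a register to a memory operand"
# (the pseudo-code table above), with and without micro-op fusion.

without_fusion = ["load  tmp <- [mem]",
                  "add   tmp <- tmp + EAX",
                  "store [mem] <- tmp"]

# Micro-op fusion keeps the load and the dependent add together as a single
# entry in the decode and OoO structures; they still execute as two operations.
with_fusion = ["load+add tmp <- [mem] + EAX",
               "store    [mem] <- tmp"]

print(len(without_fusion), "tracked micro-ops without fusion")  # 3
print(len(with_fusion), "tracked micro-ops with fusion")        # 2
```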
AMD’s definitions of macro-ops and micro-ops differ from Intel’s, which makes it a little confusing when comparing the two.
However, as mentioned above, AMD’s K8 has three complex decoders compared to Core’s three simple plus one complex arrangement. We also mentioned that simple decoders are smaller, use less power, and spit out one micro-op per incoming variable length instruction. AMD’s K8 decoders, on the other hand, are dual purpose: each can perform DirectPath decoding, which is roughly analogous to Intel’s simple decoding, or VectorPath decoding, which is roughly analogous to Intel’s complex decoding. In almost all circumstances DirectPath is preferred, as it produces fewer ops, and it turns out that most instructions go down the DirectPath anyway – including floating point and SSE instructions in K8 – resulting in fewer ops than in K7.
While extremely powerful in what they do, AMD’s K8 decoders have two limitations compared to Intel’s Core. First, AMD cannot perform Intel’s version of op fusion, so where Intel can pack a load and an execute operation (common in SSE code) into one fused op to increase decode throughput, AMD has to rely on two separate instructions. Second, by virtue of having more decoders (four versus three), Intel can decode more per cycle, and that lead widens with macro-op fusion – where Intel can decode five instructions per cycle, AMD is limited to just three.
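To get a feel for that decode-width argument – a back-of-the-envelope model of my own, ignoring fetch limits and instruction mix – the cycles needed to push a burst of simple instructions through each front end look like this:

```python
# Simplified comparison of decode cycles for a burst of n one-micro-op
# x86 instructions, assuming Intel fuses one pair per cycle (5-wide peak)
# and K8 decodes three DirectPath instructions per cycle.
import math

def cycles_core(n, fusion=True):
    per_cycle = 5 if fusion else 4   # 4 decoders, +1 instruction via macro-op fusion
    return math.ceil(n / per_cycle)

def cycles_k8(n):
    return math.ceil(n / 3)          # three decodes per cycle

n = 30
print(cycles_core(n), "cycles on Core")  # 6
print(cycles_k8(n), "cycles on K8")      # 10
```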
As Johan pointed out in the original article, this makes it hard for AMD’s K8 to have had an advantage here. It would require fetching three instructions in a row that each need Intel’s complex decoder – each decoding into multiple micro-ops – yet without being complex enough to kick in the microcode sequencer. Since the most frequent x86 instructions map to a single Intel micro-op, that situation is pretty unlikely.
158 Comments
Dobson123 - Wednesday, July 27, 2016 - link
I'm getting old.
3ogdy - Wednesday, July 27, 2016 - link
That's what I thought about when I read "TEN year anniversary". It certainly doesn't feel like it was yesterday... but it certainly feels as old as "last month" is in my mind and that's mostly thanks to i7s, FXs, IPS, SSDs and some other things that proved to be more or less of a landmark in tech history.
close - Thursday, July 28, 2016 - link
I just realized I have an old HP desktop with a C2D E6400 that will turn 10 in a few months and it's still humming along nicely every day. It ran XP until this May when I switched it to Win10 (and a brand new SSD). The kind of performance it offers in day to day work even to this day amazes me and sometimes it even makes me wonder why people with very basic workloads would buy more expensive stuff than this.
junky77 - Thursday, July 28, 2016 - link
marketing, misinformation, lies and the need to feel secure and have something "better"
Solandri - Friday, July 29, 2016 - link
How do you think those of us old enough to remember the 6800 and 8088 feel?
JimmiG - Sunday, July 31, 2016 - link
Well my first computer had a 6510 running at 1 MHz. Funnily enough, I never owned a Core 2 CPU. I had an AM2+ motherboard and I went the route of the Athlon X2, Phenom and then Phenom II before finally switching to Intel with a Haswell i7.
Core 2 really changed the CPU landscape. For the first time in several years, Intel firmly beat AMD in efficiency and raw performance, something AMD has still not recovered from.
oynaz - Friday, August 19, 2016 - link
We miss our C64s and Amigas
ArtShapiro - Tuesday, August 23, 2016 - link
What about those of us who encountered vacuum tube computers?
AndrewJacksonZA - Wednesday, July 27, 2016 - link
I'm still using my E6750... :-)
just4U - Thursday, July 28, 2016 - link
I just retired my dad's E6750. It was actually still trucking along in an Asus Nvidia board that I had figured would be dodgy because the huge aluminum heatsink on the chipset was just nasty. Made the whole system a heatscore. Damned if that thing didn't last right into 2016. Surprised the hell out of me.