37 Comments

  • plopke - Thursday, June 25, 2020 - link

    the size of those dies :O.
    I am very skeptical about the one-architecture approach. They're putting a lot of categories of workloads on the same architecture, and I'm worried it will end up being merely good enough at many of them.
  • Yojimbo - Thursday, June 25, 2020 - link

    The one architecture approach is natural. A GPU is like a general purpose parallel processor. The underlying architecture can be applied to different domains efficiently. When they say one architecture they don't mean they won't make changes in individual implementations to differentiate the chips. For example, FP64 vs. no FP64, RT cores vs. no RT cores, cache sizes, which SIPs are included such as encoders and decoders, number and type of memory controllers, etc. With AMD, the CDNA/RDNA is most likely still one architecture, as Intel and NVIDIA would define it. The basic underlying architecture is unified and moves together generation by generation.
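    As a loose illustration of that "one architecture, many implementations" idea, here's a minimal sketch; every name and number below is hypothetical, not an actual Intel or NVIDIA configuration:

        # Hypothetical sketch: one base architecture, differentiated per market segment.
        from dataclasses import dataclass

        @dataclass
        class GpuConfig:
            compute_units: int
            fp64_ratio: float   # FP64 throughput as a fraction of FP32
            rt_cores: bool      # ray-tracing hardware present?
            l2_cache_mb: int
            memory: str         # e.g. HBM for HPC, GDDR for consumer

        # Same underlying ISA and execution units; only the knobs differ.
        hpc_part      = GpuConfig(compute_units=512, fp64_ratio=1/2,  rt_cores=False, l2_cache_mb=40, memory="HBM2e")
        consumer_part = GpuConfig(compute_units=96,  fp64_ratio=1/32, rt_cores=True,  l2_cache_mb=4,  memory="GDDR6")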
  • Duraz0rz - Thursday, June 25, 2020 - link

    That's how CPU/GPU design has been for a while now. Every generation uses a single architecture, just scaled differently depending on the application.
  • mode_13h - Thursday, June 25, 2020 - link

    Nvidia has differences between their HPC GPUs and consumer models. Intel has differences between their server and consumer cores (things like interconnect and AVX-512, at least). Even AMD has split their HPC GPUs from their consumer architecture (CDNA vs. RDNA).

    So, I wouldn't say that everyone simply tweaks a knob to instantiate more or fewer blocks to address any given market segment. Of course, there are commonalities in just about every case I mentioned besides AMD's GPUs.
  • sing_electric - Monday, June 29, 2020 - link

    Ironically, Nvidia said that Ampere will be both a compute and graphics solution, just as AMD is dividing theirs up.

    Within an architecture, it's normal to have different chips to hit different size, price and feature points; it's normal to, say, offer support for different memory architectures, or to increase some pipelines while decreasing others.

    Plus, even when we're talking about the same silicon there can be real differences: the Radeon VII and Instinct MI50 are the same chip, but with some features turned on/off (no ECC on the VII, for example) and double-precision FP performance severely limited (though AMD did relent, going from 1/16 of the MI50's rate to 1/4 with the flip of a switch).
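    To put rough numbers on that rate change, here's a back-of-the-envelope sketch; the shader count and clock below are assumptions for illustration, not official figures:

        # Rough sketch of what an FP64 rate change means in peak-throughput terms.
        # Assumed figures: ~3840 shaders, ~1.75 GHz boost clock.
        shaders = 3840
        boost_ghz = 1.75
        fp32_tflops = shaders * 2 * boost_ghz / 1000  # fused multiply-add = 2 ops/clock

        for label, ratio in [("capped at 1/16", 1 / 16), ("unlocked to 1/4", 1 / 4)]:
            print(f"{label}: ~{fp32_tflops * ratio:.2f} TFLOPS FP64")

        # capped at 1/16: ~0.84 TFLOPS FP64
        # unlocked to 1/4: ~3.36 TFLOPS FP64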
  • DanNeely - Thursday, June 25, 2020 - link

    Those are the sizes of the chips/IHSes; the actual dies under them are going to be smaller. The package sizes for even the 1x variant are large enough that I'm assuming much of the area under the IHS is taken up by HBM stacks.
  • MikeMurphy - Thursday, June 25, 2020 - link

    @DanNeely I think you are correct: the HBM stacks would be placed under that heat spreader, which contributes to the large size.
  • JKflipflop98 - Sunday, June 28, 2020 - link

    They're still huge dies. They're the biggest you can fab with an EUV litho reticle.
  • Santoval - Sunday, June 28, 2020 - link

    EUV has nothing to do with this. Intel will introduce EUV at 7nm. These are still fabbed at 10nm; some of them might even be fabbed at 14nm. Intel's 7nm has not taped out yet, so there are no engineering samples yet.
  • brucethemoose - Thursday, June 25, 2020 - link

    The "peta op" tweet almost certainly implies that there's a focus AI performance. And that strategy worked pretty well for Volta/Ampere.
  • IntelUser2000 - Thursday, June 25, 2020 - link

    Those are packages, not dies.

    The smallest one is about the size of the socket on your desktop i7 motherboard.
  • jbrukardt - Thursday, June 25, 2020 - link

    It's chiplets, so not nearly as scary as the big 800 mm2 Phi days.
  • JayNor - Friday, June 26, 2020 - link

    The tag on the board in the lab says DG1.
  • yeeeeman - Friday, June 26, 2020 - link

    It is not one architecture. There are three: Xe-LP for low-power graphics uses, Xe-HP for gaming and workstations, and Xe-HPC for GPGPU, like CDNA and the Ampere A100.
  • extide - Friday, June 26, 2020 - link

    Yeah, but it's really not literally just one architecture. It's one base architecture tuned to three different markets -- which is pretty much exactly how Nvidia does it. That could mean different ratios/balances of the various units in the HPC/HP/LP versions.

    I just hope Intel knows they need to make a splash here and freaking brings it. I am tired of the Nvidia monopoly we've had for a while. (On that matter, I hope AMD knocks the Navi 2 cards out of the park as well!)
  • domboy - Thursday, June 25, 2020 - link

    "BFP - big 'fabulous' package". Riiight.... I think we all know what they really say off social media, and it's not 'fabulous'... ;)
  • Deicidium369 - Thursday, June 25, 2020 - link

    The F in SpaceX's BFR was Falcon, right?
  • quadra - Thursday, June 25, 2020 - link

    I saw the code name and chuckled. Sure, I love the beautiful Ponte Vecchio over the River Arno in Florence, Italy, especially as a student of art history.

    But I was thinking of Intel’s code names: Sandy Bridge, Ivy Bridge...do you know what the literal translation of Ponte Vecchio is from the Italian?

    Old Bridge.

    :)
  • Rookierookie - Sunday, June 28, 2020 - link

    Now you know where all that R&D budget went.
  • rrinker - Thursday, June 25, 2020 - link

    Two references to Osborning in as many days!
  • Deicidium369 - Thursday, June 25, 2020 - link

    IKR - all walking along and BAM! bringing Osborning into the lexicon.
  • edzieba - Thursday, June 25, 2020 - link

    A lot of publications talk about how much Intel has been saying about Xe, but... very little of that seems to trickle down to the public.
    We know vanishingly little about the Xe architecture as it will be seen in the consumer world (beyond Xe-HPC, which we're explicitly NOT getting as consumers, and even Xe-HPC is vague at best), nothing about what is inside the 'execution units', and nothing whatsoever about how the consumer devices will be stratified or targeted. We basically know "Xe is coming, and it is made up of some components".
  • mode_13h - Thursday, June 25, 2020 - link

    Intel's Linux GPU drivers are in-tree (meaning open source, obviously) and upstreamed often 6 months or more in advance of official silicon launches. Phoronix has pretty good coverage of their changes, at least at the level of ISA, codecs, graphics drivers, tools, and oneAPI.

    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...
    https://www.phoronix.com/scan.php?page=news_item&a...

    https://www.phoronix.com/scan.php?page=search&...

    Of course, other than what can be gleaned from PCIe IDs, you're not going to get details about specific products and specs, but let's be realistic--nobody is disclosing that stuff in advance.
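    If you want to glean those PCIe IDs yourself, here's a minimal sketch; it assumes a local checkout of the Linux kernel sources at ./linux and the INTEL_VGA_DEVICE macro layout in the i915 header, both of which may differ by kernel version:

        # Hypothetical sketch: list PCI device IDs declared in the i915 driver header.
        import re
        from pathlib import Path

        header = Path("linux/include/drm/i915_pciids.h").read_text()

        # INTEL_VGA_DEVICE(0xXXXX, info) macros carry the device IDs.
        ids = sorted(set(re.findall(r"INTEL_VGA_DEVICE\((0x[0-9A-Fa-f]{4})", header)))
        print(f"{len(ids)} device IDs found, e.g. {ids[:5]}")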
  • lmcd - Friday, June 26, 2020 - link

    Do any of those indicate whether consumer models will still have access to GPU virtualization?
  • Oxford Guy - Thursday, June 25, 2020 - link

    Let's hope it's better than the Atari XE keyboard was.
  • yeeeeman - Friday, June 26, 2020 - link

    I think this is a better use of chiplets than in CPUs. GPUs, being naturally parallel computing machines, will benefit much more from chiplets than CPUs will.
  • Oxford Guy - Saturday, June 27, 2020 - link

    It depends on how much of a problem latency will be. That is the main issue with chiplets.
  • mode_13h - Saturday, June 27, 2020 - link

    Uh, for graphics, locality is the main issue. GPUs are very good at hiding latency, but graphics workloads tend to have fairly scattered data access patterns. So, you don't want to introduce bottlenecks by having a NUMA architecture.
  • PeterCollier - Friday, June 26, 2020 - link

    What happened to Larrabee??⁉️
  • ilt24 - Friday, June 26, 2020 - link

    @PeterCollier ... It became Phi

    https://en.wikipedia.org/wiki/Xeon_Phi
  • mode_13h - Friday, June 26, 2020 - link

    And then it died.
  • extide - Sunday, June 28, 2020 - link

    Yeah, basically Intel added AVX-512 to their regular Xeons and made it redundant.
  • xrror - Friday, June 26, 2020 - link

    Anyone else here getting a bad feeling that Raja is going to do a repeat of what he did at AMD?

    His designs really do seem to be compute first, graphics second.

    He got lucky that the cryptocurrency bubble saved his bacon when at AMD.

    But in everything he keeps showing about Xe at Intel, he really doesn't seem very excited about actual graphics performance - it's all about future AI scaling and compute.

    And now he's showing these huge packages that I honestly don't see ever ending up on any affordable video card. They are technically impressive, but do you seriously see these being mass-produced for $300 video cards?

    Lastly, I fear that if he keeps taking longer and longer, the whole thing will languish like Larrabee/Phi did.
  • Oxford Guy - Saturday, June 27, 2020 - link

    Well, let's look at the state of Intel's business.

    1. Can't compete with AMD in terms of the "low-end" desktop enthusiast platform due to the 10nm flop and the lack of a good post-Skylake architecture.

    2. Is getting lots of sales from enterprise thanks to COVID.

    Which would you target right now if you were Intel?
  • mode_13h - Saturday, June 27, 2020 - link

    I don't think crypto saved his bacon. I think when Vega launched late and wasn't competitive in either graphics or AI, the writing was on the wall.
  • stadisticado - Saturday, June 27, 2020 - link

    The products in this article are for HPC/AI use cases. The closest these will get to being used by consumers is if you subscribe to Stadia or similar type service. Intel hasn't really talked specifically about client GPUs as yet.
  • TheJian - Thursday, July 2, 2020 - link

    "Absent from the discrete GPU space for over 20 years, this year Intel is set to see the first fruits from their labors to re-enter that market."

    Uh, Larrabee. It was supposed to kill Nvidia and AMD, but it failed to even convince people because you had to learn to program all over again (which is almost impossible to get programmers to do). You act like that was NOT an attempt at a GPU. It is still alive in many forms; it just couldn't convince game devs it was worth the time. But don't act like they didn't already attempt a comeback that failed in gaming.

    It still remains to be seen if they have a driver team that can keep up with either of the other giants, not to mention Apple is coming with what I suspect is a better driver team than Intel's (I'd bet on that) and ages of GPU experience in mobile gaming.

    Look at Win10. They gave it away for years (there's still a loophole today), and it still sucks, people still hate it, and given a choice they'd leave in seconds. Most people stuck with it simply don't know how to change to something else. So you buy a Win10 PC (like many of my parents' friends, all retired of course) and end up ONLY using your phone to communicate with the world...ROFL. Windows 10 is killing PC users. It doesn't help that every patch breaks at least 4 versions of the OS...ROFL. That, to me, is Larrabee all over again - Larrabee was just the hardware version. The difference is that you don't have two other painless options if you drop Windows, whereas the video card had no chance to win vs. regular NV/AMD parts that required no new learning.

    Windows 10 is just lucky that Linux still sucks in gaming and most apps vs. Windows (1 choice vs. 10 on Windows, usually). If I were NV, I'd be pouring millions into Linux/Android gaming and moving to my own PC platform, sans WINTEL completely. Apple is about to do it, sans everyone else. If they pull an Epic-like deal, giving away games for 5 years or something (they can afford 10-20 billion to dominate...LOL), don't act like it wouldn't attract millions of users and a huge base. If the phones are 5nm, even they will be decent gaming devices with Apple aiming at the GPU. Apple TV just got upgraded, and I'd bet on a new version yearly to advance you to death (how will MSFT/Sony respond to yearly hardware updates?). There are a number of ways to attack this with Apple's cash: cheap trade-ins for a few generations to get the whole base up to a certain level while leaving everyone else behind yearly, free games, etc., or a combo of tricks. Epic got 60m+ users in short order, and they are broke compared to Apple's level of shenanigans.

    Please re-release Win7 or XP and put DX12 or Vulkan in (heck, Vulkan already works on XP). Don't tell me you can't run it there...BS. Vulkan runs on pretty much anything, so does DX12 REALLY need Win10...NOPE. If you still can't admit the obvious, OK, kill DX12, go Vulkan, and make a better OS built for DESKTOPS only. We are sick of your mobile OS called Win10. People are still waiting for a patch for the current patch that breaks all current versions (4? almost all? whatever, Win10 sucks, EVERY version). We have been beta testers for years on this crap OS. I digress... Apple might win our family (all our homes) if Win11 doesn't come soon and isn't radically OLDer in approach: at most one service pack a year or two, no new features for 3 years. Then it won't be broken for its whole life, because you'd actually have time to fix things.
