NVIDIA Announces GeForce RTX 2050, MX570, and MX550 For Laptops: 2022's Entry Level GeForce
by Ryan Smith on December 17, 2021 2:00 PM EST

NVIDIA this morning has made an unexpected news drop with the announcement of a trio of new GeForce laptop GPUs. Joining the GeForce family next year will be a new RTX 2000 series configuration, the GeForce RTX 2050, as well as an update to the MX lineup with the addition of the GeForce MX550 and GeForce MX570. The combination of parts effectively provides a refresh to the low-end/entry-level segment of NVIDIA’s laptop product stack, overhauling these products in time for new laptops to be released next year.
While the announcement of these parts makes sense in context, the timing of this announcement is a bit odd, to say the least. NVIDIA is dropping this news not only on a Friday (a day normally reserved for bad news), but the last working Friday of the year at that – most of the tech industry has the remaining two Fridays off. So make of that what you will.
Still, with CES 2022 starting bright and early on January 3rd, 2022, there’s little time left to announce anything that isn’t either being saved for the show itself or needs to go out ahead of it. So it looks like NVIDIA is laying the groundwork for some partner laptop announcements next month, as OEMs will want new(er) laptop video card SKUs to pair with the new laptop CPUs that we’re expecting to come out in the first part of 2022.
More curious, perhaps, is that NVIDIA’s announcement today is for laptop parts that the company doesn’t expect to hit retail shelves until the Spring of 2022. It’s not unusual for OEM parts such as laptop GPUs to be announced a fair bit in advance, since laptops themselves tend to get announced ahead of time. However, as it’s still (barely) Fall – we haven’t even reached Winter yet – Spring of 2022 is still over 3 months off, so NVIDIA is announcing these low-end parts well in advance of when they’re expected to reach consumer hands. We’ll see how things play out at CES, but this may be a sign that the industry is going to jump the gun a bit and use the show to announce laptops that won’t be available until a few months later (CES is a fixed point in time; hardware dev cycles aren’t).
GeForce RTX 2050: Named By Turing, Powered by Ampere
Diving into the hardware itself, let’s start with the GeForce RTX 2050. The fastest of the new laptop accelerators announced today, it’s also the most curious, for two reasons. First and foremost, this is the second time this month that NVIDIA has issued a new GeForce RTX 2000 series SKU, following the 12GB desktop RTX 2060 last week. Secondly, this is a very odd configuration, as it’s fully RTX-enabled, RT cores and all; but NVIDIA never produced a low-end Turing TU10x chip with those features.
At this point we believe RTX 2050 to be based on TU106, given the feature set and RTX 2000 series branding.
Update: NVIDIA has since responded to us and informed us that, despite the RTX 2000 series naming, RTX 2050 is based on Ampere, not Turing. Specifically, GA107, the same GPU used in the RTX 3050 series.
From a high-level perspective, NVIDIA is clearly doing the tech industry equivalent of rummaging through the pantry and cooking up whatever is left over. The chip crunch means that NVIDIA is already producing every Ampere (GA10x) chip they can on Samsung’s 8nm process, so they are completely capacity constrained there. This means that NVIDIA is looking to salvage virtually every last die that they can, and even though RTX 3050 is already itself based on a less-than-fully-functional GA107 die, NVIDIA is now going one step further by creating a further cut-down GA107 SKU. Meanwhile this also helps to keep the low-end of NVIDIA's product stack competitive, both against AMD's dGPUs and the ever-increasing performance of integrated GPUs.
Much odder, however, is NVIDIA's decision to classify the new SKU as an RTX 2000 series part – a lineup that, for the previous 3 years, has been composed exclusively of Turing TU10x parts. NVIDIA has not provided an explanation for this decision (and I doubt they ever will), but for now it means that the RTX 20 series will live on for another generation in some fashion thanks to the inclusion of an Ampere-based product. And while this just made talking about NVIDIA's laptop GPU lineup a whole lot harder, it's hard to particularly chide NVIDIA here, since this is an Ampere part playing down a level (as it were), as opposed to the more common (and much loathed) tradition of rebadging older parts in order to satisfy OEM demands.
(But if we're taking bets on NVIDIA's rationale, my money is on the company not wanting a 40-class RTX part, i.e. RTX 3040)
NVIDIA GeForce RTX 20/30 Series Laptop Specifications

| | RTX 3060 Laptop GPU | RTX 3050 Ti Laptop GPU | RTX 3050 Laptop GPU | RTX 2050 Laptop GPU |
|---|---|---|---|---|
| CUDA Cores | 3840 | 2560 | 2048 | 2048 |
| ROPs | 48 | 32 | 32 | 32? |
| Boost Clock | 1283 - 1703MHz | 1035 - 1695MHz | 1057 - 1740MHz | 1155 - 1477MHz |
| Memory Clock | 14Gbps GDDR6 | 14Gbps GDDR6 | 14Gbps GDDR6 | 14Gbps GDDR6 |
| Memory Bus Width | 192-bit | 128-bit | 128-bit | 64-bit |
| VRAM | 6GB | 4GB | 4GB | 4GB |
| TDP Range | 60 - 115W | 35 - 80W | 35 - 80W | 30 - 45W |
| GPU | GA106 | GA107 | GA107 | GA107 |
| Architecture | Ampere | Ampere | Ampere | Ampere |
| Manufacturing Process | Samsung 8nm | Samsung 8nm | Samsung 8nm | Samsung 8nm |
| Launch Date | 01/26/2021 | 05/11/2021 | 05/11/2021 | Spring 2022 |
So what is the RTX 2050? We're essentially looking at an additional salvage bin of the GA107 GPU that is designed to occupy a slot below the Ampere-based RTX 3050 for Laptops. The RTX 2050 comes with 2048 CUDA cores – the same as the RTX 3050 – but is aimed at low-power configurations, so it has relatively low clockspeeds as a result. NVIDIA’s official figures for the boost clock are just 1155MHz to 1477MHz. On paper, this gives it a total throughput of between 4.7 TFLOPS and 6.0 TFLOPS.
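For those who want to check the math, peak FP32 throughput on Ampere works out to CUDA cores × 2 FLOPs (one FMA) per clock × clockspeed. A minimal sketch of that arithmetic – the helper function here is ours, not anything NVIDIA publishes:

```python
# Theoretical peak FP32 throughput for an Ampere SKU.
# Each CUDA core retires one FMA (2 FLOPs) per clock.
def peak_fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(f"{peak_fp32_tflops(2048, 1155):.1f} TFLOPS")  # ~4.7 at the low end
print(f"{peak_fp32_tflops(2048, 1477):.1f} TFLOPS")  # ~6.0 at the high end
```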
And being that this is an RTX-branded part, it is fully RTX enabled, as NVIDIA likes to put it. That means it ships with both tensor cores and ray tracing (RT) cores enabled, making the RTX 2050 the lowest-power/lowest-performing SKU to come with these features. In practice I have significant doubts that such a slow Ampere part can do ray tracing fast enough to be useful. However, the inclusion of the tensor cores should be good news for anyone looking to use DLSS to enhance sub-native resolution rendering.
And you’ll be doing a lot of lower-resolution gaming with the RTX 2050, as the biggest cutback to the underlying GA107 GPU is to its memory controllers. NVIDIA has disabled half of GA107's memory controllers, leaving it with a very narrow 64-bit memory bus. So while RTX 2050 has what we believe to be 32 ROPs to push pixels (the same as RTX 3050), it’s quite light on the memory bandwidth that would be needed to keep those ROPs fully fed. NVIDIA is using 14Gbps GDDR6 here, so the SKU gets (up to) 112GB/second of memory bandwidth, exactly half the bandwidth of the RTX 3050. Meanwhile the narrow memory bus means that laptop vendors can get away with a rather small physical implementation – they’d need just 2 GDDR6 chips to pair with the already fairly small GA107 GPU.
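The bandwidth arithmetic is simple enough to sketch out – effective data rate per pin times bus width, divided by 8 bits per byte (the function name below is our own):

```python
# Peak memory bandwidth: effective data rate (Gbps per pin) * bus width
# (pins) / 8 bits per byte = GB/sec.
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/sec."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(14, 64))   # RTX 2050: 112.0 GB/sec
print(peak_bandwidth_gbs(14, 128))  # RTX 3050: 224.0 GB/sec, twice as much
```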
The cut-down memory bus is also a big part of keeping the RTX 2050’s power consumption down. NVIDIA is aiming it at 30W to 45W configurations, which is close to their traditional power window for the MX series of laptop GPUs, and at the upper bound is about half of what RTX 3050 scales up to. Typically, these lower-power SKUs get paired with low-power CPUs (e.g. Intel’s H35 series), but given the chip shortage, I wouldn’t be surprised to see laptop vendors pair them with just about anything that needs a dGPU.
And for those pairings, laptop vendors will need to make sure they’re using a chip with an integrated GPU. According to NVIDIA’s specifications, they have disabled all of the chip’s display outputs, so there are no HDMI or DisplayPort headers available to drive laptop or external displays. Put another way, Optimus is the only configuration supported, as the iGPU will be needed to drive the screen. In practice this isn’t a very big shift since most laptops use Optimus configurations to begin with, but let it be noted that NVIDIA has explicitly removed any option to use the RTX 2050 alone. Unfortunately, this also means that G-Sync is not supported in any fashion.
Ultimately, if nothing else, RTX 2050 is going to go down in history as one of the weirder laptop SKUs that NVIDIA has ever released. But with the chip crunch seemingly remaining with us for some time yet, NVIDIA and other chip vendors are going to get increasingly creative to get any chips they can on the market. The resulting performance of this SKU won’t be anything worth writing home about, I suspect – it’s very much a low-end SKU – but if it’s meaningfully faster than integrated graphics, then it’s filling a much-needed role right now.
GeForce MX550 & GeForce MX570: Mixing Turing and Ampere
Compared to the RTX 2050, NVIDIA’s new entry-level laptop GPUs are decidedly more traditional. NVIDIA’s MX series of laptop GPUs takes up the very bottom of NVIDIA’s product stack, and is the first step above integrated graphics in most laptop designs. These parts are also commonly used for adding dGPUs to thin and light laptops, given the relatively small sizes of the chips used, as well as their low TDPs.
The MX series was last updated back in August of 2020 with the GeForce MX450. For this generation, NVIDIA has brought back a two-chip stack, giving us the MX550 and the faster MX570.
Unfortunately, NVIDIA is notoriously mum when it comes to officially disclosing the specifications of the MX series, and this year is no different. So we don’t have any specifications to share with you with respect to CUDA cores, ROPs, clockspeeds, or memory configurations. NVIDIA isn’t even offering a “GeForce Performance Score” estimate for these parts, which in previous generations was a basic estimate of performance versus Intel’s most recent integrated GPU. So there’s very little we can definitively tell you besides the fact that these parts exist – or at least, will exist in the Spring of 2022.
NVIDIA GeForce MX Series Laptop Specifications

| | MX570 | MX550 | MX450 |
|---|---|---|---|
| CUDA Cores | >1024? | 1024? | 896 |
| ROPs | 32? | 32? | 32 |
| Memory Clock | 12Gbps GDDR6? | 12Gbps GDDR6? | 10Gbps GDDR6 |
| Memory Bus Width | 64-bit | 64-bit | 64-bit |
| TDP Range | <30W? | <30W? | <30W |
| GPU | GA107 | TU117 | TU117 |
| Architecture | Ampere | Turing | Turing |
| Manufacturing Process | Samsung 8nm | TSMC 12nm | TSMC 12nm |
| Launch Date | Spring 2022 | Spring 2022 | August 2020 |
Inevitably, once these parts start shipping, someone will be able to query them and document their configuration details. But for the moment we can at least take a couple of educated guesses based on a few choice disclosures from NVIDIA, as well as their published photos.
First and foremost, NVIDIA’s literature confirms that MX550 is based on the Turing architecture. The last-generation MX450 was based on the TU117 GPU, and this is the logical choice for this generation as well. NVIDIA is not disclosing the number of CUDA cores enabled on MX550, but they are disclosing that it’s more than the MX450, which came in at 896 CUDA cores. So it would seem that MX550 has 1024 CUDA cores – a full TU117 configuration. In the strictest sense this avoids the MX550 being a pure rebadge, but given the relatively small changes, I wouldn't expect it to perform significantly better than the MX450 before it.
Meanwhile MX570 makes no mention of Turing. Instead, NVIDIA mentions “more power-efficient CUDA Cores.” And, after checking with the company, NVIDIA has since confirmed that, unlike MX550, MX570 is an Ampere-based part using GA107. This effectively makes it the smaller sibling to the new RTX 2050. This comes as a bit of a surprise, since NVIDIA tends to stick to a single GPU for the MX series – but it clearly has been (and is going to be) that kind of year.
Both SKUs are also slated to come with faster memory. MX450 was typically seen with 10Gbps GDDR6, so it’s a relatively safe assumption that NVIDIA is pairing the new MX500 parts with 12Gbps GDDR6. The MX series has always been equipped with a 64-bit memory bus, so that is almost certainly the same situation here, in which case we’re looking at 96GB/sec of memory bandwidth.
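Running the presumed MX500-series configuration through the same bandwidth math – keeping in mind that the 12Gbps data rate is our assumption rather than an NVIDIA disclosure:

```python
# Presumed MX550/MX570 config: 12Gbps GDDR6 (our assumption) on a 64-bit bus.
print(12 * 64 / 8)  # 96.0 GB/sec, up from 10 * 64 / 8 = 80.0 GB/sec on MX450
```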
Past that, NVIDIA is offering no further details. The MX series is not sold on the basis of features, and NVIDIA is offering no indication that the Ampere-based MX570 has any of its advanced features enabled, such as the RT or tensor cores. For all practical purposes, it's functionally closer to a feature level 12_1 SKU, rather than the 12_2/DirectX 12 Ultimate part the hardware is capable of being.
NVIDIA also isn’t disclosing official TDPs here. But keeping in line with the target market for the MX500 series and previous parts’ TDPs, we’re almost certainly looking at sub-30W configurations here.
Source: NVIDIA
18 Comments
Alistair - Friday, December 17, 2021 - link
It's fun going backwards. 12nm in 2019, moving to 8nm in 2021, and 7nm for AMD, 5nm for Apple... then back to 12nm in 2022... NO thanks

RU482 - Friday, December 17, 2021 - link
Intel be like "Hey Guys, wait up"

Alistair - Friday, December 17, 2021 - link
Imagine what a 5nm RTX 3060 would be like? Call it the GTX 4050 and charge $200.

meacupla - Saturday, December 18, 2021 - link
low end nvidia products never used the best process node available.

In fact, I'm surprised they used 12nm at all.
I wouldn't have expected 16nm on Fermi architecture.
Kangal - Tuesday, December 21, 2021 - link
I would hardly call the RTX-3050-Ti low-end.

This position has always been Nvidia's high-end offering for cards that are 70W / don't need external power supply. It's been the goto option for HTPCs and Office PCs, and they generally have a price-premium. Since the original GTX-650 in 2012, then GTX 750-Ti in 2014, then the GTX-1050Ti in 2016, and lastly the GTX-1650 back in 2019.
Kangal - Tuesday, December 21, 2021 - link
This market has been underserved for a long while now by the OEMs. It's nice seeing the RTX-3050Ti looking competitive on-paper. Hopefully they will be available to buy, come in a Low-Profile (or Half-Height) variant, and cost a fair price. Though I doubt it. Otherwise, this market will continue to stagnate, or possibly be served by Intel's Xe, and dominated by Apple's own SoC/APU in the market.

Would've been great to grab an ex-Office PC like a Dell OptiPlex 7070 SFF, with an i7-8700, for something like USD $270, throw in a $80 nVme, and extra RAM for $20. Then complete the system with an RTX-3050Ti for USD $130, for a total of USD $500 give or take. Not too bad for roughly the performance of the Xbox Series S. That is, if the Pcie x4 slot doesn't cause problems since the x16 slot is placed wrongly/too low.
...but I'm dreaming here, those types of deals are gone.
Samus - Saturday, December 18, 2021 - link
It seems like putting 4GB VRAM on a 64-bit bus is a total waste on Ampere. My GTX970 has DOUBLE that memory bandwidth using ancient memory technology on a 7 year old card.

Slyr762 - Saturday, December 18, 2021 - link
Agreed. Actually, 970 was what I had before a 1080, lol.

Oxford Guy - Saturday, December 18, 2021 - link
Over 3.5 GB.Samus - Sunday, December 19, 2021 - link
We used to think the 3.5GB thing mattered, especially when BF2042 was in beta and there was clear stutter when more than 3.5GB was being utilized. But like ALL games released over the years that the GTX970 has had the known segmented memory issue, ALL games have taken it into account, including 2042. Which says a lot considering 2042 doesn't even officially support cards prior to the GTX 1000-series.

I've been running 2042 at 1080P (technically 1200P) in low detail just fine on my 970. And monitoring its memory usage in GPUZ, there is no stutter when more than 3.5GB is used.
Posts on forums and reddit from beta testers confirm something happened during the beta that rectified this: even when the 512MB segment is used (which is only allocated a 32-bit memory interface to the crossbar) it is still exponentially faster than going to system RAM. What needed to be done to prevent the stutter was for DICE to make sure textures didn't get cached into this segment. You can monitor the memory allocation in detail using GPUZ to see the layout or whatever, and the game does what some other recent games do: two memory partitions. This is actually done for ALL GPU's in BF2042 and presumably other games, but there appears to be a specific memory allocation for GTX970 that, you guessed it, makes a 3.5GB and 512MB partition, and the 512MB partition is utilized by non-texture data that still benefits from having high speed memory. Keep in mind here that many videocards (like those in THIS AT POST) only have 64-bit memory buses so 32-bit is totally fine for logging, z-buffering, pre-rendering, meshes, shades\colors, even HUD\static data. The goal is to keep the data quickly accessible by the crossbar. DICE obviously didn't prioritize supporting unofficially supported hardware until the game released as that wasn't really the focus of the beta.
Basically the only time the 0.5GB segment is ever an issue is when devs don't use it right, and textures are stored indiscriminately across the entire memory bus. Basically all games do, as they should, because the GTX970 was one of the best selling GPU's in history and there are a great deal of them still in use. Pretty crazy when you consider they are 7 years old, but it's also worth mentioning why some people like myself hold onto them: one of the last GPU's with a compact blower, the last gen of GPU's with analog output and native HD15, low power consumption, still relevant for 1080P gaming, actively supported drivers.