
  • Wreckage - Saturday, February 1, 2014 - link

    Weren't there people blaming EA for the delay?

    " A 7-10% performance increase is not a dramatic difference"

    Can any of that increase be from the new driver? I mean you would expect that from a new driver.
  • Ryan Smith - Saturday, February 1, 2014 - link

    "Can any of that increase be from the new driver? I mean you would expect that from a new driver."

    This is Cat 14.1 versus Cat 14.1. So it's exclusively from enabling Mantle.
  • Wreckage - Saturday, February 1, 2014 - link

    Thanks for the quick response. I did not realize there were two versions of the 14.1 driver. (Or was it just an enable/disable thing?)
  • Ryan Smith - Saturday, February 1, 2014 - link

    It's an enable/disable thing. The option is in Battlefield 4.

    http://battlelog.battlefield.com/bf4/news/view/bf4...
  • blanarahul - Saturday, February 1, 2014 - link

    Can you post 2C/2T 3 GHz results as well???
  • kwrzesien - Monday, February 3, 2014 - link

    Or better yet 2C/2T 2 GHz results, something to really simulate an older/weaker CPU.
  • chekk42 - Tuesday, February 4, 2014 - link

    Perhaps interesting from a "what if" perspective, but unrealistic as no serious gamer would run a 290x on a CPU that weak. One does not pair a $500 GPU with a $100 CPU unless it's for mining.
  • mikato - Wednesday, February 5, 2014 - link

    Why not, if Mantle makes that playable? I've been playing COD Ghosts on a Core 2 Duo 2.66GHz E8500, and an E7300 before that. Do you know why? Because that's one of the gaming machines I have! Both were above minimum requirements but I do want to make a new build for that now as it is certainly underpowered. I had already upgraded the video card which is much easier, but now I'll have to replace CPU, mobo, memory, reinstall OS and all software.
  • Owls - Saturday, February 1, 2014 - link

    If he was talking about nVidia then the 7-10% performance increase would be dramatic. But since it's AMD it doesn't count. Objectivity, get some.
  • dragonsqrrl - Saturday, February 1, 2014 - link

    Cool it fanboy, cool it. We all know AMD gets the free pass on things way more often than any other tech company out there because they're the underdog, and well... because of people like you. If Nvidia or Intel had announced up to 50% performance boosts in the months leading up to the release of their new API, then missed their target release window twice, and then these results came out, I'm pretty sure it would've been declared a failure by the community on the spot. Hell, if Nvidia had even announced Mantle the same way AMD did it would've been over before it began. But since it's AMD, it doesn't count. Objectivity, get some.
  • Plesman - Saturday, February 1, 2014 - link

    Objectively: I think it was EA who caused the delay. They wanted to fix the other bugs in Battlefield 4 before they finished implementing Mantle.

    Objectively: Everybody promises the biggest "up to" performance boosts all of the time, don't they?
  • TheJian - Saturday, February 1, 2014 - link

    No, AMD's beta driver came second; the BF4 Mantle patch was first. And since it's STILL a beta driver, we're still waiting on AMD's finished product here, right?

    I'm unimpressed. I have no intention of buying anything less than a quad core Intel today, plus a high end card on top, so Mantle would yield about what every driver rev gives in games anyway.

    Johan Andersson said they worked for 2 years with AMD on Mantle for Frostbite 3, so I doubt many will use this if AMD isn't paying them. Such a small market, and it earns you NOTHING above everyone else. You can't charge more for an AMD game buyer vs. everyone else. You are essentially coding to speed up some people that won't give you a dime. Maybe if they start selling Mantle patches for games for $5-10 we'll see it used. AMD can't afford $8mil per game, or even $2mil.

    I'm guessing Nvidia is prepping their response to AMD's proprietary Mantle as we speak. I don't think they want it used really, but it's what you'd do to get Mantle killed. Just put something out there so nobody wants to code for either, then let them both get killed. We need games in OpenGL, WebGL, HTML5, Java etc. that are portable to everywhere in days or weeks. Not MONTHS or years. Making anything in Mantle has the same problem as DirectX: it isn't easy to port from either to anywhere else. Nobody wants to code for DirectX (Xbox One), Mantle, then OpenGL (PS3/PS4), then whatever on mobile, then port to PC (a game made on consoles is hard to port from DirectX, but easier from PS3/PS4 OpenGL now that K1 is coming out for mobile).

    Any time wasted on Mantle junk takes away from the GAME itself. Think about it. If you could possibly code ONCE (say OpenGL) and run everywhere, you would get a ton more sales and have a lot more resources for, say, making a 30hr game vs. today's 7-15hr games. You might have the money to put into a decent AI for your game, etc.

    Now that Steam will be pushing Linux, you have yet another reason to use OpenGL etc. that can port easily to anywhere. This was a waste of money by AMD. They should have spent this on avoiding the phase 1, 2, 3 (more?) drivers this year and last, on making better GPUs (290/290X was a botched launch, and HardOCP says this chip wasn't really designed for 28nm and needs 20nm), and on enhancing the CPUs. Instead we got console crap (which also stole from GPU/CPU/drivers) and Mantle that they just can't afford to support, along with Free-Sync, which may never even come out as AMD even said it isn't a product for market yet, if ever.
  • Inteli - Saturday, February 1, 2014 - link

    Last I checked, AMD offered Mantle to Nvidia for use. It really isn't proprietary.
  • chizow - Saturday, February 1, 2014 - link

    It's proprietary; the SDK and source code aren't available for download, and AMD has said it might be open to releasing/licensing it to other IHVs at some point in the future.

    AMD's talk about opening up to other IHVs is just lip service so they don't sound hypocritical after beating the Open standard drum for so long and to help alleviate industry concerns by likening it to an "open standard".
  • rarson - Sunday, February 2, 2014 - link

    Mantle can run on anyone's hardware, it's just that no one else has an architecture design like GCN that would actually support it. It's the same reason that Mantle doesn't run on AMD's older GPUs.
  • chizow - Sunday, February 2, 2014 - link

    So in other words, it's proprietary to GCN, just as I stated.
  • Mstngs351 - Sunday, February 2, 2014 - link

    Andy why wouldn't they offer it to Nvidia to use? It would be like handing someone who only reads/writes English an instruction manual written in German. They can't use it.

    If Mantle offered the widespread support of DX or OGL then I'd be more excited. Sadly I fear that with only moderate gains (when you need them) it will fall by the wayside like other fast APIs already have. I suppose if its build is close enough to console APIs then we might see enough support (developers tend to do the least work they can get away with) for it to be successful.
  • Mstngs351 - Sunday, February 2, 2014 - link

    Andy should have been "And"... Please tell me your name is Andy, because that would likely be the most serendipitous moment of my day!
  • daniel142005 - Saturday, February 1, 2014 - link

    Can OpenGL implement these optimizations? If it's abstracted by a good graphics engine/API then it wouldn't affect the game designer. Frostbite is a full-blown game "engine" for game designers, but if they can do it, why can't it be implemented into OpenGL for game developers?
  • jwcalla - Saturday, February 1, 2014 - link

    John Carmack made the claim that you can get similar (to Mantle) draw call performance in OpenGL with certain NVIDIA extensions... FWIW.

    One of the problems for developers is that there's a lot of pressure to target the lowest common denominator. Even in OpenGL we see this; e.g., targeting something like OpenGL 3.x, etc.
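
    For what it's worth, here's roughly the kind of batching those extensions allow (GL_ARB_multi_draw_indirect, core in GL 4.3); the buffer contents and counts below are made up, and a working context/loader (e.g. GLEW) is assumed:

        #include <GL/glew.h>
        #include <vector>

        // Matches the layout GL expects for indirect draw commands.
        struct DrawElementsIndirectCommand {
            GLuint count, instanceCount, firstIndex, baseVertex, baseInstance;
        };

        // One API call submits N draws; the per-draw parameters live in a GPU
        // buffer, so the CPU cost of N separate glDrawElements calls is avoided.
        void submitBatched(GLuint indirectBuffer,
                           const std::vector<DrawElementsIndirectCommand>& cmds)
        {
            glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
            glBufferData(GL_DRAW_INDIRECT_BUFFER,
                         cmds.size() * sizeof(DrawElementsIndirectCommand),
                         cmds.data(), GL_DYNAMIC_DRAW);
            glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr,
                                        (GLsizei)cmds.size(), 0);
        }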
  • DarkXale - Sunday, February 2, 2014 - link

    Mantle is a separate renderer, not something "tacked on". You have D3D, OpenGL, and Mantle.

    But yes, you can have OpenGL & Mantle as options just as readily as D3D & Mantle, or D3D, OpenGL, and Mantle. It's all dependent on how much development time you are willing to invest.
  • rarson - Sunday, February 2, 2014 - link

    "If it's abstracted by a good graphics engine/api then it wouldn't affect the game designer."

    That's the entire point of Mantle, to remove the abstractions from the API that prevent the developers from getting the most out of the hardware.
  • testbug00 - Saturday, February 1, 2014 - link

    1. Mantle cost is 2-3% of a game's budget.
    2. Nvidia has no response; that is why they talk about how OpenGL can do things like deliver higher frame rates.
    3. Even if Nvidia has a response it won't do anything. __Mantle only works because the consoles both use AMD GCN GPUs__ which means programmers are already programming GCN close to the metal.
    4. Mantle closely resembles options the PS4 uses; it is not as close to the Xbox One due to the ESRAM. You also realize that the K1 is not going to be on any real mobile devices? What did Nvidia talk up with the K1? Oh, yeah, __CARS__ and how those are the future. Remember the Tegra 4 comparisons to the PS3/Xbox 360... Oh, yeah. Not to mention power draw.
    5. Time "wasted" on Mantle is small compared to everything else. Especially when you realize Mantle is extremely similar to the coding done for the PS4 and fairly similar to the Xbox One.
    6. You do realize that "console crap" is such a stupid thing to say.
    7. You do know that "free-sync" isn't some proprietary technology? It is supported by VESA. You know, the group that is the de facto leader in what monitors and such use.
    8. You do realize that AMD didn't make Mantle because it decided to. It did so because developers asked for it.
  • testbug00 - Saturday, February 1, 2014 - link

    Oh, I should have added that Mantle is still in beta. The release date for Mantle being fully fleshed out is sometime in 2015, IIRC.
  • loguerto - Saturday, February 1, 2014 - link

    Actually Mantle is very similar to the PS4's low-level APIs, so porting games from the console to Mantle should be much easier than porting them to DirectX, or at least it won't take much more time to compile the libraries in both APIs. Console low-level APIs, however, are completely different from high-level PC APIs; DirectX and OpenGL need a lot of optimization when games are ported from either console. Besides, Mantle was demanded by the developers because, as they have said, DirectX crap is heavily blocking them from expanding their vision for newer video games. They made the same demand to Nvidia, but AMD was the only one to take action, for now. You should really take a look at the Mantle presentation made by Oxide, because you have a lot of wrong/contradictory ideas compared with what I heard from expert developers.
  • djentleman - Sunday, February 2, 2014 - link

    Wow, you program for the consoles? I'm intrigued...
    From what I remember the PS4 uses OpenGL and the Xbox uses DX. Hmm, seems obvious to me that they are not the same.
  • PacificNic - Saturday, February 1, 2014 - link

    Mantle is definitely not proprietary. nVidia has the option to implement it so.... Stop fanboying and do some research?
  • chizow - Saturday, February 1, 2014 - link

    @PacificNic: Maybe you should take your own advice? But no need to believe me, take it from AMD's Director of Devrel:

    http://www.xbitlabs.com/news/graphics/display/2013...

    "The plan is, long term, once we have developed Mantle into a state where it’s stable and in a state where it can be shared openly [we will make it available]. The long term plan is to share and create the spec and SDK and make it widely available. Our thinking is: there’s nothing that says that someone else could develop their own version of Mantle and mirror what we’ve done in how to access the lower levels of their own silicon. I think what it does is it forges the way, the easiest way,” explained Mr. Corpus."

    Summary: Mantle is proprietary. I know AMD has been feeding you the Open Standard mantra far too long, but it does help to read and think critically for once.
  • djentleman - Sunday, February 2, 2014 - link

    Do you believe everything AMD says?
    I know sure as hell I don't believe everything Nvidia says.
  • JDG1980 - Saturday, February 1, 2014 - link

    AMD is selling every 290/290X they can make, and these cards are so much in demand that they are going for well above MSRP. If that's a failed launch, what would a successful one look like?
  • Omegaclawe - Sunday, February 2, 2014 - link

    I think you might have a few misconceptions here. First of all, AMD isn't going to pay per game. That would just be silly. The developers aren't going to, either. Instead, AMD pays the people who make the engines (e.g. Epic for UE3/4, DICE for Frostbite). They do the work once, and then the vast majority of developers don't have to touch it. That's kinda the whole point of an engine, so you don't have to do that work and can focus on the game.

    Incidentally, changing just the rendering backend really isn't difficult at all. It's only about 6000 lines of code. Once you throw in cross-platform methods for input, sound, multitasking, etc., it becomes a significant amount of work, but Mantle by itself is something like a man-week's worth of work. Maybe a man-month, if significant debugging is required.
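
    As a toy illustration of what "just the rendering backend" means (every name below is invented for illustration, not Frostbite's actual code): the engine talks to an interface, and only the backend classes know which API sits underneath.

        #include <memory>

        struct DrawItem { /* mesh, material, transform handles, etc. */ };

        // Gameplay and engine code only ever see this interface.
        class RenderBackend {
        public:
            virtual ~RenderBackend() = default;
            virtual void beginFrame() = 0;
            virtual void submit(const DrawItem& item) = 0;
            virtual void present() = 0;
        };

        // Adding an API means writing one more of these, not touching the game.
        class D3D11Backend : public RenderBackend {
        public:
            void beginFrame() override { /* set up device context state */ }
            void submit(const DrawItem&) override { /* issue D3D11 draw calls */ }
            void present() override { /* swap chain present */ }
        };

        class MantleBackend : public RenderBackend {
        public:
            void beginFrame() override { /* begin command buffer recording */ }
            void submit(const DrawItem&) override { /* record Mantle draw commands */ }
            void present() override { /* submit queues and present */ }
        };

        // Chosen once at startup; gameplay code never branches on the API.
        std::unique_ptr<RenderBackend> makeBackend(bool useMantle) {
            if (useMantle) return std::make_unique<MantleBackend>();
            return std::make_unique<D3D11Backend>();
        }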

    On top of that, for heavy CPU-based games (like what Oxide Studios is trying to do) the CPU resources freed by Mantle enable you to do things you couldn't with DirectX / OpenGL. It also can vastly improve the play experience by preventing CPU spikes from interfering with the rendering, and it can help with frame pacing. Ultimately, the games probably feel like they're getting more than an 8% FPS boost... which is still more than the difference between an R9 290 and a Titan, but people were willing to pay an extra $600 for that.

    Mantle isn't perfect, though. The CPU usage boost is nice, but that 8% or so GPU increase absolutely locks AMD to the GCN architecture, which means they can't make significant strides, like Nvidia has between, say, Fermi and Kepler, without losing the Mantle advantage over Nvidia.

    Honestly, what probably needs to be done is just taking OpenGL and shifting the heavy lifting off the driver's hands and into the program itself. You'd get the same CPU boost but with wider hardware support.

    As far as WebGL, HTML5, and (ugh) Java go, these platforms tend to offer less security and tend to be absolutely terrible at multithreading. Java especially, the buggy piece of junk it is. Frankly, you could not make Crysis 3 in Java. If you wanted to make it run on Java, you'd probably need a high-end machine from circa 2020. Do you want all your games to look like they're from 2008 right now? Nooo? Then don't develop for Java!

    I also seriously doubt Nvidia is developing a "Mantle killer" in any large capacity. No doubt they've looked into it, but considering how much better their drivers tend to be at managing the CPU and whatnot, they would not get as large of a boost from Mantle, Nvidia almost never does open-type designs willingly, and they're not the first to this market. Mantle would need to provide a consistent 20%+ bonus before I can see Nvidia really trying to tackle it.

    As far as Free-Sync vs G-Sync is concerned, I imagine it'll be something along the lines of CUDA, where the Nvidia-specific solution is actually inferior, but popular because of marketing and because it was "first". Still, Nvidia cards will undoubtedly be able to support both, whereas G-Sync monitors will only work with Nvidia cards, so they'll make some money that way... and I wouldn't exactly call G-Sync market ready, either.
  • djentleman - Sunday, February 2, 2014 - link

    AMD's PR has brainwashed you. Beyond fixing.
  • djentleman - Sunday, February 2, 2014 - link

    Tl;dr
    Have you heard of the Maxwell ARM chip?
  • rarson - Sunday, February 2, 2014 - link

    Nvidia ARM anything makes me laugh.
  • Kaleid - Sunday, February 2, 2014 - link

    "hardocp says this chip wasn't really designed for 28nm and needs 20) and also enhancing the CPU's. "
    It's fine at 28nm, but needs a better default cooler.
  • rvalencia - Sunday, February 2, 2014 - link

    PS4 uses the GNM API/PSSL, which is claimed to be similar to Mantle.
  • djentleman - Sunday, February 2, 2014 - link

    By AMD, right?
    And just because it is similar doesn't mean it is easy to program for.
    That's like saying I wrote a program in C, so it must be like copy and paste to Java?
    They're similar, but not the same.
  • rarson - Sunday, February 2, 2014 - link

    Every driver rev produces a 7-10% performance increase? And I've been replacing my hardware all these years! Silly me!

    Developers have been asking for a lower-level API for years. OpenGL isn't any lower than DirectX. I fail to see what kind of major advantage SteamOS is going to have over DirectX or Mantle. In fact, the stuff that you're saying about Mantle actually applies MORE to SteamOS: developers have to waste time and resources porting their code over to Linux and OpenGL in order to support a software platform that is fragmenting their PC userbase. OpenGL has been around for years and still isn't commonly used for PC games. Ever wonder why that is?

    DICE releases a patch for a DirectX game that improves performance anywhere from 7-30%, or possibly more for slower CPUs. Seems like a no-brainer. Especially when considering that the paradigm this benefits matches the new consoles exactly. To me, it seems that SteamOS has much more to overcome than Mantle.

    If Mantle gains support, then there won't be any reason to buy expensive Intel CPUs for gaming computers that don't need them. Not that there is a good reason as it is, since most games aren't really CPU-bound with modern CPUs anyway, as these benchmarks clearly indicate.
  • mikato - Wednesday, February 5, 2014 - link

    @TheJian - uhhhh did you notice the performance increase? Because your comment made it sound like there is none. That could be a reason why developers would want it. It's a competitive advantage, not something like hypothetically trying to charge more for AMD users vs everyone else. And you probably could charge more if your game runs better, and you can make the graphics look better... or you could keep it the same price and just enjoy the broader gamer base it enables.
  • chizow - Saturday, February 1, 2014 - link

    How could you possibly think that when EA launched Mantle update on time, while AMD has repeatedly delayed their driver launch the last few days, constantly changing support levels and acknowledging "nasty bugs" that prevented their Mantle driver from releasing?
  • Gigaplex - Saturday, February 1, 2014 - link

    On time? EA was late with the patch, it was due last year.
  • chizow - Sunday, February 2, 2014 - link

    And EA/DICE had builds of Mantle running then, clearly AMD's drivers were the ones that weren't ready, as evidenced by huge outstanding Mantle buglist and feature-support vacancies.
  • Black Obsidian - Monday, February 3, 2014 - link

    No doubt EA had builds of BF4 running before its own launch, and yet despite that it was far from ready (or stable, or functional) when it launched.

    Has builds running != Has builds running WELL
  • chizow - Monday, February 3, 2014 - link

    Which is neither here nor there, given EA hit their targeted launch date of January while AMD was scrambling through the weekend and still only managed to launch a very shaky, buggy driver.
  • looncraz - Tuesday, February 4, 2014 - link

    "And EA/DICE had builds of Mantle running then, clearly AMD's drivers were the ones that weren't ready, as evidenced by huge outstanding Mantle buglist and feature-support vacancies."

    By that logic, AMD also had a driver running then - since that is a prerequisite for EA/DICE having BF4-Mantle ready.

    Hmm.. guess not. Anyway, the real story is that BF4 bugs were more problematic than expected, so manpower was dedicated to repairing those bugs. Mantle was delayed by EA/DICE in case there were compatibility issues with the bug fixes. In this time, DICE and AMD were in close communication, and AMD took the time to more carefully evaluate the drivers.

    It's what joint product development always experiences. One party has a delay, so the other takes the time to improve their product. This happens inside each company as well as between the two companies.
  • chizow - Tuesday, February 4, 2014 - link

    Yes of course AMD had a driver ready, and as we have seen, it is buggy and full of support holes. None of which changes the fact EA hit the targeted release of January and AMD did not. Not sure how this is so hard to understand. EA released on Thursday with Mantle support, AMD did not release until Saturday (February 1) due to their own admissions of "a nasty bug". And that's before you get into their own huge outstanding buglist and reports of performance regression in other titles besides BF4.

    http://support.amd.com/en-us/kb-articles/Pages/Man...
  • grndzro77 - Saturday, February 1, 2014 - link

    Um... Wood screws? Driver crashes? Burning cards? Mass rebranding & lies?
    NV gets its share of free passes too.
  • eSyr - Sunday, February 2, 2014 - link

    The Fermi announcement, anyone? "It will likely be on a Tuesday."
  • chizow - Saturday, February 1, 2014 - link

    No I think most Nvidia users would discourage Nvidia from developing a whole new API for a meager 7-10% gain in GPU limited situations, resources better used elsewhere (G-Sync, GameStream, GameWorks, PhysX, 3D Vision etc.)

    I think Nvidia is listening to the devs they had on their GeForce panel back in October. While Johan Andersson clearly wanted a low-level API like Mantle, no one really seems to want to turn back the clock to the days they had to support multiple renderers for every IHV on the same platform.
  • rarson - Sunday, February 2, 2014 - link

    What a joke. Nvidia users crow about any kind of performance advantage they can. I've even heard a few of them try to explain why a performance disadvantage was actually a GOOD thing. Anyone with a brain knows they'd jump at the chance to brag about something like this.
  • chizow - Sunday, February 2, 2014 - link

    Not really, 7-10% is what one might expect from a big driver update or optimization, not a 2-year project to build an API from scratch for a small % of vendor-specific hardware.

    If AMD feels this is how to best use their resources to the benefit of their customers, more power to them. I think as an Nvidia user I am much more interested in Nvidia's efforts to work within existing APIs like DX and OpenGL and focusing their close-to-metal efforts in hardware, i.e. their embedded Denver core in upcoming Maxwell. Another example would be G-Sync, which focuses on improving frame quality especially at low FPS, which would provide a benefit in every game instead of just a handful that require specific hardware, API, or dev support of that API.

    I certainly don't want the industry to move to multiple vendor-specific APIs; that does no one any good and would certainly be a step backwards to the days every game had to support multiple vendor-specific APIs and codepaths.
  • mikato - Wednesday, February 5, 2014 - link

    Would Nvidia users discourage Nvidia from developing a new API for a 30% gain in CPU limited situations? Hell no. Would Nvidia do it? Yes probably since that's a pretty big performance advantage. However they don't make CPUs so they don't have the demand on both sides like AMD does.

    You just cut out a small slice of the picture (of course the minimum performance advantage), and used that to argue against it, and even then I don't agree.
  • chizow - Wednesday, February 5, 2014 - link

    Would I discourage Nvidia from developing an API for larger gains in unlikely scenarios??? Absolutely! It's a big performance gain in scenarios that are unlikely to occur (slow CPU + fast GPU) or unlikely to majorly benefit in the real world (low resolution/settings or multi-GPU already at high framerates).

    I would certainly prefer the alternative that Nvidia took, developing new technologies like G-Sync that can improve frame and image quality at ANY FPS in ANY game that uses Nvidia hardware.

    There's also growing evidence that Nvidia took steps to improve their drivers in existing APIs that AMD did not, mainly with support of Deferred Contexts and Command Lists in their DX11 Multi-threaded Rendering implementation. It has been known for some time that AMD does not support these features while Nvidia does, and the evidence is clear: even with Mantle, comparable Nvidia hardware running DX11.2 MTR (Win 8.1) is faster.
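
    For reference, a minimal sketch of what that DX11 feature looks like (error handling and scene setup omitted): worker threads record draws into deferred contexts, and the render thread replays the resulting command lists on the immediate context. Whether the driver actually accelerates this, rather than falling back to emulation, is exactly the AMD/Nvidia difference being discussed.

        #include <d3d11.h>

        // Each worker thread records its portion of the frame into its own
        // deferred context instead of touching the immediate context.
        ID3D11CommandList* recordOnWorker(ID3D11Device* device)
        {
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);

            // ... set state and issue draw calls on 'deferred' here ...

            ID3D11CommandList* commandList = nullptr;
            deferred->FinishCommandList(FALSE, &commandList);
            deferred->Release();
            return commandList;
        }

        // The render thread then plays each recorded list back in order.
        void playback(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
        {
            immediate->ExecuteCommandList(commandList, FALSE);
            commandList->Release();
        }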

    http://techreport.com/review/25995/first-look-amd-...
  • Gasaraki88 - Thursday, February 6, 2014 - link

    Hate fanboys like you. So far it seems like Mantle works best in certain scenarios. If I have to run my games in low settings to get a 25%+ increase in FPS, who cares. 180+ FPS to 210+ FPS is nothing anyone should care about. What matters is a good CPU with a good card, and if they can increase performance by 25+% then that's good. 7-10% is not good enough to have developers spend time on a Mantle API. Developers are not even using DX11 fully yet; it will take them forever to use Mantle. If AMD can't make the Mantle API also run on nVidia hardware then it's useless.
  • Slomo4shO - Saturday, February 1, 2014 - link

    Thank you Ryan, the Battlefield 4 Tashgar benchmarks at 4.2GHz seem to be an anomaly... Did you transpose the numbers?
  • Ryan Smith - Saturday, February 1, 2014 - link

    I'm assuming you're referring to the performance regression with low quality settings. No, that number has been double checked.
  • Slomo4shO - Saturday, February 1, 2014 - link

    Yes, I was referring to the low quality settings. Hmm those are odd results...
  • rtsurfer - Saturday, February 1, 2014 - link

    Can we expect a proper review with AMD FX line CPUs & APUs & Nvidia 780 & 780 Ti sometime later this week?
  • bj_murphy - Saturday, February 1, 2014 - link

    Can't do a 780/780Ti Mantle review, it doesn't run on Nvidia cards. However, I agree with you on hoping for some AMD FX series CPUs as well as APUs compared here.
  • Fetzie - Saturday, February 1, 2014 - link

    You can, however, say "nVidia cards got x result in this benchmark. AMD cards with DX11 got y result in this benchmark. AMD cards with Mantle got z result in this benchmark". So long as the benchmarks remain the same, you can still compare performance numbers just the same as comparisons up until now between AMD and nVidia cards..
  • mikato - Wednesday, February 5, 2014 - link

    +1
  • davidgoscinny - Saturday, February 1, 2014 - link

    This reminds me of 3DNow! on their CPUs years back. Lots of potential, most of it unfulfilled unfortunately :-(
  • JDG1980 - Saturday, February 1, 2014 - link

    3DNow was basically an earlier version of SSE, with the disadvantage that it re-used the MMX register file instead of adding new registers. Once SSE became standard and AMD adopted it, 3DNow became redundant - why make something AMD-specific when you can just as easily make a version that has performance just as good and works on both Intel and AMD chips?
  • chizow - Sunday, February 2, 2014 - link

    Yep great point, and to finish off that corollary as it relates to Mantle, why make an API that is AMD-specific when you can come up with a solution that works on Intel, AMD, and Nvidia chips?

    I think the upcoming DX11.x enhancements from Microsoft may very well make Mantle irrelevant for the same reasons you've already outlined:

    https://blogs.windows.com/windows/b/appbuilder/arc...

    "We’re also working with our ISV and IHV partners on future efforts, including bringing the lightweight runtime and tooling capabilities of the Xbox One Direct3D implementation to Windows, and identifying the next generation of advanced 3D graphics technologies."
  • davidgoscinny - Sunday, February 2, 2014 - link

    Well when AMD adopted SSE in their CPUs, they were offering parts that were as fast if not faster than equivalent Intel CPUs. It took great performance and made it better.

    3DNow! was in the K6/K6-2/K6-3 days and their FPU performance was so bad, 3DNow! was the only way to get decent performance in many games. It's a moot point now anyways but I'm just reminded of that situation when I read about Mantle.
  • YukaKun - Sunday, February 2, 2014 - link

    Because companies won't invest on new stuff unless they see the actual usefulness of it.

    In this case, 3DNow! was a jab at MMX, so Intel developed SSE1 to counter it. AMD has like 1/8th the push factor Intel has, so no one adopted 3DNow! and the rest is just history.

    Same with Mantle. We had Glide back in the day, 3dfx proprietary, but devs loved it. Now devs ask for the same, Mantle appears, and everybody seems to give it the backlash, even when AMD says anyone can use it if they want. Not open, but at least AMD won't deny access to it to whoever wants to use it. I don't know if they'll charge for it, but at least I guess they need to keep the fee very low so it catches on. If Intel or nVidia don't want to adopt it, it will be because they are suckers. The irony is that 3dfx was bought by nVidia and they never got a proper Glide initiative, because it wasn't profitable to push their own standard even if devs asked for it.

    Cheers!
  • Margalus - Monday, February 17, 2014 - link

    Glide died because it was proprietary, just like Mantle. It was superb, but too costly to develop for it and non-proprietary APIs. Developers did NOT want to develop for multiple APIs. Now AMD is trying to go backwards so that developers will have to spend more time and money developing for multiple APIs again. Not a good thing.
  • Juppi - Saturday, February 1, 2014 - link

    I think Mantle performance is gonna shine in real 64-player multiplayer CPU-bound scenarios where the GPU is waiting on the CPU.
    Look at these numbers (Direct3D 11.1 vs. Mantle):
    Siege Of Shanghai (720p):           65.4 avg / 52 min  vs.  116.1 avg / 88 min
    Siege Of Shanghai (1080p):          63.8 avg / 49 min  vs.  112.2 avg / 81 min
    Siege Of Shanghai (1080p, 4x MSAA): 57.7 avg / 44 min  vs.  78.2 avg / 63 min
    Siege Of Shanghai (2160p):          48.9 avg / 38 min  vs.  52.2 avg / 41 min

    64 players, Ultra details (Core i7-3770K, 16 GB DDR3-1333, Radeon R9 290X @ 1000/2500 MHz, Windows 8.1 x64, Catalyst 14.1 Beta)

    http://www.golem.de/news/amds-mantle-api-im-test-d...
  • hfm - Saturday, February 1, 2014 - link

    I had the same thought, it's going to be a boon for games with a ton of players such as large online matches, hectic MMO battles or strategy games with tons of units. Good to see the theory validated.

    There's nothing but good things that can happen optimizing the rendering pipeline in this manner, and this is just the first step.
  • Alexvrb - Saturday, February 1, 2014 - link

    Agreed. Busy multiplayer titles will see a large benefit, but most benches will focus on single player scenarios. Low end CPUs will also benefit. So it gives affordable i3 and AMD chips a boost in budget gaming. Nothing wrong with that.

    I'd like to see support for Mantle in strategy games, for certain.
  • rarson - Sunday, February 2, 2014 - link

    "Low end CPUs will also benefit. So it gives affordable i3 and AMD chips a boost in budget gaming."

    And I think most importantly, consoles as well. Although Microsoft has already stated that Xbox One won't support Mantle. But they do sport the same kind of low-CPU/high-GPU setup.
  • Pantsu - Sunday, February 2, 2014 - link

    The consoles have down-to-the-metal APIs; they don't need Mantle. All the benefits have already been in use there. That's why you hear people like Carmack saying consoles are 2-3x faster with the same hardware.
  • mikato - Wednesday, February 5, 2014 - link

    Agree, I want to see this too as it directly affects me, multiplayer FPS games, and I have been CPU bound. I'm upgrading now but I obviously see a need for it as it happened to me. I could have waited out one more generation of games if I had this CPU bound improvement!
  • colinisation - Saturday, February 1, 2014 - link

    Ryan,

    Will Mantle serve to reduce microstuttering? Have you seen any such indication from this preview, or will we have to wait till the full article?

    Thanks
  • Ryan Smith - Saturday, February 1, 2014 - link

    For that you're going to need to wait for the full article. That analysis is not yet complete.
  • Pantsu - Sunday, February 2, 2014 - link

    The devs will have more control over things like that with Mantle, so you should expect that supporting games would handle performance problems like micro stutter better.
  • whyso - Saturday, February 1, 2014 - link

    Need to test with an Nvidia card too on those settings to see if Mantle is really a solution to a problem or if it's simply AMD drivers that hate DX.
  • tipoo - Saturday, February 1, 2014 - link

    As expected, what this helps most is weaker CPUs. This should be good for AMD APUs, if Mantle takes off. It also shows what not using high-level APIs in consoles can do for the lower-end Jaguar cores they use.
  • Viewgamer - Saturday, February 1, 2014 - link

    Ryan, have you enabled/disabled the cores via the BIOS or inside the Windows environment (with Task Manager)?
    Is it possible that Mantle can bypass the core affinity settings in Task Manager?
    I'm asking because the performance scaling with cores & frequency seems strange.

    I would also love to see a performance comparison with more CPUs, to see if Mantle levels the playing field between AMD & Intel CPUs.
  • Ryan Smith - Saturday, February 1, 2014 - link

    It's disabled at the BIOS level.
  • MugatoPdub - Saturday, February 1, 2014 - link

    Thanks for the prelim. I have to add, however, that I don't see how a 4960X and a 290X are the most "wanted" items to benchmark. I can wait, but I would believe most people would like to see benchmarks on mainstream hardware, i.e., an R9 270X and an FX-6300 w/ 8GB, SSD. Or an R9 280X and i5-4670K w/ 16GB. These are the most common setups.
  • junky77 - Saturday, February 1, 2014 - link

    I think you should try and test BF4 on multiplayer scenes with a lot of players in one place. It might be more interesting for many, and it stresses the CPU more in some cases.

    Also, please test BF4 with a midrange AMD CPU.
  • Ian Cutress - Saturday, February 1, 2014 - link

    How do you propose we make that into a repeatable and honest benchmark, then do it over a dozen (or dozens) of setups and settings?
  • blanarahul - Saturday, February 1, 2014 - link

    Offline Multiplayer?? You know, like Counter Strike.
  • junky77 - Saturday, February 1, 2014 - link

    You obviously can't repeat it identically (unless there is some replay option) and it will maybe take too much time. But:

    1. You can test the same map with same number of players, as I'm sure you would
    2. Run your character through the same path in the map, running into a lot of people. You should be dead quickly enough and be respawned, so maybe it will require reasonable time to test
    3. Even getting a consistent minimal FPS could be nice

    4. I know it is hard and not accurate, but if the results are consistent enough (like getting the minimal FPS) in some maps, at least it can give players some idea about BF4 multiplayer performance

    I also guess that if the difference between Mantle and non-Mantle gaming is big enough in case of multiplayer, we'll probably see it

    Personally, I've tried to benchmark the multiplayer and though it was hard due to a lot of bugs, the FPSs were quite consistent for the same map
  • junky77 - Saturday, February 1, 2014 - link

    also, I didn't mean to offend if it sounded like that
  • chizow - Saturday, February 1, 2014 - link

    Run a closed server.

    Benchmark baseline control with 0 players.

    Benchmark server populated with 10-20 editors or volunteers, just parked in vehicles and fixed locations. Even have them fire on fixed, non-destructible locations. If the results are different enough to show a solid CPU load and difference in performance from the control run, just go with that.

    The server should still have to account for the players even though they aren't really doing anything, but once you have them acting and destroying things that starts introducing variability.

    The argument to validate these results is that even in real multiplayer games, your framerate should still remain relatively consistent when the round opens and you are just running to the first objective vs. in the heat of combat, i.e. you don't necessarily get more CPU load just by having more action onscreen.
  • yannigr - Saturday, February 1, 2014 - link

    These results look interesting

    http://pclab.pl/art55953-3.html

    Now, cutting cores on a high-end Intel processor is not the best way to come to conclusions. An Intel core is much different from an AMD core and maybe faster than a core in a Pentium processor (more cache available, etc.). In the link above Mantle looks really, really interesting, much more so than what you found here in your preview.
  • mmrezaie - Saturday, February 1, 2014 - link

    That was a great article +yannigr
  • blanarahul - Saturday, February 1, 2014 - link

    I threw up after seeing the Multiplayer results. 90% performance gain in MP?? I seriously doubt that.
  • junky77 - Sunday, February 2, 2014 - link

    seems like others had good results too:

    http://www.guru3d.com/articles_pages/amd_mantle_pr...
  • eanazag - Saturday, February 1, 2014 - link

    I think it is great that AMD is getting this out finally. I'm curious how much it helps in APU-only scenarios like laptops. This looks like a good software workaround for pairing AMD cards with their performance-disadvantaged CPUs.

    I'm in agreement that just disabling cores on Intel's finest CPU is not exactly equivalent to simulating those lower-end SKUs. Specifically, the cache is still there. Handicapping the 4C/4T to only 2 GHz is an outlier when the other two scenarios keep things at 3GHz or higher. I completely understand the benefits of handicapping the CPU in speeding up evaluation.

    All in all, great preview. I'm looking forward to the rest.
  • Manch - Saturday, February 1, 2014 - link

    I'm curious as well. I bought an MSI GX70 last year. The A10 5750M APU in it isn't all that great and bottlenecks the 8970M quite often. A little disappointed that the driver is limited to a select few GCN cores. Just got to wait a little while longer. AnandTech did a review on the GX60 a while back and talked about how the APU bottlenecked the GPU. I'm really curious to see what kind of performance improvement I'll get. I'm happy with the laptop as is, but I won't turn down a free performance upgrade.
  • Gnerma - Saturday, February 1, 2014 - link

    I'd love to see some numbers on something stupid like a Core 2 Duo, Kabini or Bay Trail. So we can see the limits of this new toy.
  • rarson - Sunday, February 2, 2014 - link

    I'm still running a Core 2 Quad with a 7870, so something Core 2-based would most interest me. And actually, I've got an old 3.8 GHz Pentium 4 with HT. I may have to get a copy of BF4 and throw the 7870 into it just for fun.
  • mikato - Wednesday, February 5, 2014 - link

    Yes! I have a Core 2 Duo in one of my systems and game in Call of Duty Ghosts. It's still above the system requirements and has a new GPU, but obviously is CPU limited. If I could get 30% improvement with this CPU limited scenario, it would basically prevent me from having to buy new CPU, motherboard, memory, and reinstall OS and all software for at least another year. As it is, I am doing all of that now but this situation exists! My other system is a Phenom II X4 965, which is still playing games great, and something like this with a newer GCN video card would likely keep that one going a long time.
  • blanarahul - Saturday, February 1, 2014 - link

    Honestly speaking, I am quite disappointed with the improvements from Mantle in GPU-limited scenarios. 7-10% better results could have been achieved with a well-developed driver too. Why waste resources on building a new API? AMD has more important issues to focus on, like frame pacing and better 4K support.

    OTOH I am extremely impressed with the CPU-bound results. It's a slap in the face for Intel, since gamers won't need to upgrade CPUs if Mantle takes off.

    I strongly believe that Mantle's sole target should have been to alleviate API Overhead and reduce the CPU bottleneck. GPU performance should have been improved via drivers after properly implementing frame pacing.
  • blanarahul - Saturday, February 1, 2014 - link

    Even though that would have meant that GPU Performance never improved. :P
  • blanarahul - Saturday, February 1, 2014 - link

    Correction: "to alleviate CPU Overhead"
  • xdesire - Saturday, February 1, 2014 - link

    Good job on the quick preview. I'd also like to see some benchies with GCN 1.0 GPUs and various CPUs like an i3, FX-6300 and A10 if possible. As much as this is exaggerated by AMD, it's awesome that GCN users can get additional performance for free, be it 7% or 40%. I hope they improve Mantle even further.
  • Dr_SnM - Saturday, February 1, 2014 - link

    You may have this planned already so I'm sorry if this is telling you something you already know.
    I think it may be quite useful to report the standard deviation (or variance, take your pick) of the frame rates. Averages and minimums may hide an overall smoothing effect which could actually count as a performance increase. In other words, Mantle, by removing CPU bottlenecks, may reduce the frequency of frame rate dips. The best way to quantify this is with a measure of spread such as the standard deviation.
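
    To make that concrete, a small sketch of the math (the frame times below are made up purely for illustration):

        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <numeric>
        #include <vector>

        // Given per-frame times in milliseconds, report the usual average/min FPS
        // plus the standard deviation of the frame times, which captures smoothness.
        void reportStats(const std::vector<double>& frameTimesMs)
        {
            const double totalMs = std::accumulate(frameTimesMs.begin(), frameTimesMs.end(), 0.0);
            const double meanMs  = totalMs / frameTimesMs.size();
            const double worstMs = *std::max_element(frameTimesMs.begin(), frameTimesMs.end());

            double var = 0.0;
            for (double t : frameTimesMs) var += (t - meanMs) * (t - meanMs);
            var /= frameTimesMs.size();

            std::printf("avg FPS: %.1f  min FPS: %.1f  frame-time stddev: %.2f ms\n",
                        1000.0 / meanMs, 1000.0 / worstMs, std::sqrt(var));
        }

        int main()
        {
            std::vector<double> frameTimesMs = {16.6, 16.9, 17.1, 33.4, 16.8, 16.5, 40.2, 16.7};
            reportStats(frameTimesMs);
        }

    Two runs with the same average can have very different standard deviations, which is exactly the smoothing effect the average hides.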

    Thanks guys
  • grndzro77 - Saturday, February 1, 2014 - link

    From what I've seen the framerate dips are greatly improved.
  • Dr_SnM - Saturday, February 1, 2014 - link

    Yeah, that would be my expectation. Everyone focuses on Average FPS but there is more to a game play experience than that number.
  • chizow - Saturday, February 1, 2014 - link

    Can't see this being worthwhile for devs to implement for a 10% gain on a minority % of hardware (AMD only has 40% of hardware, and only GCN cards are supported). Anyone who has an Intel quad core since the 1st-gen i7 will likely see minimal gains from Mantle, and that's before we see how Mantle fares against its Nvidia counterparts.

    I think the real advantage will be for those who use AMD CPUs, but the chances of someone using an AMD CPU and owning a high-end GCN graphics card are pretty slim based on known data sets. Steam for example has Intel at a commanding 74% share with only 24% AMD.
  • Manch - Saturday, February 1, 2014 - link

    But AMD has 100% of the consoles. I know, I know, Mantle is not what they use on the consoles, but it is similar. Makes porting for the developers a lot easier. Also, relieving that CPU overhead allows developers to now use that freed resource for other things. This is just a quick preview. Wait for the full article.
  • chizow - Saturday, February 1, 2014 - link

    Mantle is as similar to the consoles as they are to each other, and as we know, each platform still takes development time and resources. Can't see how it makes porting easier when they have to port one more build for the same platform, as DirectX is still not going away.

    Relieving CPU overhead is of questionable benefit given many of these games and hardware use cases are already GPU limited, meaning the CPU overhead was never really an issue to begin with. We see significant benefits (>10%) from Mantle only in fringe CPU limited scenarios that are relatively unlikely to begin with (multi-GPU at low resolutions, single GPU at low resolutions/settings).
  • testbug00 - Saturday, February 1, 2014 - link

    Considering Mantle closely resembles PS4 coding and also uses the same GPU as the Xbox One (the coding is probably a bit different due to the eSRAM and such), it will take off.

    The big thing is that it costs a very small part of development, about 2-3% from what I have heard, and brings large performance benefits to a noticeable part of the market.

    Also, what counterpart does Nvidia have again?
  • chizow - Saturday, February 1, 2014 - link

    Nvidia's counterpart would be their competing GPUs for each price/performance category, if Nvidia performs as well or better in BF4 and as well or better in other titles, again, is it really a worthwhile endeavor? Would anyone change their buying tendencies for a single game, or a few games per year if it meant crippling performance in the myriad titles that do not use Mantle?

    Mantle resembles PS4 coding as much as PS4 coding resembles XB1 coding, they are all different and as such, take time and resources to produce and validate. The difference of course, is that the PC already has an API that supports 100% of IHVs (AMD, Nvidia, Intel) vs. Mantle which supports just a small fraction (GCN only for a minority AMD share of hardware).

    As for 2-3% being a small part of development? I wouldn't consider that a small part, it just introduces more variables that can go wrong during the development process. We have already seen Mantle was delayed nearly 2 months in BF4 and took nearly 2 years to develop in the first place. For what? 10% gains in most use-case scenarios to a small fraction of the user-base?

    I guess we will see how it goes, but I can't see widespread adoption for these types of gains.
  • djentleman - Sunday, February 2, 2014 - link

    That would only apply if they were only programming for Mantle. They still have to program for DX, giving them no incentive except money to take advantage of it.

    And it really isn't similar to console programming; no developer has confirmed that. That's just internet yap from people who think of themselves as programmers in the know.
  • mikato - Wednesday, February 5, 2014 - link

    "giving them no incentive except for money"
    Oh darn, who wants that anyway
  • loguerto - Saturday, February 1, 2014 - link

    The real performance gains will be evident when game engines (like Nitrous, made by Oxide) have their roots deep in Mantle. They said that the AMD API has up to a 300% performance gain compared to DirectX in their Star Swarm pre-alpha release (downloadable on Steam).
  • djentleman - Sunday, February 2, 2014 - link

    That's because of draw calls. Don't expect anything but AMD benchmarks to have something like that.
  • Dribble - Monday, February 3, 2014 - link

    That's because they are implementing everything as a separate draw call instead of instancing it like everyone else does. If they instanced, they wouldn't have too many draw calls and it would all work fine like every other game out there. They are doing it for publicity obviously, but in the end they'll have to program it properly and the benefits will be similar to BF4's.
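
    Roughly the difference being described, with OpenGL used for illustration (setPerObjectUniforms and the counts are hypothetical):

        #include <GL/glew.h>

        void setPerObjectUniforms(int shipIndex);   // hypothetical: uploads this ship's transform

        // Naive: one draw call (and its driver-side validation) per ship.
        void drawFleetNaive(int shipCount, GLsizei indexCount)
        {
            for (int i = 0; i < shipCount; ++i) {
                setPerObjectUniforms(i);
                glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
            }
        }

        // Instanced: per-ship data sits in a buffer the shader indexes via
        // gl_InstanceID, so the whole fleet goes out in a single call.
        void drawFleetInstanced(int shipCount, GLsizei indexCount)
        {
            glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, shipCount);
        }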
  • chizow - Saturday, February 1, 2014 - link

    @ Ryan

    I'm sure it's on your list of things to do for the full review, but just want to make sure there's a comprehensive image comparison done. Not saying there's any foul play in terms of IQ but it's been an issue in the past from both Nvidia and ATI/AMD under the guise of "optimizations".
  • Ryan Smith - Sunday, February 2, 2014 - link

    Yes, we'll be doing some IQ comparisons. In fact we've already done some; so far we have not turned up any differences (nor do we expect to find any).
  • chizow - Sunday, February 2, 2014 - link

    Thanks Ryan, will be interested in seeing the results. There have already been numerous reports, however, of graphical artifacts, anomalies, and bugs while running the Mantle codepath. Mainly a "fog" that DICE has already said is a bug, but it looks an awful lot like the washed-out console "optimizations" that could be the result of reduced shader precision, fewer lighting passes, or lower detail settings. Will be interested to see what you find if/when these bugs are fixed.
  • chizow - Monday, February 3, 2014 - link

    Ryan, you're probably pretty far along in testing already, but are you doing any regression testing with the 13.12 drivers? There have been a lot of reports of reduced performance in virtually every other game; it might be worthwhile to check to make sure BF4 performance hasn't dropped from 13.12 to 14.1 Beta. Ideally I'd like to see at least these 4 tests per CPU and resolution:

    i7 4960 + 290X-dx11 (13.12 driver)
    i7 4960 + 290X-dx11 (14.1 driver)
    i7 4960 + 290X-mantle (14.1 driver)
    i7 4960 + GTX 780Ti (latest beta)

    FX 8350 + 290X-dx11 (13.12 driver)
    FX 8350 + 290X-dx11 (14.1 driver)
    FX 8350 + 290X-mantle (14.1 driver)
    FX 8350 + GTX 780Ti (latest beta)

    FX 8350 + 280X-dx11 (13.12 driver)
    FX 8350 + 280X-dx11 (14.1 driver)
    FX 8350 + 280X-mantle (14.1 driver)
    FX 8350 + GTX 770 (latest beta)

    So on, so forth, mixing in different CPUs with competing GPUs in the same price:performance category. If not this review, maybe in a future one.
  • capawesome9870 - Saturday, February 1, 2014 - link

    Will you be doing CrossFire R9 290Xs, or even triple?
  • Ryan Smith - Sunday, February 2, 2014 - link

    Yes on the CF 290X.
  • abianand - Sunday, February 2, 2014 - link

    Ryan, would you be including the various processors like FX, latest A10s, i3 and i5?
  • Mathos - Sunday, February 2, 2014 - link

    @ the rabid Nvidia fanboys, and even the AMD fanboys: get your heads out of your butts. Nvidia has already been doing this for years, which is why they didn't take up the offer for Mantle. Mantle and NVAPI are effectively the exact same thing.

    Nvidia controlled the Xbox, Xbox 360, and PS3 on the GPU side. Therefore they had NVAPI there to bring out extra performance on cross-platform games, as it was set up to automatically bypass certain parts of Direct3D and OpenGL for better performance. Battlefield 3 is a classic example of this; CryEngine games are another one. NVAPI was also part of making it easier to port console to PC on the graphics end.

    AMD now controls the console market. Therefore they're bringing out their own low-level driver API to do the same thing Nvidia did with NVAPI. This will continue for the next 7 or so years, the average console refresh cycle. There will be plenty of games that support Mantle, just like there have been plenty of Nvidia The Way It's Meant To Be Played games over the lives of the previous-gen consoles.

    On a side note, this does mean AMD CPU cores will be less of a bottleneck on their own GPUs when it comes to gaming with Mantle-enabled games. I'm not, for example, going to be upset over a free 20ish% performance increase on my own HD 7850 / Phenom II 1090T setup.
  • lmcd - Sunday, February 2, 2014 - link

    Dude umm could you revisit the Xbox 360 comment pl0x
  • funkforce - Sunday, February 2, 2014 - link

    We all love to see those benchmarks on high-end systems, but how many gamers nowadays sit at home with 8-core CPUs and 1-3 high-end GPUs? Usually 1-5% play on those monster machines. Of course, a lot of those are probably reading AnandTech. Anyway...

    Let's say the average player has an i5/i7 2500 - 4770 at around 3.4-4.2GHz and either an AMD Radeon 7870/270X or an Nvidia GTX 670/760 GPU.

    Let's say they play BF4. How many play on Ultra settings when they play multiplayer? Only the people, I would say, who are oblivious to their sluggish performance at ~20-30 fps in a worst-case scenario on a large multiplayer map.
    Any serious and knowledgeable gamer today tries to have a minimum of 60fps in a worst-case scenario, and more and more gamers have a 120Hz/144Hz monitor and aim for 100+fps.

    So playing on low-high is the only viable option if you want to have good performance on all maps at all times.

    And even if all readers here have better systems than that and see only 8% gains because you all play on Ultra... what will happen when you buy a 4K monitor this year or next, with 4x the pixels for the GPU to push out? Will you still play on Ultra?

    So it all adds up. I think 8-30% is huge, because this will help 90% of the gamers out there, the rest probably already have 100+ fps at ultra and don't need the boost.

    HardOCP did a test on BF4 with a Radeon R9 270X and got an average of 44.7 fps at 1920x1080 - 4xMSAA - highest in-game settings.
    But this is on an Intel 3770K overclocked to 4.6GHz! And min FPS is 26.

    Just like a console that has average hardware compared to high end pc's, Mantle will benefit the largest portion of all gamers and in the future you can probably buy a cheaper CPU and perhaps put more money on the GPU and get even more bang for the buck.
  • Roland00Address - Sunday, February 2, 2014 - link

    Do you guys still have the MSI GX60 with the AMD A10-5750M and the AMD Radeon HD 7970M? (The 7970M is a Pitcairn part similar to a downclocked desktop 7870.) If so, this would be a very good choice to see if Mantle actually improves the performance of that mismatched laptop and allows it to compete more closely with Intel chips paired with the 7970M.
  • LeftSide - Sunday, February 2, 2014 - link

    Is there any way we could get the lowest frame rate numbers, or a graph? Average frame rate is such a mixed bag for performance reviews.
  • tuklap - Sunday, February 2, 2014 - link

    Still... Good job AMD...
  • wizyy - Monday, February 3, 2014 - link

    According to this article:
    http://hothardware.com/News/AMD-Mantle-vs-DirectX-...
    in conjunction with Kaveri systems, the Mantle boost is larger, more than 20%.
  • lilmoe - Monday, February 3, 2014 - link

    Meh.
  • dwade123 - Monday, February 3, 2014 - link

    Yeah... A minor gain at the cost of developers working twice as hard just to bring in their console ports to the PC. Not gonna happen. Dead before arrival.
  • formulav8 - Monday, February 3, 2014 - link

    Why use a high-end CPU? Mantle is to help increase performance for lower-end CPUs, IIRC. If you already have a CPU that's that fast, Mantle wouldn't do much for you. Put in some quad Kaveri-type CPU or an Intel i3 or something and test it and see. Mantle was to benefit lower/mid-end systems, IIRC, and not top-end systems, which will already have high-end fps performance and eye candy and are bottlenecked by the GPU.

    My understanding of Mantle may not be correct since I have an Nvidia video card (and thus I would get no benefit), so I didn't do a lot of research personally on Mantle. But I would still like to see it reviewed with a lower/mid-end CPU that is actually in more people's computers than an i7 like that.
  • Iceburg Lettuce - Monday, February 3, 2014 - link

    Serious benefits for a big CPU bottleneck.
    Just got over a 100% increase in framerate in Star Swarm.
    AMD Phenom(tm) II X4 B45 Processor (Athlon II X3 445 with core unlocked)
    with 2GB XFX 7870 LE (Tahiti)

    Based on scenario "follow" and "extreme" settings:
    D3D
    == Results ================================================
    Test Duration: 360 Seconds
    Total Frames: 5693

    Average FPS: 15.81
    Average Unit Count: 3780
    Maximum Unit Count: 5472
    Average Batches/MS: 317.16
    Maximum Batches/MS: 661.27
    Average Batch Count: 22797
    Maximum Batch Count: 85189
    ===========================================================

    Mantle
    == Results ================================================
    Test Duration: 360 Seconds
    Total Frames: 12121

    Average FPS: 33.67
    Average Unit Count: 4213
    Maximum Unit Count: 5409
    Average Batches/MS: 562.09
    Maximum Batches/MS: 1319.19
    Average Batch Count: 19947
    Maximum Batch Count: 93633
    ===========================================================

    Only did a single run, so take this with a pinch of salt for now; I'll try again tomorrow.
  • Miroslav - Tuesday, February 4, 2014 - link

    Excuse me Ryan, but is it not a mistake to think that the Core i7 in a 2C/4T configuration is equivalent to a Core i3 processor, because the Core i7 in a 2C/4T configuration still has 15MB of L3 cache, while a Core i3 has only 3MB of L3 cache?
    (Sorry for my bad English, it is not my native language)
  • Vayra - Thursday, February 6, 2014 - link

    For consoles this is nothing new, as there is little to no abstraction layer needed. A console API can be custom-built from the outset because it only supports a single hardware config.
  • Wolfpup - Thursday, February 6, 2014 - link

    What's causing the DirectX bottlenecks? Shouldn't this effort be put into getting rid of THOSE? Or are they inevitable because Mantle is a lower-level API than Direct3D?

    I'm very uncomfortable with Mantle, particularly since AMD STILL needs to get their drivers in order. They're still years (decades?) behind Nvidia, and...? And instead of doing that, we're doing this presumably marketing-driven Mantle stuff?

    Even if some real games support it for a while, what happens when AMD releases a new architecture that doesn't fit with Mantle? Do they continue supporting it and at that point it's just a high level API? Do they continue supporting it but alter it, so games have to effectively support another version of it?

    I'm feeling like this is a bad idea all around. If AMD had 5 years of rock solid driver releases under their belt that would be one thing, but micro-stuttering, Enduro doesn't work, notebook support generally is terrible, they don't support hardware very long, etc. while Nvidia's managing to actually do Optimus well, SLI well, etc., let alone the basics... Get the basics right before you do this...
  • piiman - Saturday, February 8, 2014 - link

    "I'm very uncomfortable with Mantle, particularly since AMD STILL needs to get their drivers in order. They're still years (decades?) beyond Nvidia"

    Really? decades? Just how do you figure that?
