By now enough time has passed that I can come back here and hopefully answer/not answer a few questions :)
In the 8+ years I've been running this place, I don't think I've ever pulled an article before. I can't be too specific here, but some folks needed to be kept anonymous and I had to make a decision for the greater good in the long run. I apologize for remaining quiet about it for so long, but it was necessary.
With that out of the way - there's a lot to talk about.
I finally managed to pry a pair of 7800 GTXs away from Derek's hands and I've been working to answer the question of how fast a CPU you need to feed these things. There are a number of variables that have to be taken into account, the most important being the resolution you're running at. The thing that's truly new about a card as powerful as the G70 is that you really start being limited by what resolutions your monitor supports. While owners of large analog CRTs have a lot of flexibility in what resolutions they can run at, LCD owners don't; so if you've got a G70 hooked up to a 1600 x 1200 panel you'll have to make different CPU decisions than if you have a 1920 x 1200 panel. I'm trying to simplify the decision making as much as possible, and for this round I'm only focusing on single card solutions, but if there's demand later I can tackle SLI requirements.
I finally hooked up the G70 to the 30" Cinema Display and gave Doom 3 a whirl at 2560 x 1600. What I find most interesting is that once you start getting far above 1600 x 1200 it's no longer about making the game look good, it's about making the game look good on your monitor. For example, there's not too much difference playing Doom 3 at 1920 x 1200 vs. 2560 x 1600, it's just that the former looks great on a 24" monitor while the latter looks great on a 30" monitor. The quest for perfect image quality stops being about resolution and starts being about screen size; almost in a way similar to how consoles used to be, where your only hope for a "better" picture was to go to a larger screen, since you couldn't control resolution.
The pendulum will swing away from ultra high resolutions as games become more and more demanding. There are still some titles that even the G70 can't handle at above 1280 x 1024.
Monday's Athlon 64 Memory Divider article has got me thinking a lot about multitasking and its impact on higher speed memory. Theoretically there should be some pretty big differences between DDR400 and DDR500 once we get into the heftier multitasking scenarios, but I want to get an idea of exactly how widespread that need is. My initial tests only revealed one scenario where there was a tangible performance boost, but I think they warrant some additional testing. After I'm done with this memory divider stuff I'll head on to that.
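For reference, the divider math in question can be sketched in a few lines. This is an illustrative model of the Athlon 64's on-die memory controller (the exact divider tables vary by chip), not something pulled from the article itself:

```python
import math

# Sketch of the Athlon 64 memory divider: the on-die controller can only
# divide the CPU clock by a whole number, rounded up so the memory never
# runs faster than its rated speed. Treat the model as illustrative.
def a64_mem_clock(cpu_mhz, target_mhz):
    divider = math.ceil(cpu_mhz / target_mhz)
    return cpu_mhz / divider

# A 2200 MHz chip hits a DDR400 target of 200 MHz exactly (divider 11),
# but a DDR333 target of ~166.7 MHz lands at ~157 MHz (divider 14).
```

That rounding is why "DDR333" on one CPU isn't the same actual clock as on another, which is worth keeping in mind when comparing DDR400 vs. DDR500 numbers.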
Many of you have asked for a Battlefield 2 CPU scaling article and I'm more than happy to oblige, so I've started working on the planning for such an article. Right now I'm stuck trying to figure out how best to make it a manageable benchmarking task, as I'd like to be able to provide accurate CPU/GPU recommendations for each performance class. I think I'll inevitably have to limit what GPUs I cover, but I'll do my best to include the ones you guys want the most.
I've been stuck on an H.264 kick for a while now, so I figured that doing a CPU comparison involving H.264 would be something interesting to do. My only question: other than QuickTime 7 and Nero, what are you folks using to encode H.264 on the PC?
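For anyone wanting to run the same comparison at home, a minimal timing harness works with any command-line encoder. The encode command itself is a placeholder here; substitute whatever encoder and flags you actually use:

```python
import subprocess
import time

# Time the same encode command on each CPU under test; wall-clock time of a
# fixed workload is the fairest single number for a CPU comparison.
# The command passed in is a placeholder, not a specific encoder invocation.
def time_encode(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True)   # raises if the encoder exits non-zero
    return time.perf_counter() - start
```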
Remember Gigabyte's i-RAM from Computex? Well, one should be in my hands very soon and given the interest in it, it's going to receive top priority as soon as I've got it. Which begs the question, are there any particular tests you all would like to see? I'll admit, I am a bit surprised by the positive response the i-RAM received; I expected people to be interested in it, just not this interested in it.
55 Comments
Dave Cason - Friday, July 22, 2005 - link
i-RAM suggestions: I would like to see some RAID tests if you receive three or more cards. If you could run RAID 0 and RAID 5 performance benchmarks with various SATA RAID cards, including SATA 150 and SATA II 300, on cards that are PCI, PCI-X, and PCIe socketed (I'd especially like to see a controller that claims dedicated bandwidth to each drive). The point of these tests would be the following:
1. Test the effects of ultra-high-speed SATA drives on CPU utilization.
2. Test the actual vs. theoretical throughput of various RAID cards and buses. I'd like to see how much performance you can squeeze out of an nForce4 Pro board with both a 2200 and 2050 chip that has a lot of bandwidth to offer (such as the Thunder K8WE-S2895 or the SuperMicro board that Monarch is selling under their own name). Also, if you can duplex two or more RAID cards running through each chip (2200 & 2050), it might expose something different than we've seen anywhere else on the Web.
3. Test to see if the card's presence can affect some of the numbers in the multiprocessor and dual-core benchmarks you've already run, especially those that use a database or include a compression utility or video editing.
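The theoretical side of point 2 can be sketched up front. The bus and link numbers below are the usual textbook ceilings (assumed, not measured), and the gap between these and real results is exactly what such tests would probe:

```python
# Rough upper bounds, in MB/s, for the buses and drive links named above.
BUS_CEILING = {"PCI 32/33": 133, "PCI-X 64/133": 1066, "PCIe x4": 1000}
LINK_CEILING = {"SATA 1.5Gb/s": 150, "SATA 3Gb/s": 300}

def raid0_ceiling(n_drives, drive_mbps, link, bus):
    """Best-case striped throughput: each drive is capped by its link,
    and the whole array is capped by the host bus."""
    per_drive = min(drive_mbps, LINK_CEILING[link])
    return min(n_drives * per_drive, BUS_CEILING[bus])

# Three i-RAM-class drives striped on a plain PCI controller are bus-bound
# at 133 MB/s; the same array on PCI-X could, in theory, reach 450 MB/s.
```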
nevermind4711 - Wednesday, July 20, 2005 - link
Freedom of speech is relative. In Russia they have the secret police, which makes your life miserable if you say something they don't like.
In the US we have Microsoft.
Too bad, it was an interesting article. I am looking forward to reading the follow-up article. I hope it will be as crisp and specific as the original one, without giving out your source to the secret pol.. Microsoft, I mean.
TheChefO - Tuesday, July 19, 2005 - link
Developing code that is multithreaded is not the issue. Making sure that this code is running in tandem to meet results at roughly the same time is difficult. Remember, this isn't 3ds Max. Everything must be running smoothly and in sync (preferably 60 completed frames, with audio, physics, AI, control input, and draw all done in 1/60th of a second).

Anand -
You haven't responded to my question posted above. Is this not a relevant idea/question? Or is it not the appropriate place to post?
Thanks Anand!
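The lock-step problem described in that comment fits in a few lines. This is a toy sketch with made-up per-subsystem work times; `threading.Barrier` stands in for whatever sync primitive a real engine would use:

```python
import threading
import time

# Three subsystems must all finish frame N before any of them starts
# frame N+1 -- the whole frame is paced by the slowest one.
def run_frames(frames=5):
    jobs = {"audio": 0.001, "physics": 0.003, "render": 0.002}
    barrier = threading.Barrier(len(jobs))
    done = {name: 0 for name in jobs}

    def subsystem(name, work_s):
        for _ in range(frames):
            time.sleep(work_s)   # stand-in for this subsystem's frame work
            barrier.wait()       # nobody advances until everyone is done
            done[name] += 1

    threads = [threading.Thread(target=subsystem, args=item)
               for item in jobs.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Balancing the threads so no one subsystem drags the frame past 1/60th of a second is exactly the hard part the commenter is pointing at.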
A - Monday, July 18, 2005 - link
Censorship sucks. It really would have been nice to know for sure what was going on earlier. I think you owe it to your readers, Anand.

Joe - Monday, July 18, 2005 - link
I have to agree. There's a huge push in the market for multithreading (read: Hyper-Threading and dual core processors, not to mention the long-existing supercomputers). To deal with this, most institutions have been pushing the learning of multithreaded coding. Even if the current developers can't wrap their heads around it, the coming generation will. It may seem like the coming games will only be single threaded for a good deal of time, but I think that, with the pushing being done by Microsoft and Sony, multithreaded games will be coming much sooner than people think.
Particularly, I know for a fact that one of the early games for Xbox360 (Dead or Alive) will certainly be multithreaded. The lead developer has stated several times that the entire team has been playing with the multiple cores to get the best possible performance. I get the feeling that the multicore revolution will be arriving sooner than many people think.
Dmitheon - Monday, July 18, 2005 - link
Somehow I doubt it was the Sony & MS suits that were calling Anand, but rather some of his anonymous sources. If they felt that comments in the article could only be attributed to them or their organization, they may have asked for a retraction fearing retribution. Anand, I did get a chance to read the article, and I think it's unfortunate that it was pulled, because while I found it interesting, I didn't take it as hard fact. The majority of it was speculation based on current trends. To me it was a stirring of the pot and I was hoping it would get a discussion going. I'd love to see another analysis in 6 and 12 months to see if the development tools that MS & Sony provide allow programmers to take advantage of the various cores easily. Also, I don't understand why game developers are so intimidated by multi-threaded programming. I've been working as a software engineer for over 9 years and have written multi-threaded code nearly the entire time. Sure, it's not as simple as the single threaded event driven model, but it's hardly the enigma some of your sources made it seem.

Ajay Desai - Sunday, July 17, 2005 - link
Today companies are very touchy about people publicly dogging on their products, and even more touchy about their own people giving away information. It sucks that it came to the point where an editor has to pull an article like that. But as far as editors go, Anand has outshone almost every other review site. So I respect his decision, and that includes whatever information he does or does not give out.
Thanks Anand for sticking with it over all these years!
reactor - Saturday, July 16, 2005 - link
For BF2: the one thing I would like to see is how the different map sizes (16, 32, 64) affect performance on both CPU and GPU. As for what GPUs, on the ATI side: 9800 Pro, X800 XL and X850 XT PE; on the NVIDIA side: 6600GT, 6800GT/Ultra and 7800GTX. Try to use what most people would have (by most people I mean the gamers/enthusiasts that read this site).

The Gigabyte i-RAM sounds interesting. As others have said, do we know how long data will stay stored on it? Would using ECC be better? Other than that, tests like boot time, level load times for games, stuff on start up, and the amount of disk swapping or pagefiling on apps like PS or 3DSMax (or in general Windows use). Also, is it easy to use? Can you move files around like a hard drive? And info on transfer speeds and other hard drive tests.

And as to what people have mentioned, the 'dev corner' sounds like an interesting idea if it were to be put into place. I would be more interested in how they find working with the hardware, what they like/don't like, what obstacles they have run into with the technology, and what they would like to see. Nothing really to do with the games specifically, just how they use the tech and what they think about it.

There ya go, my 2 cents worth.
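For the transfer-speed request above, a crude sequential-write check is easy to sketch. The path is a placeholder for wherever the i-RAM volume ends up mounted; a proper review would of course use real disk benchmarks on top of this:

```python
import os
import time

def seq_write_mbps(path, mb=64):
    """Write `mb` megabytes and report MB/s. The fsync forces the data out
    to the device so the OS write cache doesn't flatter the number."""
    chunk = b"\0" * (1 << 20)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return mb / elapsed
```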
Pete - Friday, July 15, 2005 - link
I was not saying he is trying to cover his own rear end by pulling the article, but that doing so will not help the anonymous source, since the article is already public. Once the information is in the public domain, there is no benefit to the source in having it pulled. IMHO the only possible reason for such behavior is heavy-handed pressure from the console vendors themselves.
I stand by my original comments and wish Anand was not being so evasive about it. Certainly filling everyone in on the actual reasons in detail would not harm the source any further.
NuroMancer - Friday, July 15, 2005 - link
#44 - First: The correct place to post this is the hardware forum or technical support, not a blog :D
Second: Read the manual for how the DIMMs must be put into the board. It's possible that you are not loading them into the slots correctly, and then you crash. Anyway, post over in the forums.