152 Comments

  • noxipoo - Sunday, September 4, 2011 - link

    I'm looking for alternatives to Drobo or the more expensive NAS devices, so some RAID card recommendations, along with all the things that one needs, would have been nice.
  • bamsegrill - Sunday, September 4, 2011 - link

    Yeah, some RAID card recommendations would be nice.
  • Rick83 - Sunday, September 4, 2011 - link

    MY RAID card recommendation is a mainboard with as many SATA ports as possible, and screw the RAID card.

    For anything but high end database servers, it's a waste of money.
    With desktop boards offering 10 to 12 SATA ports these days, you're unlikely to need additional hardware if you choose the right board.

    Otherwise, it's probably wisest to go with whatever chipset is best supported by your OS.
  • PCTC2 - Sunday, September 4, 2011 - link

    But there's the fact that software RAID (which is what you're getting on your main board) is utterly inferior to those with dedicated RAID cards. And software RAID is extremely fickle when it comes to 5400 RPM desktop drives. Drives will drop out and force you to rebuild... over 90 hours for 4 1.5TB drives. (I'm talking about Intel Storage Matrix on Windows / mdadm on Linux.)

    You could run FreeNAS/FreeBSD and use RAID-Z2. I've been running three systems for around 5 months now: one running Intel Storage Matrix on Windows, one running RAID-Z2 on FreeBSD, and one running on a CentOS box with an LSI2008-based controller. I have to say the hardware RAID has been the most reliable, with the RAID-Z2 a close second. As for the Intel softRAID, I've had to rebuild it twice in the 5 months (and don't say it's the drives, because these drives used to be in the RAID-Z2 and they were fine; I guess Intel is a little tighter when it comes to drop-out time-outs).

    A good RAID card with an older LSI1068E for RAID 5 is super cheap now. If you want a newer controller, you can still get one with an LSI2008 pretty cheap as well. If you want anything other than a giant RAID 0 stripe (such as RAID 5/6/10), then you should definitely go for a dedicated card or run BSD.
  • Rick83 - Sunday, September 4, 2011 - link

    I've been using 5400 rpm disks and mdadm on Linux for quite a while (6 years now?) and never had a problem, while having more than sufficient performance.
    If disks drop, that's the kernel saying that the device is gone, and it could be just a bad controller.
    I've been on three boards and never had that kind of issue.
    Windows is a bit more annoying.
    Also, your rebuild time is over the top.
    I've resynced much faster (2 hours for 400GB, so roughly 10x faster than what you achieved, while also resyncing another array; sounds like you may have a serious issue somewhere).
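
    (Tangentially: if an md resync is crawling, the kernel's resync speed caps are worth a look. A minimal sketch, assuming a typical Linux mdadm setup; the 50000 KB/s value is just an illustration:)

    # watch rebuild/resync progress
    cat /proc/mdstat
    # raise the per-device resync speed floor (value in KB/s)
    echo 50000 > /proc/sys/dev/raid/speed_limit_min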

    The compatibility advantage of software RAID outweighs any performance gain, unless you really need those extra 10 percent or run extreme arrays (RAID-6 over 8 disks is where I might consider going with a dedicated card).

    I think it might be the Intel/Windows combination that is hurting you - you might want to try the native Windows RAID over the Intel Matrix stuff. Using the latter is the worst of both worlds: you get the vendor lock-in of a dedicated card and the slightly lower performance of a software solution.

    Of course, you also mentioned mdadm, but I've never had a problem with that, with a bunch of different disks and controllers and boards.
    I guess in two to three years, when I upgrade my machine again, I will have to look at a SATA controller card, or maybe sooner, should one of my IDE disks fail without me being able to replace it.

    I think you may just have been unlucky with your issues, and I can't agree with your assessment :-/
  • Flunk - Sunday, September 4, 2011 - link

    I agree. I've used the Windows soft RAID feature a lot, and it trumps even hardware RAID for ease of use: if your RAID controller dies, you can just put your drives in any Windows system and get your data off. You don't need to find another identical controller. Performance is similar to Matrix RAID - good enough for a file server.
  • vol7ron - Monday, September 5, 2011 - link

    Wouldn't a RAID card be limited to the PCI bus anyhow? I would suspect you'd want the full speed that the SATA ports are capable of.
  • vol7ron - Monday, September 5, 2011 - link

    Even with 5400RPM drives, if you have a lot of data you're copying/transferring, you could probably saturate the full bandwidth, right?
  • Rick83 - Monday, September 5, 2011 - link

    PCI is wide enough to support gigabit ethernet, so if you don't have too many devices on the bus, you'll be fine until you have to build a RAID array.
    With PCI-X and PCIe these limitations are no longer relevant.
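
    (For reference, the arithmetic behind that: classic 32-bit/33 MHz PCI tops out at 33 x 10^6 x 4 bytes = ~133 MB/s shared across the whole bus, while gigabit ethernet at full tilt needs 125 MB/s.)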
  • Jaybus - Monday, September 5, 2011 - link

    There are plenty of PCI-E x4 and x8 RAID cards.
  • mino - Tuesday, September 6, 2011 - link

    For plenty of money :)

    Basically, a SINGLE decent RAID card costs ~$200+, which buys you the rest of the system.

    And you need at least 2 of them for redundancy.

    Also, with a DEDICATED file server and open-source ZFS, who needs HW RAID? ...
  • alpha754293 - Tuesday, September 6, 2011 - link

    In most cases, the speed of the drives/controller/interface is almost immaterial because you're going to be streaming it over a 1 Gbps network at most.

    And if you actually HAVE 10GbE or IB or Myrinet or any of the others, I'm pretty sure that if you can afford the $5000 switch, you'd "splurge" on the $1000 "proper" HW RAID card.

    Amusing how all these people are like "speed speed speed!!!!", forgetting that the network will likely be the bottleneck. (And WiFi is even worse; 0.45 Gbps is the best you can do with 802.11n.)
  • DigitalFreak - Sunday, September 4, 2011 - link

    I've been using Dell PERC 5/i cards for years. You can find them relatively cheap on eBay, and they usually include the battery backup. I believe they're limited to 2TB drives though.
  • JohanAnandtech - Monday, September 5, 2011 - link

    "But there's the fact that software RAID (which is what you're getting on your main board) is utterly inferior to those with dedicated RAID cards"

    hmm. I am not sure those entry-level firmware thingies that have an R in front of them are so superior. They offload most processing tasks to the CPU anyway, and they tend to create problems if they break and you replace them with a new one with newer firmware. I would be interested to know why you feel that hardware RAID (except the high-end stuff) is superior.
  • Brutalizer - Monday, September 5, 2011 - link

    When you are saying that software raid is inferior to hardware raid, I hope you are aware that hw-raid is not safe against data corruption?

    You have heard about ECC RAM? Spontaneous bit flips can occur in RAM, which is corrected by ECC memory sticks.

    Guess what, the same spontaneous bit flips occur in disks too. And hw-raid neither detects nor corrects such bit flips. In other words, hw-raid has no ECC correction functionality. Data might be corrupted by hw-raid!

    Nor do NTFS, ext3, XFS, ReiserFS, etc. correct bit flips. Read here for more information; there are also research papers on data corruption vs hw-raid, NTFS, JFS, etc:
    http://en.wikipedia.org/wiki/ZFS#Data_Integrity

    In my opinion, the only reason to use ZFS is that it detects and corrects such bit flips. No other solution does. Read the link for more information.
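
    (For anyone who wants to exercise that detection by hand, ZFS exposes it directly; a minimal sketch, where "tank" is a placeholder pool name:)

    # read all data in the pool, verify checksums, repair from redundancy where possible
    zpool scrub tank
    # report any checksum errors found, per device
    zpool status -v tank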
  • sor - Monday, September 5, 2011 - link

    Many RAID solutions scrub disks, comparing the data on one disk to the other disks in the array. This is not quite as robust as the filesystem being able to checksum, but as your own link points out, the chance of a hard drive flipping a bit is something on the order of 1 in 1.6PB, so combined with a RAID that regularly scrubs the data, I don't see home users needing to even think about this.
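
    (On Linux md, such a scrub can be kicked off by hand; a minimal sketch, assuming an array at /dev/md0:)

    # compare data and parity across all members
    echo check > /sys/block/md0/md/sync_action
    # count of inconsistent sectors found, once the check completes
    cat /sys/block/md0/md/mismatch_cnt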
  • Brutalizer - Monday, September 5, 2011 - link

    You are neglecting something important here.

    Say that you repair a raid-5 array, that you are using 2TB disks, and that you have an error rate of 1 in 10^16 (just as stated in the article). If you repair one disk, then you need to read 2,000,000,000,000 bytes, and every time you read a bit, an error can occur.

    The chance of at LEAST ONE ERROR can be calculated by this well-known formula:
    1 - (1-P)^n
    where P is the probability of an error occurring and "n" is the number of times the error can occur.

    If you insert those numbers, it turns out that during the repair there is something like a 25% chance of hitting at least one read error. You might hit two errors, or three errors, etc. Thus, there is a 25% chance of getting read errors.

    If you repair a raid and then run into read errors, you have lost all your data if you are using raid-5.
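
    (For reference, a worked instance of that formula - the outcome is very sensitive to the assumed error rate: a 2TB disk is 1.6 * 10^13 bits, so reading three surviving 2TB disks during a rebuild is n = 4.8 * 10^13 bit reads. At P = 10^-14 per bit, a common consumer-drive spec, 1 - (1-P)^n = 1 - e^(-0.48), roughly 38%; at P = 10^-16 per bit it drops to about 0.5%.)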

    Thus, this silent corruption is a big problem. Say some bits in a video file are flipped - that is no problem; a white pixel might be red instead. But say your rar file has been affected - then you can not open it anymore. Or a database is affected. This is a huge problem for sysadmins:
    http://jforonda.blogspot.com/2006/06/silent-data-c...
  • Brutalizer - Monday, September 5, 2011 - link

    PS. There is a 1 in 10^16 chance that the disk will not be able to recover a bit. But there are more factors involved, such as current spikes (no raid can protect against this):
    http://blogs.oracle.com/elowe/entry/zfs_saves_the_...

    bugs in firmware, loose cables, etc. Thus, the chance is much higher than 1 in 10^16.

    Also, raid does not scrub disks thoroughly; it only computes parity, and that is not checksumming the data. See here about raid problems:
    http://en.wikipedia.org/wiki/RAID#Problems_with_RA...
  • alpha754293 - Tuesday, September 6, 2011 - link

    @Brutalizer
    Bit flips

    I think that CERN was testing that and found it was something like 1 bit in 10^14 bits (read/write). That works out (according to the CERN presentation) to be 1 BIT in 11.6 TiB.

    If a) you're concerned about silent data corruption on that scale, and b) that you're running ZFS - make sure you have tape backups. Since there ARE no offline data recovery tools available. Not even at Sun/Oracle. (I asked.)
  • sor - Monday, September 5, 2011 - link

    Inferior how? I've been doing storage evaluation for years, and I can say that software raid generally performs better, uses negligible CPU, and is easier to recover from failure (no proprietary hardware). The only reason I'd want a hardware RAID is for ease of use and the battery-backed writeback.
  • HMTK - Monday, September 5, 2011 - link

    Inferior as in PITA for rebuilds and stuff like that. On my little Proliant Microserver I use the onboard RAID because I'm too cheap to buy something decent and it's only my backup machine (and domain controller, DHCP, DNS server) but for lots of really important data I'd look for a true RAID card with an XOR processor and some kind of battery protection: on the card or a UPS.
  • fackamato - Tuesday, September 6, 2011 - link

    I've used Linux MD software RAID for 2 years now, running 7x 2TB 5400 rpm "green" drives, and never had an issue (except one Samsung drive which died after 6 months).

    This is on an Atom system. It took roughly 24h to rebuild onto the new drive (CPU limited, of course), while the server was happily playing videos in XBMC.
  • Sivar - Tuesday, September 6, 2011 - link

    This is not true in my experience.
    Hardware RAID cards are far, far more trouble than software RAID when using non-enterprise drives.

    The reason:
    Nearly all hard drives have read errors, sometimes frequently.
    This usually isn't a big deal: The hard drive will just re-read the same area of the drive over and over until it gets the data it needs, then probably mark the trouble spot as bad, remapping it to spare area.

    The problem is that consumer hard drives are happy to spend a LONG time rereading the trouble spot - far longer than most hardware RAID cards will wait before deciding the drive is not responding and dropping it, even though it's a perfectly good drive.

    For "enterprise" SATA drives, often the *only* difference, besides price, is that enterprise drives have a firmware flag set to limit their error recovery time, preventing them from dropping unless they have a real problem. Look up "TLER" for more information.

    Hardware RAID cards generally assume they are using enterprise drives. With RAID software it varies, but in Linux and Windows Server 2008R2 at least, I've never had a good drive drop. This isn't to say it can't happen, of course.

    ------------------------------

    For what it's worth, I recommend Samsung drives for home file servers. The 2TB Samsung F4 has been excellent. Sadly, Samsung is selling its HDD business.

    I expressly do not recommend the Western Digital GP (Green) series, unless you can find older models from before TLER was expressly disabled in the firmware (even as an option).
  • Havor - Sunday, September 4, 2011 - link

    HighPoint RocketRAID 2680 SGL PCI-Express x4 SATA / SAS (Serial Attached SCSI) Controller Card

    In stock.
    Now: $99.00

    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Screw software RAID; and there are many cards with more options, like online array expansion.
  • Ratman6161 - Tuesday, September 6, 2011 - link

    For home use, a lot/most people are probably not going to build a file server out of all new components. We are mostly recycling old stuff. My file server is typically whatever my old desktop system was. So when I built my new i7-2600K system, my old Core 2 Quad 6600 desktop became my new server. But... the old P35 motherboard in this system doesn't have RAID and has only 4 SATA ports. It does have an old IDE port, so it got my old IDE CD-ROM, plus three hard drives that were whatever I had lying around. Had I wanted RAID, though, I would probably get a card.

    Also, as to the OS: a lot of people using these as a home file server are not going to need ANY "server" OS. If you just need to share files between a couple of people, any OS you might run on that machine is going to give you the ability to do that. Another consideration is that a lot of services and utilities have special "server" versions that will cost you more. Example: I use Mozy for cloud backup, but if I tried to do that on a Windows Server, it would detect that it was a server and want me to upgrade to the Mozy Pro product, which costs more. So by running the "server" on an old copy of Windows XP, I get around that issue. Unless you really need the functionality for something, I'd steer clear of an actual "server" OS.
  • alpha754293 - Tuesday, September 6, 2011 - link

    @Rick83

    "MY RAID card recommendation is a mainboard with as many SATA ports as possible, and screw the RAID card."

    I think that's somewhat of a gross overstatement. And here's why:

    It depends on what you're going to be building your file server for, how much data you anticipate putting on it, and how important that data is. Like, would it be a big deal if you lost all of it? Some of it? A week's worth? A day's worth? (i.e. how fault tolerant ARE you?)

    For most home users, that's likely going to be like pictures, music, and videos. With 3 TB drives at about $120 a pop (upwards of $170 a pop), do you really NEED a dedicated file server? You can probably just set up an older, low-powered machine with a Windows share and that's about it.

    @Rick83/PCTC2

    I think that when you're talking about rebuild rates, it depends on what RAID level you were running. Right now, I've got a 27 TB RAID5 server (30 TB raw, 10 * 3TB, 7200 rpm Hitachi SATA-3 on Areca ARC-1230 12-port SATA-II PCIe x8 RAID HBA); and it was going to take 24 hours using 80% background initialization or 10 hours with foreground initialization. So I would imagine that if I had to rebuild the entire 27 TB array; it's going to take a while.

    re: SW vs. HW RAID
    I've had experience with both. First was onboard SAS RAID (LSI1068E), then ZFS on 16*500 GB Hitachi 7200 rpm SATA on an Adaptec 21610 (16-port SATA RAID HBA), and now my new system. Each has its merits.

    SW RAID - pros:
    It's cheap. It's usually relatively easy to set up. They work reasonably well (most people probably won't be able to practically tell the difference in performance). It's cheap.

    SW RAID - cons:
    As I've experienced, twice: if you don't have backups, you can be royally screwed. Unless you've actually TRIED transplanting a SW RAID array, it SOUNDS easy, but it almost never is. A lot of the time there is a LOT happening in the background that's transparent to the end user, so if you try to transplant it, it doesn't always work. And if you've ever tried transplanting a Windows install (even without RAID), you'll know that.

    There's like the target, the LUN, and a bunch of other things that tell the system about the SW RAID array.

    It's the same with ZFS. In fact, ZFS is maybe a little bit worse, because I think there was something like a 56-character tag that each hard drive gets as a unique ID. If you pulled a drive out from one of the slots and swapped it with another, haha... watch ZFS FREAK out. Kernel panics were so "rampant" that there was a page telling you how to clear the ZFS pool cache to stop the endless kernel panic (white screen of death) loop. And then once you're back up and running, you had to remount the ZFS pool, scrub it to make sure there were no errors, and then you're back up.

    Even Sun's own premium support says that in the event of a catastrophic failure with SW RAID, restore your data from back-ups. And if that server WAS your backup server -- well...you're SOL'd. (Had that happen to me TWICE because I didn't export and down the drives before pulling them out.)

    So that's that. (Try transplanting a Windows SW RAID....haha...I dare you.) And if you transplanted a single Windows install enough times, eventually you'll fully corrupt the OS. It REALLLY hates it when you do that.

    HW RAID - pros:
    Usually it's a lot more resilient. A lot of them have memory caches, and some even have backup battery modules that store the write-intent operations in the event of a power failure, so that at the next power-up the card will complete the replay* (*where/when supported). It's to prevent data corruption in the event that, say, you are in the middle of copying something onto the server when the power dies. It's more important with automated write operations, but since most people kind of slowly pick and choose what they put on the server anyway, that's usually not too bad. You might remember where it left off and pick it up from there manually.

    It's usually REALLY REALLY fast because it doesn't have OS overhead.

    ZFS was a bit of an exception, because it waits until a buffer of operations is full before it actually commits them to disk. So you can get a bunch of 175 MB/s bursts (onto a single 2.5" Fujitsu 73 GB 10krpm SAS drive), while your clients might still be reporting 40 MB/s. On newer processors it was effectively idle; on an old Duron 1800, it registered 14% CPU load doing the same thing.

    HW RAID - cons:
    Cost. Yes, the controllers are expensive. But you can also get some older systems/boards with onboard HW RAID (like LSI-based controllers), and they work.

    With a PCIe x8 RAID HBA, even in PCIe 1.0 slots, each lane is 2 Gbps (250 MB/s) in each direction. So an 8-lane PCIe 1.0 card can do 16 Gbps (2 GB/s) per direction, or 32 Gbps (4 GB/s) both ways combined. SATA-3 is only good for 6 Gbps (750 MB/s raw, 600 MB/s after encoding overhead). The highest I'm hitting with my new 27 TB server is just shy of the 800 MB/s mark. Sustained read is 381 MB/s (limited by the SATA-II connector interface). It's the fastest you can get without PCIe SSD cards. (And as far as I know, you CAN'T RAID the PCIe SSD cards. Not yet, anyway.)
  • Brutalizer - Friday, September 9, 2011 - link

    It doesn't sound like I have had the same experience with ZFS as you.

    For instance, your hw-raid Areca card - is it in JBOD mode? You know that hw-raid cards seriously screw with ZFS?

    I have pulled disks and replaced them without problems, yet you claim you had problems? I have never heard of such problems.

    I have also pulled out every disk and inserted them again in other slots, and everything worked fine. No problem. It helps to do a "zpool export" and "import" as well.

    I don't understand all your problems with ZFS. Something is wrong; you should be able to pull out disks and replace them without problems. ZFS is designed for that. I don't understand why you don't succeed.
  • plonk420 - Sunday, September 4, 2011 - link

    A friend has had good luck with a $100ish 8xSATAII PCI-X Supermicro card (no RAID). He uses LVM in Ubuntu Server. I think they have some PCIe cards in the same price range, too.

    I got a cheapish server-grade card WITH RAID (I had to do some heavy research to see if it was compatible with Linux), however it seems there's no SMART monitoring on it (at least in the drive manager GUI; I'm a wuss, obviously).
  • nexox - Wednesday, September 7, 2011 - link

    Well, there are about a million replies here, but I think I've got some information that others have missed:

    1) Motherboard SATA controllers generally suck. They're just no good. I don't know why this site insists on benchmarking SSDs with them. They tend to be slow and handle errors poorly. Yes, I've tested this a fair amount.

    2) Hardware RAID has its positives and negatives, but generally it's not necessary, at least in Linux with mdraid - I can't speak for Windows.

    So what do you do with these facts? You get a quality Host Bus Adapter (HBA). These cards generally provide basic RAID levels (0, 1), but mostly they just give you extra SAS/SATA ports with decent hardware. I personally like the LSI HBAs (since LSI bought most of the other storage controller companies), which come in 3Gbit and 6Gbit SAS/SATA, on PCI-Express x4 and x8, with anywhere from 4 to 16 ports. 8 lanes of PCI-Express 2.0 will support about 4GB/s read, which should be enough. And yes, SAS controllers are compatible with SATA devices.

    Get yourself an LSI card for your storage drives, use onboard SATA for your boot drives (software RAID 1), and run software RAID 5 for storage.
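
    (A minimal sketch of that storage array, with illustrative device names; adjust to your drives, and note the config path varies by distro:)

    # four drives behind the HBA in a software RAID 5
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
    mkfs.ext4 /dev/md0
    # record the array so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf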

    Of course this means you can't use an Atom board, since they generally don't have PCIe, and even the Brazos boards only offer PCIe x4 (even if the slots look like x16).

    For some reason SAS HBAs are some kind of secret, but they're really the way to go for a reliable, cheap(ish) system. I have a $550 (at the time) 8-port hardware RAID card, which is awesome (managed to read from a degraded 8-disk RAID 5, CPU limited at 550MB/s, on relatively old and slow 1TB drives, which isn't going to happen with software RAID), but when I build my next server (or cluster - google Ceph) I will be going with software RAID on a SAS HBA.
  • marcus77 - Saturday, October 6, 2012 - link

    I would recommend euroNAS http://www.euronas.com as the OS, because it provides more flexibility (you can decide which hardware to use and can upgrade it easily).

    RAID controllers don't always make sense - especially when it comes to recovery (multiple drive failures), software RAID is much more powerful than most RAID controllers.

    If you wish to use many drives you will need an additional controller - LSI makes pretty good HBAs; they don't provide RAID functionality but have many ports for the drives. You could use one in combination with software RAID. http://www.lsi.com/products/storagecomponents/Page...

    If you are looking for a real HW RAID controller, I would recommend Adaptec - they have very good Linux support and are widely used with storage servers.
  • EnzoFX - Sunday, September 4, 2011 - link

    What about something for a mostly Mac environment, and the occasional Windows system?
  • DesktopMan - Sunday, September 4, 2011 - link

    Anyone looking for a setup with many HDDs should take a look at port multipliers. You can get external cases for the HDDs, which saves your PC from a lot of heat. Connect using one eSATA cable per 5 drives and you don't need many ports either. (Just make sure your controller supports port multipliers.)
  • mongo lloyd - Sunday, September 4, 2011 - link

    "It is impossible to hear active HDDs inside this case even when you're sitting just a few feet from it (even the notoriously loud VelociRaptors)."

    Maybe if you're half-deaf. And if you put 10 of them in that case, even your 97-year-old grandma would hear them.

    Don't be fooled, a 5-10 disk cabinet will be rather loud and you probably don't want it near your person if you are sensitive to noise. No way around that.
  • Rick83 - Sunday, September 4, 2011 - link

    Just skimming the CPU page, I see a problem: none of these CPUs accelerate encryption in hardware.
    Encrypting your hard disks should be standard procedure, especially on a file server. You never know what someone may use against you, and in the case of a disk failure, you won't have to worry about sending in a disk with readable data on it.
    Without hardware acceleration, though, gigabit ethernet may not end up being saturated, especially on the truly low-end Zacates - and forget about Atom...

    My recommendation is the Sandy Bridge i5 2390T. It should be trivial to cool passively if there are enough fans to keep the chassis below 50°C.
    Alternatively, there's VIA - but those Nanos are somewhat harder to obtain in the retail channel (and even the 2390T is pretty hard to get).
    And finally, something not touched on: first-gen Core i5 CPUs on old socket 1156 boards. As those go EOL, good deals can be had, and there's no clear power-savings advantage in Sandy Bridge.

    RAM-wise, 2GB is a huuuuge amount for a slim OS.
    I'm running 2GB on my machine and never hit swap - ever - and that's even though I am running Gentoo, compiling my own kernels, and running multiple other services besides Samba and NFS.

    Finally, on boards: with a good deal you can get those SATA ports on the board on the cheap. I paid only 150 euro for my P55-UD5 last year, as it was going EOL. Bonus is you generally get a better-featured board as well, so I also have IEEE 1394 and dual LAN.
  • DanNeely - Sunday, September 4, 2011 - link

    Encryption is also one more factor to make things harder when everything goes wrong at once and you're trying to recover data. I'd rather write off the cost of the disks if they fail than impair my catastrophe recovery options.
  • Rick83 - Monday, September 5, 2011 - link

    That's why you have backups.
  • DanNeely - Monday, September 5, 2011 - link

    If your backup is still intact, not everything has gone wrong yet...
  • Rick83 - Monday, September 5, 2011 - link

    That's the point of the backup: so that it's impossible for everything to go wrong.
    If everything goes wrong, there's usually a pretty big design flaw somewhere.
    Other than the performance cost, the impact of encryption is relatively negligible.
  • don_k - Sunday, September 4, 2011 - link

    That is why the article recommends a quality PSU. The enterprise space has 48-disk monstrosities; a 10 or even 20 drive home file server is perfectly possible - just get a good PSU.
  • chbarg - Sunday, September 4, 2011 - link

    Excellent comment.

    Recently I built a W2008 server with an AMD processor without hardware encryption support, and I was surprised by the high CPU utilization from TrueCrypt. In contrast, my laptop has an Intel CPU with hardware encryption support, and TrueCrypt barely loads the CPU.

    I use LUKS encryption in my file server at home for safety.

    Regards,
  • HMTK - Monday, September 5, 2011 - link

    We don't even use encryption for our enterprise customers. There's just no point. How easy do you think it is to recover useful data from a single RAID member? Not talking about a mirror here, but RAID 5/6/10.
  • qoonik - Sunday, September 4, 2011 - link

    Hardware:
    motherboard: Supermicro X7SPA-H-D525
    - Intel® Atom™ D525 (passive cooling)
    - Intel® ICH9R Express Chipset
    - 6x SATA (3.0Gbps)

    ram: 2 x 4GB DDR3 SO-DIMM Kingston
    case: CFI A7879 Mini-ITX NAS/Server Case - 4 Hot Swap Bays
    psu: FSP120-50GNF (FANLESS)
    fan: BeQuiet SilentWings 120 mm PWM
    hd: 1 x WD green 1.5 TB (completing ... )

    OS:
    Amahi Home Server (Fedora) http://www.amahi.org
  • Rick83 - Sunday, September 4, 2011 - link

    Gonna chime in here ;)
    motherboard is a gigabyte P55-UD5
    i5 650
    10xSATA + 2x eSATA + 1xIDE + 2xIDE from PCI card
    2x1 GB some-DDR3
    CM Stacker STC-1 (yes, the original goodness!) with 3 4in3 modules
    psu: seasonic 430W with plenty of power adapters
    fan: stock fans all around (120mm per 4in3, one 120mm exhaust, one 80mm top exhaust, no CPU fan), CPU cooler is a scythe Yasya
    hd: 1x 8GB transcend IDE flash module (SLC), 1x 2.5" transcend IDE ssd (SLC), 1x 40GB seagate IDE, 1x 80 gb WD IDE, 1x 80GB seagate SATA, 3x 400GB Seagate SATA, 5x WD 1TB EARS SATA, 1x Samsung 1TB SATA (12 spinning disks, 2 SSDs)
    an optical drive
    a TV-card (though apparently broken...stream coming out of it is corrupted)
    an IEEE1394 CF Card reader
    Setup is 2x RAID1 (second-level backup and dynamic system files, consisting of the small disks) and 2x RAID5 (main and backup array)
    graphics: nVidia 6200 - looking to replace this with something that idles at lower power - can't stand the card being as hot as it is with no screens attached.

    Now, having lived with that machine for a few years (previously it was running only 7 disks and a single-core Sempron; before that there was a precursor server running on various Celerons and as little as 32MB of RAM), I am currently looking at options to make the disks more accessible. Something this article doesn't touch on is that with many disks come many deaths, which is where a hot-plug cage really comes into its own. So I'm on the lookout for affordable backplanes with 120mm fans which I can replace 1:1 with two of the 4-in-3s (my IDE disks will have less use in a hot swap cage ;) )
  • Death666Angel - Sunday, September 4, 2011 - link

    That's a nice coincidence. I just ordered my file server stuff this week and it got here on Friday. So far I've just put it together; haven't turned it on yet (exam stress).
    I haven't read this article; hope my stuff isn't too useless. ;-)
    Just a fyi, here is the system I have:
    - Sharkoon T9, 9 x 5.25"!!
    - AMD Phenom II X4 840 (fake Phenom btw)
    - Asus M4A88TD-V EVO/USB3, which supports ECC memory
    - 2x4GB Kingston DDR3-ECC 1333MHz RAM
    - 2 x IcyDock 5 in 3 Backplane (MB455SPF-B)
    - Intel Gigabit CT Desktop Adapter for PCI-E x1
    - A300 Cougar PSU (staggered spin-up ftw! I just hope it works out ;-))
    - Highpoint Rocketraid 2680SGL + 2 MiniSAS to 4xSATA cables

    I'll use 8 2TB HDDs for a RAID5 under Ubuntu Linux, at least that's the plan. I'll also get a UPS soon. I have enough space to upgrade to a second RAID adapter (the motherboard has 2 PCI-E slots and I hope the graphics slot will take it), put 15 HDDs in the 9 5.25" trays, and I can cram one additional one in there for sure ;-).
  • Rick83 - Sunday, September 4, 2011 - link

    That's a big RAID-5.
    Those are pretty risky: rebuilds take a long time and depend on all the other disks surviving that long.
    If you really want to go the way of 'one big RAID 5', I'd propose going with 7 disks and keeping one as a hot spare. That way the rebuild starts right as the first disk dies, somewhat reducing the window in which the other disks can deteriorate before a replacement arrives.
    In general I'd stop with level 5 at 6 disks, though. Consider going with two 4-disk level 5s, or a level 6.
    Ideally, of course, you'd have two level 5s where one is a regular backup of the other, but using a different file system.
  • qoonik - Sunday, September 4, 2011 - link

    Also consider software protection like flexraid http://wiki.flexraid.com/.
  • Death666Angel - Monday, September 5, 2011 - link

    Yeah, my ideal solution was to go with RAID6, where 2 disks can fail (and of course a real RAID controller with an XOR unit and 16 SATA ports), but that would have cost about 4 to 6 times the price of the 2680.
    I haven't heard good things about hot spares, so I would rather go with two 4-HDD RAID5s. But I will do some testing before setting it up; I have 4 empty 2TB drives and will play around with pulling a disk out, having the array rebuilt, etc., and then I'll decide which way to go.
    Luckily, the only sensitive, non-easily-recoverable data will be my photos and probably some system images, which will be backed up regularly. The music and videos can easily be ripped from my collection again. It would be time lost, but not unrecoverable :-).
    As for software RAID, I hadn't heard of FlexRAID. I looked into FreeNAS and ZFS and that wasn't up my alley: a very powerful filesystem, but FreeNAS is too limited to just providing a NAS. I would like to have the option of going full server with this too, hosting different things. And the Linux port of ZFS isn't stable as far as I've heard, so that was out of the question.
    With the hardware controller I know that I am not OS-dependent, and everything I've read says that in a case of failure, modern RAID controllers can easily be swapped for a model from the same maker without losing any data (which I heard wasn't the case a few years/decades back). :-)
  • alpha754293 - Tuesday, September 6, 2011 - link

    My current system is 10 * 3TB (30 TB raw, 27 TB RAID5). If I go with anything else, I'd probably have to pile LustreFS or some other kind of distributed FS on top of that, which adds complexity, cost, another system, power, and more complexity in the parity calculations.

    A second server will likely go online and it will run rsync or something akin to that for incremental backups.
  • Lonyo - Sunday, September 4, 2011 - link

    You seem to have missed the perfect board for an Atom based file server.
    http://pden.zotac.com/index.php?page=shop.product_...

    6 SATA ports, x16 and x1 PCIe slots (for potentially 2 RAID cards).
    DTX means it should fit in an mATX case (assuming there is one with enough HDD space), or if you are making something custom the footprint shouldn't be too big.
  • qoonik - Sunday, September 4, 2011 - link

    Also suggest the Supermicro X7SPA-H-D525 (Intel® Atom™ D525 (Pineview-D) dual-core, 1.8GHz (13W) processor, 6 SATA, 2x RJ45 LAN ports, Mini ITX).
  • Emantir - Sunday, September 4, 2011 - link

    I've been using my file server since October '10; it consists of:
    - Lian Li PC-Q08
    - Zotac NM10 DTX WiFi
    - 200GB Maxtor SATA HDD (system)
    - 4x 2TB Western Digital WD20EARS (storage)
    - Asus EN210 Silent
    Sporting Ubuntu 10.10 with software RAID 5 and XBMC for HD playback; works like a charm.
    XBMC remote apps exist for iOS and Android, so I skipped buying an MCE remote.
  • Emantir - Sunday, September 4, 2011 - link

    Uh, forgot the problems:
    - The Asus EN210 blocks one SATA port
    - There are some problems concerning the JMicron SATA multiplier and Linux: one drive gets miserable write speeds, making the whole RAID 5 somewhat slow. More: http://goo.gl/cM0gg
  • Lonyo - Sunday, September 4, 2011 - link

    Does the NM10 support staggered spin-up of hard drives?
  • Emantir - Sunday, September 4, 2011 - link

    AFAIK no. I'm using a 300W PSU anyway, so high initial current isn't a problem.
  • pvdw - Monday, September 5, 2011 - link

    But you should have a good quality PSU to deliver nice clean, reliable current. Loads of PSUs are just rubbish.
  • Lonyo - Sunday, September 4, 2011 - link

    You may have mentioned needing a good power supply, but when you talk about Atom and Zacate boards, low-power solutions, and low-power "green" drives, you don't address the fact that total system power use in typical conditions could be lower than 30W. If you are buying a beefy 500W power supply, you could be wasting a LOT of power due to efficiency issues.

    The 80PLUS rating only tests as low as 20% of full load. 30W on a 500W PSU is below 10% load, so you could be getting 70% efficiency.
    While it's not a major concern, if you are looking to make things low-power to leave them on 24/7, you might want to think about DC power supplies rather than regular desktop power supplies.
    If you are making a 2~4 drive file server based on an Atom system, you could get a 100~120W picoPSU instead of a "real" PSU and get potentially much higher efficiency than with a 300W+ normal PSU.

    Of course, not everyone (especially Americans) cares about efficiency, since for them power is so inexpensive, but for a 24/7 box, why not at least discuss things which might improve power efficiency?
  • jtag - Sunday, September 4, 2011 - link

    I have to say that a file server guide that mentions RAID/NAS really should include a discussion of which drives are suitable for use in a RAID. Not all drives are, not because of reliability concerns, but because not all manufacturers support Error Recovery Control (see http://www.csc.liv.ac.uk/~greg/projects/erc/ for more info) in their consumer-level drives.

    I'd very much appreciate it if AnandTech could run the following command on every drive they test and add it to bench, so we could come up with a list of drives that do support ERC:

    smartctl -l scterc /dev/sdX

    smartctl is available for both Windows and Linux (smartmontools).

    Of course, this may say it is supported, but the real test would be to set timeouts:

    smartctl -l scterc,70,70 /dev/sdX

    And then cause the drive to have a block error and see if access times out, or causes the drive to drop out of the RAID. This would also be a good test of RAID controller cards, though personally I always use software RAID under Linux.
  • jtag - Sunday, September 4, 2011 - link

    And for the record: I run 6 2TB drives in a RAID-6 (2-drive redundancy) with one hot spare under Gentoo Linux software RAID. My drives are 5 Seagate ST32000542AS and one Samsung EcoGreen F4 HD204UI.
  • jwilliams4200 - Sunday, September 4, 2011 - link

    Any tips on how to "cause the drive to have a block error"?
  • Rick83 - Monday, September 5, 2011 - link

    You can use hdparm to mark a block as faulty, IIRC.
  • jtag - Monday, September 5, 2011 - link

    Exactly so: hdparm --make-bad-sector

    This is "Exceptionally dangerous. Do not use this option!!" according to the man page, which goes on to say "This can be useful for testing of device/RAID error recovery mechanisms."
  • pvdw - Sunday, September 4, 2011 - link

    The best case I've found for a 4+ HDD SFF home server is the Lian Li PC-Q08. For me sound and size are most important.

    http://www.silentpcreview.com/Silent_Home_Server_B...

    BTW, Linux has some significant advantages over WHS when used for more than just a file server. But I'll leave those for you to look up.
  • pvdw - Sunday, September 4, 2011 - link

    BTW, I forgot to mention some things about my configuration.

    Linux RAID-1+0 far copies
    Automated local backups using hard-links (see the sketch at the end of this comment)
    Auto-rotation of backups
    Auto-rsync to a single remote backup
    VPN server (not finished setting up yet, as it has become a lower priority)
    Print Server
    Torrent server (Transmission)
    Webserver (web dev environment)
    SSH

    Most of that is more than you'll need, but I'd definitely recommend at least RAID1 and auto-backups for a file server.
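
    (For anyone curious, the hard-link backups above boil down to rsync's --link-dest; a simplified sketch with illustrative paths - real scripts keep more generations:)

    # shift yesterday's snapshot aside
    rm -rf /backup/daily.1 && mv /backup/daily.0 /backup/daily.1
    # unchanged files become hard links into daily.1; changed files are copied fresh
    rsync -a --delete --link-dest=/backup/daily.1 /data/ /backup/daily.0/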
  • bobbozzo - Wednesday, September 7, 2011 - link

    A hard-link 'backup' won't protect you if the file is modified or corrupted.
  • jacob733 - Sunday, September 4, 2011 - link

    I set up a file server system based on H67 some time ago. It turns out there is a bottleneck somewhere, so the SATA throughput is shared between multiple channels. When filling all the channels with magnetic storage, the total throughput is much lower than expected. My old nForce4-based system is actually much faster in this test, but it only supports 4 SATA channels, so I can't use it for this.
    Perhaps Anandtech could add a test that fills all the SATA channels instead of just putting a single SSD on the first channel?

    I would also like to warn about stuffing more than 5-6 disks into the same case. More disks push the combined startup current to the extreme, which will seriously reduce the PSU's lifetime. It also doesn't help to get a larger PSU for this, since these PSUs are segmented: all the extra wattage goes to supporting extra graphics cards and perhaps a large CPU, while the rails used for disks stay fairly constant.
    Whenever I have had a HDD crash in a system with many disks, I have later been able to track it to voltage fluctuations due to a dying PSU.

    /Jacob
  • pvdw - Sunday, September 4, 2011 - link

    Hard drives use the 12V line(s), which are the same ones used by graphics cards. Western Digital Green 2TB drives (WD20EARX) use a peak of 1.75A each, so you won't exceed the 20A+ current supplied by a good PSU.

    The biggest problem is that so many PSUs are rubbish!! Since most customers look at wattage and not the build quality of a PSU, they're conned into buying the wrong one. A good quality 300W PSU would easily run a 6-disk home server. I'd recommend something like the Seasonic S12II or M12II, or the Nexus Value 430.
  • Powerlurker - Monday, September 5, 2011 - link

    The problem is that it's virtually impossible for a consumer to assess the build quality of a PSU before buying it.
  • bigboxes - Monday, September 5, 2011 - link

    Yup. I use a Seasonic S12 430 to power my 9-drive file server. Rock solid, runs 24/7. I got tired of having HDDs die due to a bad PSU, so I invested in a quality one. I also use an APC 1250VA UPS to add to the system's reliability.
  • Iketh - Sunday, September 4, 2011 - link

    you know, reading through these comments, I'm liking my idea more and more...

    use a cheap laptop and daisy chain the hell out of 2.5" hdds... yea
  • Iketh - Sunday, September 4, 2011 - link

    Excuse me, I should clarify: by daisy chain I mean a powered hub.
  • TheeVagabond - Thursday, September 29, 2011 - link

    Seriously, people's responses... like they must be rolling in money. Overkill much? What kind of home data do people have?
  • DanNeely - Sunday, September 4, 2011 - link

    With the cheapest 10GbE NICs on Newegg being almost $500 each, it's still far too expensive for the typical home network. You'd also probably want a significantly more powerful system than what was described in the article. Feeding a 10GbE NIC generally eats an entire core of a Xeon chip.
  • JohanAnandtech - Monday, September 5, 2011 - link

    Our measurements show that a 10GbE NIC needs even more than one core: 14% of twelve 2 GHz Xeon cores, which is about 3.4 GHz of Xeon power.

    http://www.anandtech.com/show/4014/10g-more-than-a...

    Described in desktop terms, that means you need at least a Core i5 2400 system just to power your 10G card. And you probably need more.
  • DanNeely - Monday, September 5, 2011 - link

    Ooops. Guess I skimmed that article too fast; didn't realize it was a hex-core chip.
  • DanNeely - Sunday, September 4, 2011 - link

    10c/kWh is roughly $1 per watt-year, so it'd only take around a year for the lower-powered Intel box to save back its price premium over the AMD equivalent.
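
    (Worked out: 1 W running all year is about 8.76 kWh, so at 10c/kWh that's ~$0.88 per watt-year.)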
  • yottabit42 - Sunday, September 4, 2011 - link

    No mention of OpenFiler as a NAS distribution?

    It's based on a funky, little-known Linux distribution, but I found it much easier to set up and more advanced than FreeNAS. I've been using it for years to host 10 TB of RAID-5 storage and 2 TB of RAID-6 storage, served via FTP, Samba, and rsync. Both arrays are soft-RAID, too. Virtually no problems ever, even with frequent power outages and using XFS as the filesystem (prone to corruption with power outages due to its high degree of caching).
  • HMTK - Monday, September 5, 2011 - link

    Yep. I also like OpenFiler. Easy to set up NFS and iSCSI if that's your thing.
  • whaler_99 - Sunday, September 4, 2011 - link

    I am surprised that, while you mention WHS and FreeNAS and drop names like Drobo, you did not look at unRAID. This is a solution a lot of us are turning to: a basic system license is free, and a 20-drive data system with parity and cache runs you $150 for the license. It can run on pretty much whatever you have lying around. You can start small and grow big. Definitely worth a look.
  • jnmfox - Sunday, September 4, 2011 - link

    +1000

    unRAID is easy to use, has great community support, and, as was mentioned, is free and can be put on leftover hardware. For basic media storage it is one of the best options for a file server. unRAID doesn't have the limitations of RAID: you can use any combination of HDs and add more as needed, and data is protected via the parity drive, so you don't need data duplication, giving you more HD space.
  • 3DoubleD - Monday, September 5, 2011 - link

    + another 1000

    unRAID is awesome. It is a shame that it was left out of this review. It has to be one of the most flexible solutions out there: it is super easy to use, yet offers an unlimited amount of customization. The community support is fantastic as well.

    As far as DIY home storage server software goes, I think it's the best around.
  • Rick83 - Monday, September 5, 2011 - link

    I guess it's not open source, and the free version has little in the way of support, as well as little deployment, compared to the standard RAID implementations.
    I'd stick with something proven and stable, especially if I'm trusting it with storing all my data.
  • 3DoubleD - Tuesday, September 6, 2011 - link

    While it is not 100% open source, I would say the meaningful bulk of it is. unRAID is based on a stripped-down version of Slackware. Moreover, the abundant plugins are developed by the community and completely free. Compared to WHS, there is really no comparison.

    In terms of support for the free version, I received ample support on the Unraid forums while testing out the free version. This included support from both expert users AND Limetech employees. Payment is not a requirement for browsing or posting on the forums.

    Finally, you question the stability of the system without any justification. Do you justify this based on the size of the user base? In that case, the original WHS was REALLY stable... oh wait, what was that about the file corruption problems?

    I've been using Unraid for 2 years and I can attest not only to the stability of the software, but also to the hard work of the Limetech employees and enthusiastic user base. Nothing is released without being heavily tested.

    Still, you should never rely on one system to protect all of your data, regardless of the software or hardware you are using. Unraid protects you against hard drive failure. You still need to make backups of critical data. I wouldn't trust any system with the only copy of my data.
  • arswihart - Sunday, September 4, 2011 - link

    I think a survey of the best pre-configured NAS boxes from the likes of QNAP, Synology, and others, would be a nice follow-up to this great article.
  • Hrel - Sunday, September 4, 2011 - link

    Kind of surprised you didn't mention UPSes or surge protection. Yes, choose a good power supply, but plug that power supply into a UPS, and since UPSes have inadequate surge protection, plug that into a high-quality surge protector. How good do you think it is for your hard drives to randomly shut down every time the power goes out?

    I personally use a small $60 APC UPS plugged into a Belkin surge protector rated at about 4000 Joules. I have my computer and file server set up to power themselves down properly when running on UPS power. That way there are never improper shutdowns.

    Finally, in my experience FreeNAS works fine on 1GB of RAM (I've never tried less). Only reason I could think to have more than that is if you're streaming HD video.
  • JohanAnandtech - Monday, September 5, 2011 - link

    "Only reason I could think to have more than that is if you're streaming HD video. "

    Filesystem caching does wonders for the performance of a fileserver. I would definitely use 4 GB.
  • tiro_uspsss - Monday, September 5, 2011 - link

    Why wasn't ECC RAM mentioned?
    Please don't give the excuse 'it's too expensive' - it isn't - suck it up.
    Given that the file server stays on 24/7(?) & is holding data, ECC is a no-brainer!

    Oh, and for anyone interested: I know a way to get 4 cores, 2GB ECC+REG RAM, and a dual-GbE Intel server mobo for ~USD$60, and it'll take ~100W from the wall socket at load. The only downsides to this rig config: 32-bit only, E-ATX mobo, and only 2 expansion slots (PCIe x8 & PCI-X). PCIe x8 is enough for a RAID card though. The mobo has 6 SATA ports.
  • imaheadcase - Monday, September 5, 2011 - link

    ECC is not needed, is why. Servers don't need error correction since it's just for storage and not actually doing work.
  • HMTK - Monday, September 5, 2011 - link

    For something that runs 24/7 you DO consider ECC. Just in case something happens in memory and gets written to disk in a corrupt state.
  • WillR - Monday, September 5, 2011 - link

    Sorry, but the first half of page 3 about CPUs is poorly written.

    "My preferred Atom home server motherboard/cpu combo is the ASUS AT5NM10T-I, a passively-cooled, Atom D525 "

    That links to a deactivated product. And then it's obviously not preferred when you say:

    "it is difficult to recommend the Atom-based solution given Zacate's substantial performance advantage."

    Even though AnandTech's own Bench charts for the two contradict this claim. http://www.anandtech.com/bench/Product/328?vs=110

    The two are nearly identical performance-wise. I'd personally reiterate the lack of hardware encryption and suggest more power for the user that likes having the future option of utilizing SW RAID with parity. Neither is quite powerful enough to handle that task while trying to saturate a gigabit LAN.

    I personally consider purchasing a copy of WHS to perform the function of a simple file server absurd. For what purpose? To have your Xbox 360 work with it out of the box?

    To add to the absurdity, I'd suggest:
    SUPERMICRO MBD-X9SCL+-F
    6 SATA II ports, dual high quality NICs for some redundancy, and IPMI.

    Intel Xeon E3-1220
    as dual core Xeons are nearly extinct.

    $15-40 worth of Kingston or Crucial ECC DDR3. 1 GB is plenty for a file server, but 4 is so cheap these days, even with ECC, that you might as well buy some OS-level cache.

    IMHO, if you're going to suggest parts for a server, suggest server parts, even if it's for a home user/office. Also, IMHO, for anything less than what's needed by a rig such as this, the user would probably be well served by a NAS unit, or even a router that supports sharing an external hard drive on the LAN and FTP to the world via the router's own firewall and port forwarding. Using the Atom or E-350 is quite possibly overkill for those just needing to share one drive between 2-4 PCs 24/7, yet underpowered when trying to accomplish high-performance data transfers and data security. As a RAID 0/5 backup server they'd be adequate.
  • HMTK - Monday, September 5, 2011 - link

    ++
  • thesandbender - Monday, September 5, 2011 - link

    It basically throws up a list of everything you could buy without providing any good reasons why you should or should not. Can you really have the following in a 'buyer's' guide? "the most important factor in long-term HDD reliability is probably luck."

    Things that would have been nice to see:

    1. A comparison of motherboard performance. Is a dual-core 1.8GHz Atom enough to manage my RAID-5 array, or do I need to pony up for a faster processor?

    2. A real comparison of OS features and performance. This article basically listed every OS and said "there are some good things... there are some bad things". Maybe benchmark each OS's performance as an SMB file share?

    3. If one of your drives reports an error... you should probably replace it. If it reports multiple errors, you should almost certainly replace it as soon as possible. Really? You think?

    Most of the articles on Anandtech are pretty useful but this one smacks of a frantic attempt to finish a paper before class starts.
  • djc208 - Friday, September 9, 2011 - link

    I would agree, especially about the OS section.

    Since a lot of the capabilities and stability of the system will be based on what OS you are running a little more in-depth look at each would have been nice.

    I run and like the original version of WHS. I'm reluctant to move to 2011, since it's lost a lot of its benefits and added too few new ones as far as I've seen. But if you're actually discussing OSes, it's worth listing what they offer out of the box.

    WHS does still offer easy integration and control from remote PCs, it does still handle client PC backups, and, being Windows, it does allow you to do basically anything a Windows PC will do with a little extra work.

    Mine is running SageTV and ComSkip to handle all my DVR/media-serving duties, and it has a few other services installed, like Eye-Fi, so I don't have to fire up a "normal" PC just to copy files to the server; I just walk in the house and turn on the camera.

    But knowing a little more about what is out there for alternatives would be nice in case I decide not to eventually go to WHS 2011.
  • kake - Monday, September 5, 2011 - link

    What about using a rack mount style case? For example, right now we're looking at moving our current tower (with 8 drives in it) to this one (or something like it):

    http://www.newegg.com/Product/Product.aspx?Item=N8...

    A 24 port hot swap 4U case provides plenty of expansion, ease of access to drives, and it doesn't have to be rack mounted as it comes with feet.

    At 400 dollars, I don't know of anything that provides such a bang/buck combination.
  • Rick83 - Monday, September 5, 2011 - link

    Well, you also need the rack, which itself is going to dump another few hundred dollars on you.
    Consider the Lian-Li PC-V343 (http://skinflint.co.uk/301329 - not sure which markets it's available in), which (with 6 hot-swap front ends) also houses 24 hot-swap hard drives (or 30, if you use 5-in-3s) and yet costs the same as the rack case, while being able to mount conventional hardware. In the end it will probably be a lot cheaper than going with a rack mount.

    Of course, if you already have a rack for your switch, router and domain server, then adding 4U's is relatively straightforward. For the home and small office (less than 24 clients), I'm not sure going rack is economical.
  • jrocks84 - Monday, September 5, 2011 - link

    You don't actually need a rack for a single server; you can just put it on the ground. Also, the Norco RPC-4224 has the hard drive racks included; with the Lian Li, you would have to buy 4-in-3 or 5-in-3 racks, nearly doubling the price. You also have to take into account that the total volume of the Lian Li is nearly 2x as much.
  • MrCromulent - Monday, September 5, 2011 - link

    Good for beginners, but to be honest I expected a little more depth from an Anandtech article. How about questions like:

    - Have all motherboard recommendations been positively tested to run under FreeBSD / FreeNAS? In my experience, FreeBSD is much pickier than Linux and Windows when it comes to SATA and network controllers.

    - How much does (absence of) hardware-accelerated encryption impact transfer speeds on every processor mentioned?

    - How important is ECC RAM? That's the reason I chose an Asus AM3 board for my file server. If you bother setting up a nice checksummed ZFS Raid, I would assume you also make sure your RAM has some parity check as well.
  • Rick83 - Monday, September 5, 2011 - link

    Why?
    If you have ECC on the disk, do you still have to worry about RAM?
    After all, data consistency should be given. At worst some cosmic ray will flip a bit of a kernel page and panic/crash, but that's exceedingly rare, and other hardware/software failures are more likely to send you into a crash than that.

    ECC is nice if you actually fill your memory - for example, in numerical simulation for engineering you really don't want a flipped bit to impact the predicted tolerances - but if you already have an integrity check, why worry about RAM (on a fileserver)?

    Also, as you said, ECC is expensive: either you have to go AMD and pay for the extra electricity and not get AES acceleration, or you go Xeon and pay twice as much for mainboard and CPU as you normally would.

    Currently I don't see ECC as an economically viable choice.
  • Death666Angel - Monday, September 5, 2011 - link

    From what I read, if you have RAID and don't have a 500+ bucks RAID controller, your RAM will be used for the parity calculations. I have read of 2 cases where people had a RAID5 and everything went fine until they all of a sudden couldn't read their data, without any prior indication of a problem. Turned out, one of their RAM sticks was faulty. I haven't read anything of the sort happening with ECC RAM (though that's hardly a good sample size, I agree ;-)).

    A lot of things are pretty rare, but that doesn't mean that you shouldn't take action to avoid them, if those actions are not that huge. For me, going the AMD route with ECC didn't cost any more than the Pentium/1155 route described here.
    - Phenom II X4 840 + Asus M4A88TD-V EVO/USB3 + 8GB Kingston DDR3-ECC 1333MHz cost 223€.
    - Pentium G620 + comparable Mainboard (ASUS P8H67) + 8GB DDR3 1333MHz RAM cost 159€

    That's a 64€ difference (my system costs 700€ without the HDDs, so going the Intel route I could have saved ~10%). That wouldn't have bought me a better RAID controller. And again, all the RAID people I've talked to have indicated that doing RAID with parity without ECC RAM is akin to data suicide.
    Performance-wise there won't be much difference between the X4 and the Pentium if I tax them lightly. But with the 2 extra cores I have the option of running more services off my server in the future.
    I'll also undervolt the CPU, so the power consumption will still be higher for the X4, but not by a lot.

    As for AES, I don't need it and you don't get it in any sub 145€ Intel CPU anyway, so that's not any argument if you talk about ECC being too expensive.
  • Rick83 - Monday, September 5, 2011 - link

    The i5 650 is below 135, actually; the cheapest Xeon is 20 euros more expensive.
    The ECC DDR3 is another ~20 euros more expensive (in this case for the cheapest 3x1GB triple-channel kit).
    And finally, you've got to use a 1366 board, where the same 10 SATA ports are 70 euros more expensive.

    (WTF, you can get an ASRock P55 extreme 4 for 110 euros! That's pretty insane.)

    So, if you do encryption, and/or want to go 32nm, then the difference is more than 100 euros.
    On a system that otherwise costs 275 euros for these components.
    25% more for something that pushes the MTBF over 9000!? (sorry, couldn't resist) - It's okay if you go AMD anyway, but for Intel, ECC RAM is prohibitively expensive.
  • Flashfir - Monday, September 5, 2011 - link

    Mine runs at around 26-33C

    Article says best is around 40C?
    Would I be correct in trying to up the hard drive temperatures to that range?
  • chippyminton - Monday, September 5, 2011 - link

    I work with this sort of hardware but have gradually come to the conclusion that this is overly complicated, expensive, and utterly pointless at home. In a way it's a bit old-school in its thinking.

    I now use 2 extremely cheap Western Digital "My Book" Live 3TB drives. These cost only $20 or so more than the bare drive itself, and both are more or less full Linux machines wrapped around your storage; shell access is easy. The 2 drives simply rsync to each other (or to any other PC) for redundancy automatically - in fact they are in different rooms, so they offer better protection than a RAID array in case of theft, fire etc. They give about 35MB/s each on a gigabit network (that's megabytes) and therefore cope easily with anything at home.
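
    The sync itself is trivial to automate. A rough sketch of what mine boils down to (paths and the "backup-drive" hostname are made up; since the drives are full Linux boxes, once SSH keys are exchanged a cron entry is all it takes):

        # /etc/cron.d/mirror on the primary drive: nightly one-way sync to the second box
        30 3 * * * root rsync -a --delete /shares/Public/ backup-drive:/shares/Public/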

    Best of all, they spin down and only draw 2W each (even when operating they top out at 12W). The whole system took maybe an hour and a half to set up and has run flawlessly for 6 months. If one fails I can swap it with the same or A.N. Other Linux machine.

    And what's with all the discussion of RAID performance? What are you guys doing? This is a domestic guide, not a datacenter primer. Just how many full-HD streams do you need? After a decade or so of using them, I really don't recommend RAID solutions for long-term data storage in the home. RAID is about uptime, not data security (OK, and in some cases performance).

    This was brought home to me when I had to recover some data from an NVRAID array. Basically the only way we could do it was to find a secondhand mainboard and build up a whole new PC, which was a major PITA. I'd stick to software RAID within the OS at home if you really must use it, as it's far easier to recover. It's not like the professional environment with a service contract, whereby a man turns up with a NOS RAID card that went out of production 15 years ago and saves the day.

    Repeat: RAID is not backup. RAID at home is more often a weakness not a strength.
  • Death666Angel - Monday, September 5, 2011 - link

    I think this article falls short of the standard Anandtech level of professionalism I've come to expect. Maybe it wasn't meant to be that in depth. But for me, this is nothing more than a preview to a file server guide.

    No mention of ECC, no mention of RAID pros/cons, specific RAID hardware, no mention of UPSs and networking technology, no mention of back planes and subsequently no mention of all-5.25" tray cases, no WOL or 24/7 mentioned.

    For my taste, this is an okay first look for people who have never put together a computer system. But for everyone else, you just stated what they already knew. Kinda disappointed now. :-( But I hope you will follow this up with more in depth reviews. :D
  • Reikon - Monday, September 5, 2011 - link

    I thought so too. The content covered is mostly obvious, and it seems written for, to put it nicely, a less technically adept audience. The writing style also seems like that of the lower-quality sites that fish for hits instead of providing quality insight.

    And it's not just this article. A lot of the newer authors don't seem to have the writing capability or insight of the main writers Anandtech had before. I don't want to name names because few (none?) are as clearly bad as this one, but Anand should pick his writers more carefully. It makes the site's quality look like it's slipping.
  • Malih - Monday, September 5, 2011 - link

    Maybe "Buyer's Guide" would be a better title than "Builder's Guide".
    This guide just tells you what components to buy/use.
  • dealcorn - Monday, September 5, 2011 - link

    The consensus view on the Debian and Ubuntu forums is that Atom is a great home server chip. In the rest of the world, few care because no one wants to learn Linux at home.

    I understand why you dismissed the overpriced office NAS devices, but a heads-up should be given regarding the coming deluge of affordable home NAS devices. A home NAS is an end run around the fact that nobody at home wants to learn Linux. It does everything you want using a browser: nobody has to know it's Linux. From a software perspective, the overpriced but cute $140 SilverStone DC01 is a precursor of that coming deluge. ARM and Intel are about to go to war in the home server market and will do anything to be properly positioned to slit the other's throat in a gentlemanly way. Expect free bundled NAS functionality and a better selection of the ports you want, as that is what happens in competitive markets. If ARM has its act together, my expectation is that comparable functionality in a less attractive case will be available for half the money, in your choice of ARM or Atom platform, within a year. Life is about to get real good in the bottom third of the home server market.
  • Death666Angel - Monday, September 5, 2011 - link

    Like he said, a NAS is a great and easy way to get storage space for your home network. But they don't offer much upgradeability if you need more storage (4-HDD NAS systems are about the largest affordable options; after that, DIY storage becomes cheaper), they often don't offer the best performance (still mostly good enough for HD streaming), and they don't offer anything but storage space. Want to run an email server later? No can do. I have also heard a lot of people say that you shouldn't do RAID with NAS systems.

    Since this is a file _server_ guide, I think he made the right decision not to go in depth on NAS. He did mention them and told the reader to read up on them if they'd never heard of them. Good enough in my book.
  • rowcroft - Monday, September 5, 2011 - link

    Been buying these for a while and they run great. Nice package and surprisingly quiet.

    http://www.newegg.com/Product/Product.aspx?Item=N8...
  • grg3 - Monday, September 5, 2011 - link

    One of the best operating systems for a file server is Linux. One of the best Linux distributions currently available is Ubuntu. And one of the best and easiest-to-configure file server installations is the Turnkey Linux File Server Appliance: http://www.turnkeylinux.org/fileserver.

    Based on a minimal installation of Ubuntu, the Turnkey Linux File Server can be up and going in a matter of minutes. Put it on just about any hardware you like and it will be ready to serve files. I have seen it work on a virtual machine, an old desktop, and a server packed with disk drives. Setting up RAID is a breeze using Webmin's RAID configuration, and because it is Linux software RAID, you are not dependent on a specific controller.

    The files can be accessed via Samba, SSH, Web based file manager, or Webmin. Try it! You have nothing to lose.
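
    If you're curious what Webmin does under the hood, a Samba share is only a few lines in /etc/samba/smb.conf. A minimal sketch (share name, path, and group are examples, not anything Turnkey ships):

        [storage]
            path = /srv/storage
            read only = no
            browseable = yes
            valid users = @users

    Drop that into smb.conf, restart the smbd service, and the share appears on the network.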
  • HMiller - Monday, September 5, 2011 - link

    Just as an example, I picked up a Dell PowerEdge 2900 with dual 4-core CPUs, 16GB ECC RAM, a Perc5i RAID controller, 10 hot-swap drive bays, dual server-grade gigabit NICs, redundant PSUs, and a Dell Remote Access Controller for remote screen control outside the OS. Total price on eBay was $790 with shipping. I even got 10 36GB 15,000rpm SAS drives. 4 of those small drives make an OS volume similar in speed to a low-end SSD, leaving space for adding 6 2TB drives for RAID5 data storage. I get 110MB/sec file copies, and 250MB/sec transfer speeds within the RAID volumes. Gigabit Ethernet is my bottleneck.

    It is loud, so you need a basement or a place away from people, but you get a lot more for your money than with junky low-power consumer parts.

    Windows 2008 R2 is what I am using, but most Linux distros would be fully supported as well. I think this will last longer and perform a lot better for similar or lower cost new hardware.

    Consumer hardware has always seemed to struggle with heavy disk and network load in my experience, regardless of its stated specification. Mainboard disk controllers with 6 or 8 SATA ports mostly behave like junk if you actually populate all their ports.
  • crótach - Monday, September 5, 2011 - link

    I thought it was the most widely used NAS RAID platform for home users?

    Also, the choice of motherboards is quite narrow. What about some Supermicro ITX boards with 6 SATA headers? To me that would be a perfect match for the Fractal Array R2 case with its 6 hard drive slots :)
  • praeses - Monday, September 5, 2011 - link

    I'm doubtful it is the most widely used platform for home users, but it does offer some pretty attractive features, such as spinning down individual drives in the array, mixing and matching sizes, and isolated data loss with multiple drive failures. It seems like a better "Drobo", and it's not necessarily just a NAS, as this article is trying to distinguish from.
  • nevertell - Monday, September 5, 2011 - link

    I wouldn't use any CPU that isn't able to transcode 1080p streams, as that would be the best use of such a box. If I use a PS3 to watch movies, I can set up a media server on the box so every media file is available to every device on my network. While you can get ffmpeg to be used as a transcoder, the formats you can transcode using a GPU are limited.
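
    For reference, a plain CPU-side transcode with ffmpeg looks something like this (filenames and bitrates are examples, and a PS3-friendly result still depends on picking the right container/profile):

        # 1080p source -> H.264, done entirely on the CPU; tune preset/bitrate to taste
        ffmpeg -i input.mkv -c:v libx264 -preset veryfast -b:v 8M -c:a aac -b:a 384k output.mp4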
  • Rookie_MIB - Monday, September 5, 2011 - link

    Running two file servers at home right now - one is a media server:

    Cooler Master full tower case
    Antec 580w power supply.
    Gigabyte 785 board (5 sata, 2 ide ports)
    Phenom II x2 550
    2gb memory
    1 x 20GB WD system drive (IDE - basic boot drive)
    5 x 2TB Western Digital green drives (RAID5 SoftRAID)
    FreeNAS 8.0

    Been running for a few months now and no problems whatsoever. I have the drives running the standard RAID5 and haven't had any dropouts, rebuilds, anything. I had to upgrade it as I was running out of space on the previous setup:

    Antec full tower
    Antec 400w power supply
    Gigabyte P4 board
    Intel P4 @ 2.4ghz
    No-name 4 port PCI raid card (SiS chipset)
    1 x 40GB WD drive (system drive)
    3 x 1TB Western Digital Green drives (RAID5 SoftRAID)
    FreeNAS 7.0

    That system was running for a few years with -no- issues beyond some reboot problems due to compatibility with FireWire which I finally tracked down (it would hang on reboot repeatedly if it ever shut down), but I never ran into problems with stability or corruption of the drives.

    All in all, I tend to find that FreeNAS is a very solid solution if you're looking for a budget build. The only downside is that some of the older hardware is nowhere near as power efficient as some of the newer stuff such as the NAS enclosures running ARM hardware or the newer AMD stuff (I rule out Atom builds as well, they're vastly underpowered) running the E350's.

    If I were to build a 'complete' system from the ground up, I'd really look at a full enclosure (plenty of room for drives) with an AMD ITX board with an E350 chip (5 SATA ports plus an expansion card for 4 more). That would give you 1 boot drive, 8 file drives, and 24TB of space, which would probably draw somewhere in the neighborhood of 30-45 watts for the drives and around 30 watts for the system board/processor.
  • Sapan - Monday, September 5, 2011 - link

    Hi guys, I am new to file servers so this may sound like a dumb question, but I really need to know the answer:

    Currently I have 4 external hard drives (1TB, 1TB, 2TB, 3TB), each about 75% full and growing, and I wish to move all of that data to a home file server so I can access it wherever I am, without having to plug in a hard drive every time. I plan to use 4x3TB drives for the server, but my question is: when I set up a file server using Windows Home Server 2011, will the drives show up as just one big drive (like RAID 0) or as 4 separate drives where I still have to manage HDD capacities? Right now I have a lot of free space on each drive, but it's split up and not as useful as it would be pooled. Also, would it be easy to add another drive to my setup? Would it just join the pool of storage or show up as a separate drive? I know I could use RAID, but again I am a novice and I worry about RAID's reliability and expandability. I hope my question makes sense? Thanks
  • jtag - Monday, September 5, 2011 - link

    If you use RAID0, then any one drive failure will cause you to lose everything. Essentially RAID0 is the opposite of redundancy, each drive becomes a point of failure, decreasing the reliability of your array. Read this wiki article http://en.wikipedia.org/wiki/RAID before you do anything. RAID0 should only ever be used to maximize performance, such as with swap partitions, never use it to store anything even remotely important.

    Some RAID schemes can be very reliable, for example a RAID6 will survive 2 drive failures, and with hot spares will automatically bring back redundancy by re-building onto those spares while you obtain replacements for the failed devices. That said, a RAID6 won't survive a fire, theft or user error, so you still need to make backups of anything important. Also the more drives you add (that increase capacity) the less reliable your RAID becomes, because each drive adds a new point of failure.

    Software RAID, such as is used in Linux, can allow for expandability. I started my current home RAID as a RAID1 (mirror) of two 2TB drives. I added a third drive and, using a handful of commands, grew and reshaped my array into a RAID5. Since then I've added three more drives and a second SATA controller card, and now have a RAID6 with a hot spare - essentially I added three drives to gain one drive of extra capacity.
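
    In case it helps anyone, the RAID1-to-RAID5 step I described boils down to something like this with mdadm (device names are examples; take a backup first regardless):

        # add the new disk, then reshape the 2-disk mirror into a 3-disk RAID5
        mdadm /dev/md0 --add /dev/sdc1
        mdadm --grow /dev/md0 --level=5 --raid-devices=3
        cat /proc/mdstat    # watch the reshape progress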

    I don't know how to do any of this in software under Windows, but I would expect/hope it would be possible. Being a novice means you're going to have to learn a lot before you do anything critical. My advice would be: make backups before you do anything, and run tests on non-critical/spare systems.
  • lamontagne - Wednesday, September 14, 2011 - link

    I was previously planning on buying a Synology DS1511+, but after reading this article, I've been considering going the build-a-WHS2011-file-server route instead, to the tune of almost $300 cheaper.

    I've got 5 1.5TB hard drives and would get an SSD for the operating system.

    I want to run RAID 6 and build a mini-ITX system as described, but I've been trying to figure out if I can have the RAID span separate controllers (i.e. mobo and PCIe controller card). From your comment, it appears that this is possible.

    Before I spend a bunch of cash, can you please confirm that fact for me...
  • billdcat4 - Monday, September 5, 2011 - link

    Did you mean the G620T? It has a 35W TDP and a slim heatsink like the one pictured.
    I have a G620 and it came with the full height heatsink.
  • imaheadcase - Monday, September 5, 2011 - link

    Unless you've got something specific WHS won't do, there's no real reason not to get it. You can get it free if you are in college, and even then it's only $50. A drop in the bucket for a server.

    I've built 10+ of them for people and not a single complaint. Restores just work, use it for media streaming, you can back up to cloud from it if you want, great add-ons, etc.

    The MOST important part of a server imo is the case. They are SO hard to find in the config you want, especially since lots of people put WHS boxes next to their media centers.
  • lamontagne - Wednesday, September 14, 2011 - link

    WHS... free?

    That's great news. Do you know where to look for that?
  • Eiffel - Monday, September 5, 2011 - link

    A great way to create a file server for those of us in the UK is to purchase an HP MicroServer N36L (£130 or less after HP rebate).

    This machine comes with 1GB of ECC DDR3 and an AMD Athlon II Neo N36L dual-core processor, and can take up to 6 SATA-II drives (or more with PCI cards or USB adapters). It doesn't include an operating system, but comes with a starter disk (250GB only).

    Mine is set up with Ubuntu 10.04 LTS, 4 old 500GB WD500AAKS drives in RAID 5, plus a 2TB drive and an 80GB system drive... Performance is excellent, as throughput is very close to Gb Ethernet's specs. There is also a growing base of WHS2011 users (although some more memory, ECC or not, is needed for optimal results).

    For redundancy, the easiest solution is to get a second N36L ;-)
  • HMTK - Monday, September 5, 2011 - link

    I've got one and it happily runs Windows Server 2008 R2, even as a domain controller, with Microsoft Security Essentials for AV. Only 1GB RAM, but that's enough for simple file storage. I still have 2 bays free if I need more than the 2TB mirror I have now. And it has an eSATA port, I think.
  • tmensonides - Monday, September 5, 2011 - link

    I have a ReadyNAS Atom as a file server, but have been toying with building an Atom-based system as 1) a backup for the ReadyNAS and 2) a web server for my wife's business blog (photography)... her blog site maybe gets 50-75 hits max on a good day.

    Could an Atom-based system handle that without super crappy load times?
  • bobbozzo - Wednesday, September 7, 2011 - link

    If you're using WordPress, it can be pretty slow on low-end hardware, but 75 hits is nothing.
  • HMTK - Monday, September 5, 2011 - link

    Who's going to have a 10Gb switch at home? Please get real. Most SMBs don't even have that. Why not go full 8Gb fiber while you're at it :-)
  • alpha754293 - Tuesday, September 6, 2011 - link

    @HMTK

    12x IB EDR FTW!!!!

    (what's up? long time no talk.)
  • futurepastnow - Monday, September 5, 2011 - link

    I'm just going to post the specs of the file server I built a little over a year ago:

    Windows Home Server (original edition)
    Athlon LE-1660 45W processor
    2GB DDR2 RAM
    Gigabyte 740G mATX mobo
    Six WD Greenpower 1TB drives
    Antec Sonata case
    Corsair 400CX PSU

    I originally tried to build a lower power WHS box with an Atom processor, using a PCIe card to add more SATA ports so I could run all six drives. Performance was not satisfactory, drive indexing and balancing took way too long, and (because of the PCIe card) I got no warning that one of the drives was filling up with bad sectors before it died. And due to the slowness of the Atom, not all of my 2+TB of data had yet been duplicated across the six drives.

    I replaced it with a very cheap K8 CPU and the cheapest motherboard that had six SATA ports built-in and was from a brand I trust. The Athlon is much faster, although a lower-clocked dual-core would have been better. There are better options for those building today, anyway. I'm still very happy with the server in its current form.
  • Malih - Monday, September 5, 2011 - link

    What about underclocking+undervolting the hotter-running but cheaper Athlon CPU? That would make it quite power efficient and bring its temperatures closer to the Pentiums'.
    I was hoping for some tweaking tips like this in a Builder's Guide article from Anandtech.
  • GTaudiophile - Monday, September 5, 2011 - link

    In the OS section of the article, he writes:

    "While there is an Ubuntu Server Edition, one of the easiest ways to turn Ubuntu into a home file server is to install and use Samba. (Samba can be used on not only Ubuntu, but also FreeBSD.)"

    I am confused by this a little bit.

    Samba is by no means exclusive to Ubuntu or any other distro. In fact, I use Samba shares through FreeNAS and it works quite well. I just think the article suggests (to the lay person) that Samba is somehow exclusive to Ubuntu, and it is not.

    Secondly, why does the article not touch on NFS at all? From what I understand, NFS is faster and more reliable than Samba.
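
    For comparison, serving a directory over NFS on Linux is about two lines. A minimal sketch (path and subnet are examples):

        # /etc/exports on the server, then run: exportfs -ra
        /srv/storage  192.168.1.0/24(rw,async,no_subtree_check)

        # on a Linux client:
        mount -t nfs server:/srv/storage /mnt/storage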
  • Braumin - Tuesday, September 6, 2011 - link

    For the average person, WHS 2011 is just the easiest way to go. If someone is moving up from a NAS, then they don't just want file storage. WHS offers great and pain free PC backups with image based restores, a great remote access page with full SSL, and it is dead easy to configure and use.

    I think the comments have gotten a bit out of hand with various RAID incarnations that people have. Most people don't need RAID. They need centralized file storage, and then need a backup. WHS does both. It also supports RAID if you really need it.

    I have to say, I would rather see a bit more in depth on this topic. It is important for many people these days since everyone seems to have at least two computers per house, if not more.
  • semo - Tuesday, September 6, 2011 - link

    I think an article of this nature should go into a lot more depth regarding backups (strategies, equipment, and best practices).
  • GTaudiophile - Tuesday, September 6, 2011 - link

    I would be curious to know which type of RAID you all run. Perhaps we could do a poll?

    I assume that the three most popular types of RAID are RAID 0, RAID 5, RAID 1.

    I personally use RAID 1 and wonder why people pooh-pooh it so much. I use it strictly as a backup+file-hosting solution.
  • GTaudiophile - Tuesday, September 6, 2011 - link

    I should add ZFS to the list as well...

    RAID 0
    RAID 5
    RAID 1
    ZFS
  • compudaze - Tuesday, September 6, 2011 - link

    ZFS is a file system like EXT4, UFS or NTFS. RAID-Z [or RAID-Z1] could be considered the ZFS equivalent of RAID-5 while RAID-Z2 would be the ZFS equivalent of RAID-6.
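
    Creating either is a one-liner if you're at the shell. A sketch with FreeBSD-style device names (pool and dataset names are examples):

        # 4-disk RAID-Z1 (single parity) pool named "tank", plus a filesystem on it
        zpool create tank raidz da0 da1 da2 da3
        zfs create tank/media
        # double parity instead: zpool create tank raidz2 da0 da1 da2 da3 da4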
  • jtag - Tuesday, September 6, 2011 - link

    For my (Linux) storage server I have a RAID6 with a hot spare. I started with RAID1 of two 2TB drives on this machine, added a third drive and converted to RAID5 with a reshape command a year later, then added two more drives and converted to RAID6 with another reshape, and finally added a hot spare a few months ago. The machine itself had a CPU upgrade (single core to two core) and a SATA card added when I moved to RAID6.

    There's nothing wrong with RAID1, it depends on your application - it didn't really make sense to continue with RAID1 when I started expanding my storage array, but my /boot partition is a small RAID1 at the start of all 5 active drives; if any drive fails, my machine should still boot. Grub (or I suspect any boot-loader) can't boot from a striped software RAID array. My Windows workstation is configured with a (hardware) RAID1, which paid off pretty quickly as one drive failed within weeks of getting it. My latest build has an SSD boot drive, so no RAID at all there.
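
    For reference, the /boot trick relies on old-style md metadata, which puts the superblock at the end of the partition so the boot-loader sees each member as a plain filesystem. A sketch (partition layout is an example):

        # small RAID1 for /boot across the first partition of all five drives
        mdadm --create /dev/md0 --level=1 --raid-devices=5 --metadata=0.90 /dev/sd[abcde]1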
  • Slaimus - Tuesday, September 6, 2011 - link

    For a server that is expected to have long uptimes, a benefit of running the Athlon II is that it is the only one in the review that supports ECC memory. Intel forces you to buy a Xeon to get ECC support.

    There is a reason all business file servers have ECC memory.
  • Vepsa - Tuesday, September 6, 2011 - link

    I used to use WHS v1, but when DE was removed from WHS 2011 I went looking for an alternative. I settled on Amahi. Runs on top of Fedora 14 (until 16 is final). Great product, fast & does more than just serve files.

    http://amahi.org
  • sligett - Wednesday, September 7, 2011 - link

    A newcomer to the "file server" OS stable is Resara Server. They offer a community version (free) as well as a supported version. See resara.org or resara.com. It's available as a VM or for Ubuntu. Administer it from Windows, Mac, or Linux. I'm using it for my Windows 7 clients.
  • dalmar72 - Wednesday, September 7, 2011 - link

    unRAID is also very simple to set up. It does cost money once you get past 3 drives, but you don't have to deal with hardware RAID, and if you lose more than one drive in an array, you don't lose everything. Also, you can grow the array at any time.
  • somedude1234 - Wednesday, September 7, 2011 - link

    I wanted to build a proper NAS to displace the expanding pile of USB- and eSATA-attached HDDs that was becoming unmanageable. At the same time I needed to build a triple-head workstation. With VMware ESXi I was able to build a single system that does it all.

    Operating System / Storage Platform: VMWare ESXi / NexentaStor Community Edition (VM)
    CPU: Intel Xeon X3440
    Motherboard: SuperMicro X8SIA-F
    Chassis: Antec 900 v2
    Drives: 5x Samsung F4 HD204UI (data), 1x OCZ Vertex2 60GB (ESXi OS drive)
    RAM: 16GB (4x Kingston KVR1066D3Q8R7S/4G)
    Add-in Cards: Promise SATA 300 TX4, AMD Radeon 6850
    Power Supply: Seasonic SS-560KM
    Other Bits: Sound Blaster X-Fi Surround 5.1 Pro (USB Sound Card)
    Usage Profile: Home NAS, streaming media server, video transcoding, primary workstation

    Virtual Machines:
    - NexentaStor Community Edition (VMDirectPath for the on-board SATA controller)
    - Ubuntu 10.10 (32) running PS3 media server
    - Windows 7 Ultimate x64 (VMDirectPath for the AMD Radeon 6850 and one of the on-board USB controllers)

    The 5x Samsung 2TB drives make up a RAIDZ1 in Nexenta which is exported back to ESXi via NFS and to the rest of the network via CIFS. The Antec 900 lets me upgrade to a total of 15x drives over time by using 5-in-3 backplanes. At that point I'll install a SAS controller and pass that through to the storage VM.
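
    Nexenta drives the sharing from its web GUI, but underneath it's roughly the standard Solaris-style ZFS properties (dataset names are examples):

        zfs set sharenfs=on tank/vmstore    # NFS export, e.g. for ESXi
        zfs set sharesmb=on tank/media      # CIFS share for the rest of the network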

    This is well overkill for just a file server, but for my needs it's been perfect. As an added bonus, I can reboot the Windows 7 workstation and/or Ubuntu VM's without affecting network access to the big data shares.
  • masterbm - Thursday, September 8, 2011 - link

    One thing I did was build my file server into a media center PC. Actually, the original version was just a media center PC; the old version still functions as a bedroom media center (1 TV tuner, no HD).
    My build, which has been running for almost 2 years nonstop, is:
    AMD 620
    Gigabyte AM2+ board with AMD 780 chipset
    4GB DDR2-800 (2 x 2GB sticks)
    Zalman butterfly cooler for the CPU
    1 x 250GB IDE 7200rpm boot drive (still using the original format from the old machine)
    1 x IDE DVD drive
    4 x 2TB SATA storage drives
    1 x 750GB drive for music and TV recording data
    2 x ATI 650 TV tuners, both with a digital cable box connected and RF adapters to control the boxes; both HD inputs are connected as well. I thought the RF adapters would be unstable, but after some setup issues they have worked fine for 4 months now.
    USB remote control
    Wireless keyboard and mouse
    Connected to a 5.1 amp via optical
    Connected to a 1080p TV, via USB, using the onboard 4200 graphics (128MB SidePort memory)
    I very much love this setup; the CPU has plenty of horsepower to run Media Center and play HD content effortlessly.
    I also enabled multiple RDP connections so I can admin the box through another terminal, or set it up to encode videos.
  • masterbm - Thursday, September 8, 2011 - link

    If I were to build it today, I would think about using an i3 or low-end i5 Sandy Bridge. The CPU was picked because at the time I thought it was the best bang for the buck for what I needed it for. The machine has not let me down yet. My backup file server (i.e. the older media center box) houses the old drives left over from each time I needed more space. The 750GB has stayed because it has the size needed to handle its responsibility. Each drive holds certain things, and when it gets filled it is upgraded.
  • MartenKL - Monday, September 12, 2011 - link

    I would like my WHS2011 box to do realtime transcoding of 1080p streams with 5.1-channel sound to my Xboxes and PS3. Would the 2500T suffice?
  • noxplague - Sunday, October 2, 2011 - link

    First, I think most of these comments have focused on SMB/midmarket-type concerns. This guide was clearly aimed at the prosumer looking to solve his data proliferation issues. Not everyone needs enterprise-style RAID.

    With the help of this guide I built the following WHS 2011 box:
    Fractal Design Array R2 Black Aluminum Mini-ITX Desktop Computer Case 300W SFX PSU
    Foxconn H67S LGA 1155 Intel H67 HDMI SATA 6Gb/s Mini ITX
    Intel Pentium G620 Sandy Bridge 2.6GHz
    2 X HITACHI Deskstar 7K3000 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" (RAID1)
    Kingston HyperX 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3

    Here are some of the scenarios I have enabled with this build:
    Adobe Lightroom 3 on a Mac and on 2 PCs - this is really handy because I can store all photos on the server and have the library on each computer link to the one version of the truth. The only limitation is I cannot edit photos that are back on the server while on the road.
    iTunes - I have an Apple TV and iPad; by carefully configuring iTunes on the server and on my clients, I am able to keep files on the server and still sync/stream everything to my different devices.
    Zune & WP7 - ditto the above. I still need to solve the album art issues across all of these devices, however.
    Mac - They have no problem accessing the files and RDC works just as well as on Windows.

    Current problem: the only scenario that isn't working well is playing any WMV from the server. The "streaming" is painfully slow. This does not make sense to me, because QuickTime videos on the server play on the Macs without a problem over WiFi. Movie streaming to the Apple TV is seamless as well.

    My server is connected to my Apple Airport extreme (4th generation) via Gigabit ethernet.

    I found the guide helpful in finding good parts, in particular the case. I wish more time had been spent on scenarios and use cases for the server. Also, the HDD section was useless; what would be much more practical would be advice on a decent setup. I decided to use Intel RAID 1 with identical drives, but I have lots of questions about how to best maintain it and the best way to add new drives when needed. I'm trying to figure out an automated way to monitor the SMART status and have my server e-mail me if any drive has an issue (see the sketch below). A brief overview of different backup methodologies would've been useful as well, as would a discussion of UPS systems and how to configure your server to power down in the case of power loss.
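
    The closest thing I've found so far for the SMART monitoring is smartmontools, which also has a Windows build. A sketch of the smartd.conf line I'm testing (address is an example; on Windows the actual mailing needs a helper program):

        # monitor all drives, enable self-tests, and mail on failures or bad sectors
        DEVICESCAN -a -o on -S on -m admin@example.com -M test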

    Thanks for the guide and hope to see more on this topic in the future!
  • pacomcfriendly - Saturday, October 15, 2011 - link

    This has been the best OS experience I've found for my own home file server. It's built on OpenSolaris, super easy to work with, free, and ZFS/zpool is fantastic.
  • wiz329 - Saturday, December 10, 2011 - link

    This is a very beginner question, but what is the best way to access such a file server over the internet?

    I am looking to either use NAS or build such a home file server to store media. Over LAN, it seems pretty straightforward, just connect to your router. How would you go about accessing/streaming over WAN?
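
    From what I've read so far, the usual advice is not to expose SMB directly to the internet but to tunnel in over SSH or a VPN. A sketch of the SSH route from a Linux/Mac client (hostname is an example; it needs a dynamic-DNS name or static IP plus a port forward on the router):

        # mount the server's share locally over SSH
        sshfs user@myhome.example.net:/srv/storage /mnt/homeserver
        # or one-off copies without mounting:
        scp user@myhome.example.net:/srv/storage/movie.mkv .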
  • marcus77 - Saturday, October 6, 2012 - link

    euroNAS would also be worth a look. They offer storage software that is more for business use, but they also offer technical assistance, which is necessary if something goes wrong. They also have some advanced features such as storage clustering. http://www.euronas.com
  • Amar7 - Monday, December 3, 2012 - link

    Mr. Throckmorton,

    Great File Server Build Guide... Any update to hardware suggestions? Some of the parts are no longer available. Would love some ideas on a budget home file server.

    Much thanks.
  • buxe2quec - Friday, December 7, 2012 - link

    I would also put NexentaStor on the list of possible operating systems; it's a very polished OpenSolaris/Illumos-based NAS distribution, free for personal use.
  • war59312 - Monday, December 10, 2012 - link

    Would love a 2012 update for this guide!
  • StoatWarbler - Tuesday, October 22, 2013 - link

    I'd love to see a 2013 update, especially for tower cases which can take a bunch of hot-swap drive racks. (My current ZFS array has 20 2TB drives + 2 SSDs + OS drives. Yes, I'm barking mad and I enjoy it.)

    Why hotswap? Because opening a case to retrieve a failed drive is troublesome (case has to have access cleared to it, risk of pulling wrong drive on a running system, etc.)

    On the subject of controllers: a good HBA (and SAS expander) is far better than using hardware RAID controllers. Modern PCs have more than enough horsepower to push checksum calculations, and it means that drives are portable between controllers.

    I know about the cube cases out there such as Lian Li's D8000 - but afaik this doesn't allow for hotswap drives.
  • imaheadcase - Thursday, August 21, 2014 - link

    I know this is an old, old post, but can Anand do an updated version of this?