RAID 0 vs SSD?

Discussion in 'Desktop Computers' started by finsfree, Dec 11, 2019.

  1. finsfree

    finsfree Member

    Messages:
    282
     I'm looking to build a media server (lots of IOPS). This will be run on desktop-grade hardware, not server-grade.

     I could either buy a 4TB SSD for around $500.00 or get 4x 1TB HDDs and put them in RAID 0 for around $325.00 (including the RAID controller). According to some websites, RAID 0 is right up there in performance with the SSD. What would be the downside of the RAID 0 if all the data was backed up? Big deal if a $40 drive fails; that's better than losing a $500.00 SSD.

    Has anyone out there done this? How was the performance on the RAID 0 with 4 drives?

    Thanks,
     
  2. voyagerfan99

    voyagerfan99 Master of Turning Things Off and Back On Again Staff Member

    Messages:
    23,017
    The time it takes you to restore the backup.

     For a media server you don't really need screaming-fast speed. RAID 5 would be a better option, so that if a disk fails you can replace it and the array will rebuild.
     
    Cromewell, beers and Darren like this.
  3. Darren

    Darren Moderator Staff Member

    Messages:
    12,159
     I'd do RAID 5. You won't get the raw speed of an SSD, but for a media server that's practically irrelevant; mechanical drives are plenty fast for that. I can stream 4K from my NAS with 4x 2TB drives in it, and I have 5+ TB of space in RAID 5 with one-disk redundancy.

     Redundancy is going to be more of a concern long term. If you do RAID, though, drive upgrades might get a little difficult.
     
  4. beers

    beers Moderator Staff Member

    Messages:
    8,456
     I've rolled a RAID setup on a home media/xbmc/plex server for a while. As others have said, you don't need a lot of speed; video streaming is not IOPS-sensitive.

     Some larger drives would be a better play. Amazon has 4TB WD Reds (with 15% cash back if you have an Amazon card) at $90/ea, for example. Throw three of those in RAID 5; you can always add another drive later to expand, and you retain your data in the event of a single drive failure.
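
     If you end up on Linux software RAID for it, growing the array later is only a couple of commands. A rough sketch, assuming /dev/md0 is the array, /dev/sde is the new disk (placeholder names), and ext4 on top:

     mdadm --add /dev/md0 /dev/sde              # add the new disk to the array
     mdadm --grow /dev/md0 --raid-devices=4     # reshape the RAID 5 onto four disks
     cat /proc/mdstat                           # watch the reshape progress
     resize2fs /dev/md0                         # then grow the filesystem into the new space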

     Are you looking to roll with the motherboard's RAID controller, or are you doing something like ZFS or Linux software md RAID?

     The main benefit of an SSD would be higher IOPS and lower latency, which you won't reach with the RAID 0 setup. The sequential transfer rates may be similar, but that doesn't tell the whole performance story across all workloads.
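
     If you want to see where the gap actually shows up, a quick fio comparison of random 4K reads vs. large sequential reads makes it obvious (assuming fio is installed; the test file path and size here are just placeholders):

     # random 4K reads: where an SSD pulls far ahead of striped HDDs
     fio --name=randread --filename=/mnt/test/fio.dat --size=4G --direct=1 \
         --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based
     # large sequential reads: where a 4-drive RAID 0 can look competitive
     fio --name=seqread --filename=/mnt/test/fio.dat --size=4G --direct=1 \
         --ioengine=libaio --rw=read --bs=1M --iodepth=8 --runtime=30 --time_based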
     
  5. Cromewell

    Cromewell Administrator Staff Member

    Messages:
    15,415
     This. I have 5400 rpm drives in my NAS w/ RAID 5, and it's plenty fast to saturate my network link.
     
  6. finsfree

    finsfree Member

    Messages:
    282
     I was actually going to buy a RAID controller similar to this: click here. I don't want to use software RAID (fake RAID).

     This will be a Plex server running Ubuntu. I have some cheap drives lying around and just thought buying a RAID card instead of one large SSD would be the best way to go money-wise.

     Any thoughts on a good RAID card? I'm open to anything. Remember, this is just a home lab for fun; I don't need a $600.00 RAID card, for crying out loud.

    Thanks,
     
  7. Cromewell

    Cromewell Administrator Staff Member

    Messages:
    15,415
     Do you need to offload RAID to the card for performance reasons? If not, md is pretty solid, especially for a home lab.
     
    beers likes this.
  8. beers

    beers Moderator Staff Member

    Messages:
    8,456
     To be honest I'd just roll mdraid. It's more flexible, and you can swap the drives to any SATA controller and keep your data.
     I wouldn't call it fake; the CPU is just used for the parity calculations instead, and you still get the same parity-based striping across the drives. Performance on modern PCs is high. The card you listed is pretty old as well.
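
     Setting it up is only a couple of commands, too. A minimal sketch for a three-disk RAID 5, assuming the disks show up as /dev/sdb through /dev/sdd (swap in your own device names):

     mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
     cat /proc/mdstat            # shows the initial sync/build progress
     mdadm --detail /dev/md0     # array state, member disks, etc.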

     There are some benefits to a dedicated card, like a battery-backed persistent cache and automatic rebuilds when you hot-swap in a replacement drive without touching anything else, but you'd realistically want to pair that with things like swappable drive trays to get the full benefit. Also, having ECC RAM will do more to prevent data corruption than the card itself.
     
  9. finsfree

    finsfree Member

    Messages:
    282
     I found this online (click here) that explains how to use mdraid. My question is: do I need to add the new RAID array to the fstab file so it will still be mounted after a reboot?
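
     For context, once the array exists my plan is roughly this (the mount point is just my own choice); it's the surviving-a-reboot part I'm not sure about:

     mkfs.ext4 /dev/md0          # put a filesystem on the new array
     mkdir -p /mnt/raid0
     mount /dev/md0 /mnt/raid0   # works now, but is fstab enough to bring it back after a reboot?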
     
  10. finsfree

    finsfree Member

    Messages:
    282
     I took your advice and used mdraid (mdadm commands). Everything went just fine, but the computer won't boot after a restart. I know you have to edit the /etc/fstab file to auto-mount the drives. What is the right way to edit that file?

     I tried editing that file two different ways, but it never rebooted successfully. I had to restore from backup to get the OS back each time.

    I tried both of these but with no luck:

    /dev/md0 /mnt/raid0 ext4 defaults 1 2

    and

    /dev/md0 /mnt/raid0 ext4 defaults 1 1

     This is the web site I followed: click here.

     The web site says to do it like this, but that has never worked for me in the past, so I never tried it. The last two numbers are the issue, I believe.

    /dev/md0 /mnt/raid0 ext4 defaults 0 0
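
     The other thing I'm going to try is saving the array definition so it gets assembled at boot, and adding nofail so a missing mount can't hang the boot. From my reading of the mdadm and fstab man pages (not from that web site), roughly:

     mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array so it assembles at boot
     update-initramfs -u                              # Ubuntu: rebuild the initramfs with that config
     blkid /dev/md0                                   # get the filesystem UUID to use in fstab

     and then in /etc/fstab:

     UUID=<uuid from blkid> /mnt/raid0 ext4 defaults,nofail 0 2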
     
