Help Needed with server

Shlouski

VIP Member
My Server:

NZXT Source 210
GA-M68M-S2P
AMD Phenom X2 550
Kingston 4GB DDR2 800
Corsair CX600
2x SiI3114 PCI RAID cards
8x Seagate 2TB HDDs
Seagate 160GB HDD

My server exists only to share, consolidate and protect my data, which is why it does not need to be powerful; it can saturate my gigabit network when reading files off the server.
My 160GB HDD is running Windows 7 Ultimate, and I have 4 mirrored arrays of 2x 2TB HDDs each, giving me a total of 8TB of storage; both HDDs in each array are identical models. When it comes to permissions I simply add users to the server and change which HDDs or files they are able to read and/or write.
The problem I'm having at the moment is that my RAID arrays keep failing due to inconsistencies, and when I compare what is on the HDDs there are differences.
Is this due to the cheap RAID cards or the HDDs?
The cards still work like normal SATA ports when there is no array, and all the HDDs appear in Windows, so should I try using software RAID in Windows instead of hardware RAID?
Would there be any benefit to using server-grade hardware and a board that supports ECC memory?
 
Why are you running 4 arrays of RAID1? Why not run RAID5 and get 16TB of storage with redundancy?

If you're just looking for network storage, look into a NAS device instead of a full blown server.
 
Is it a specific HDD that drops out of the array, or random?

PCI is probably a poor choice since the bus is half duplex with maximum bandwidth of 133 MB/sec.

4x mirrored arrays is probably also a bad choice. You could get higher performance with RAID 10 at the same storage loss, or use RAID 5 or 6, whose parity tolerates one or two drive faults.
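To put the PCI numbers in perspective, here's a rough back-of-the-envelope sketch (assuming both SiI3114 cards share the one 133 MB/s conventional PCI bus, which is how those slots typically work):

[code]
# Rough illustration: one shared 32-bit/33 MHz PCI bus feeding 8 drives.
PCI_BUS_MBPS = 133   # theoretical shared maximum; real throughput is lower
DRIVES = 8           # assumption: both sil3114 cards sit on the same bus

per_drive = PCI_BUS_MBPS / DRIVES
print(f"~{per_drive:.0f} MB/s per drive if all 8 stream at once")
# ~17 MB/s per drive -- far below what a 2TB disk can sustain,
# so parallel reads and rebuilds will bottleneck on the bus itself.
[/code]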
 
Why are you running 4 arrays of RAID1? Why not run RAID5 and get 16TB of storage with redundancy?

If you're just looking for network storage, look into a NAS device instead of a full blown server.

So if I had 16TB completely full and a 2TB HDD in the array died I wouldn't lose anything?
I looked at NAS boxes, but they are damn expensive for the gigabit ones and I would need enough of them for 8 HDDs; by the time I've paid for all that I could have just bought more storage.
 
So if I had 16TB completely full and a 2TB HDD in the array died I wouldn't lose anything?
You'd have 14TB of storage instead. If one drive died, you wouldn't lose any data; you just replace the failed disk and the data is rebuilt from the parity on the other drives.
 
You'd have 14TB of storage instead. If one drive died, you wouldn't lose any data; you just replace the failed disk and the data is rebuilt from the parity on the other drives.
Then I really don't understand these other RAID configurations: how do they replace the data on a lost disk when they are completely full of other data themselves?
Surely to rebuild a disk you need to know what data was on it, so where are the other drives getting that data from if they contain different data?
 
Then I really don't understand these other RAID configurations: how do they replace the data on a lost disk when they are completely full of other data themselves?
Surely to rebuild a disk you need to know what data was on it, so where are the other drives getting that data from if they contain different data?
If you have 8x 2TB drives, normally that would be 16TB of storage; with RAID 5, however, you'd have 14TB available, so you'd never be able to fill it with 16TB of data in the first place. If one 2TB drive fails, your 14TB of data remains intact; you just can't lose more than one drive at the same time.
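To make the capacity arithmetic concrete, here's a quick illustrative snippet for 8x 2TB drives under the layouts mentioned in this thread:

[code]
# Usable capacity of 8x 2TB drives under different RAID layouts.
n, size_tb = 8, 2

print("RAID 1 (4 mirrored pairs):", (n // 2) * size_tb, "TB")  # current setup
print("RAID 10 (striped mirrors):", (n // 2) * size_tb, "TB")
print("RAID 5 (single parity):  ", (n - 1) * size_tb, "TB")    # 14 TB
print("RAID 6 (double parity):  ", (n - 2) * size_tb, "TB")    # 12 TB
[/code]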
 
Then I really don't understand these other RAID configurations: how do they replace the data on a lost disk when they are completely full of other data themselves?
Surely to rebuild a disk you need to know what data was on it, so where are the other drives getting that data from if they contain different data?

Each chunk written to disk is distributed across the drives. There's an extra drive's worth of data (or two drives' worth, as in RAID 6 for two-drive redundancy) consisting of parity bits. It's distributed in such a way that, with the remaining data on the other drives in the array, you can calculate what data the missing drive held and therefore rebuild any failed member. Since all of the data is distributed across the entire array, it doesn't matter which specific drive is impacted.

See more here:
https://en.wikipedia.org/wiki/Standard_RAID_levels#Parity_computation
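If it helps to see the parity idea concretely, here's a minimal XOR sketch (simplified to one byte per "drive" with a single parity block; real RAID 5 works on whole stripes and rotates the parity across disks):

[code]
from functools import reduce

# Three data "drives" (one byte each) plus a parity byte (XOR of all data).
data = [0b10110100, 0b01101001, 0b11000011]
parity = reduce(lambda a, b: a ^ b, data)    # parity = d0 ^ d1 ^ d2

# Simulate losing drive 1, then rebuild it from the survivors plus parity.
lost = data[1]
rebuilt = parity ^ data[0] ^ data[2]
assert rebuilt == lost                       # XOR recovers the missing byte
print(f"rebuilt {rebuilt:08b} matches lost {lost:08b}")
[/code]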
 
If you have 8x 2TB drives, normally that would be 16TB of storage; with RAID 5, however, you'd have 14TB available, so you'd never be able to fill it with 16TB of data in the first place. If one 2TB drive fails, your 14TB of data remains intact; you just can't lose more than one drive at the same time.
This. There's a parity chunk for each stripe of data, stored across the drives. If a drive fails, the parity chunks can rebuild all the data. The diagram may help.
[Diagram: RAID 5 layout showing data blocks with parity distributed across four disks]
 
If you have 8x 2TB drives, normally that would be 16TB of storage; with RAID 5, however, you'd have 14TB available, so you'd never be able to fill it with 16TB of data in the first place. If one 2TB drive fails, your 14TB of data remains intact; you just can't lose more than one drive at the same time.
Thanks.
I was just reading about RAID parity bits:

Data | Parity bit
00   | 0
01   | 1
10   | 1
11   | 0

Basically a 0 means that the two data bits were the same and a 1 means that they were opposite, so if one column of data is lost you can work out what it was. How are the parity bits going to be organized in my array, and why is a spare disk needed?


This. There's a parity chunk for each stripe of data, stored across the drives. If a drive fails, the parity chunks can rebuild all the data. The diagram may help.
[Diagram: RAID 5 layout showing data blocks with parity distributed across four disks]
Ah, now I understand: you lose one disk's worth of data to the parity bits.
 
Exactamundo. Or two disks' worth for RAID 6, as it is double parity.

Is it possible for all the parity bits to be on one disk?
If the parity disk fails then the parity can be worked out again from the original data and if another disk fails you still have all the parity bits to work out what the data was?
 
Is it possible for all the parity bits to be on one disk?
If the parity disk fails then the parity can be worked out again from the original data and if another disk fails you still have all the parity bits to work out what the data was?
That'd be RAID4, but performance is bottlenecked by writing all of the parity data to a single disk.

Check out the wiki, it will probably answer most of your questions.
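To picture the difference in parity placement, here's a toy layout printer (purely illustrative; real implementations differ in rotation details):

[code]
# Toy stripe maps for 4 disks: 'D' = data block, 'P' = parity block.
DISKS, STRIPES = 4, 4

print("RAID 4 -- dedicated parity disk:")
for s in range(STRIPES):
    layout = ["D"] * (DISKS - 1) + ["P"]      # parity always on the last disk
    print(f"  stripe {s}:", " ".join(layout))

print("RAID 5 -- parity rotated across disks:")
for s in range(STRIPES):
    layout = ["D"] * DISKS
    layout[(DISKS - 1 - s) % DISKS] = "P"     # left-symmetric-style rotation
    print(f"  stripe {s}:", " ".join(layout))
[/code]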
 
That'd be RAID4, but performance is bottlenecked by writing all of the parity data to a single disk.

Check out the wiki, it will probably answer most of your questions.
Thanks for the help.
From what I read, RAID 5 can be a little more risky: if you lose two disks, have a problem with your controller, or run into some problem during the rebuild process, you're pretty much done for?
Do you think it's worth it for me to move to ECC memory with a supporting board?

I will look into RAID 10, but my network saturates a little before 100 MB/s, which my server can already deliver. Also, mirroring allows me to get at my data straight away, take off what I need and then rebuild the array.
 
Eh, depends. In your case, I'd definitely want to get off of the PCI bus first and foremost. Are you specifically married to using Windows? Linux has a very flexible software RAID (or FreeBSD for RAIDZ, which is more advanced) that has no problem spanning any controller that has a SATA interface.

Do you have a UPS? How much data do you usually move a day? ECC itself is probably moot unless you have super high load 24/7. I've used non-ECC in my server for the past few years across a couple different arrays/platforms and haven't witnessed any noticeable corruption.

With as large of an array as yours and the time to rebuild a failed drive, it may be worth considering RAID6 for dual drive redundancy.

RAID 5 can be a little more risky: if you lose two disks, have a problem with your controller, or run into some problem during the rebuild process, you're pretty much done for?
Indeed.
 
Eh, depends. In your case, I'd definitely want to get off of the PCI bus first and foremost. Are you specifically married to using Windows? Linux has a very flexible software RAID (or FreeBSD for RAIDZ, which is more advanced) that has no problem spanning any controller that has a SATA interface.

Do you have a UPS? How much data do you usually move a day? ECC itself is probably moot unless you have super high load 24/7. I've used non-ECC in my server for the past few years across a couple different arrays/platforms and haven't witnessed any noticeable corruption.

With as large of an array as yours and the time to rebuild a failed drive, it may be worth considering RAID6 for dual drive redundancy.


Indeed.

Yes I have a UPS, I thought of that one :D.
If I move off the PCI cards then I guess I will need a mobo with a lot more SATA ports, because right now I need 9 and that's not including any future drives.
I prefer Windows simply because I know how to make it work well for me, so no learning needed. I have no experience with any operating system other than Ubuntu, which I used for a short time; I remember having to install every little thing manually and having to remember the commands and type them in perfectly. I was just too lazy for all that.
I move bits now and then, but I usually move a lot of data in one go after accumulating it over a long period.

BTW thanks for the help Beers :)
 
Whichever RAID I decide to go with, considering the problems I'm having with my arrays at the moment, should I go with software or hardware RAID?
Are there any non-RAID PCIe cards with SATA ports that would take advantage of the extra bus speed with software RAID, or should I buy a decent PCIe RAID card and go hardware RAID?
How many HDDs could a good PCIe RAID card support?
 
Whichever RAID I decide to go with, considering the problems I'm having with my arrays at the moment, should I go with software or hardware RAID?
Are there any non-RAID PCIe cards with SATA ports that would take advantage of the extra bus speed with software RAID, or should I buy a decent PCIe RAID card and go hardware RAID?
How many HDDs could a good PCIe RAID card support?
Hardware RAID definitely, as software RAID can easily be corrupted. Any traditional SATA controller can be operated independently or in a RAID.
 