SAU Community


So my HP ProLiant MicroServer rocked up today.

Immediately turfed the onboard micro SAS and plugged in my HP P400 controller out of an HP SB40c Storage Blade for hardware RAID 5 sexy times.

Tomorrow I've gotta go to MSY and pick up 4x 2TB WD 5400RPM drives and 8GB RAM to finish my setup.

Haven't decided on the OS to run though. Still thinking ESXi with one Ubuntu Server VM, although OpenIndiana and FreeNAS both seem nice too.

Do the WD greens still constantly drop out of RAID?

oh and 5400RPM drives are for the weak.

Edited by DivHunter

Do the WD greens still constantly drop out of RAID?

Not sure; I did read about people complaining about them dropping out of RAID, but they were usually running some ghetto soft-RAID, not a hardware controller. As with most manufacturers, WD doesn't guarantee the safety of data on their consumer HDDs in anything other than RAID 0 or 1.

oh and 5400RPM drives are for the weak.

It'd be lucky to even break a sweat with a 4-HDD striped RAID. No need for 7200RPM when there's that much buffer and cache available. This is a budget setup after all; all up it's costing about $600 for a bulletproof setup. I'm not buying into the "green" marketing BS. It's just a better price and lower power consumption because it's lower RPM. I only have a peak of 200W at my disposal, 50W of which is used by the board.

Not sure; I did read about people complaining about them dropping out of RAID, but they were usually running some ghetto soft-RAID, not a hardware controller. As with most manufacturers, WD doesn't guarantee the safety of data on their consumer HDDs in anything other than RAID 0 or 1.

Pretty sure I have seen them being dropped from hardware RAID solutions too: Dell PERC, HighPoint, Adaptec, etc.

It's a TLER issue: the green drives do not recover from errors fast enough and are dropped from the array, unless you can configure the controller timeout to something like 30 seconds. The newer drives should be able to have TLER enabled with a tool from WD.
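For what it's worth, on drives that support SCT Error Recovery Control, the recovery timeout can be queried and set with smartctl rather than a vendor tool. A sketch, assuming the drive is `/dev/sda` and actually accepts the command (many Green drives reject it):

```shell
# Query the current ERC timers (reported in units of 100 ms)
smartctl -l scterc /dev/sda

# Set read/write recovery timeouts to 7.0 seconds,
# the usual RAID-friendly value (70 x 100 ms)
smartctl -l scterc,70,70 /dev/sda
```

Note that on most drives this setting doesn't survive a power cycle, so it would need re-applying from a boot script.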

Have read the same thing divveh, though most of my workmates are running RAID 5 in their home servers with the Intel chipset soft-RAID and 1, 2 and 3TB Caviar Greens just fine. Just gotta wait for some 2950s to reach EOL so we can scavenge the PCIe PERC cards out of them :D

Personally, I would avoid spindle drives made by anyone other than WD (ESPECIALLY Hitachi LOL), but that's just me.

Run ESXi on yo stuff, it's what all the cool kids are using for teh VM's! (if you are planning on running Media Centre etc from one you may find it fairly fail though!).

Ended up buying 4x 2TB Seagate ST2000DL003s. Got them hooked up to an HP P400 in RAID 5... not the most secure RAID ever, but meh. Got the 512MB cache version with battery backup, so I can change RAID level and array membership on the fly.

Got ESXi running off an 8GB USB stick inside the server, with an Ubuntu VM doing the fileserving and torrenting. I'm also trying out Solaris and FreeNAS in VMs too, but so far Ubuntu's probably the one that's both easy to use and feature-rich. Solaris is feature-rich, but it's a PITA and regresses me to uni days. FreeNAS is great for fileserving, but that's about it. Really wanna try ZFS though, but running ZFS on top of a hardware RAID defeats the point.

Also ordered an HP NC360T dual gigabit PCIe Ethernet card so that I can dedicate the onboard port to interwebs, and the two others to serving/streaming data over LAN if I ever need to.

Oh FFS! I just realised the drives I got don't support TLER. FUCK.

Guess I'll just have to run the SMART util on a cron to stop them spinning down. Gonna be fun booting it up though.
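A minimal sketch of that cron approach: polling the SMART attributes every few minutes keeps the drives busy enough not to idle into standby. The device names and the five-minute interval here are assumptions, not from the post:

```shell
# Hypothetical crontab entry (crontab -e as root).
# /dev/sd[a-d] is assumed to match the four array members.
*/5 * * * * for d in /dev/sd[a-d]; do /usr/sbin/smartctl -A "$d" >/dev/null 2>&1; done
```

If waking sleeping drives is ever a concern, `smartctl -n standby` skips drives that are already spun down instead of waking them.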

Alternative is to use the onboard SATA controller with the drives in a ZFS config, and use the SAS card to drive a JBOD setup later down the track.
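A sketch of that ZFS-on-onboard-SATA idea, assuming an OS with native ZFS, drives at `/dev/sda` through `/dev/sdd`, and a pool name of `tank` (all assumptions; Solaris-style systems would use `c0t0d0`-style names instead):

```shell
# Create a single-parity raidz pool across the four onboard-SATA drives.
# raidz tolerates one drive failure, roughly like the RAID 5 it replaces,
# but ZFS handles error recovery itself, so TLER stops mattering.
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Check pool health
zpool status tank
```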

edit: actually it seems like most of the complaints about the drives dropping out of RAID happen under heavy load or during startup. Both of those events trigger a high power draw; the drive specs list average power draw as 5.8W, but when I hooked up my multimeter, under heavy load (random data written across all platters) and during bootup it was drawing around 22W. 4 x 22W = 88W power draw, which most of their NAS boxes would struggle to supply. The MicroServer should be OK since it has a 200W supply. Ah well, guess I'll find out shortly.
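A back-of-the-envelope check of the power-budget reasoning above, using only the figures quoted in the post (the 22W is a multimeter reading, not spec-sheet data):

```python
# Power-budget check for the MicroServer, using the figures quoted above
PSU_WATTS = 200        # MicroServer PSU rating
BOARD_WATTS = 50       # board draw quoted in the post
PEAK_DRIVE_WATTS = 22  # multimeter reading at spin-up / heavy load
DRIVES = 4

peak_drives = DRIVES * PEAK_DRIVE_WATTS          # 88 W worst case
headroom = PSU_WATTS - BOARD_WATTS - peak_drives  # 62 W to spare
print(f"drives peak at {peak_drives} W, leaving {headroom} W headroom")
```

So even with all four drives spinning up at once there's comfortable margin, which is consistent with the "should be OK" call above.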

Awwww yeah. Went back to MSY and swapped the 4x ST2000DL003s for Hitachi 5K3000s, which are fully supported by the RAID card. Fuck yeah, Leo strut.

It took a bit of convincing (including one of the guys at MSY asking me why I didn't just test the RAID 1 array using one HDD :blink:) but got there in the end.
