One for IT guys
I was going to ask this in one of the IT forums I am a member of, but they have an issue where I cannot log in at the moment, so here it goes:
I currently have a Dell PowerEdge 2950 server with 2x quad-core Xeon processors running 3x 2TB in RAID 5 + 1x 2TB hot spare, but this is overkill for my needs and I am downsizing to an HP MicroServer Gen8 to reduce power costs. Currently it costs me £250-300 in electricity per year and I am expecting £30-50 per year on the HP MicroServer.

While I do like RAID 5, unfortunately the HP MicroServer does not support it, so my choices are 0, 1, 1+0 or 0+1. I know that if I were using 4 disks it would be best to use 1+0 instead of 0+1 for reliability, but I am using 1 bay for the OS drive so only have 3 available. My idea is to use 2 of the old 2TB drives in RAID 0 and sell the other 2, buy a 4TB drive and do RAID 1 between the single 4TB drive and the 2x 2TB RAID 0 array. Important data is backed up on another external drive anyway, and very important data is backed up in several places including the cloud. Does it sound like a sensible idea or not?

PS: if anybody would be interested in the Dell PowerEdge 2950 with 2x E5450 quad-core Xeon processors, 16GB of RAM and a PERC 6i RAID card, let me know, as it will be available after I complete the migration. |
Might depend on what software you're running on the HP MicroServer. Mine's running 4x 3TB WD Reds plus a small HDD with the OS (OpenMediaVault) and is running RAID 5 (about 8TB total)
If you get a cheap 5.25" to 3.5" drive caddy you can use the optical slot in the top for the OS drive |
I've just set up a MicroServer Gen8 at home; from what I can work out from my smart meter it's consuming about 85 watts running 4 HDDs and a single SSD in the ODD bay, upgraded to 16GB RAM.
I've gone down the VM route and am running the free version of ESXi 6, which boots from a USB stick plugged into the internal USB port. It also has an internal MicroSD slot which can boot ESXi. I've got 4 drives: 2 WD Red NAS 2TB drives, striped only, and 2 Seagate drives, also striped only. I have a good backup routine of off-site and on-site backups to back up my 900GB of data, using USB3 drives for speed. This is a good source of info on the Gen8: http://homeservershow.com/forums/ind...m/88-ms-gen-8/ |
Well, firstly, I wouldn't bother with RAID 5 even if it were an option. RAID 5 pretty much became useless several years ago once drive capacities began exceeding 2TB:
http://www.zdnet.com/article/why-rai...rking-in-2009/

For my business customers, where possible I always choose RAID 10 or RAID 1, especially for mission-critical servers. The reason for using mirrored RAID arrays, of course, is to provide fault tolerance. In other words, it's a convenience thing rather than a 'backup', to reduce downtime in the event of a drive failure.

So, it depends on how mission-critical your system is. If reducing downtime were an important factor, I'd use a pair of drives in RAID 1 and perhaps keep the remaining bay for a hot spare, a backup drive or just extra storage for less important files. If, on the other hand, downtime were less important than storage capacity, I'd probably use 3 drives in RAID 0. As long as everything is backed up, a failed drive just means the server is going to be down for a while until you've replaced it and recovered from backups.

Also, how are you calculating the power consumption comparisons? I would be surprised if the difference is that great. How much would you be looking to sell the PowerEdge for? And does it come with dual PSUs and rack rails? |
And people say I'm a geek :rolleyes:
Keep it up though, I love it and always like new ideas. Marcin gave me plenty today about home security. |
For my home file server I run a ZFS pool. Backup is done using rsync to a separate pool in the same server, rsync to a remote server, and a portable HDD. Four copies of all files, all nice and simple +++
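For anyone wanting to copy that routine, here's a minimal sketch of it as a Python wrapper around rsync. All the paths and the remote host are hypothetical placeholders, and it assumes rsync is installed and SSH keys are set up for the remote target:

```python
#!/usr/bin/env python3
# Minimal sketch of the rsync-to-three-targets routine described above.
# All paths and the remote host are hypothetical placeholders; assumes
# rsync is installed and SSH keys are set up for the remote target.
import subprocess

SOURCE = "/tank/data/"                    # hypothetical ZFS pool mountpoint
TARGETS = [
    "/backup/data/",                      # second pool in the same server
    "backup@remote-host:/tank/data/",     # remote server over SSH
    "/mnt/portable/data/",                # portable HDD
]

for target in TARGETS:
    # -a preserves ownership/times; --delete keeps each copy an exact mirror
    subprocess.run(["rsync", "-a", "--delete", SOURCE, target], check=True)
```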
|
Clearly I need to come back and read this when I'm sober. I know my NAS is RAID 5, so the big question is: what should I be doing?
Surely the type of RAID you choose depends on what you want to achieve, e.g. performance, redundancy, fast fix, rapid recovery etc.
It's some while since I bothered with this, but the servers I was last "responsible" for were all at least RAID 5 with hot-swap drives, mainly Dell kit. So if anything failed you just yanked it out and plugged a replacement in while still running; the array was then rebuilt automatically at a hardware/firmware level. IIRC there was another variant (can't recall which) that was secure against a 2-drive failure. We used that on really critical stuff. My excuse is that I was an IT manager/director, not a techie! |
I can see the point Mark is making: if the typical read error rate of a large-capacity drive is inevitably higher than the error rate a RAID array can cope with when rebuilding, you're doomed to fail....
Mine's basically a media server for TV/music/films that happens to maintain backups of my desktop and laptop, so there's nothing critical on there. I guess RAID 5 with a load more discs reduces the chances of a failed drive causing a failed rebuild, as the missing data may be on another drive?

Just replaced the desktop HDD with a 4TB WD Black due to running short of space! ElsaWin takes up a shed load....

At some point I really should sort out a cloud-based/off-site backup option, given a house fire would lose us everything digital! |
Well, as long as you have backups, I wouldn't worry about it too much.

Essentially, the problem with RAID 5 is that, by the nature of how it works, the amount of data that has to be read to rebuild an array of large-capacity drives (>2TB) after replacing a failed drive exceeds the theoretical unrecoverable read error rate of most mechanical drives. Therefore, the chance of hitting a further read failure during the rebuild becomes significant. In enterprise environments, most systems administrators have known about the dangers of RAID 5 for years, so it's relatively uncommon. For residential use, RAID 5 is still surprisingly popular, but then having a server/NAS fail in a home setup will probably be nothing more than inconvenient.

You do have to remember that RAID data mirroring is a convenience thing. Some people confuse the data mirroring aspect of it with backing up, which of course it's not. Everything is mirrored between drives, including corruptions and all deletions (be they intentional or malicious).

RAID 1/10 is generally considered 'best practice' amongst most IT professionals who work with medium to large businesses. For my server setups, I generally have a pair of SSDs in RAID 1 for the operating system and RAID 10 for both the internal storage and the NAS/SAN external storage and backup devices. Rarely do the servers or storage devices I install have any fewer than 4 bays though.

If you have fewer than 4 bays available and you need the convenience of being able to swap out a failed drive with little to no downtime, I'd recommend RAID 1. If you don't need that convenience, RAID 5 is fine, though bear in mind that it (theoretically at least) provides little more fault tolerance than RAID 0.
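To put a rough number on that, here's a back-of-the-envelope sketch. It assumes the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read; that spec figure is an assumption and real-world rates vary, so treat it as an illustration rather than a prediction:

```python
# Back-of-the-envelope sketch of the RAID 5 rebuild argument above.
# Assumes the commonly quoted consumer-drive spec of one unrecoverable
# read error (URE) per 1e14 bits read; real-world rates vary, so treat
# this as an illustration, not a prediction.

URE_PER_BIT = 1e-14   # spec-sheet URE rate for many consumer drives

def rebuild_failure_probability(surviving_drives: int, drive_tb: float) -> float:
    """P(at least one URE while reading every surviving drive in full)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

# 4x 2TB RAID 5 after one failure: 3 surviving drives read end-to-end.
print(f"{rebuild_failure_probability(3, 2.0):.0%}")   # ~38%
```

On those assumptions a 4x 2TB RAID 5 rebuild has roughly a 1-in-3 chance of hitting a URE, whereas a RAID 10 rebuild only has to re-read the one surviving mirror, which is part of why it looks so much safer.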
|
With 4 identical drives in RAID 5, am I likely to get away with a single drive failure and rebuild (i.e. is it likely any missing segments of data from a failed read will be available from elsewhere in the RAID)?
I was running mirrored with 2 drives but ran out of space, so I expanded it with 2 more drives and went to RAID 5 at that point.... |
I think I might try ESXi on USB and use all four HDDs in 1+0 then |
Having said all that, there's a chance the drives might last better than expected and rebuilding may well succeed, but if you want to vastly improve your chances, with 4 drive bays, you could create a RAID 10 array instead. |
Years ago I had a home server with 5x 200GB disks, and indeed 3 of them failed over a very short period. Now I have a NAS.
When my work HP server starts wobbling, it will be replaced with a NAS. I can't see the benefit of a server, with its complexity and expense, over the simplicity of a NAS. |
Of course, a proper server can do so much more than any NAS though, and in most larger businesses you'll usually find several of each, all serving different purposes. One of my customers is presently running 16 servers (6 physical and 10 virtual) and 4 large-capacity NAS units (for shared storage and backups). |
I had two NASes before, but they were too slow, underpowered and lacked flexibility, so I ditched them and bought the Dell server. 20-30MB/s transfer speeds from the NAS were not good enough for me; on the Dell I am getting 110-112MB/s, which is pretty much the limit of a 1Gbps network. Besides that, a fully blown server lets me run all kinds of software, say the same ElsaWin installed on the server, where the PC/laptop then only needs a small client installed. It also runs CCTV and acts as a router, as I ditched the BT Hub and am just using the Openreach modem and a Cisco access point for wireless; it does VPN so I can log on to the home network remotely, and some other software as well.
Installing ESXi did not go to plan: it wanted at least 4GB of RAM, but with 4GB installed there is only 3.84GB available. I will upgrade the RAM in the future but just want to get it up and running at the moment. I do not really need ESXi, as I don't need to run several virtual servers anyway, so I might go for a 5th HDD in the ODD bay, as I have several 2.5" HDDs lying around; I just need to get a power cable adaptor from FDD to SATA. |
This is what you need .... some serious server power (notice the 24 logical processors and 72GB of RAM) :) Attachment 11974

This is a setup I've been working on recently for a business customer. There are 4 servers in the group: 2 of them are of the spec you can see in the screenshot and the other 2, which only have 12GB of RAM each, are running the free Hyper-V 2012 R2 'core' OS (but can be managed from any of the full Server 2012 R2 Standard installations).

The best thing about MS Hyper-V is how you can 'live-migrate' VMs from one physical server to another with zero downtime. I also have some of the VMs configured for replication such that, if one server dies, I can quickly power up VM clones on another server in the group. |
The days of servers having storage are limited (well, over in most cases). Separating compute and storage makes sense, full stop. When storage is all in one place it's a lot easier to manage. When compute breaks, throw it away and get a new one; you've lost no data.

I run a Synology NAS at home and, feature-wise, it's dripping with them; performance-wise, the 1Gb LAN is the bottleneck every time. Power-wise, it uses very little. It also has a lot of apps available, including CCTV. All compute is on ESXi, as it's just easier (and cheaper) than virtualising over Windows.

In terms of RAID, even at home, losing access to data while you restore from backups is a PITA. For that reason alone, I would run at least RAID 1 to give you a chance of keeping things going while you replace the failed disc. Also, mirroring from a 4TB disc to 2 spanned 2TB discs, if that's even possible on a desktop OS, is a bit messy. Mirroring is best performed across identical geometries; however, any mirroring beats none at all. |
At the moment I'm running a 2.5" 500GB drive for the OS (Windows Server 2012 R2) plus 3x old (circa 2003, so not very efficient) 250GB 3.5" drives in RAID 0 for testing purposes. Power consumption is 47-48W at idle, 60W when running disk speed tests using CrystalDiskMark, and the max I have seen with the CPU at 100% load was 99W. More modern HDDs should be more efficient, so I am hoping to see under 50W power consumption with 4x 2TB drives.
The way I calculated it: 1W of continuous consumption equals £0.768 per year running 24/7 on my Economy 7 tariff, so going from 300W to 50W over 3 years is a £576 saving for me, which will pay for a CPU upgrade to a Xeon E3-1265L and a RAM upgrade to 8GB with spare change left.
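For anyone checking the maths, here's the same sum as a quick sketch; the ~8.8p/kWh blended rate is inferred from the £0.768 per watt-year figure, not a quoted tariff:

```python
# Sketch of the electricity-saving arithmetic above. The tariff is an
# assumption inferred from the quoted ~GBP 0.768 per watt-year figure.
HOURS_PER_YEAR = 24 * 365            # 8760
TARIFF_GBP_PER_KWH = 0.0877          # ~8.8p/kWh, inferred blended Economy 7 rate

def annual_cost(watts: float) -> float:
    """Yearly electricity cost in GBP for a device drawing `watts` 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * TARIFF_GBP_PER_KWH

saving = annual_cost(300) - annual_cost(50)
print(f"£{saving:.0f}/year, £{saving * 3:.0f} over 3 years")  # ≈ £192/year, £576 over 3 years
```
|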
Throw away any sub-500GB discs and replace them with a mirrored pair of SSDs.
Very low power. Very quick access... Spinning spindles are getting to be a waste of money now unless they are multi-TB. |
I do not remember the exact models I had, but first I had a D-Link 2-bay NAS (about £50) which did about 30MB/s transfer speeds with 2x 2TB in RAID 0. I felt this was not enough, so went for a Netgear 4-bay (over £100) which only did 20MB/s, so I scrapped that (well, sold it on eBay to be honest) and went for the Dell server. |
I swapped the SSD out of my desktop - keep meaning to install the server OS on the old one and stick it in there. Main benefit is it's silent and doesn't need to spool up. The supplied HDD with the HP servers is a noisy little bugger!
I ran mine off a USB stick for a year with no issues, but got nervous about the number of reads/writes to it and took it out... |
I'm sober now and all the technical stuff you guys are talking about still confuses me.
I currently have an 8TB QNAP 412 NAS set up in RAID 5, because when I read the setup prompts that seemed like the best compromise. It gets used for Time Machine backups for a MacBook Pro with a 1TB drive and an iMac, also with 1TB. There is also a backup of all my photos on it and a copy of my entire music collection to serve Sonos.

What do you guys think would be best for the above? I would like some other means of storing photos and important documents, but don't really fancy paying for them to be stored in the cloud; I have a huge trust issue with the amount of data we freely give to others these days.

There's one other thing that bothers me: just recently one of the drives has reported minor errors but is still working. I did some reading and it says not to change a drive that hasn't failed on a RAID 5 setup, because it may not rebuild if the disk was still in use. Is that true? |
Just quads though? Aren't you over-provisioning the available cores somewhat with 75 VMs? ... or does that include redundant VMs? And wouldn't it be better to use Fibre Channel to the SAN to reduce the amount of network cables needed? That's a hell of a lot of storage too! What sort of application is this lot running? And out of interest, do you virtualise your DCs or are they physical? |
Bloody Heck, you guys are up and about rather early for a Sunday :tuttut:
There was me thinking, with two early-doors airport runs, that I was the only bloke up :D And you are all talking really involved technical stuff; thank heaven I have retired from all that malarkey :ROFL: I'm just running a two-bay Zyxel NAS with 2x 2TB drives and, hanging off the back, a 4TB USB drive for backup. This simple setup supports all the home machines, roughly.
My problem is the cost of coal to keep the fires burning for the steam generation. |
I suspect you'll need to copy the data to another drive before rebuilding the RAID to the new config. Could be wrong (and I'm sure someone more knowledgeable will correct me!) but I wouldn't want to risk losing the data either, so the belt-and-braces approach is to copy it off before rebuilding the RAID.
If you're needing new drives anyway to increase capacity it's not such an issue. Depending on how complex the file structure is, you could move bits to other computers to free up space. Whatever you have left you need to be able to copy to a USB HDD, or if space is a problem you could create a RAR archive of it, but that's going to take a good few hours if it's TBs of data. If you copy it off and rebuilding the array doesn't lose you data, then happy days, delete the backups. If it does, at least you're not attempting to recover the files from borked drives..... |
Yup Del, as Adrian says, you'll need to transfer all the files off first, unfortunately.
Depending on the NAS features, you can sometimes make certain RAID configuration changes 'online', such as storage capacity 'expansion' (when replacing drives individually), but even then it's safer (and sometimes quicker) to just back up the data and reinstate it afterwards. If the data is important, you should already have it backed up elsewhere. |
The 214play is a 2-bay, but the 'play' gives it extra CPU grunt that can help with many apps, media transcoding etc.

Enterprise storage is my bread and butter, NetApp specifically, but I replaced all my NetApp kit with Synology as the functionality and power savings were irresistible.... For SOHO/SME, I can't fault it..... |
So I've had a little play with the MicroServer, done some testing, and have now transferred my stuff over to it. I have some software still to install and am still waiting for another 4GB of RAM to arrive; 8GB should be enough for my needs, as the Dell had 16GB but was normally using 4.6-4.8GB.
One issue I had was with the 2.5" HDD that I was using for the OS, as it was running rather hot (45-50C) and causing the fan to spin at 50-54% speed at idle, making it rather loud. Still nothing compared to the Dell server, but that was living in the loft due to noise so could not be heard much, whereas I was going to put the HP in the under-stairs cabinet so wanted it to be quieter. I swapped the HDD for a 500GB SSD that I had bought with the intention of fitting it in one of the laptops, and this has resolved the fan speed/noise issue; the fan is now down to 6-10% at idle and pretty much silent. Also upgraded from the Celeron G1610T (2 cores, no hyperthreading) to a Xeon E3-1265L (4 cores + hyperthreading).

Currently running 4x 2TB Seagate HDDs in RAID 1+0, 1x 500GB Samsung Evo 850 SSD, the Xeon E3-1265L CPU and the 1x 4GB original RAM, soon to be joined by another 4GB.

Power consumption with the original CPU: 48-54W idle, and 70W at full load (CrystalDiskMark running locally on the RAID array to load the HDDs, another CrystalDiskMark from another PC over the network for the SSD, and Prime95 loading the CPU). Interestingly, while the Xeon brings full-load power consumption higher than the Celeron, as you'd expect from a 45W part versus the 35W Celeron, it jumps by more than the 10W difference, to 93W; but it is lower at idle, at 42-44W. Highest power consumption is 144W during startup, for a second or two while the HDDs spin up.

The costs including VAT and shipping have been:
Server (new, 1 year warranty) - £173.88 (expecting a £55 rebate, so should be £118.88)
Xeon E3-1265L CPU (used) - £150
Extra 4GB of RAM (used) - £25.99
Samsung Evo 850 500GB SSD (new, 5 year warranty) - £118.70 (this was a bit unexpected; I could have used a smaller/cheaper disk, but this was what I had already bought, and I will need to buy another one for the laptop now)
HDDs were transferred over from the Dell server.

So the total comes to £413.57 (counting the rebate). If I sell the Dell for £100 it will take about 1.6-1.7 years to recover the remaining costs in power savings, as the sketch below shows. I am expecting it to run at least 3 years, so it was worth doing, and while the performance is lower than the Dell server, it is still sufficient for me now with the Xeon CPU.
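Quick payback check on those numbers, assuming the £55 rebate arrives and the ~£192/year electricity saving estimated earlier in the thread holds:

```python
# Payback check on the figures above, assuming the £55 rebate arrives and
# the earlier ~£192/year electricity saving estimate holds.
server, cpu, ram, ssd = 118.88, 150.00, 25.99, 118.70
total = server + cpu + ram + ssd                 # = £413.57
dell_resale = 100                                # hoped-for sale price
annual_saving = 192                              # from the 300W -> 50W estimate
print(f"{(total - dell_resale) / annual_saving:.2f} years")  # ≈ 1.63 years
```
|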
I'd just buy a really small SSD and mirror the drive so you can use the 500GB in your laptop - a bit of a waste otherwise. I ran the software on my server for over a year on an 8GB USB stick in the mobo lol
|
It needs to be fairly spacious due to software that only installs to the C: drive, and it cannot be a 2.5" HDD because they run too hot, make the fan spin too fast and make too much noise. I could probably get away with another 240-250GB SSD (I was using about 200GB on the system drive on the Dell) for a cost saving, but it's probably not worth reinstalling everything. Another option is to run a hypervisor off SD card or USB with the image stored on the RAID array, but again I cannot be ar$ed.
PS. At this point I guess it's gone from asking for advice more towards giving feedback, and while I appreciate any suggestions that follow, it's unlikely that I will make any fundamental changes now. My question was whether it was sensible to use 2x 2TB in RAID 0 and then mirror that to a single 4TB disk in RAID 1; while I did not get a definite answer, the replies made me believe I would be better off with 4x 2TB in RAID 1+0, which is what I have done. |
The HP ProLiant MicroServer Gen8 is on sale with £55 cashback again if anybody else is interested. Mine has served me very well so far and I did get my cashback without any issues.
http://www.serversplus.com/product.a...sp_serversplus |
Glad I stumbled across this; some really good info that has helped me with a couple of issues I have been having. Also, nice setup there.
|
We are just commencing a datacentre move of the above, but none of the existing kit is being retained. We've just set up the new kit: 3 hosts are replacing the four, but we now have dual 8-core CPUs with 512GB of RAM in each box, so if anything it will have loads of spare capacity. We also have SAN redundancy, with the whole lot being mirrored in addition to the normal RAID setup. |