Question: Bang for buck Veeam repository server


Marco2G · New Member · Feb 22, 2022
Hi everyone

I apologize if this isn't the venue for asking for input; I just couldn't find a better spot.

I'm hoping this question won't be taken as a company trying to get cheap consulting. This really is just a nerd trying to abuse the company budget for his tinkering, but I'd also rather not look like a fool.

I started a new job this year and I'm facing an overhaul of the backup environment. We're currently running HP DL380 Gen9 servers as repositories, each receiving iSCSI LUNs of 80 to 160 TB.

We have scale-out repositories with S3 offload, and the problem is that with backup copy jobs running on top, these LUNs are getting killed. Seriously, I think I can hear the servers scream for mercy. We regularly sit at a disk queue depth of 4 for hours on end...

The general idea is to build new repos with less raw capacity but faster disks, offload ALL data to S3, and mirror it there across two sites so we can do away with copy jobs entirely.

Now here's the rub: I kinda want to go DIY on these puppies. I think the CEO would find the idea of getting the most bang for the buck appealing, as he's not all that married to paying the likes of HP thousands in support contracts when we could just as well keep some spare hardware on hand.

My thinking goes in the direction of a value server with something like local 15 TB Micron NVMe drives in RAID 5. To save on Microsoft licensing, we'd run a Linux repo.

Does anyone here have experience with the pitfalls such a system might bring with it? I seem to remember that not too long ago, ridiculously fast NVMe storage only ran properly under Linux at all, and it took endless firmware updates to fix it on Windows. Is that a non-issue these days? Do I just slap the drives in a software RAID, format with XFS, and call it done?
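
To make the question concrete, here's roughly what I have in mind for the "software RAID plus XFS" route. This is just a sketch; the /dev/nvme* names are placeholders for whatever drives end up in the box, and as I understand it the reflink flag is what enables Veeam's fast clone (space-efficient synthetic fulls) on XFS repos:

```bash
# Sketch only -- four placeholder NVMe devices, RAID 5 as per the plan.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Format with reflink enabled; Veeam's fast clone on XFS needs it.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/md0

# Mount and persist; noatime avoids metadata writes on every read.
mkdir -p /mnt/veeam-repo
mount -o noatime /dev/md0 /mnt/veeam-repo
echo '/dev/md0 /mnt/veeam-repo xfs defaults,noatime 0 0' >> /etc/fstab

# Save the array layout so it assembles at boot (path varies by distro).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

If I read the docs right, you then also have to tick "Use fast cloning on XFS volumes" when adding the repository in Veeam, otherwise the reflink formatting doesn't buy you anything.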

Also, what would be the best rackable server hardware for this endeavor? We have TONS of old hardware in the cellar, from G7 upwards, and I wouldn't be totally against reusing it with consumer-grade SSDs. I just suspect we'd leave I/O performance on the table by hanging SATA disks off the SAS interfaces, yeah?
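
I figure the honest way to answer that is to benchmark whatever comes out of the cellar with a Veeam-shaped workload before trusting it. Something like this fio sketch is what I'd run (paths and sizes are made up, and 512k is my guess at what Veeam's default 1 MB blocks roughly look like on disk after compression):

```bash
# Sequential writes, like an incremental backup landing on the repo.
fio --name=veeam-write-sim --filename=/mnt/veeam-repo/fio-test \
    --rw=write --bs=512k --size=20G --numjobs=4 --iodepth=16 \
    --ioengine=libaio --direct=1 --runtime=120 --time_based \
    --group_reporting

# Random reads, like synthetic full builds and health checks.
fio --name=veeam-read-sim --filename=/mnt/veeam-repo/fio-test \
    --rw=randread --bs=512k --size=20G --numjobs=4 --iodepth=16 \
    --ioengine=libaio --direct=1 --runtime=120 --group_reporting
```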

Would it even be an idea to run the disks in RAID 0? After all, if all data gets pushed to S3 anyway, we'd only lose one copy of the backups if the server dies... might be a strategy worth considering.

Anyway, I hope you get an idea of where my mind is wandering. Don't hesitate to call me out if the concept is stupid to begin with. I really appreciate any input based on facts and/or experience.