Enterprise SSD "small deals"


lopgok

Active Member
Aug 14, 2017
245
171
43
I'm still looking; being limited to one backup is profoundly stupid, as is the fact that it's so difficult with Linux and so stupidly easy with Windows. CLI is out of the question: though it may be easy to code, it's anything but easy to use. It's like groping in the dark, never really knowing whether you got it right or not. Plus it's much easier to destroy your data with a CLI than with a GUI, and typos and syntax errors will drive you insane. And finally, I'm of the opinion that it's a bad idea to put all your eggs in one basket by using backup software that stores everything in an archive. As for security and encryption, I don't need any of that and would rather not have to deal with it.
Personally, I have 4 backup file servers. They are powered off except when actively doing backups. If one gets corrupted, I have the other 3. They all run Linux and use mdadm RAID 5.
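For reference, a minimal mdadm RAID 5 setup along those lines might look like this (device names and mount point are placeholders, not the poster's actual layout):

```shell
# Create a 4-disk RAID 5 array (sdb..sde are placeholder device names)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0                                # put a filesystem on the array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array config across reboots
mount /dev/md0 /mnt/backup                        # placeholder mount point
```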
 
  • Like
Reactions: nexox and ca3y6

jode

Active Member
Jul 27, 2021
127
79
28
Personally, I have 4 backup file servers. They are powered off except when actively doing backups. If one gets corrupted, I have the other 3. They all run Linux and use mdadm RAID 5.
Same, except my backup servers rely on ZFS. Thanks to copy-on-write (COW) there is no question which blocks/files have changed, and "incremental" backups are a matter of seconds vs. minutes.
The machines turn on automatically on a schedule and turn off after the backup completes. This lowers exposure to compromise, but more importantly (to me) saves $$$ on the power bill.
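An incremental ZFS replication of the kind described can be sketched like this (pool/dataset names, snapshot names, and the backup host are illustrative):

```shell
# Take today's snapshot, then send only the blocks changed since yesterday's
zfs snapshot tank/data@2024-06-02
zfs send -i tank/data@2024-06-01 tank/data@2024-06-02 | \
  ssh backuphost zfs receive backup/data
```

Because ZFS tracks changed blocks in the snapshot metadata, the send stream is built without scanning the whole dataset, which is where the seconds-vs.-minutes difference comes from.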
 

Fritz

Well-Known Member
Apr 6, 2015
3,699
1,651
113
71
Same, except my backup servers rely on ZFS. Thanks to copy-on-write (COW) there is no question which blocks/files have changed, and "incremental" backups are a matter of seconds vs. minutes.
The machines turn on automatically on a schedule and turn off after the backup completes. This lowers exposure to compromise, but more importantly (to me) saves $$$ on the power bill.
How do you manage the on/off?
 

Greg_E

Active Member
Oct 10, 2024
391
139
43
Since the last deal was pages ago, probably time to start a new thread with the next deal.
 
  • Like
Reactions: abq

jode

Active Member
Jul 27, 2021
127
79
28
How do you manage the on/off?
The cheapest and easiest is the UEFI/BIOS power-management option, i.e. have the firmware boot the computer daily at a set time. There are software packages that can set/update that wake time from the OS.

You can use programmable IoT outlets and have the computer power on when the outlet does (BIOS set to power on after AC is restored).

I have a "switched" PDU, which is the Lexus option for powering computers on/off over the network.

... and yes, IPMI works, too :p
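On Linux, the firmware wake alarm can also be armed from the OS via the RTC. A rough sketch (the wake time and the backup command are placeholders):

```shell
# Arm the RTC to wake the machine at 03:00 tomorrow, without suspending now
rtcwake -m no -t "$(date -d 'tomorrow 03:00' +%s)"

# ... then, at the end of the backup job:
/usr/local/bin/run-backup   # placeholder for the actual backup script
poweroff                    # shut down until the RTC alarm fires again
```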
 
  • Like
Reactions: nexox

Sacrilego

Now with more RGB!
Jun 23, 2016
202
292
63
How do you manage the on/off?
Like ca4y6 said, IPMI.

I have a script in TrueNAS scheduled to turn on my backup server via IPMI periodically at night to replicate snapshots.
It helps me keep power usage down and provides some protection against hardware failures and ransomware, in case an attacker somehow ends up removing or encrypting all snapshots on my main storage.
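A hypothetical version of such a script using ipmitool (the BMC address, credentials, hostnames, and dataset names are made up for illustration):

```shell
# Power the backup box on via its BMC
ipmitool -I lanplus -H bmc.backup.lan -U admin -P secret chassis power on

# Wait until it answers over SSH
until ssh backup.lan true 2>/dev/null; do sleep 15; done

# Replicate the latest snapshot incrementally, then power the box back off
zfs send -i tank/data@prev tank/data@latest | ssh backup.lan zfs receive backup/data
ssh backup.lan poweroff
```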
 
  • Like
Reactions: itronin

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
The money in ransomware isn't in consumers, it is in enterprises. They have money and insurance. So do not assume this malware will be unsophisticated and unaware of common enterprise storage features. It is designed to circumvent anti-malware protections. By running enterprise hardware in a homelab, you basically make yourself collateral damage of an attack not designed for you.

Building barriers between machines that have no reason to talk to each other on the same network is one way I could have made this less painful.
Usability vs. security :( .

I could also use something like Qubes OS, but to be honest I typically prefer a Debian base (be it Debian or Ubuntu), and KVM instead of Xen (Qubes OS uses the Xen hypervisor).

I think that would very easily get very complex though, and it would explode disk-space usage if you need a full VM for every application.

On the desktop I use firejail, but the most it does (IMHO) is limit which folders a sandboxed application can access. For example, all of them have access to my /home/<user>/Downloads folder, but only a few have access to more.

Bubblewrap would be way better, but it's more complex to configure.

Running everything inside containers would also be impractical, although if you run them as separate users (using podman) you would effectively ensure isolation from one app to the other in case one gets compromised.

I'm not doing that right now, but seeing e.g. cryptominers making their way into Docker images (or React apps in general) highlights to me that this is another threat vector that needs to be considered.

That's something to be mindful of ... On my container server, where I run around 40 containers as one podman user (rootless), you already have the benefit of running rootless in case anything gets compromised. However, since volumes are a PITA to deal with and filesystem permissions using subuid/subgid can be a PITA in their own right, I mostly run every container as that same podman user, so one container getting compromised can compromise all the others.

Possible solutions (I'm currently evaluating the latter) include:
  • Different unprivileged users for each application in one KVM virtual machine
  • Different unprivileged LXC containers, each with its own podman user (unprivileged LXC + unprivileged podman)
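The first option, one unprivileged user per application, might be set up roughly like this (the username and image are illustrative; note that rootless podman invoked via sudo also needs the target user's runtime directory, e.g. from a real login session):

```shell
# One dedicated unprivileged account per application
useradd --create-home app-web
loginctl enable-linger app-web   # allow its containers to run without an active login

# Run the app's container as that user; a compromise is confined to app-web's files
sudo -iu app-web podman run -d --name web -p 8080:80 docker.io/library/nginx
```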
 

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
Backup server should be pulling from the clients and refuse inbound connections from everything except a non-client system... management or jumphost style. That box should also never touch or be mentioned on the clients. Different subnet is good too.

I too use ZFS for all my backups, and Windows hosts store anything important on ZFS-backed Samba. If that's not where it is, it doesn't matter enough and won't be backed up.
Good Point about pulling vs pushing.

My thought was for the backup server to pull from the client, but not so much for security reasons.

It just seemed easier/more logical, maybe because I wouldn't need 100 users; I can't remember now (haven't done it yet).
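A pull-style run of the sort described above, initiated from the backup server so the clients never hold credentials for it (host and dataset names are illustrative):

```shell
# Runs on the backup server: connect out to the client and pull its snapshot
ssh client.lan "zfs send -i tank/data@prev tank/data@latest" | \
  zfs receive backup/client-data
```

With this direction, a compromised client has nothing to connect to: the backup server's address and keys never appear on it.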
 
  • Like
Reactions: tuxflux85

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
I'm still looking; being limited to one backup is profoundly stupid.
Limited to one backup in which way ?

For block/filesystem-level backups you can use ZFS snapshots.

For file-level backups you can use restic, borg, rsnapshot, ... There are plenty of choices.

BackupPC / Amanda / Bacula are some older packages; I'm not sure how much I'd want to use them nowadays. And exposing a Samba/CIFS server just for that is quite the security risk ...

Also, with ZFS you could create a file-backed pool, send a snapshot to it, then upload the pool file to any cloud storage (after encrypting it in whatever way you prefer).
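A sketch of that file-backed-pool idea (sizes, paths, and pool/dataset names are illustrative):

```shell
# Create a sparse file and build a single-device pool on it
truncate -s 100G /srv/backup.img
zpool create filepool /srv/backup.img

# Replicate a snapshot into the file-backed pool, then export it cleanly
zfs send tank/data@today | zfs receive filepool/data
zpool export filepool

# /srv/backup.img can now be encrypted (e.g. with gpg) and uploaded to cloud storage
```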

as is the fact that it's so difficult with Linux and so stupidly easy with Windows.
My only issue with Linux (it would be the same with Windows) is saltstack and secrets management. That was my only big roadblock.

If you set up a user & password manually on each end, it should be pretty easy.

Windows does plenty of stupid stuff, and so can you, especially on Windows.

CLI is out of the question: though it may be easy to code, it's anything but easy to use. It's like groping in the dark, never really knowing whether you got it right or not. Plus it's much easier to destroy your data with a CLI than with a GUI, and typos and syntax errors will drive you insane. And finally, I'm of the opinion that it's a bad idea to put all your eggs in one basket by using backup software that stores everything in an archive. As for security and encryption, I don't need any of that and would rather not have to deal with it.
OK, so which backup solution do you want, if not archive based? Block based? I'm not sure I follow you.

As for security/encryption, I personally encrypted all of my servers (minus some Raspberry Pis etc. whose performance would completely tank if I did). Locally it's one thing, but if you want to upload your backups to the cloud ... well, think about it :cool:.
 
  • Like
Reactions: itronin

seany

Member
Jul 14, 2021
43
38
18
Same, except my backup servers rely on ZFS. Thanks to copy-on-write (COW) there is no question which blocks/files have changed, and "incremental" backups are a matter of seconds vs. minutes.
The machines turn on automatically on a schedule and turn off after the backup completes. This lowers exposure to compromise, but more importantly (to me) saves $$$ on the power bill.
This is what I do too. Main NAS is 4x RAIDZ2, 6 wide. Every 18 months the "smallest" vdev gets totally replaced with whatever is the current best $/TB SAS on the used market. The extra disks replace whatever is smallest in the 2 backup NAS boxes (one local, one remote), which are 3x RAIDZ2, 12 wide, with no consideration for size matching within a vdev. Backup machines are powered up and down via IPMI; the remote unit has OPNsense + Tailscale on it ... been working well for years.
 

luckylinux

Well-Known Member
Mar 18, 2012
1,567
501
113
Not just nvme, but PCIe 4. Still a bit pricey in my opinion. But forget the < $100 prices for 3.84TB. I haven't seen a deal like that in months. I am selling regularly some surplus 3.84TB SSDs and they sell at £200 in a day or two.
Yeah, it's AI Hell :confused: .

I should have bought more when I had the chance. Well, I still need to deploy them anyway, so it's not like they are currently being used ... yet.

There are always some parts I'm missing for the server builds, and then everything drags on ... forever :rolleyes:.