SATA doms


alex1002

Member
Apr 9, 2013
519
19
18
Good day
I just built my first Fat Twin Supermicro server and got one SATA DOM per node. Are these reliable? Do I need to get two in RAID 1? My plan is to install Hyper-V on them.
Thank you

Sent from my Nexus 6P using Tapatalk
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
The trick with SATA DOMs is limiting writes. In the smaller form factor there is not as much room for NAND. I think the SATA DOMs are rated for 0.3 or 1 DWPD, but remember they are only 32GB, 64GB, or 128GB.

The #1 issue people have with these is that they put the SATA DOM in as a cache drive in storage arrays and write too much to them. That is not a good use case. Booting ESXi, pfSense, FreeNAS, and, my guess is, Hyper-V is a great use case. Small log writes are not an issue, but if you are then going to run heavy database writes in a Hyper-V VM located on the drive, that may be an issue.

I like to RAID 1 just because it makes for less downtime, but on all of the Xeon D machines where I am using SATA DOMs I am using single units (e.g. for Proxmox/FreeNAS OS installs).
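
For a rough sense of what 0.3 or 1 DWPD works out to in absolute terms, here is a minimal sketch of the endurance math. The capacities and DWPD figures are the ones mentioned above; the five-year period is an assumed warranty term for illustration, not a spec from Supermicro.

```python
# Rough SATA DOM endurance estimate: total TB written from capacity and DWPD.
# Capacities and DWPD values are from the posts above; the 5-year period is an assumption.
YEARS = 5
DAYS = 365 * YEARS

for capacity_gb in (32, 64, 128):
    for dwpd in (0.3, 1.0):
        tbw = capacity_gb * dwpd * DAYS / 1000  # total TB written over the period
        print(f"{capacity_gb:>4} GB @ {dwpd} DWPD ~= {tbw:.0f} TB written over {YEARS} years")
```

Even the pessimistic end of that range is far more than an OS boot volume normally writes, which is why the cache-drive use case is the one that gets people into trouble.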
 

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
34
deslok.dyndns.org
I will not put VMs on the DOM.

Sent from my Nexus 6P using Tapatalk
Then you should be OK. I've yet to see a solid-state device "just fail" the way I have hard drives. Although I still use RAID 1 where I can, I'm not as convinced it's necessary for an OS partition on an SSD (which I've only seen fail because of PSU voltage spikes, exhausted write capacity, or being DOA/defective, but not from age like a spinner).
 

cheezehead

Active Member
Sep 23, 2012
723
175
43
Midwest, US
I'm not as convinced it's necessary for an os partition using an ssd
The last set of servers I ordered last month will actually boot from SSD, because it was significantly cheaper to get a pair of SSDs in a mirror vs. five 10k spinners to handle the I/O profile needed for the box. Even though the SSDs were more per drive than the 10k spinners, by not needing more advanced RAID levels I was able to save a fair amount by buying a lower-end RAID controller as well.
 
  • Like
Reactions: alex1002

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
34
deslok.dyndns.org
The last set of servers I ordered last month will actually boot from SSD, because it was significantly cheaper to get a pair of SSDs in a mirror vs. five 10k spinners to handle the I/O profile needed for the box. Even though the SSDs were more per drive than the 10k spinners, by not needing more advanced RAID levels I was able to save a fair amount by buying a lower-end RAID controller as well.
Can't argue with it simplifying your array as well; I've used lots of RAID 1 SSDs in place of RAID 10 arrays of 10k/15k RPM drives.
 
  • Like
Reactions: cheezehead

alex1002

Member
Apr 9, 2013
519
19
18
The last set of servers I ordered last month will actually boot from SSD, because it was significantly cheaper to get a pair of SSDs in a mirror vs. five 10k spinners to handle the I/O profile needed for the box. Even though the SSDs were more per drive than the 10k spinners, by not needing more advanced RAID levels I was able to save a fair amount by buying a lower-end RAID controller as well.
My issue is that the trays are already full of drives, with no more space for SSDs. This server does not have any internal USB or SD card slot on the motherboard. There are only two USB ports at the back, which we need for the KVM and some other appliances that use USB.

Sent from my Nexus 6P using Tapatalk
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
As long as you are just booting Hyper-V off of them you are fine.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Why would something like FreeNAS be OK but CentOS, FreeBSD, etc. not be, if you're using other drives for your data/logs/etc.?
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Logs can also be a burden on small SSD boot drives (especially if you run something log-happy like Ceph). I set up a Logstash server and configured syslog to send logs only to it instead of to the boot drive.
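
As a minimal sketch of that kind of setup, assuming the host uses a reasonably current rsyslog (v7+), with the file name, hostname, and port below as placeholders for whatever your Logstash syslog input actually listens on:

```
# /etc/rsyslog.d/90-remote.conf  -- hypothetical file name; host and port are placeholders
*.*  @@logstash.example.lan:5514    # forward everything over TCP (@@) to the remote collector
*.*  stop                           # then stop processing so nothing is written to the boot drive
```

Dropping the `stop` line would keep a local copy as well, which is a middle ground if you only want to reduce, rather than eliminate, writes to the boot drive.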

Sent from my SM-G925V using Tapatalk
 
  • Like
Reactions: wsuff and Patrick

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
rsyslog FTW :-D
And for many record-keeping and security reasons, it's the only sensible thing to do.

Anyway, back on topic: the SM 64GB ones seem to be rated for 65TB of writes according to the spec! That's well and truly not an issue for most operating systems used as a hypervisor or hosting all changing data elsewhere.
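
To put that 65TB figure in perspective, here is a minimal sketch; the 65TB rating is the spec quoted above, while the daily write rates are purely assumed examples for a light hypervisor boot/log workload, not measurements.

```python
# Rough lifetime estimate for a 64GB SATA DOM rated at 65TB of writes (figure from the post above).
# The daily write rates below are assumed examples only.
RATED_TBW = 65  # TB

for daily_writes_gb in (1, 5, 20):  # hypothetical hypervisor boot/log write rates
    years = RATED_TBW * 1000 / daily_writes_gb / 365
    print(f"{daily_writes_gb:>3} GB/day -> ~{years:.0f} years to exhaust the rated 65 TBW")
```

Even at tens of GB per day the rated endurance outlasts the useful life of the server, which is the point being made for hypervisor boot duty.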