I'm starting a new thread to avoid unnecessary discussions that only make everything more complicated.
But first of all a disclaimer:
This is something I do not need AT ALL; it's purely a combination of "I'm an idiot and have an upgrade itch" and "I like to mess with crap and tweak it for no real gains". So please accept that and help me (if you want) achieve exactly what I want, without suggesting something completely different unless what I want to do wouldn't work or there is a much more effective way (like pointing out I do not need to use an HBA with NVMe drives, which I had no idea about).
There is a small home server sitting on a small rack in the corner of the living room.
It runs ESXi, which I'm going to replace with XCP-ng once I find the mental strength to mess around with Linux stuff I know absolutely nothing about. Installing the server back then took me a solid several weeks of trial and error, frustrated questions, bugging people and googling around until everything worked flawlessly, whereas it would be a matter of a few hours for someone who knows what he is doing. The server originally served as an all-in-one box that also ran virtualized pfSense, which eventually turned out to be a really bad idea under certain circumstances, so I replaced that with a standalone device. It was also supposed to run more things, like game servers to host co-op sessions for various games, but that never materialized, so it now only runs TrueNAS and an Ubuntu Server seedbox, barely doing anything.
The NAS is used to store illicit content (music, films) and regular daily backups from about three computers. It sees very few writes; I download a few hundred GB of stuff once in a blue moon, and the backups amount to a few tens of GB a month, so all in all, endurance is not really a concern for the storage.
The specs:
Supermicro X11SCH-F motherboard
Xeon E-2136 (the six-core one, I believe)
Intel X710 10Gb NIC (mental note: I need to upgrade my PC in order to be able to put the 10Gb Mellanox card back so this thing makes sense again)
LSI HBA (9305-xx)
4x Samsung PM883 SATA SSDs connected to the HBA and passed through to the TrueNAS VM, in a RAIDz1 configuration.
I had an upgrade itch of sorts recently, but it quickly morphed into something slightly different after I was corrected about one major oversight I wasn't aware of (the HBA thing). I'm not going to elaborate on that; instead I'll try to describe what I'd like to do right now.
Again, I preface this by saying there is absolutely nothing wrong with the server performance-wise, it's actually quite an overkill for its current purpose.
Mainly, I would like to replace the SSDs with NVMe ones for a slight speed boost when copying large files, because I am an extremely impatient person. Those full backups take too long at times and were the reason I put a 10Gb card in the server (and my PC, obviously) at some point. If I weren't such an impatient idiot, I could have used a Mini-ITX board and put the thing in a much smaller case instead, which is something I've been thinking about for a long time, because the Fractal Node 804 case is just big.
Problem: the motherboard does not support bifurcation. Some people claimed it did, but there is no such option in the BIOS, not a word in the manual, and I am pretty certain this particular model simply cannot do it. I'd have to upgrade the server first, which I want to do anyway. The board also only supports PCIe 3.0, and with the X710 card in there, I'm sure I wouldn't have enough lanes anyway.
I also want to lower the server's power consumption.
In its current form it idles at around 110W, and I'm sure a modern Ryzen 5-based (that's what I'm very preliminarily thinking about) platform would use a lot less.
Getting rid of the HBA is worth at least 10W in itself, but the platform upgrade (which will have to happen for the SSDs anyway) will do a lot more. Still, everything counts. Of course, there is the question of SATA vs. NVMe SSD power consumption, but I don't think there should be a huge difference IF I don't buy enterprise-class stuff (I might if it makes sense for reasons like price, endurance and so on).
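To put the idle-draw numbers in perspective, here's a back-of-the-envelope sketch. The €0.30/kWh price and the 40W Ryzen idle figure are pure assumptions on my part; substitute your actual tariff and measured draw:

```python
# Rough annual cost of a constant idle power draw.
# PRICE_PER_KWH and the 40 W target are assumptions, not measurements.
PRICE_PER_KWH = 0.30  # EUR per kWh, assumed tariff

def annual_cost(watts: float, price: float = PRICE_PER_KWH) -> float:
    """Return the yearly electricity cost in EUR of a constant draw of `watts`."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * price

current = annual_cost(110)  # current ~110 W idle
target = annual_cost(40)    # hypothetical Ryzen-platform idle
print(f"current: EUR {current:.0f}/yr, target: EUR {target:.0f}/yr, "
      f"savings: EUR {current - target:.0f}/yr")
```

Even the 10W from dropping the HBA alone works out to roughly €26 a year at that tariff, so it does add up.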
So for now, let's focus on the SSDs.
Disregarding the question of how to physically connect them (assuming none of the motherboards I have in mind for the upgrade support bifurcation, and I'd have to use some sort of lane-splitter adapter), I guess I could get away with any desktop junk given the little writing the NAS sees, but I don't want to buy complete crap either.
At the very least, the SSDs should have consistent sustained speeds. They can be PCIe 3.0 for all I care, the speed boost over SATA would be massive anyway.
I would prefer if they weren't too power hungry though.
I took a look at our biggest electronics e-shop and filtered out some of the 2TB M.2 ones, and there's the WD Red SN700, which is supposedly designed for NAS usage. It has huge endurance too (irrelevant, but reassuring) and good sustained speeds according to the STH review. Not terribly expensive at ~€140 either; let's call that the maximum I'm willing to spend on one drive. Unfortunately, the listing doesn't say what the power consumption is.
I don't mind buying used drives if the source is reputable, though. I don't think enterprise drives are the way to go, because I've heard U.2 ones get really, really hot (and probably use a lot of power too), and I would like to keep the server as quiet as possible (= I am apprehensive about adding extra fans inside).
So, thoughts?