Help with motherboard and CPU for new FreeNAS build

rskoss

Active Member
Rosewill cases are finally available again, and I have a 12-bay case on the way. Now I need things to put in it! I'm new to this - I've never put a computer together before.

This server will be a file server only. Plex, VMs, and anything else that needs the data it holds will run on another server.

I currently have a Synology DS1815+. I understand that FreeNAS doesn't need a lot of CPU power, but I don't want to be as annoyed with my new build as I am with the Synology. When it does a data scrub or rebuilds its array, the machine is absolutely useless - incapable of doing anything else.

I plan to run 64GB of DDR4 ECC RAM. I'd like the board to support at least 128GB in case I need more.

I need a PCIe slot or two to connect the 12 SATA hard drives.
I need a PCIe slot for a 10GbE card, as I work with 4K video files.

My reading suggests that booting from an M.2 would be good, and a mirrored pair (RAID 1) even better.

I don't want to waste money, but cost isn't the most important factor.

Looking for guidance, pointers - all information is good.
 

itronin

Well-Known Member
FreeNAS on bare metal then?

Are you concerned about power consumption at all?

You mention 4K video editing... Are you planning an SSD pool to feed your 10Gb network connection? If so, I'd probably look at SAS over SATA, and honestly NVMe would probably be the better route - at least for the clips you're actively working on...

Not to send you away, but have you read up on the hardware recommendations or looked at the "Will it FreeNAS" thread on the iXsystems community board? See the first three pinned threads.

Are you looking for used or new gear recommendations?

What total capacity and size spinners are you looking for (think growth 18-24 months out) for (Plex) media, video editing, and VM working sets?
What total capacity and size SSDs are you looking for?
What total capacity and size NVMe are you looking for?
N.B. used SAS spinners may be cheaper per GiB/TiB than SATA - they would obviate your on-motherboard SATA ports and require 2x HBAs, though there's nothing wrong with mixing SATA and SAS spinners in a pool so long as they are the same size.
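"Cheaper per GiB/TiB" is just a $/TiB comparison; a tiny Python helper makes it concrete (the listings below are made-up placeholders, not real quotes):

```python
def usd_per_tib(price_usd, capacity_tb):
    """Price per binary TiB, given a drive's decimal-TB capacity."""
    tib = capacity_tb * 1e12 / 2**40   # decimal TB -> binary TiB
    return price_usd / tib

# Hypothetical listings, just to show the math:
print(f"used 12TB SAS:  ${usd_per_tib(180, 12):.2f}/TiB")
print(f"new 12TB SATA:  ${usd_per_tib(300, 12):.2f}/TiB")
```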
 

rskoss

Active Member
Hi - thanks for replying.

Yes, TrueNAS on bare metal. Not concerned about power or the electric bill, but I don't want to heat the basement, nor do I want to hear a vacuum cleaner roaring 24/7.

I did see the "Will it FreeNAS" posts. My takeaway is that FreeNAS itself doesn't require a lot of horsepower, and a Supermicro board is probably what I need. But there are dozens of boards and I don't know how to choose.

I'm looking to buy new. I plan to stuff the case with HGST 12TB drives because they have the lowest failure rate according to Backblaze.

I currently have RAID6 on my Synology, so I thought I'd do the TrueNAS equivalent in this new build: 1 pool, 1 vdev, and make shares for anything that needs access.

I'm very open to suggestions.
 

zack$

Well-Known Member
For homelab use, I wouldn't consider running bare metal anymore, for three main reasons:

1. You can share resources on one host (run other VMs) with no performance penalty compared to running bare metal (NVMe pools excepted)... Strangely enough, on TrueNAS Core 12.0 RC I even get better SLOG writes than on bare metal (no idea why, still investigating).

2. Maintenance will be a lifesaver. Upgrading firmware on disks, making backups, migrating? Done.

3. Reliability - even iXsystems have a blog post on virtualizing on ESXi. (I tried Proxmox ages ago and realized that, though drives were passed through, the FreeNAS VM was not getting all the SMART data... this was with a drive known to show failing SMART attributes on bare metal but showing nothing of the sort on Proxmox. ESXi mirrored the bare-metal results.)

I know that point 1 above is not applicable to you...but, if anything, maintenance is a huge benefit of running TrueNAS virtualized.

On the MB side, Supermicro boards are, IMHO, the clear choice.

Given you want to run 12 SATA drives + 10G, there are MBs that already have both onboard, and you can use the PCIe slots for more I/O. See: X11SPM-TPF, X11SPM-TF, X11SPH-nCTPF, X11SPH-nCTF.

Without 10G but covering 12 SATA drives onboard, from the X10 SM generation: X10SRH-CF, X10SRH-CLN4F.

Cheapest route? Maybe an X9-generation SM E5-2600/1600 v2 MB with an add-on 10G/HBA card. But I honestly would not recommend it, as those are already EoL and will pretty soon be losing ESXi support... If you decide to run TrueNAS bare metal, though, they should be perfectly fine.

Also, given you work with 4K... you might want to consider an SSD pool as @itronin said. SAS3/NVMe prices on the used market are good enough to justify looking in that direction... On the reliability of those drives, even STH did a write-up some time ago where the results are pretty promising (used SSDs still have a load of endurance left in them).
 

rskoss

Active Member
Given you want to run 12 SATA drives + 10G, there are MBs that already have both onboard, and you can use the PCIe slots for more I/O. See: X11SPM-TPF, X11SPM-TF, X11SPH-nCTPF, X11SPH-nCTF.
Thank you. I have not seen those boards in any of my searches. I'll go through and do a detailed comparison, but being able to cable all 12 drives and have a 10G port on the MB is a lot less to think about.

Next question is which processor? You've added a complication: if I follow your advice and run TrueNAS as a VM along with my other VMs, the processor has to do much more than just serve files.
 

itronin

Well-Known Member
I second what @zack$ says about virtualizing. (For now) I have 1 bare-metal FreeNAS and 2 others: a virtual primary and backup. I feel I get a lot of bang for my buck on the virtualized servers and am not really giving anything up. The same primary server holds my Emby instance, FreeNAS (media), 2 DNS servers, a Windows 10 VM, and a handful of other items. FreeNAS has an HBA (spinner pool) and an Optane 900P AIC (SLOG) passed through. Emby and Windows 10 each have an Nvidia GPU passed through. This server is *always* on, as are its VMs.

If you are doing 12x 12TB and hoping for close to 10GbE with spinners, as large an ARC as you can afford will likely be critical to success. For writes you'll definitely need a SLOG. Remember a SLOG is not a write cache, but it helps by landing sync writes on media that is not your main pool.

Probably 2x6 RAIDZ2. Maximizing performance would be more like a 6x2 mirror layout. Just remember with mirrors: if both drives in any one mirror fail, the pool is toast.
IMO 1x12 RAIDZ2 is about as big as I would go, but I'm not sure you'll get the performance you are looking for.
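For reference, the capacity side of that trade-off is just drive counting; a rough Python sketch of the layouts being discussed (assumes 12 identical 12TB drives and ignores ZFS metadata/padding overhead):

```python
# Usable-capacity comparison for 12 identical drives. RAIDZ2 gives up
# 2 drives per vdev to parity; 2-way mirrors give up half the drives.
DRIVE_TB = 12
layouts = {
    "1x12 RAIDZ2": 1 * (12 - 2),   # one wide vdev
    "2x6 RAIDZ2":  2 * (6 - 2),    # two vdevs
    "6x2 mirrors": 6 * 1,          # six 2-way mirrors
}
for name, data_drives in layouts.items():
    print(f"{name:12s} {data_drives:2d} data drives ~ {data_drives * DRIVE_TB} TB before overhead")
```

More vdevs means more random IOPS, which is why the mirror layout wins on performance even though it gives up the most capacity.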

Per-drive performance (onboard cache, etc.) will probably be a little better with SAS vs. SATA - but that's all about your comfort level. If it were me I'd be looking for a good SAS3 controller a la the LSI 9400-16i to pass through to a virtualized FreeNAS (just an example, no relationship with any seller).

I like planning out a virtual server by drawing out what will run on it, creating budgets, and allocating which resources go where. That helps me steer to the correct hardware choice, though it is a circular process of refinement and, ultimately, trade-offs.

Budget items: CPU cores/threads, memory, onboard storage ports, PCIe lanes/slots, hot-swap bays, non-hot-swap or "velcro" bays...
Now, list your VMs: FreeNAS, linux1, linux2, Windows 10, etc., and line up your resources (a rough tally like the sketch below works).
Look at hardware options, then do it again and refine until happiness is reached.
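That tally can be as simple as a throwaway script; a minimal sketch (every VM name and number below is a placeholder to adjust):

```python
# Toy resource budget: planned VMs vs. what the host provides.
host = {"vcpu": 28, "ram_gb": 256}   # e.g. a 14C/28T CPU with 256GB RAM

vms = {
    "FreeNAS":    {"vcpu": 6, "ram_gb": 128},
    "linux1":     {"vcpu": 4, "ram_gb": 4},
    "linux2":     {"vcpu": 4, "ram_gb": 4},
    "Windows 10": {"vcpu": 8, "ram_gb": 16},
}

used_vcpu = sum(v["vcpu"] for v in vms.values())
used_ram = sum(v["ram_gb"] for v in vms.values())
print(f"vCPU {used_vcpu}/{host['vcpu']}, RAM {used_ram}/{host['ram_gb']} GB")
```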

If you use VMware and intend this to be an always-on server, then I heartily recommend a hardware RAID controller and a RAID 1 SSD boot array for ESXi - maybe a pair of 1.6TB Intel DC S3520s. If you go with Proxmox, different story (and I'm in the early stages of learning about that). Size the boot array to hold everything on day 1, and if you can afford the cost, double that storage size.

You could also burn two SATA ports and install a pair of SATADOMs for your FreeNAS boot pool - that gives you some flexibility to boot FreeNAS bare metal if VMware took a dump.

Did I mention memory? I think 128-256GB is probably your overall system goal, with 64-128GB given over to the FreeNAS VM. FreeNAS likes its memory.

I'm not a super huge fan of all-integrated motherboard components. For a single-CPU board I'm really fond of the X9/X10SRL-F, but as @zack$ says, ESXi is likely to drop support for those older boards... The Fujitsu board in the deals thread might be of interest to you too.

72-120TB is a lot of storage; even 20% utilized at the beginning is a big chunk, potentially larger than most USB backup drives. What's your backup plan?

If you go with ESXi and it will be your only server, I'd research the limitations of the free VMware license and make sure it's a fit. If you think you'll add another VMware server, then look at VMUG Advantage.

My requirements are not yours, and neither are my needs; YMMV. Just some ideas for you to kick around.

So here's an example build with resource budget from my rack; everything was purchased used (exceptions noted) via eBay or private sellers.

Compellent SC030 chassis upgraded to 920SQ power supplies (CSE-836)
X10SRL-F
256GB - 8x32 GB PC4-2400T memory
E5-2680 v4 (14C/28T)
SM 3U cooler
Mellanox ConnectX-3 dual 10GbE (x8) (ESXi)
Nvidia P2000 (x8 in x16 phys) (CentOS Emby)
Nvidia P620 (x8 in x16 phys) (Windows 10)
LSI 9400-16i (x8) IT mode (NIB from the above link) (FreeNAS)
LSI 3008-8i (x4 in x8) (RAID mode) (ESXi)
Intel Optane 900P AIC (x4 in x8) (FreeNAS)
2x Intel DC S3520 1.6TB 2.5" SSDs (connected to the LSI 3008 in a RAID 1 array)
16x HGST 8TB SAS3 HDDs (mix of 4201, 4200, 5200), configured as 2x8 RAIDZ2

ESXi 6.7
FreeNAS 11.3U5: 6 vCPU, 128GB, 9400-16i, Optane 900P AIC
CentOS (Emby): 8 vCPU, 8GB
CentOS: 4 vCPU, 4GB
CentOS: 4 vCPU, 4GB
CentOS: 4 vCPU, 4GB
CentOS: 8 vCPU, 8GB
Debian: 4 vCPU, 4GB
Windows 10: 8 vCPU, 16GB
vCenter and a few others...
 

BoredSysadmin

Not affiliated with Maxell
We recently went through a somewhat similar goal - saturating 10G speeds on spinning-disk ZFS; our (TrueNAS X20) system has 36 active spinners (6x6 RAIDZ2).
In your best case (2x6 RAIDZ2) you should expect about 400MB/s on writes, or 685MB/s on 50/50 mixed loads. A fast SLOG and large enough L2ARC would improve this somewhat for random I/O. See the RAID Performance Calculator at WintelGuy.com.
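Those calculator results line up with the usual rules of thumb: RAIDZ streams at roughly data drives x per-drive speed, but each RAIDZ vdev only delivers about one drive's worth of random IOPS, which is what drags mixed loads down. A back-of-the-envelope model (the per-drive figures are assumptions; real numbers vary with fill level and record size):

```python
# Rough ZFS spinner model: ~200 MB/s streaming and ~200 random IOPS
# per 7200rpm drive are assumed placeholders, not measurements.
def streaming_mbps(vdevs, drives_per_vdev, parity, drive_mbps=200.0):
    """Optimistic sequential ceiling: total data drives x per-drive speed."""
    return vdevs * (drives_per_vdev - parity) * drive_mbps

def random_iops(vdevs, drive_iops=200.0):
    """Each RAIDZ vdev behaves like roughly one drive for random I/O."""
    return vdevs * drive_iops

print(f"2x6 RAIDZ2 streaming ceiling: {streaming_mbps(2, 6, 2):.0f} MB/s")
print(f"2x6 RAIDZ2 random IOPS:  {random_iops(2):.0f}")
print(f"6x2 mirrors random IOPS: {random_iops(6):.0f}")
```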
 

itronin

Well-Known Member
Off topic, haha! Then I should be really happy with 731.5MB/s during a 10TB snapshot replication with no network or tunables tweaks... :eek: o_O
 

rskoss

Active Member
So many great things for me to ponder and learn. Thank you all.

The consensus seems to be that instead of having separate TrueNAS and Proxmox servers, each running on bare metal, I should combine my file server and virtualization server in the same box. I bow to the collective wisdom here. One box it is. And ESXi instead of Proxmox.

I looked at the limits of the free ESXi offering and I don't ever see myself hitting the point where I'd have to start paying for it.

Regarding the comments about buying used things: I don't. Call it a personality disorder, but I just like the things I buy to be new.

re: SLOGs and ARCs

I don't know that they'll help me. Let me tell you how I use my Synology now, and then you can comment on whether I need them. The Synology has a share for multimedia (Plex stuff), a share for my photography/videography, a share for the surveillance cameras, and a share that holds backups for all the other computers in the house.

Is a cache necessary for Plex? Yeah, it's sequential, but shouldn't system memory be able to cache it?

For photography, I just went back and looked at some random project folder sizes and I didn't see any that exceeded 50GB. I do most of my work on an iMac where files are stored locally, and then I copy finished projects to the Synology - and that takes an annoyingly long time. I've never seen transfers go above 80MB/s.

Often, while working on a project, I will want to incorporate a segment of video from a past shoot. I don't quite know which file I want, so have to scrub and browse from the Synology. This is almost impossible. And completely impossible if it's doing a data scrub or resilvering. But will a cache help here? I'm bouncing around different files, not transferring much from any one file - until I find the one I want. But then it's a quick copy.

Backups happen when I'm not working, so I don't care how long they take. The iMac is backed up twice locally and another copy is sent to Backblaze, so even if a write to my new box fails, I have plenty of copies.

That leaves the camera feeds. I can't believe a cache would be useful for that - even though they are 8MP cameras.

re: TrueNAS architecture (is that the right word?)
I was going to go with a single vdev because I'm simple-minded. But the consensus here is to go with 2x6 RAIDZ2. I will of course bow to the collective wisdom of this group, but that's a lot of drives going to parity - I would be down to 8 data drives. That still fills my needs: I have 30TB spinning on the Synology now, no new movies or TV shows seem to be getting made thanks to the virus, and I haven't been out shooting because I've been recovering from back surgery (with one more to go). So still 80+ TB of space; that should keep me happy for a couple of years.

I'm going to go through the motherboards that have been mentioned here and see which ones suit my needs.
 

zack$

Well-Known Member
You don't need to do 2x6 RAIDZ2 - it depends on your needs. I run a spinning-disk pool of 1x9 RAIDZ2 with a hot spare (though iXsystems recommends that vdevs not be larger than 8 drives). I only use it for backups.

Use this to calculate your pool size: https://www.servethehome.com/raid-calculator/

Everything I run hot/warm is on SSDs (they've become so cheap, it's a no-brainer). I really would recommend that your most accessed data be put on SSDs (especially if you care about performance). There are some Intel P3605 (1.6TB) add-in cards in the great deals section that I've had good results with! If the MB you eventually pick has enough PCIe slots for your needs, those P3605s are nice.

Also think about segregating your data by how much you use it... place the most-used data on an SSD pool. A VM pool, for example, could be separate from your 4K videos.

What I do is add up all my SSD pools (at 80% capacity) and ensure that my spinning-HDD backup pool can cover that (at 80% capacity). In ZFS, it's not recommended to go over 80% of a pool's capacity, as that leads to performance loss.
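If you'd rather script it than bookmark a calculator, the arithmetic is small; a sketch that folds in the 80% rule (approximate only - it ignores ZFS metadata and padding overhead):

```python
def practical_tib(vdevs, drives_per_vdev, parity, drive_tb, fill_limit=0.8):
    """Approximate usable RAIDZ capacity in TiB, capped at fill_limit."""
    data_drives = vdevs * (drives_per_vdev - parity)
    raw_tib = data_drives * drive_tb * 1e12 / 2**40  # decimal TB -> TiB
    return raw_tib * fill_limit

# 2x6 RAIDZ2 of 12TB drives, staying under 80% full:
print(f"{practical_tib(2, 6, 2, 12):.1f} TiB")   # ~69.8 TiB
```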

I understand your concern about "used" gear, but those fears are greatly mitigated when you're talking about used *enterprise* gear. You're basically talking about gear that was made to run 24/7 and has been through the rigors. For any used drive, you burn in and return if you get errors. This has been working great for most people. Used enterprise gear almost always tops new consumer gear.

I should also mention that with new drives you still need to burn in, and... yes, you do get errors and have to return them. In fact, if you get a used enterprise drive that has maybe 200-300 power-on hours (with moderate writes) and no SMART errors, you have probably found yourself a burnt-in drive that is basically "new".

SLOGs are most useful for sync writes (think VM storage). On the L2ARC side, more system RAM is always recommended before considering one.
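To see why sync writes are the pain point a SLOG addresses, you can time the cost of forcing each write to stable media yourself; a small sketch (POSIX path, timings vary wildly by device, and ZFS layers its own machinery on top):

```python
import os
import time

def timed_writes(path, sync_each, count=200, size=4096):
    """Time `count` small writes, optionally fsync'ing each one the way
    an NFS server or hypervisor does for sync writes."""
    buf = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, buf)
        if sync_each:
            os.fsync(fd)  # force to stable storage before continuing
    os.close(fd)
    return time.perf_counter() - start

print(f"async: {timed_writes('/tmp/slogdemo', False):.3f}s")
print(f"sync:  {timed_writes('/tmp/slogdemo', True):.3f}s")
```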

After considering all of the above, you're gonna want to start thinking about networking and anything else you may want to do (game server, etc.).
 

itronin

Well-Known Member
I totally understand your point @rskoss "Some comments about buying used things - I don't. Call it a personality disorder but I just like things that I buy to be new."

I'd like to elaborate a little on what @zack$ said: "You're basically talking about gear that was made to run 24/7 and has been through the rigors."

Very, very true for drives, and it's also very, very true for the rest of the components you'll need: motherboard, CPU, memory, etc.
Consumer gear is not designed to run 24x7. It is a blessing if it does, and a curse when it goes south - typically demonstrating weird, hard-to-troubleshoot behaviors.

I like your workflow: local on the Mac, push to long-term storage, and you want to minimize the wait on the push.
You have "production needs": Plex, possibly a few other things. 1x12 RAIDZ2 will almost certainly handle 1GbE, maybe 4 or 5GbE, without a SLOG. Do mount via NFS on your Mac...

If I understand your process: while you'd like to learn, the end goal is to improve your workflow and strengthen the production needs you've identified - not so much the eternal upgrade-itis some of us indulge in.

Buying 2-3 year old used or new-old-stock gear may allow you to buy better quality, more durable, more reliable gear than buying new. Of course YMMV, and buying new (switch, motherboard, etc.) obviates those instances where you get a used dud. And warranties are very nice!


Here's a question: Do you need/want IPMI (remote IP based console) on the motherboard?

Lastly, there are also ecosystems to consider; while you can build an open garden, sometimes staying within the walls makes things easier / more compatible.

Using a mix of disks and some all-in-one cards or M.2 NVMe on a carrier card would be handy, like @zack$ said, for VMs. You can pass them through to FreeNAS, make a mirror for data protection, and then serve that back to ESXi - however, this is advanced. Plenty of people here have done it, and there are scripts and whatnot to take care of some of the gotchas, but I would not say there is a recipe per se; you have to know what you are doing before it all works... Simpler would be a RAID 1 card and a pair of enterprise SSDs large enough to handle all of your VMs.


Going new allows you some options, like getting a nice CPU - a good Xeon with an iGPU that you can pass through to Plex.
I concur that a SM server board and an appropriately matched CPU (esp. new) will be good. I think @IamSpartacus might have some insight on new gear, as he's posted his new builds before and I think he's doing some iGPU passthrough. Careful though, he might turn you to the dark side of Unraid (kidding - nothing wrong with it, and if you haven't looked you might want to do so!)...

If you don't need IPMI, then the new-old-stock Fujitsu motherboard in the deals section is worth a look.
You'll probably need to consider a used CPU though, as new E5-26xx v4s can still be pricey. The deals section has a thread on used E5-2680 v4s, which would probably be pretty nice. Same with memory. CPUs and memory tend to either work or not; they're enterprise grade and I've personally not had a used enterprise memory module or CPU fail (yet... knock on wood).

If you have not looked at serverbuilds.net for build ideas, you might, as your use case is somewhat similar to many there. However, they're really big on Unraid and used gear, so that may not be the droid you are looking for. Their primary goal seems to be maximizing storage/compute using used gear.
 

rskoss

Active Member
I think I have a motherboard and CPU:

For the MB: Supermicro X11SPH-nCTF
https://smile.amazon.com/dp/B07532B3Q2/ref=nav_timeline_asin?_encoding=UTF8&psc=1

Not many CPU choices with that socket, so Intel Xeon Silver 4110: CPU

And a fan to cool it: fan


I started reading the TrueNAS user guide and watched a setup video on YT this morning. It looks like I don't even have to worry about vdevs: if I select all 12 drives for my pool, TrueNAS will offer an optimal suggestion - and I'll bet US dollars that it will tell me to go with 2x6 RAIDZ2 vdevs. Done. Then make datasets.

The free ESXi forbids making backups of VMs. Are you guys using ghettoVCB or doing it some other way?
 

rskoss

Active Member
re: used gear

I'm just not a tinkerer (wow - that's really a word!). Everything I touch turns to crap. I don't even have a lot of faith that I'll be able to get this all working buying everything new, but that will at least stack the deck a little in my favor.
 

itronin

Well-Known Member
For the MB: Supermicro X11SPH-nCTF
https://smile.amazon.com/dp/B07532B3Q2/ref=nav_timeline_asin?_encoding=UTF8&psc=1
Not many CPU choices with that socket, so Intel Xeon Silver 4110: CPU
And a fan to cool it: fan
... US dollars that it will tell me to go with 2x6 RAIDZ2 vdevs. Done. Then make datasets.

The free ESXi forbids making backups of VMs. Are you guys using ghettoVCB or doing it some other way?
I like your combo choice! Are you going to drop in a GPU for Plex?

Hmmm... not sure about TrueNAS, as I have not spun it up yet. FreeNAS would probably suggest 1x12 RAIDZ3 - but for either, if it suggests something you don't want, you can always change it!
 

rskoss

Active Member
Given you want to run 12 SATA drives + 10G, there are MBs that already have both onboard, and you can use the PCIe slots for more I/O. See: X11SPM-TPF, X11SPM-TF, X11SPH-nCTPF, X11SPH-nCTF.

Also, given you work with 4K... you might want to consider an SSD pool as @itronin said. SAS3/NVMe prices on the used market are good enough to justify looking in that direction... On the reliability of those drives, even STH did a write-up some time ago where the results are pretty promising (used SSDs still have a load of endurance left in them).
Thanks for the board suggestions. I'm ready to go ahead with the nCTF. I'd rather overbuy than regret it later.

Used SSDs for cache are the one thing I might consider buying used. If they don't work (or, more likely, I can't get them to work), I'll still have a working system.

Everything I've read says to wait on caches to see if there is a need. That resonates with me. Get it working, then get it working better!
 

rskoss

Active Member
Probably 2x6 RAIDZ2. Maximizing performance would be more like a 6x2 mirror layout. Just remember with mirrors: if both drives in any one mirror fail, the pool is toast.
Am I understanding this correctly? If I mirror 2x6 RAIDZ2... out of my 12 drives, I'd only get to use 6 because of the mirror, and out of those 6, I'd lose 2 to parity - leaving me only 4 drives to store my data? And I understand that TrueNAS doesn't like being anywhere near full.



I like planning out a virtual server by drawing out what will run on it, creating budgets, and allocating which resources go where. That helps me steer to the correct hardware choice, though it is a circular process of refinement and, ultimately, trade-offs.
I just don't know enough to do that. I'm going to have to experiment and make course corrections as I go.

If you use VMware and intend this to be an always-on server, then I heartily recommend a hardware RAID controller and a RAID 1 SSD boot array for ESXi - maybe a pair of 1.6TB Intel DC S3520s. If you go with Proxmox, different story (and I'm in the early stages of learning about that). Size the boot array to hold everything on day 1, and if you can afford the cost, double that storage size.

You could also burn two SATA ports and install a pair of SATADOMs for your FreeNAS boot pool - that gives you some flexibility to boot FreeNAS bare metal if VMware took a dump.

Did I mention memory? I think 128-256GB is probably your overall system goal, with 64-128GB given over to the FreeNAS VM. FreeNAS likes its memory.
Let's talk about booting. The board I'm ready to buy is the Supermicro X11SPH-nCTF.

Link to MB

I was thinking about using this in the board's M.2 slot for booting. Yes? No? Maybe? If I get SSDs, I'd have to velcro them someplace inside the case.

The MB has lots of room for memory. I'll take your suggestion and put in 128GB, giving 64GB to TrueNAS.

72-120TB is a lot of storage; even 20% utilized at the beginning is a big chunk, potentially larger than most USB backup drives. What's your backup plan?

If you go with ESXi and it will be your only server, I'd research the limitations of the free VMware license and make sure it's a fit. If you think you'll add another VMware server, then look at VMUG Advantage.
re: backups

If I get this working (huge if), what's currently on my Synology will be moved to this server, and the Synology will then serve as backup. I also send backups to Backblaze and to Google (although I understand Google is fiddling with their plans now). CrashPlan for Business also looks to be affordable.

Thanks for taking the time to send your thoughts. I really appreciate it.
 

rskoss

Active Member
We recently went through a somewhat similar goal - saturating 10G speeds on spinning-disk ZFS; our (TrueNAS X20) system has 36 active spinners (6x6 RAIDZ2).
In your best case (2x6 RAIDZ2) you should expect about 400MB/s on writes, or 685MB/s on 50/50 mixed loads. A fast SLOG and large enough L2ARC would improve this somewhat for random I/O. See the RAID Performance Calculator at WintelGuy.com.
Those speeds are 10x what I'm seeing now writing over 1GbE to the Synology NAS.
 

rskoss

Active Member

Buying 2-3 year old used or new-old-stock gear may allow you to buy better quality, more durable, more reliable gear than buying new. Of course YMMV, and buying new (switch, motherboard, etc.) obviates those instances where you get a used dud. And warranties are very nice!

Here's a question: Do you need/want IPMI (remote IP based console) on the motherboard?
Warranties are nice. Amazon 30-day returns are nice. Plus, I can enjoy that new-gear smell ;-)

Somebody in the DIY section of this board said something along the lines of IPMI being his favorite feature that he didn't even know he wanted.

I only have a vague understanding of it, but it sounds like I need it unless I get a board with onboard VGA or put in a graphics card.

The board I'm considering says IPMI - Aspeed AST2500 BMC.

Thanks.
 

rskoss

Active Member
I think I have a motherboard and CPU:

For the MB: Supermicro X11SPH-nCTF
https://smile.amazon.com/dp/B07532B3Q2/ref=nav_timeline_asin?_encoding=UTF8&psc=1

Not many CPU choices with that socket, so Intel Xeon Silver 4110: CPU

And a fan to cool it: fan

The free ESXi forbids making backups of VMs. Are you guys using ghettoVCB or doing it some other way?
Hmmm..... no replies. Everybody is busy, or I made brilliant choices and there's nothing more to say, or my choices are so stupid that nobody can figure out how to say so politely.
 

rskoss

Active Member
I like your combo choice! Are you going to drop in a GPU for Plex?

Hmmm... not sure about TrueNAS, as I have not spun it up yet. FreeNAS would probably suggest 1x12 RAIDZ3 - but for either, if it suggests something you don't want, you can always change it!
I'm going to wait on a GPU to see if I actually need one or if the CPU can handle it. It's just me watching.

I've seen different numbers for the max size of a vdev, and 12 is the upper limit I've seen. Others have suggested 2x6 RAIDZ2 and I'll probably do that.