Need help with storage options on ASUS Pro WS W680M-ACE SE (For NAS Build)


NerdAshes

Active Member
So I bought a W680M-ACE to go in a Silverstone CS381 case. For the CPU I chose an Intel Core i7-12700K.

W680M-ACE Connectivity Specs:
Total support for 2 x M.2 slots PCIe 4.0 x4 and 8 x SATA 6Gb/s ports.
  • Intel Processors
    • 1 PCIe 5.0 x16 slot
    • M.2 slot 2280 PCIe 4.0 x4

  • Intel W680 Chipset
    • 1 PCIe 4.0 x4 slot
    • 1 PCIe 3.0 x1 slot
    • M.2 slot 2280 PCIe 4.0 x4
    • 4 SATA 6Gb/s ports
    • 1 SlimSAS (SFF-8654) Slot PCIe 4.0 x4 mode or 4 SATA 6Gb/s

  • Internal USB port
    • 1 USB 3.2 Gen 2x2

12700K Processor:
  • PCIe Configurations = 1x16+4, 2x8+4
  • Max PCIe Lanes = 20

CS381 Case:

  • External Drive Docks
    • 8 (hot-swappable) 3.5” HDD Drives
  • Internal Drive Bays
    • 4 (5mm ~ 15mm) 2.5” SSD Drives
  • HDD Backplane
    • 2 Mini-SAS HD (SFF-8643) 36 pin connectors
For the 8 HDDs, my plan is to use the obvious ports:
I bought the CPS05-RE cable to run the four motherboard SATA ports to one of the Mini-SAS HD backplane ports (it's a reverse-breakout cable, made for exactly that purpose).
I bought a SlimSAS x4 to Mini-SAS HD cable to run the SlimSAS port to the other Mini-SAS HD backplane port.
That should be "job done" there.

I was thinking of using the two M.2 PCIe 4.0 x4 slots for a RAID 1 OS drive. One M.2 slot hangs off the CPU and the other off the PCH, though; is that an issue?

For the 8 HDDs, I was thinking of making a RAID 10 cache out of four SSDs on an adapter card in the PCIe 4.0 x4 slot. Does this sound good to you? What adapter card would I use?
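
Quick sanity check I did on paper (ballpark numbers I'm assuming, not benchmarks): four SATA SSDs shouldn't come close to saturating that x4 slot, so the slot itself isn't the limit.

```python
# Back-of-envelope check: four SATA SSDs behind a PCIe 4.0 x4 slot.
# Assumed ballpark figures, not measurements.
PCIE4_GBPS_PER_LANE = 1.97    # ~usable GB/s per PCIe 4.0 lane
SATA_SSD_GBPS = 0.55          # ~550 MB/s sequential per SATA SSD

slot_bw = 4 * PCIE4_GBPS_PER_LANE   # the x4 slot
ssd_bw = 4 * SATA_SSD_GBPS          # four SSDs flat out

print(f"PCIe 4.0 x4 slot: ~{slot_bw:.1f} GB/s")
print(f"4x SATA SSD     : ~{ssd_bw:.1f} GB/s")
# ~7.9 GB/s of slot vs ~2.2 GB/s of SSDs, so the x4 link isn't the limiting factor.
```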

Have I lost the plot? Anything you'd add/do different?
Thanks!
 

Chriggel

Member
I was thinking of using the two M.2 PCIe 4.0 x4 slots for a RAID 1 OS drive. One M.2 slot hangs off the CPU and the other off the PCH, though; is that an issue?
The only problem here would be massive data transfers between these devices while also using the bandwidth between CPU and chipset for other data transfers, creating a bottleneck. I'd expect very little activity from the boot drive, so this shouldn't be a problem.
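
For a rough sense of scale (assuming W680 talks to the CPU over a DMI 4.0 x8 uplink, which is my understanding, and using ballpark device figures):

```python
# Rough scale of the CPU <-> chipset link vs. what hangs off the chipset.
# Assumes W680 uses a DMI 4.0 x8 uplink (roughly eight PCIe 4.0 lanes' worth).
PCIE4_GBPS_PER_LANE = 1.97

dmi_uplink = 8 * PCIE4_GBPS_PER_LANE   # ~15.8 GB/s CPU <-> W680
pch_nvme   = 4 * PCIE4_GBPS_PER_LANE   # chipset M.2, Gen4 x4, ~7.9 GB/s max
hdd_pool   = 8 * 0.25                  # 8 HDDs at ~250 MB/s each, best case

print(f"DMI uplink        : ~{dmi_uplink:.1f} GB/s")
print(f"PCH M.2 (Gen4 x4) : ~{pch_nvme:.1f} GB/s")
print(f"8x HDD            : ~{hdd_pool:.1f} GB/s")
# Even NVMe + all HDDs streaming at once (~9.9 GB/s) fits under the uplink,
# and a mirrored boot drive sees nowhere near that much traffic.
```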

For the 8 HDDs, I was thinking of making a RAID 10 cache out of four SSDs on an adapter card in the PCIe 4.0 x4 slot. Does this sound good to you? What adapter card would I use?
This sounds like overkill. Do you plan to run any storage related workload that would specifically require this or benefit from this?

Have I lost the plot? Anything you'd add/do different?
It depends :)
To answer this, it's probably important to know which OS you're going to use, how the HDDs will be set up, what the workload is going to be, and what you need in terms of IOPS and/or bandwidth over what type of network. Also, are you not going to use the x16 slot?
Generally speaking, this looks solid. W680 is a good choice because it lets you use ECC memory.
 

NerdAshes

Active Member
The only problem here would be massive data transfers between these devices while also using the bandwidth between CPU and chipset for other data transfers, creating a bottleneck. I'd expect very little activity from the boot drive, so this shouldn't be a problem.
I thought the same. Otherwise I'd use those slots for caching instead.


This sounds like overkill. Do you plan to run any storage related workload that would specifically require this or benefit from this?
It's just to speed up network transfers (writes).

  1. Which OS are you going to use
  2. How will the HDDs be set up
  3. What's the workload going to be
  4. What are your needs in terms of IOPS and/or bandwidth, over what type of network
  5. Are you not going to use the x16 slot?
  1. Well, I *think* I'm going to use Proxmox VE, but I may use TrueNAS SCALE? I thought about Incus too... It'll be Proxmox, 90% sure...
  2. ZFS 2-wide Mirror vdevs
  3. It's mostly a backup target for the server cluster/desktops/laptops/devices. The goal is automated backup testing (boot a VM), and backups pop up as running VMs on a node/server/service failure: Disaster Recovery as a self-hosted service. So 90% file server, 10% VM host.
  4. The servers are on a 100Gb/s private LAN, with a 25Gb/s switch for the public LAN that the NAS will be on (with two 25Gb/s connections: one for backups, another for VM communication). I'm only thinking of the SSD cache as a way to speed up LAN transfers, though I doubt the HDDs will manage even 10Gb/s (rough math after this list).
  5. The PCIe Gen5 x16 slot is going to be wasted on a ConnectX-5 dual 25Gb/s NIC. o_O (It's what I have, and the 2.5Gb/s ports on the mobo are not enough.)
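
On point 4, here's the rough math behind that 10Gb/s doubt (ballpark per-disk numbers I'm assuming, not benchmarks):

```python
# Rough write throughput of 8 HDDs arranged as four 2-wide ZFS mirror vdevs.
# Assumes ~250 MB/s sequential per disk; mixed/real workloads will be lower.
HDD_SEQ_MBPS = 250
mirror_vdevs = 4

# Every write lands on both disks of a mirror, so writes scale with vdev count,
# not disk count.
pool_write_mbps = mirror_vdevs * HDD_SEQ_MBPS
pool_write_gbit = pool_write_mbps * 8 / 1000

print(f"Pool sequential write: ~{pool_write_mbps} MB/s (~{pool_write_gbit:.0f} Gb/s)")
# ~1,000 MB/s, i.e. ~8 Gb/s -- under 10Gb/s and well short of a 25Gb/s link.
```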
 

Chriggel

Member
Yes, even with fast HDDs, you're not going to get speeds over 10G from four 2-wide mirrors once you're down to disk speed without caching.

Before you add any SSDs for caching, make sure you have enough RAM to keep as much data as possible in the ZFS ARC. However, I guess the backup data will be sequential data streams rather than file based? And it's mostly going to be incoming traffic until something really fails and you need the backed-up data?

Also, ZFS doesn't support direct read/write caching with additional SSDs. Read caching would only be L2ARC, "write caching" would be a SLOG, but then it's also not a real write cache and I'm not sure how, or if, your workload would even benefit from it.

Based on my gut feeling, I'd say that you don't really need the SSDs to take in the backup data streams. Once the VMs actually run their services it might be a different thing. If you're going that route, I wouldn't bother with a PCIe adapter to connect SATA/SAS drives, though. PCIe 4.0 x4 is the perfect match for an NVMe SSD used as an L2ARC or SLOG vdev, depending on what the workload would benefit from the most.
 

NerdAshes

Active Member
Before you add any SSDs for caching, make sure you have enough RAM to keep as much data as possible in the ZFS ARC. However, I guess the backup data will be sequential data streams rather than file based? And it's mostly going to be incoming traffic until something really fails and you need the backed-up data?
I have 128GB of RAM on the way. The backup data will be both sequential and file based (revisions of device files/VHDs/etc.). And yes, hopefully 100% incoming writes in a perfect world (except reads for backup testing).
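
If I'm reading the OpenZFS-on-Linux defaults right (my assumption; distros and appliances tune this differently), the ARC will only use about half of that RAM unless zfs_arc_max is raised:

```python
# How much ARC 128 GB of RAM buys with the stock OpenZFS-on-Linux cap.
# Assumption: default zfs_arc_max of ~1/2 of physical RAM; check/tune per distro.
ram_gib = 128
default_arc_cap_gib = ram_gib / 2

print(f"RAM             : {ram_gib} GiB")
print(f"Default ARC cap : ~{default_arc_cap_gib:.0f} GiB")
# Leaves room for Proxmox and VMs, but worth verifying before bolting on an L2ARC.
```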

Also, ZFS doesn't support direct read/write caching with additional SSDs. Read caching would only be L2ARC, "write caching" would be a SLOG, but then it's also not a real write cache and I'm not sure how, or if, your workload would even benefit from it.
Admittedly my ZFS knowledge is low... but the SLOG should speed up sync writes, just by not having to wait for acks from the HDDs. I'm not sure the L2ARC would really speed anything up... but since neither of those really "needs" mirrored drive protection, I wonder if they would make better use of the two M.2 slots than the OS would? Maybe an Intel Optane SSD for each?
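
The way I picture it (made-up ballpark latencies on my part, not measurements): a sync write isn't acknowledged until its ZIL record is on stable storage, so the ack latency is basically the ZIL device's write latency.

```python
# Why a SLOG helps sync writes: the client waits for the ZIL write to be acked.
# Assumed ballpark latencies, not measurements.
ZIL_ON_HDD_MS    = 8.0    # seek + rotation for a ZIL block on spinning rust
ZIL_ON_OPTANE_MS = 0.02   # ~20 microseconds for an Optane-class SLOG

for name, lat_ms in (("ZIL on the HDDs", ZIL_ON_HDD_MS),
                     ("Optane SLOG", ZIL_ON_OPTANE_MS)):
    qd1_sync_iops = 1000 / lat_ms   # single-stream sync writes per second
    print(f"{name:15}: ~{lat_ms} ms per ack, ~{qd1_sync_iops:,.0f} sync writes/s at QD1")
# Async/streaming writes mostly bypass the ZIL, so a SLOG won't lift bulk throughput.
```
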
Based on my gut feeling, I'd say that you don't really need the SSDs to take in the backup data streams. Once the VMs actually run their services it might be a different thing. If you're going that route, I wouldn't bother with a PCIe adapter to connect SATA/SAS drives, though. PCIe 4.0 x4 is the perfect match for an NVMe SSD used as an L2ARC or SLOG vdev, depending on what the workload would benefit from the most.
Well, with sync writes and a SLOG coupled to spinning rust... I doubt anything is going to be "fast". As much as I'd like higher bandwidth, it's just not in the cards.
I did notice that the PCIe Gen4 x4 slot is open-ended at the back... so I should be able to put the ConnectX-5 card in there and run it at up to roughly 32Gb/s (PCIe Gen3 x4 speed)!
That frees the PCIe Gen5 x16 slot to be set to x8/x8 in the UEFI settings. I could run a couple of U.2 NVMe SSDs for the OS on that... hmmmmmmmmm
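
Quick math on those links (approximate usable rates after encoding overhead; I'm assuming the ConnectX-5 negotiates PCIe Gen3 in that slot):

```python
# Approximate usable bandwidth per lane after line-encoding overhead.
GBPS_PER_LANE = {"gen3": 0.985, "gen4": 1.97, "gen5": 3.94}   # GB/s per lane

def link_bw_gbytes(gen: str, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

# ConnectX-5 in the open-ended Gen4 x4 slot, assuming the card runs at Gen3:
nic_link = link_bw_gbytes("gen3", 4)
print(f"Gen3 x4 : ~{nic_link:.1f} GB/s (~{nic_link * 8:.0f} Gb/s)")   # ~3.9 GB/s, ~32 Gb/s

# Each half of the Gen5 x16 slot after bifurcating to x8/x8:
half = link_bw_gbytes("gen5", 8)
print(f"Gen5 x8 : ~{half:.1f} GB/s per half")                         # ~31.5 GB/s
# ~32 Gb/s covers one 25G port with headroom; both ports flat out would exceed it.
```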
 

Chriggel

Member
The L2ARC will only do its thing when the RAM is full. RAM is always used first for ARC. SLOG will provide a benefit when using synchronous writes.

For L2ARC, you typically want something that's balanced between size and performance. Consider it an optional storage tier between the fast but small RAM and the slow but big HDD pool. L2ARC is really a price/performance consideration, because you can't keep adding RAM to any given system, and more RAM would always be the first thing to add if you find that your RAM is full and cache hits are low. The SLOG keeps the ZIL off the HDDs and allows synchronous writes to be confirmed faster. It doesn't need to be big, but it should be as fast as possible, with the lowest possible latency and high write endurance. Optane would be good.
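
As a rough sizing sanity check (assuming the default ~5 second transaction group interval and that the SLOG only has to hold a couple of txgs' worth of sync writes), even sizing against your NICs rather than the pool keeps it small:

```python
# Rough upper bound on SLOG capacity: a few seconds of maximum ingest.
# Assumptions: ~5 s txg interval, ~2 txgs outstanding, ingest capped by the NICs
# (the HDD pool can't actually absorb this much, so the real need is lower).
nic_ingest_gbytes_s = 2 * 25 / 8      # two 25Gb/s ports -> ~6.25 GB/s worst case
seconds_to_cover = 2 * 5              # ~2 txgs x 5 s

slog_bound_gb = nic_ingest_gbytes_s * seconds_to_cover
print(f"Worst-case SLOG footprint: ~{slog_bound_gb:.0f} GB")
# ~63 GB as an extreme upper bound; a small Optane covers it easily, and latency
# plus write endurance matter far more than capacity.
```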

For the boot drives, you don't need anything particularly large or fast, so you could basically choose any type of SATA/SAS/NVMe SSD; they will all perform equally well in this role. There's no need to spend a higher-performance slot or drive on them if you could use it for something else.
 