Ignoring the ZFS points (which are all well made and well articulated, and I agree with them, especially "don't use RAID-Z1").
Some questions:
Are hot-swap bays and/or blinky drive lights something the client wants or requires, hence the chassis selection?
From the client's perspective I'm sure it feels like they're dropping a large chunk of change today, but if they're as much of a DH as you describe, what's the growth plan? Can the client describe how fast their data usage has grown?
Possible solutions? A second server? Add a JBOD chassis? If the latter, plan today for an HBA later - reserve a slot for it.
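One way to make that growth conversation concrete is a quick back-of-envelope projection. A minimal sketch - the usage and growth numbers here are purely hypothetical placeholders for whatever the client actually reports:

```python
# Capacity-planning sketch: months until a pool hits a fill threshold,
# given current usage and a steady monthly growth rate.
# All figures below are made-up examples, not the client's numbers.

def months_until_full(used_tb, usable_tb, monthly_growth_tb, threshold=0.80):
    """Return months until the pool crosses `threshold` of usable space."""
    if monthly_growth_tb <= 0:
        raise ValueError("growth rate must be positive")
    months = 0
    while used_tb < usable_tb * threshold:
        used_tb += monthly_growth_tb
        months += 1
    return months

# Example: 14 TB used today, 40 TB usable, growing ~0.5 TB/month.
print(months_until_full(14, 40, 0.5))  # → 36 months to hit 80% full
```

Walking the client through a number like that ("you're three years from 80% full, before any new hobbies") tends to make the JBOD/second-server discussion a lot easier.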
What's the estimated lifetime of this server? What was the lifetime of the current Synology solution?
Were you able to talk to them about cost viewed over the total lifetime of the solution? I have to wonder if that would help with their budget.
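Amortizing the build over its expected service life is the quickest way to reframe "large chunk of change today". A sketch, again with entirely made-up example figures:

```python
# Amortized cost-of-ownership sketch: hardware plus electricity spread
# over the expected service life.  Every number below is hypothetical.

def cost_per_year(hardware_cost, watts, years, kwh_price=0.15):
    """Total cost of ownership per year: hardware plus electricity."""
    kwh_per_year = watts / 1000 * 24 * 365
    power_cost = kwh_per_year * kwh_price * years
    return (hardware_cost + power_cost) / years

# Example: a $3,000 build drawing ~150 W, kept in service for 7 years.
print(round(cost_per_year(3000, 150, 7), 2))  # → 625.67 per year
```

"$52 a month for the next seven years" lands very differently than a single invoice, and it also gives you a fair baseline to compare against the outgoing Synology.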
My mantra with DH (and Exchange users): "If you build it, they will fill it" - and faster than you or they think.
I may have missed it, but I don't recall seeing anything about power protection for this system. What's the plan? Based on the client's usage, what is the likelihood of a power supply failure or street power failure impacting data in flight?
Some comments on your design.
I see little value in burning the two M.2 slots for boot, as you may find a better use for them - even as itinerant high(er)-speed storage. L2ARC isn't likely to help you, and in your use case I don't expect a performance improvement from a dedicated ZIL/SLOG either, but I have to wonder if the in-flight protection for writes may be desirable? The two M.2 slots on that motherboard support 22110 drives, so NVMe with PLP could be used if you want that.
I'd make sure, though, that those M.2 slots will NOT rob x4 lanes from other slots.
You don't appear to be using the onboard SATA ports. Since this is an mATX motherboard in a large chassis, you have TONS of room, and with the NORCO you can get the Norco two-drive shelf (or fit 4 x 7mm/9.5mm drives using velcro) that is designed to mount inside the chassis. Consider the onboard SATA ports for your boot drives, and maybe even an expander feeding a single HBA for internal storage.
With the NORCO, consider right-angle SFF-8087s for the six backplane connections you need to make. IMO, in that case the cable management with straight SFF-8087s may impede your airflow *and may* put strain on the backplane boards with the cable bends required; the right-angle connectors solve that problem for you. The correct SFF-8087s are pretty easy to find. SFF-8643 to right-angle SFF-8087, maybe not so much.
(Opinion) I don't see SAS3 buying you anything today or in the mid-term future, and I believe you said used enterprise HBAs are okay? PERC H310s and the like are $20.00: replace the thermal paste, bump the firmware, and call it a day for the HBAs. Buy an extra and have it ready as part of the package.
I get the "new" part - what about new old stock? For example (and just an example), a Tyan S5510GM3NR is about $100.00 shipped off the bay. CPUs will be a lot cheaper too - for the price of the ASRock board plus CPU, you could buy two of these motherboards with CPUs for a little over half the cost. The downside is memory: that board is limited to 32GB of UDIMM, but that may well be enough for your use case.
If size, blinkies, and hot swap are not that big a deal, have you considered the 48-bay SM/Chenbro? Yes, you'll have to deal with the drive trays (3D printer and/or get someone to fab them), but it can be found NEW and in budget, with redundant power supplies and growth in one chassis...
With the NORCO, do purchase a new box of band-aids and have them handy during the build process. We all probably leave a little bit of ourselves in any build, but with the NORCO I have found it is a bit more than typical.
Don't get me wrong - I think the NORCO is a fine solution for some scenarios. I feel it breaks down when you try to push it to its max capabilities (i.e. a fully populated chassis, lack of redundant power supplies, trying to use it for production 24x7x365).