Intel Xeon D-1500 Series Discussion

evancox10

New Member
Nov 12, 2015
Hey, so I noticed there has been some discussion of PCIe bifurcation support in this thread on the SuperMicro boards. What is the conclusion of that? Do you need an "active" adapter with a PLX chip? Or can you split the lanes passively? If you can split them passively, how many lanes can you split them into?

Since you have a full PCIe x16 slot, I'm wondering how you could split that into 4 individual PCIe x4 M.2 slots. With something like this maybe
 

Patrick

Administrator
Staff member
Dec 21, 2010
Hey, so I noticed there has been some discussion of PCIe bifurcation support in this thread on the SuperMicro boards. What is the conclusion of that? Do you need an "active" adapter with a PLX chip? Or can you split the lanes passively? If you can split them passively, how many lanes can you split them into?

Since you have a full PCIe x16 slot, I'm wondering how you could split that into 4 individual PCIe x4 M.2 slots. With something like this maybe
Use a PLX chip adapter.
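
If you do find a board/BIOS combo that exposes bifurcation, here is a quick sanity check for a passive quad-M.2 carrier (standard Linux lspci; the x4x4x4x4 BIOS option naming is an assumption and varies by firmware vendor):

# After setting the x16 slot to x4x4x4x4 in the BIOS and installing four
# NVMe SSDs on a passive carrier, each drive should enumerate separately:
lspci | grep -i "non-volatile memory"
# Expect four entries, one per M.2 slot. If only one SSD shows up, the slot
# is still running as a single x16 link and an active (PLX) carrier is needed.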
 

smokey7722

Member
Oct 20, 2015
Does anyone know where the Gigabyte MB10-DS5 are? They don't seem to be available anywhere. Alternatively, any other ITX boards with SFP+ and 128GB RAM?
The MB10-DS4 and DS5 (according to my distributor rep) are built-to-order motherboards, so the retailer would need to place a quantity order knowing they would sell them all. That's most likely why you don't see many of them. The MB10-DS3 is in stock, though, and not considered built to order. I was going to go the mini-ITX route, but since I needed SFP+ as well, I gave up, moved to Flex ATX, and used the Supermicro X10SDV series instead.
 

SSS

New Member
May 4, 2016
Hey, I want to buy the SC721TQ-250B chassis and an X10SDV-2C-TLN2F (Pentium D1508) mobo for it.

It seems the mobo is not suitable for this chassis because of the passive CPU heatsink.

Could I add a fan for the CPU, or is there any other way to use this mobo in this case?
 

jgreco

New Member
Sep 7, 2013
I believe the board in question uses the shorter heatsink, so that is probably the closest to a direct fix that you'll find. Boards like the X10SDV-7TP4F come with the taller 1" heatsink.

It isn't actually necessary to have a fan attached to the heatsink, but it will be difficult to get sufficient airflow across it without something blowing at it. A high-velocity fan blowing across it will work, but by the time you finish figuring out all the sharp edges of the problem, it makes something like the 4C+ board look fairly attractive.

The big upside here is that a 25W TDP CPU isn't anywhere near as hard to cool as a lot of other stuff would be.
 
  • Like
Reactions: SSS

SSS

New Member
May 4, 2016
@Patrick, @jgreco, thanks for the reply.

Maybe I could use the heatsink+fan from the 4C+ board? I have tried to find the heatsink, with or without a fan, on supermicro.com and didn't see them in the accessories.
 

jgreco

New Member
Sep 7, 2013
Huh, well, whaddayaknow. I went digging a little and found the SNK-C0057A4L available as a separate part. For retail, looks like WiredZone carries them.

From the manufacturer's point of view, there isn't much value in encouraging end users to tinker with the cooling arrangement on these new FCBGA boards. Since the CPU is part of the mainboard, heat damage to the CPU is a warranty issue for the manufacturer. Also be aware that replacing the heatsink might be viewed as voiding your warranty.
 

jgreco

New Member
Sep 7, 2013
@jgreco thank you. Seems reasonable to consider the 4C+ version instead.
Yeah, it's just too bad that it is quite a bit more expensive. The Xeon D stuff is frustrating because it ties your hands. We build servers here for data center and commercial use, and one of the things I'd love is an X10SDV-7TP4F in an SC216BAC-R920LPB, which along with a RAID controller would make a very nice small office storage and virtualization box, running at only around 100 watts. It's frustrating because the SQ PSUs are only available in the larger capacities, and it looks like some custom air shroud hackery would be necessary. It's almost like no one understands that there's a market for these things as a branch office server platform...
 

Patrick

Administrator
Staff member
Dec 21, 2010
Yeah, it's just too bad that it is quite a bit more expensive. The Xeon D stuff is frustrating because it ties your hands. We build servers here for data center and commercial use, and one of the things I'd love is an X10SDV-7TP4F in an SC216BAC-R920LPB, which along with a RAID controller would make a very nice small office storage and virtualization box, running at only around 100 watts. It's frustrating because the SQ PSUs are only available in the larger capacities, and it looks like some custom air shroud hackery would be necessary. It's almost like no one understands that there's a market for these things as a branch office server platform...
I can tell you that many of the vendors are reading this thread.

Also, some of the commercial NAS servers are starting to adopt Xeon D.
 

smitty2k1

Member
Mar 2, 2016
Patrick, I posted this in the Pentium D1508 review, but do you have any benchmarks for that CPU? I cannot find them online ANYWHERE. I'm mostly looking for a Passmark score, but would be interested in others if that is all that is available.

I would like these as a quick double-check that the thing can handle a Plex transcode or two, and that I don't have to pay the $125 premium for the D1518.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Patrick, I posted this in the Pentium D1508 review, but do you have any benchmarks for that CPU? I cannot find them online ANYWHERE. I'm mostly looking for a Passmark score, but would be interested in others if that is all that is available.

I would like these as a quick double-check that the thing can handle a Plex transcode or two, and that I don't have to pay the $125 premium for the D1518.
I replied to your comment and linked the Linux-Bench runs that were in the body of the board review you commented on. You can use those to compare against other reviews on STH or AnandTech, for example.
 

jgreco

New Member
Sep 7, 2013
I can tell you that many of the vendors are reading this thread.

Also, some of the commercial NAS servers are starting to adopt Xeon D.
I've yet to see anything compelling in the Xeon D NAS space, alas. A lot of it seems to be continued regurgitation of last year's strategies. I'll read your response as an opportunity to vent/explain/whatever.

For redundancy, you want more smaller drives rather than fewer larger ones. For a small office, you may want one server, not two or three. The continued focus on 3.5" form factor drives is infuriating.

We've had good success with an 8-drives-per-hypervisor strategy here, which usually works out to three RAID1 datastores plus two spare disks. This allows you to do two SSD datastores and an HDD datastore, or three of a single type, and be fully redundant, with spares, so that you don't have a critical "fix it now" emergency if a disk fails.

For NAS, 12 hard disks in RAIDZ3 can provide a large chunk of high-reliability storage, but most of the vendors have been implementing this as 3.5" storage in the typical SC826-style chassis. This makes sense in some scenarios; certainly if you want to offer a system with 120TB of raw storage in 2U, that may be a good way to go. A lot of people get hung up on trying to make HDD faster, but we're finally at a point where it really isn't unreasonable to put the stuff that needs faster storage on SSD, and use HDD for slower "nearline" style storage.

So I've been finding the idea of a 16-bay NAS more compelling. With the availability of 4TB 2.5" HDDs, you could make a 12-drive ZFS RAIDZ3 (12x4TB => 36TB) for around $1200 if you're shucking, plus four SSDs for fast storage, and the Xeon D with its 128GB capacity gives you sufficient memory to run that alongside other workloads. And at ~100 watts. Really? I can get a competent eight-core virtualization platform like the 7TP4F with 128GB of RAM, a 24-bay chassis, with local RAID storage for VMs, and NAS storage, all for around ~$5K? Wow.
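
For reference, a minimal sketch of creating the pool described above, with placeholder FreeBSD-style device names:

# 12-disk RAIDZ3; da0..da11 are hypothetical device names
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
zpool list tank   # 9 data disks x 4TB => ~36TB before overhead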

So, hey, Supermicro, please go and release a lower-wattage, redundant, high-efficiency SQ PSU module. Not everyone wants to run heavy iron in their 24/26-bay chassis. I'd also love to have a shorter chassis option optimized for the Xeon D form factor boards, with an appropriate air shroud, but that's just off in the wishful thinking department...

I'm seriously considering retiring some of our newer gear in favor of Xeon D. Watts drive so much of the TCO. Your power protection and power distribution are simpler with lower watts, your air conditioning requirements are reduced with lower watts, the noise level is reduced with lower watts, and gear typically runs cooler and things like drives last longer with lower watts.
 
  • Like
Reactions: wsuff

cbutters

New Member
Sep 13, 2016
I am researching the X10SDV-TLN4F-O board for use with ESXi and ZFS. I have a few questions that I can't seem to glean from the discussion so far.

A) Is it possible to pass through JUST the M.2 drive to a VM?
B) Is it possible to boot ESXi via USB and pass through all the onboard SATA ports to a VM?
C) If B is not possible, is it possible to use RDM to pass certain drives directly through to a VM?

Short of answers to the questions above, does anybody have a screenshot of what is available for passthrough on the current revision of the board with ESXi 6.0u2?
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
@cbutters

A) Yes, but get a PCIe M.2 SSD rather than a SATA M.2 SSD so you can just use PCIe VT-d passthrough.
B) Yes, but you'll be doing RDM passthrough. The SATA controller is still an Intel PCH device in ESXi, so it isn't a PCIe device like an LSI HBA that you can pass through.
C) See B. Use RDM.
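
For the RDM route, a rough sketch from the ESXi shell; the disk identifier and datastore path below are placeholders, not real values:

ls /vmfs/devices/disks/          # find the naa./t10. identifier of the disk
vmkfstools -z /vmfs/devices/disks/<disk-identifier> /vmfs/volumes/<datastore>/<vm>/disk0-rdm.vmdk
# -z creates a physical-mode RDM pointer file; attach disk0-rdm.vmdk to the
# VM afterwards as an existing disk.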

Does that help?
 

cbutters

New Member
Sep 13, 2016
@cbutters

A) Yes, but get a PCIe M.2 SSD rather than a SATA M.2 SSD so you can just use PCIe VT-d passthrough.
B) Yes, but you'll be doing RDM passthrough. The SATA controller is still an Intel PCH device in ESXi, so it isn't a PCIe device like an LSI HBA that you can pass through.
C) See B. Use RDM.

Does that help?
Yes, very useful! Thanks.
So is RDM as reliable as PCIe passthrough? I might eventually use the same setup in the office as the one I'm building for my home right now, and I'm concerned with longevity/stability.
 

jgreco

New Member
Sep 7, 2013
RDM isn't as reliable as PCIe passthrough. Over on the FreeNAS Forums, we've seen a lot of train wrecks with it, and since RDM isn't intended for this use, you won't get any support from VMware. There's also talk that RDM may eventually be deprecated. I'm basically the resident virtualization geek over there and even though I mercilessly virtualize workloads, I've seen enough users have problems that I always post a conspicuous warning label.

PCH SATA controllers can usually be passed through. I thought I tried this with the X10SDV board here and it worked, but I don't have this set up to try right now. On Wellsburg systems, the general technique would be:

esxi% lspci | grep "Wellsburg AHCI"   # not the only string you could grep for
0000:00:11.4 Mass storage controller: Intel Corporation Wellsburg AHCI Controller [vmhba0]
0000:00:1f.2 Mass storage controller: Intel Corporation Wellsburg AHCI Controller [vmhba1]

Now look up those addresses with "lspci -n":

esxi% lspci -n | egrep '0000:00:11.4|0000:00:1f.2'
0000:00:11.4 Class 0106: 8086:8d62 [vmhba0]
0000:00:1f.2 Class 0106: 8086:8d02 [vmhba1]

Then you edit /etc/vmware/passthru.map, adding (columns are vendor ID, device ID, reset method, fptShareable):
# Intel Wellsburg AHCI/SATA
8086 8d62 d3d0 false
8086 8d02 d3d0 false

And restart the hypervisor. The same general technique should work for other platforms. Please note that PCIe passthrough, while generally stable on most modern gear where you are able to get it to work at all, is a risky proposition. Please test thoroughly before committing any data of any importance to a system relying on it.
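
After the reboot, one way to confirm the controller is eligible (output formatting varies by ESXi version, so treat this as a sketch):

esxcli hardware pci list | grep -i -A 2 "Wellsburg"
# The device should also now show up in the vSphere client's DirectPath I/O
# passthrough list, where it can be enabled and assigned to a VM.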

That having been said, a properly designed and tested PCIe passthrough system should be perfectly stable. We've got one FreeNAS fileserver running here that's been stable for four years, having transitioned from ESXi 4.1 to 5 to 5.5, and several others that have been running for varying lengths of time. The extra layer of complexity is the thing that's most likely to mess you up.
 
  • Like
Reactions: SSS and Patrick

mackle

Active Member
Nov 13, 2013
I'm interested in getting an '08 for a 12TB (4x4TB Z1) FreeNAS setup - my only concern is whether it'll bottleneck on 10Gbps links.

Does anyone have any experience?
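
As a back-of-envelope check, assuming roughly 180 MB/s of sequential throughput per modern 4TB drive (this number varies by model): 10Gbps is about 1250 MB/s of payload, while a 4x4TB Z1 vdev streams from three data disks, i.e. 3 x 180 MB/s ≈ 540 MB/s. Sequential transfers would likely be disk-bound well before the link saturates.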