Intel Xeon D-1500 Series Discussion

hjfr

Member
Nov 21, 2013
79
20
8
France
Are there any other platforms that even approach the Xeon-D's performance/watt? I love the lower power aspect, as my server is always on, even if it's mostly idle. I get much longer UPS run times with the lower draw, and the whole thing is quieter. I just want more PCIe slots. I'd even give up the 10 GbE for it. The extra expansion slots would give me flexibility to add whatever I wanted.
Atom C2000 series (example: C2750)? Low power but less performance.
 

whitey

Moderator
Jun 30, 2014
2,771
872
113
40
I'm starting to feel that SFP+ on these boards is a pipe dream. I need a total of 2 D-1540's for Hyper-V clustering and another D-1520 for either a standalone backup box or SOFS. It's hard not to just throw some money at a few 2011 systems and be done with it... grrr
Yeah, color me VERY disappointed as well that there are no SFP+ mobo options from SM. I'll just stockpile my new server loot for now and try to stop buying SSDs. Weak sauce :-(
 

icrf

New Member
Aug 5, 2015
7
1
3
41
Atom C2000 series (example: C2750)? Low power but less performance.
Yeah, sorry, I worded my question poorly. There have been various low-power Atom chipsets out for some time, but the performance headroom just isn't there. I want the flexibility to run a few motion-detection IP cameras or gigabit VPN (yay Chattanooga fiber!), which I don't think Atom is quite up to handling. Maybe the new Goldmont cores will fare better, but that's at least six months away, probably more.
 

Davewolfs

Active Member
Aug 6, 2015
337
31
28
Seems like everyone wants SFP+. If that's what people want, why did they make the regular ports?
 

mstone

Active Member
Mar 11, 2015
505
118
43
45
Seems like everyone wants SFP+. If that's what people want, why did they make the regular ports?
Maybe the "everyone" in this thread is outnumbered by the number of people actually buying the 10GbaseT parts? Copper 10G will very shortly be the baseline, pushing out copper 1G. Designing for that is a no-brainer. It would be nice to have dual ports, to use either copper or an SFP+, but the boards were already getting pricey.
 

Davewolfs

Active Member
Aug 6, 2015
337
31
28
Any idea when VMware might choose to provide a 10GbE driver? Is that likely, or is the NIC on this board a one-trick pony?
 

miraculix

Active Member
Mar 6, 2015
116
25
28
Maybe the "everyone" in this thread is outnumbered by the number of people actually buying the 10GbaseT parts? Copper 10G will very shortly be the baseline, pushing out copper 1G. Designing for that is a no-brainer. It would be nice to have dual ports, to use either copper or an SFP+, but the boards were already getting pricey.
10GbaseT may eventually be a baseline across the board (as in across different network areas), but that is an awfully broad brush to use.

For the data center space, the 10GE trend has firmly been SFP+ with fiber or Twinax, which draws far fewer watts per port than 10Gbase-T. (I believe the BER is also much lower by nature.)

Which may be why the hoped-for SFP+ variants haven't already been delivered.
 

ggg

Member
Jul 2, 2015
35
1
8
42
Problem 2: I'm running a pfSense firewall in a VM. For the external interface, I've used VT-d device assignment to pass the 1Gb NIC to the pfSense VM so that traffic flows directly into the firewall without any local bridging or link sharing (this seemed most secure). However, for the internal interface, I'd like to share the 10Gb port with the parent OS and other VMs on the machine. I should be able to do this with SR-IOV, using the max_vfs= parameter to the ixgbe driver (and specifying intel_iommu=on on the kernel command line). When I do all this, the driver refuses to create virtual functions, giving me the error "ixgbe: 0000:03:00.0: ixgbe_check_options: IOV is not supported on this hardware. Disabling IOV." Has anyone seen this error before? Is anyone successfully running SR-IOV on the 10Gb NIC?
I don't have one of these but I would want to be able to use SR-IOV.

I was hoping the Xeon D-1500 processor root PCI Express ports support, and report support for, ACS like the Xeon E5 does (but the E3 doesn't).

Perhaps they do not, or don't report it. I'm guessing that is more exactly what the rep is referring to.

I wonder what port the device is attached to? What's the output of:

$ lspci -vt

Is it in its own IOMMU group?

$ find /sys/kernel/iommu_groups/ -type l

What are the details of the root PCI Express ports?

$ lspci | grep -i 'root' | cut -d ' ' -f 1 | xargs -I {} sudo lspci -vvvnn -s {}

If you could pastebin the (copious) output of the following that would be nice!

$ sudo lspci -vvnn

Depending on where the device is connected and if there is enough isolation you might quirk the port it's connected to by adding in the port identifiers alongside these others:
linux/quirks.c at v4.1 · torvalds/linux · GitHub

This is a great post on IOMMU groups and ACS support:
VFIO tips and tricks: IOMMU Groups, inside and out
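As a side note, if the hardware does turn out to expose VFs, there is also a sysfs route (kernels 3.8 and later) that sidesteps the max_vfs module parameter. This is only a sketch; "eth2" is a placeholder for whatever name the 10Gb port gets on your system:

```shell
# How many VFs does the port advertise? A 0 here would be consistent
# with the "IOV is not supported on this hardware" message.
cat /sys/class/net/eth2/device/sriov_totalvfs

# If nonzero, request two virtual functions on that port.
echo 2 | sudo tee /sys/class/net/eth2/device/sriov_numvfs
```

If sriov_totalvfs reads 0, the limitation is in the hardware/firmware capability itself, not in the driver options, and no parameter will bring the VFs back.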
 
Last edited:

jgreco

New Member
Sep 7, 2013
28
16
3
Maybe the "everyone" in this thread is outnumbered by the number of people actually buying the 10GbaseT parts? Copper 10G will very shortly be the baseline, pushing out copper 1G. Designing for that is a no-brainer. It would be nice to have dual ports, to use either copper or an SFP+, but the boards were already getting pricey.
The "everyone" in this thread would tend to be high octane computer folks who adopted 10G technology early on.

Historically, 10Mbps Ethernet was standard back in 1993, 100Mbps in 1996, 1Gbps in 1999. In each case, inexpensive hardware was available well within five years of introduction. 10G Ethernet came along right on that three-year cycle, in 2002. Ten years later, cards were still many hundreds of dollars and switches several thousand - very odd! And 40/100G didn't come along until 2010.

So there's this big window of time during which 10G non-copper gear was released, and a lot of networks are invested in it. SFP+ is a real winner in many ways: lower latency, lower power consumption, etc. But it isn't likely to be the winner in the long run. Copper tends to be a much easier technology to deploy. Cables cost less. It is less complex to debug.

It's just recently that there's been any significant momentum towards 10G-at-the-server (and 10G-at-the-desktop), because to a large extent, 1G has been sufficient for most purposes. This has allowed this really weird thing to happen where 1G has become dirt cheap ($30 for a quality NIC, $100 for a decent switch) while the next tier up is still an order of magnitude more expensive.

We're not going to see lots of 10G SFP+ on servers going forward. There will hopefully be some! However, what's happening now is that the 10G copper gear being purchased today can be plugged into conventional 1G switches (futureproofing) or it can be plugged into 10G copper switches which are ALSO able to support legacy 1G copper servers.

This sucks for early adopters who bought into SFP+, but it is what it is.
 

mstone

Active Member
Mar 11, 2015
505
118
43
45
10GbaseT may eventually be a baseline across the board (as in across different network areas), but that is an awfully broad brush to use.

For the data center space, the 10GE trend has firmly been SFP+ with fiber or Twinax, which draws far fewer watts per port than 10Gbase-T. (I believe the BER is also much lower by nature.)

Which may be why the hoped-for SFP+ variants haven't already been delivered.
I remember way back when 100baseTX and 1000baseT were also low density and hotter than hell... and it looks like 10GbaseT is finally going the same way they did, benefiting from enough scale and technology maturation to get efficient chipsets. Once upon a time I spec'd only fiber for gigabit connections because the copper stuff was flaky and unreliable; not any more.

Yeah, fiber will be more power efficient, but once 10GbaseT gets to the per-port pricing of today's 1000baseT, the power savings over the life of an SFP+ interface are unlikely to pay for the premium of the fiber transceivers and infrastructure over copper. (Especially now that the network chip manufacturers have gotten better at noticing that a 1m cable doesn't need the same amount of power as a 100m cable.)

SFP+ isn't going away, and you'll still be able to buy servers with it built in, but there is going to be a lot more choice in copper-connected solutions. The holdup has basically been demand, as someone said earlier, but with commodity storage able to push 400+ MBps, the 125 MBps of 1000baseT is looking more and more pokey even at the consumer level; one more rev of wireless evolution and it'll be fairly common for consumers to have more wireless bandwidth than they can serve from a gigabit connected NAS.
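For reference, here is the rough line-rate arithmetic behind that comparison (theoretical maxima, ignoring Ethernet framing and TCP/IP overhead):

```shell
# Line rate in bits/s, divided by 8 bits per byte, then 10^6 bytes per MB.
echo "1000baseT: $((1000000000 / 8 / 1000000)) MB/s"
echo "10GbaseT:  $((10000000000 / 8 / 1000000)) MB/s"
```

Real-world throughput lands a bit lower once headers are counted, but the order-of-magnitude gap against 400+ MBps storage is the point.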
 

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
one more rev of wireless evolution and it'll be fairly common for consumers to have more wireless bandwidth than they can serve from a gigabit connected NAS.
I noticed this the other day when looking up Wave 2 AC. It's great and all having 2.xGb/s of bandwidth, but what's the point when the backhaul from it is a single gigabit link (in the few examples that I saw)?

The push for these faster wifi devices is, I think, what's starting to bring 10Gb gear down to consumers. I know that in my workplace, the Ubiquiti AC APs that we have are being complained about because they are too slow. There are only 45 people across 8 APs, so capacity is fine; it's purely a case of people perceiving the network to be slower than they want. Mind you, we're still operating on a 1G backbone, so there's work to be done there too.
 

jonaspaulo

Member
Jan 26, 2015
44
2
8
36
Hi,

I am planning on buying one of these, specifically the Superserver 5028D-TN4T. The question is: does the board work with only one 32GB module (M393A4K40BB0-CPB), or, since it is dual-channel, does it need at least two memory slots filled (hence 2x32GB)?
Another question: is it worth waiting for the upcoming update (Xeon D-1541)? The major change (besides the 0.1GHz increment, which doesn't seem worth waiting for) appears to be support for DDR3L RAM. Price-wise, is DDR3L more affordable than the DDR4 Samsung modules mentioned above?

Edit: Also does ESXi support booting off the back USB ports?

Thanks a lot!
 
Last edited:

jonaspaulo

Member
Jan 26, 2015
44
2
8
36
Thanks a lot for the update. A little low-level for me, but I guess all processors, at least in the first batch, have some issues, right?
Regarding the memory: can I fill just one slot with a 32GB stick, or does the board only work with at least two slots filled?
 

Davewolfs

Active Member
Aug 6, 2015
337
31
28
Thanks a lot for the update. A little low-level for me, but I guess all processors, at least in the first batch, have some issues, right?
Regarding the memory: can I fill just one slot with a 32GB stick, or does the board only work with at least two slots filled?
Pretty sure all CPUs have issues of some sort; even the E5 v3's have issues (more of them).

You need to fill a minimum of 2 banks.
 
  • Like
Reactions: jonaspaulo