Hm, too bad. Then I will have to buy 2x32GB, which makes it much more expensive. Gotta think it through.
I loved mine - until it crashed today. Hopefully it does not happen again.
Here's the pastebin with the output above (2.6 kernel, so not quite everything)...
I don't have one of these, but I would want to be able to use SR-IOV.
I was hoping the Xeon D-1500 processor root PCI Express ports support, and report support for, ACS like the Xeon E5 does (but the E3 doesn't).
Perhaps they do not, or don't report it; I'm guessing that's exactly what the rep is referring to.
I wonder what port the device is attached to? What's the output of:
$ lspci -vt
Is it in its own IOMMU group?
$ find /sys/kernel/iommu_groups/ -type l
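If it helps, here's a rough one-liner that prints every device alongside its group number, so you can spot at a glance whether the NIC sits in a group by itself (just a sketch; it assumes the kernel booted with the IOMMU enabled, e.g. intel_iommu=on, otherwise that directory is empty):
$ for d in /sys/kernel/iommu_groups/*/devices/*; do g=$(echo "$d" | cut -d/ -f5); echo "group $g: $(lspci -nns "${d##*/}")"; done | sort -n -k2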
What are the details of the root PCI Express ports?
$ lspci | grep -i 'root' | cut -d ' ' -f 1 | xargs -I {} sudo lspci -vvvnn -s {}
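As a shortcut for eyeballing that wall of output, this only prints the devices whose config space actually advertises ACS (the capability shows up in lspci -vvv as an "Access Control Services" block with ACSCap/ACSCtl lines; root is needed or lspci can't read the extended capabilities):
$ sudo lspci -vvv | awk '/^[0-9a-f]/ {dev = $0} /Access Control Services/ {print dev}'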
If you could pastebin the (copious) output of the following that would be nice!
$ sudo lspci -vvnn
Depending on where the device is connected, and if there is enough isolation, you might quirk the port it's connected to by adding its port identifiers alongside these others:
linux/quirks.c at v4.1 · torvalds/linux · GitHub
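For context, that table (pci_dev_acs_enabled[] in drivers/pci/quirks.c as of v4.1) is keyed on PCI vendor/device IDs, so the bit you'd need from your box is the [vendor:device] pair of each root port. Something like this should pull them out (a sketch; if your ports don't have "Root Port" in their name, match on the PCI bridge class [0604] instead):
$ lspci -nn | grep -i 'root port' | grep -o '\[....:....\]'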
This is a great post on IOMMU groups and ACS support:
VFIO tips and tricks: IOMMU Groups, inside and out
Indeed, it's only a matter of time. I've gone through more than my share of major changes in cable plant trends (anybody here ever deal with cascaded DELNIs off thicknet, or worse??). The state of the art advances on all fronts.
I remember way back when 100baseTX and 1000baseT were also low density and hotter than hell...and it looks like 10GbaseT is finally going the same way they did and benefiting from enough scale and technology maturation to get efficient chipsets.
I wasn't referring to fiber at all (though it's lumped in due to SFP+) but Twinax/DAC. I've implemented a ton of 10GE deployments large and small for years in my day job, and <10M connectivity (host to ToR) has been almost 100% Twinax. I'm actually surprised you only bring up fiber.
Once upon a time I spec'd only fiber for gigabit connections because the copper stuff was flaky and unreliable--not any more. Yeah, fiber will be more power efficient, but once 10GbaseT gets to the per-port pricing of today's 1000baseT, the savings in power over the life of an SFP+ interface is unlikely to pay for the premium of the fiber transceivers and infrastructure over copper.
The slower jump from 1GBASE-T to 10GBASE-T wasn't just about endpoints not being able to (or not needing to) drive 10gig speeds, but about dealing with and improving UTP physical/electrical characteristics. (Especially now that the network chip manufacturers have gotten better at noticing that a 1M cable doesn't need the same amount of power as a 100M cable.) SFP+ isn't going away, and you'll still be able to buy servers with it built in, but there is going to be a lot more choice in copper-connected solutions. The holdup has basically been demand, as someone said earlier, but with commodity storage being able to push 400+MBps, the 125MBps of 1000baseT is seeming more and more pokey even at the consumer level; one more rev of wireless evolution and it'll be fairly common for consumers to have more wireless bandwidth than they can serve from a gigabit-connected NAS.
"Never say never" but I don't believe it will be abrupt. Maybe there's some customer vertical I'm overlooking, but I am in a position to see SFP+ (specifically Twinax for host to switch) versus 10GBASE-T deployments. Per my other reply I'm only seeing a few of the latter, and usually running at 1gig speeds.We're not going to see lots of 10G SFP+ on servers going forward. There will hopefully be some! However, what's happening now is that the 10G copper gear being purchased today can be plugged into conventional 1G switches (futureproofing) or it can be plugged into 10G copper switches which are ALSO able to support legacy 1G copper servers.
This sucks for early adopters who bought into SFP+, but it is what it is.
It sounds like you're concerned about this as part of your day job (and stuff like the cost of cable pulls). Have you looked into NBASE-T?
I noticed this the other day when looking up Wave 2 AC. It's great and all having 2.xGb/s of bandwidth, but what's the point when the backhaul from it is a single gigabit link (in the few examples that I saw)?
The drive of these faster wifi devices is what I think is starting to push the 10Gb stuff down to consumers. I know that in my workplace, the Ubiquiti AC APs that we have are being complained about because they are too slow. There are only 45 people across 8 APs, so capacity is fine; it's purely a case of people perceiving the network to be slower than what they want. Mind you, we're still operating on a 1G backbone, so there's work to be done there too.
I'm really just doing basic SR-IOV in CentOS 6; the driver clearly says that it's not supported by the hardware. It's likely that this is disabled in the BIOS for some reason, rather than being disabled in silicon. Could be that it's not yet stable, so it's being force-disabled until it works reliably. Hard to say - Supermicro is probably under NDA with Intel on details like that.
What?
This official document says the X557 supports SR-IOV:
Intel® Xeon® Processor D-1500 Product Family Datasheet, Vol. 4
Also, the latest additions to Intel's open-source drivers mention SR-IOV.
Are you talking about bridging where you would bridge VFs together?
Maybe I'm missing something?
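For what it's worth, on CentOS 6-era kernels the usual way to ask the ixgbe-family driver for VFs was the max_vfs module parameter, with newer kernels using a sysfs knob instead. A rough sketch, assuming the X557 ports are bound to ixgbe and that the platform actually exposes SR-IOV (which is exactly what's in doubt here); eth0 is a stand-in for whichever interface the X557 shows up as:
$ sudo modprobe -r ixgbe && sudo modprobe ixgbe max_vfs=4
$ lspci -nn | grep -i 'virtual function'   # VFs appear as extra PCI functions
$ echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs   # newer kernels, instead of max_vfs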
To a degree yes, but at the same time, what then happens when I want to (reliably) stream multiple 4K feeds around the house over wifi? I personally wouldn't, because cable > wireless, but no doubt I'd be pulled in to do it for a friend or family. I have been looking at HDBaseT for doing media streaming around the house for a couple of months now; it's just a matter of justifying it now, really.
If you want to deploy twinax instead of fiber, it only changes the numbers a little bit--you'll still save a bit on power, and still probably not save enough to pay for the increased cost of the transceivers above the cost of a patch cable across the life of the hardware. If you've invested heavily in sfp+ ports top of rack, then by all means continue to buy sfp+ servers. At some point when you buy a new rack or a new switch and 10GbaseT is standard on all the servers, you'll have to question whether you want to pay a premium to put sfp+ in again, and most people won't see a point in doing so. The only real question is when that change is coming; I tend to think it's imminent, but reasonable people can disagree.
Cisco tends to have awkwardly long product cycles for some things, and misses stuff that's a no-brainer for the broad market because they're so dominant in the enterprise. E.g., it was amusing back in the day when any <$100 consumer grade switch came with auto MDI-X but a $1000+ Cisco enterprise switch still needed the proper crossover cable. In an enterprise you can assume people have the right cable (or you can sell it to them), you have no incentive to redesign silicon if you don't have to, and if you're Cisco people will buy the stuff even if it's not the most cutting edge experience. I think the real driver for NBASE-T is SOHO and prosumer, and you'd see it explode if it got onto the really cheap high-volume integrated chips in commodity switches. Having some steps between 1G & 10G is a no-brainer for most endpoints, because the number of systems that are bottlenecked by 1G is a heck of a lot larger than the number that can saturate 10G. The big networking companies have been terrified of cannibalizing their 10G margins with an intermediate standard, but at some point other interests may carry the day. (Note that Cisco is still mostly talking about NBASE-T as basically a proprietary uplink for their wireless gear.)
2.5 or 5gig over existing Cat5E/Cat6 cable plants seems very appealing to me... but Cisco is one of the founding members and they have yet to ship a Wave 2 AP that supports NBASE-T/multigigabit. It may be best to wait for the IEEE to do its thing.
I don't think it'll see consumer adoption, but maybe... I haven't really thought about it yet (probably due to no Wave 2 plans for my home yet). For enterprise, there's at least one use case (802.11ac wave 2 APs + a large amount of older cable plants) that could be critical for many companies.
NBASE-T does look interesting, but at this point in time, 10Gb is pretty well established in the enterprise world and is (slowly) filtering down to the consumer level as well, especially with more and more motherboards coming with 10Gbase-T (Xeon-D and E3 prosumer workstation boards). I'm not sure that I (as a consumer, not a worker) would invest in a new, unproven, and mostly unsupported networking setup with no market presence or history when I could roll out a 10Gb switch today and plug/play up to full bandwidth as I upgrade NICs in my network. I don't think the price/performance would be there when compared to 10Gb, especially for early adopters. Unless they can match 1Gb pricing today.
Short answer is no. I spoke with Intel 6 months back and was told no chance to do that on chipsets available at that time. I was specifically asking about the X540-T2 and i354 2.5G (was thinking the latter could "train down" a full NBASE-T link peer to 2.5G). No idea if Intel is shipping an NBASE-T card yet.
Now, if NBASE-T is plug and play with a simple driver/firmware update on my Intel NICs, get it in me!
What about the other integrated NIC i350-AM2?
Anyways, I bought a Chelsio 10Gb NIC (Chelsio T520-SO-CR Dual-Port 10 Gigabit Ethernet Adapter, SFP+) to fix both my 10GBase-T link problems (moving to SFP+) and my missing SR-IOV. Should be here in a couple of weeks (backordered).
Haven't tried it; I currently have the i350 assigned via VT-d to my VM guest directly, without using any virtual functions. Since it's taking traffic, I don't want to pull it down to test SR-IOV.
On paper, it should support SR-IOV.
Intel® Ethernet Controller I350-AM2 Specifications
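If you ever want to check without pulling it out of service, reading the capability list is non-disruptive: the SR-IOV capability should show up in config space if the hardware/firmware exposes it. A sketch (03:00.0 is a made-up address, substitute whatever the i350 shows up as in lspci; and igb on kernels of that vintage also took a max_vfs parameter if you want to actually flip it on):
$ sudo lspci -s 03:00.0 -vvv | grep -A3 'SR-IOV'
$ sudo modprobe igb max_vfs=2   # only if you can afford to reload the driver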
10GbaseT also has the issue of increased latency compared to Twinax/fiber.
Yup. If you have an application that's sensitive to latency and needs latency that's better than 10GbaseT but doesn't require InfiniBand, then you should definitely spec SFP+.