Are those the storage and networking ones? Like the C2758 is to the C2750? Those were in the announcement but have never shown up yet. I think it will come out after some of the D-154x variants, which we should be seeing soon.
Early on I was hearing end of Q2 with the bulk in Q3. But I haven't heard any rumblings in months, so it may be shifting some. Is there perhaps a timeframe on what "soon" is? Mr. Patrick "Insider Knowledge" person man sir?
If you can use more cores and RAM it is significantly faster. How would the Xeon D-1540 compare to the Xeon E3-1245 v2? I'm thinking of upgrading my current VM server, and my most-used VM is my Plex server, which does a good deal of transcoding.
Also, what recommended 16/32GB DIMMs are there for the SuperMicro X10SDV-TLN4F board?
Been lurking on this thread for some time now and figured it was time to make a post. Looks like the X10SDV (non-10Gb) version is in stock now at a few places. I ordered from Superbiiz yesterday (I have no need or use case for 10Gb) and it has already shipped. Wiredzone also shows it in stock. I also bought the Samsung XP941 512GB SSD and 64GB (4x16GB) of Crucial ECC RAM.
I will run ESXi 6 on it (upgrading my Avoton board), and will aggregate the two Gb ports to my Juniper EX2200-C switch (no LACP support in ESXi free, but static active/active is supported). Capacity storage (aside from the M.2 SSD) is a local array on my LSI 9260 controller. Out of curiosity, what use case does everyone here actually have for the 10Gb ports? There aren't really any ideal 10GBASE-T switches for home use (fanless, compact, etc.). The only thing I can think of is direct attaching a dedicated storage server with a 10Gb NIC. From my point of view, if you do that you might as well just use local storage for reduced complexity and cost.
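For anyone curious what the static active/active setup mentioned above looks like, here is a minimal sketch from the ESXi shell. The vSwitch and uplink names (`vSwitch0`, `vmnic0`, `vmnic1`) are assumptions; check yours first with `esxcli network nic list` and `esxcli network vswitch standard list`. The switch side needs a matching static (non-LACP) LAG, e.g. on the EX2200-C.

```shell
# Add the second Gb port as an uplink to the default vSwitch
# (vSwitch0/vmnic1 are assumed names; verify with
#  "esxcli network nic list" and "esxcli network vswitch standard list")
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Set both uplinks active and load-balance by IP hash, which is what
# a static port-channel on the physical switch expects
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0,vmnic1 \
    --load-balancing=iphash
```

These are host-configuration commands, so they only run on an ESXi host; IP-hash balancing without a matching static LAG on the switch will cause MAC flapping, so configure the switch side first.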
Mind me asking what kind of switch you'll be using to connect your NAS via 10Gb, and what NAS OS you'll be using? Been lurking on this thread for some time now and figured it was time to make a post.
I ordered an X10SDV-TLN4F a couple of weeks ago from Ebiz Pc, and I just received notification that it has shipped. Per my conversation when I ordered it, most channels were receiving them today.
One big conclusion I came to is that 2x32GB is priced similarly to 4x16GB and gives you the option to upgrade to 128GB in the future.
I ordered 2x Samsung M393A4K40BB0-CPB 32GB DDR4-2133 DIMMs, which happen to be the only memory listed on the HCL for the board. They were around $740, compared to the Crucial 4x16GB kit on Amazon at around $710. For $30 more I'll have a future $700 upgrade path (maybe cheaper by the time I decide to) to 128GB, whereas it could theoretically cost $1,400 if you had 4x16GB. That also assumes you didn't sell that memory to recoup some of the cost. My point is just to make that consideration before making the purchase.
In regards to 10GbE, that was another future-proofing thought. For an extra $100, why not? Right now my idea is to set up my NAS with a 10GbE direct connection for network storage. In the future? Google Fiber? A municipal fiber ISP? 1Gb ISPs aren't too far away, and neither are reasonably priced 10Gb switches. Will I actually make use of it? Maybe not, but I will certainly have the option. Will 10Gb interface cards be sub-$100 by the time I get a 10Gb switch? Probably, but until then I will at least have four 1Gb interfaces, which was a requirement for my lab. Adding two additional Ethernet ports, even at 1Gb, would be worth the $100 cost IMO.
It's a future project, so I don't have anything yet. What I've considered is a QNAP appliance such as the QNAP TVS-863. In terms of a switch, I'll be looking at something like the ProSAFE Plus XS708E. I don't need a fully managed switch, just some basic features such as VLANs and port mirroring. They are still around $800, so when they hit the $300-$500 range I will probably start looking. Until I have a switch I would probably just use a direct connection to my Dell T420. The D-1540 will be in an M350 case, as I will likely be traveling (train and plane) with it for various customer demos and presentations. Mind me asking what kind of switch you'll be using to connect your NAS via 10Gb, and what NAS OS you'll be using?
Will see what I can do. I did not have any locally, and the boards I did have are running in the DC right now, so I need to find a window to bring them down and boot ESXi. Hello Patrick.
Can you do us a favor by showing us - ESXi nesters - this window:
http://www.servethehome.com/configure-passthrough-vmdirectpath-vmware-esxi-raid-hba-usb-drive/
We would like to know what's passthrough-able and what's not! Who depends on who?
And of course some SR-IOV juice won't hurt.
Edit: For some reason, my posted pic appears wrong.
+1 We would like to know what's passthrough-able and what's not!
My installation is practically the same as yours (16GB SM SATA DOM, Samsung SM951 M.2 SSD, LSI 9211), except I have the TLN4F so I have the two 10Gb NICs. Unfortunately I can't find a VIB for them to get them working in ESXi yet. OK, finally got around to installing the board (X10SDV-F) last night. Everything went smoothly, and I had no trouble installing ESXi 6.0 to the 16GB SATA DOM (Supermicro SSD-DM016-PHI). After installation, I noticed that VMware created a 7.5GB additional datastore on the unused space on the DOM, so I could also throw a lightweight Linux VM or something on there later. After noticing my Samsung XP941 PCIe SSD wasn't showing up under Storage Adapters, I installed the VIB from here, and the drive promptly appeared after reboot. I also installed the LSI MegaRAID VIB to allow health status visibility on my RAID5 array.
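For anyone following along, installing third-party VIBs like those mentioned above generally follows this pattern from the ESXi shell. The file name below is a placeholder, not the actual package from the post; community VIBs usually also require lowering the host's acceptance level first.

```shell
# Community-supported VIBs are rejected at the default acceptance
# level, so lower it first
esxcli software acceptance set --level=CommunitySupported

# Install the driver VIB; the path is a placeholder for wherever
# you copied the file (a datastore path also works)
esxcli software vib install -v /tmp/example-driver.vib

# After a reboot, confirm the VIB is present
esxcli software vib list
```

A reboot is typically needed before the new driver claims the device and the hardware shows up under Storage Adapters.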
Overall, everything seems great so far. I'm disappointed Supermicro didn't bother creating any fan control options in the BIOS yet, though.
I thought you never could pass through the onboard SATA controller? You pass through the PCIe SAS HBAs? Hasn't it been like that forever? If the onboard SATA controller cannot be passed through, that would be devastating in my opinion. It pretty much prevents the ZFS server setup in vSphere via passthrough and makes the platform serve a single purpose only, instead of multiple functions via vSphere... I was hoping I could also pass through an onboard 1GbE NIC to pfSense and the SATA controller to a ZFS server (Ubuntu-based).
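One way to see what the board actually exposes for passthrough is to enumerate the PCI devices from the ESXi shell; whether a given device is selectable for VMDirectPath still has to be confirmed in the vSphere client under Advanced Settings. A rough sketch (output formats vary by ESXi version):

```shell
# Dump every PCI device with its address, vendor/device IDs, and the
# vmkernel module currently claiming it; the onboard SATA controller
# is the one bound to the AHCI driver, the LSI HBA to mpt2sas/megaraid
esxcli hardware pci list

# Quicker overview of the controllers by class (class names in the
# lspci output may differ slightly between ESXi builds)
lspci | grep -iE 'storage|network'
```

Devices that the platform cannot isolate (shared IOMMU groups, chipset-integrated functions) typically appear greyed out in the passthrough configuration screen even though they show up in these listings.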