This looks like a steal if you can get it under 2k: Intel R1304JP40C E5-2690 v2 3.0GHz 10-Core 64GB DDR3-1600 160GB SSD 2x 2TB WD HD | eBay. Procs are still good and lots of NICs.
> I am curious what the original owner was doing with all that network on a box like this.

Maybe he just really wanted lots of network cables?
> Yeah - 1 CPU. Anything else obvious you want to point out?

What did I do to deserve the passive-aggressive tone, Mr. PigLover?
> This looks like a steal if you can get it under 2k: Intel R1304JP40C E5-2690 v2 3.0GHz 10-Core 64GB DDR3-1600 160GB SSD 2x 2TB WD HD | eBay. Procs are still good and lots of NICs.

I pointed it out because the original poster said PROCS. There are not PROCS; there is a SINGLE CPU.
> FWIW I didn't notice the procs when I read. As I read the 2nd post with the bold/underline, I thought @T_Minus had a bone to pick with @MiniKnight. Maybe others had the same reaction?

As a moderator, I would have expected more tact.
> FWIW I didn't notice the procs when I read. As I read the 2nd post with the bold/underline, I thought @T_Minus had a bone to pick with @MiniKnight. Maybe others had the same reaction?

Exactly...
> I am curious what the original owner was doing with all that network on a box like this.

The last time I designed and built enterprise Hyper-V clusters, I used 10 NICs as we didn't have 10GbE.
> The last time I designed and built enterprise Hyper-V clusters, I used 10 NICs as we didn't have 10GbE.

Yea, that's definitely the default Hyper-V 2008 R2 way with that many NICs =)
Everything was deployed in redundant pairs because of redundant switches and the network team's requirement of being able to take down a switch at a moment's notice. The port breakdown worked like this:
2xNIC for the dedicated host "management" team. This is where management, monitoring, occasional VM backups, and SCVMM library pulls occurred.
2xNIC for the dedicated guest network team. Multiple VLANs were supported.
2xNIC for the shared cluster network. This is where live migrations between hosts occurred, as well as normal cluster communications between clustered machines (including VMs, as this network was shared).
2xNIC for the shared iSCSI network. This is what connected the hosts to their CSVs, and connected the occasional VM to its iSCSI LUNs.
2xNIC for the dedicated DMZ network access for guests. We had Lync Edge servers and they needed to be hooked up to the DMZ switches somehow.
Looking at this server config, I can totally see how they got to it if they had needs and requirements like I did. And yes, I know I probably over-engineered things, but I had to make sure the deployment was successful.
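For what it's worth, on Server 2012 and later the five redundant pairs described above could be scripted with the built-in LBFO teaming and Hyper-V cmdlets (on 2008 R2, teaming lived in the vendor NIC drivers instead). A rough sketch, with all team, switch, adapter, and VM names made up for illustration:

```powershell
# Management team: host management, monitoring, backups, SCVMM library pulls
New-NetLbfoTeam -Name "Team-Mgmt" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Guest team plus an external vSwitch; the upstream switch ports are
# 802.1q trunks so multiple VLANs can reach the guests
New-NetLbfoTeam -Name "Team-Guest" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
New-VMSwitch -Name "vSwitch-Guest" -NetAdapterName "Team-Guest" -AllowManagementOS $false

# Cluster team: live migration and cluster communication
New-NetLbfoTeam -Name "Team-Cluster" -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent

# iSCSI pair: usually left un-teamed, with one NIC per fabric and
# redundancy handled by MPIO rather than LBFO

# DMZ team plus vSwitch for the edge guests (e.g. Lync Edge)
New-NetLbfoTeam -Name "Team-DMZ" -TeamMembers "NIC9","NIC10" -TeamingMode SwitchIndependent
New-VMSwitch -Name "vSwitch-DMZ" -NetAdapterName "Team-DMZ" -AllowManagementOS $false

# Per-VM VLAN tagging on the guest switch
Set-VMNetworkAdapterVlan -VMName "AppVM01" -Access -VlanId 20
```

This is a config sketch, not a drop-in script; adapter names and teaming mode would follow whatever the physical switches support (e.g. LACP vs. switch-independent).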
> Yea, that's definitely the default Hyper-V 2008 R2 way with that many NICs =)

LOL, nice, someone let M$ in on the 802.1q secret and vSwitches, teehehe
> LOL, nice, someone let M$ in on the 802.1q secret and vSwitches, teehehe

I think you missed the part where I mentioned the dedicated guest NICs used multiple VLANs.