Windows Server 2012 R2 Hyper-V Remote Hardware Connectivity Questions


Ratfink

New Member
Dec 23, 2015
3
0
1
105
Hi All,
First, I am fairly new to Hyper-V, though not so new to Windows Server and all its wonderfulness...

The question I have, and what I still can't get my head around, is this:
  • I have a Server 2012 R2 box: dual quad-core CPUs (8 cores total), 12 TB of storage, 36 GB of memory, and 4 NICs.
  • Hyper-V is up and running.
  • 6 VMs sit on this one box.
  • I do not have enough resources (CPU/memory) to run all these VMs on this one box.
Essentially, my thought was that this box would be a NAS (storage) for ALL my VMs, and that I would access all the VMs from remote server blades in my rack over gigabit Ethernet connections. Is this doable? I can't seem to find the right search terms to pull up setup info for a configuration like this. The questions I have:

  • I have separate blade servers without hard drives. I want to connect them to my VM/NAS/Server 2012 box and reach the VMs/drives over a network cable (CAT5, a typical network cable). Can I do this? A very brief summary of how would be sufficient.
  • Do I need Hyper-V running on my VM/NAS/server box, or on each server blade accessing the individual VMs?
I appreciate any guidance; I can't seem to get my head around this issue. I did this thinking I could put a dozen or more VMs on one box and access them through multiple diskless blade servers: boot each blade from a thumb drive, then attach to the VMs so the work uses the resources on the blades instead of the main storage box. I hope this makes sense.

Thanks in advance. I really appreciate the help and guidance.
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I don't think it works like that. If you are accessing a VM, you are using the resources on the VM host, not on the machine you're accessing the VM from. Now, you can boot the blades from a network image using PXE; then you would be running a diskless machine but using the hardware on that machine.
I'm not sure you can use a VM's hard drive for PXE booting. And Hyper-V has nothing to do with this case: Hyper-V is a virtualization platform; you use it to consolidate multiple machines onto a single unit. What are you running in your VMs that 8 cores / 36 GB of RAM is not enough for 6-8 VMs? I just built out a Hyper-V server for work: 2 six-core CPUs, 32 GB of RAM, 4 TB of storage. I am running 6 VMs and barely making a dent in the resources.
 

Ratfink

Well, my thinking was that I would use blade servers booting from a USB drive with Server on them, then connect to the VMs on the other box through gigabit Ethernet or fiber.

I have allocated 2 vCPUs and 4 GB of memory for each VM.

The VMs I have set up are:
  • SharePoint 2013 Server VM
  • SQL Server VM
  • Exchange VM
  • Project Server VM
  • Team Foundation Server VM
  • Dynamics VM
It's not really possible to run all these VMs from one machine; the physical box needs at least 2 CPUs just to barely run. Even if all the other VMs were assigned a single vCPU, I would still run out of resources. As it is, with 2 vCPUs assigned, they're still very slow.
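As a quick sanity check of the allocation described above, here is a back-of-the-envelope Python sketch using only the numbers quoted in this thread (the 4 GB host-OS reserve is an assumption, not something the poster stated):

```python
# Sanity check: 6 VMs at 2 vCPUs / 4 GB RAM each, on the 8-core / 36 GB host
# described in this thread. The 4 GB host-OS reserve is an assumed figure.
vms = 6
vcpus_per_vm = 2
ram_per_vm_gb = 4
host_cores = 8
host_ram_gb = 36
host_reserve_gb = 4  # assumed headroom kept for the Hyper-V host itself

total_vcpus = vms * vcpus_per_vm           # 12 vCPUs mapped onto 8 cores
total_ram = vms * ram_per_vm_gb            # 24 GB of statically assigned RAM
cpu_overcommit = total_vcpus / host_cores  # 1.5:1, a very modest overcommit
ram_headroom = host_ram_gb - host_reserve_gb - total_ram  # 8 GB still free

print(total_vcpus, total_ram, cpu_overcommit, ram_headroom)
```

On these numbers the box is not actually CPU- or RAM-starved on paper, which suggests the slowness may come from somewhere else (e.g. disk).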

I should probably simplify and run a fiber backbone from the blades to the main drive storage. I want my drives and partitions on one box for all my servers.
 
Last edited:

vl1969

Well, you can do PXE booting like they do with VDI. I'm not sure whether it's called PXE or TFTP; you provide the boot image over the network, like Terminal Services. This way you boot the blade from a full image.
The thing is, if I am reading this right, it will not work the way you think it would. If you connect to a running VM, the VM uses the resources on the host, not on the connecting machine. Now, if you provide the virtual drive to the blade to boot from, it will run on the blade and use the blade's resources. Price out the Fibre Channel, though; it might be cheaper to just buy a second server to run some of the VMs.
 

Quasduco

Active Member
Nov 16, 2015
129
47
28
113
Tennessee
Well, my thinking was that I would use blade servers booting from a USB drive with Server on them, then connect to the VMs on the other box through gigabit Ethernet or fiber.

I have allocated 2 vCPUs and 4 GB of memory for each VM.

The VMs I have set up are:
  • SharePoint 2013 Server VM
  • SQL Server VM
  • Exchange VM
  • Project Server VM
  • Team Foundation Server VM
  • Dynamics VM
It's not really possible to run all these VMs from one machine; the physical box needs at least 2 CPUs just to barely run. Even if all the other VMs were assigned a single vCPU, I would still run out of resources. As it is, with 2 vCPUs assigned, they're still very slow.

I should probably simplify and run a fiber backbone from the blades to the main drive storage. I want my drives and partitions on one box for all my servers.
I think some further clarification is necessary here.

What actual CPUs are you running? It may be possible to add a few more cores with hex-core CPUs fairly cheaply, depending on your load.
How many drive spindles, what RAID config, and how is the RAID set up? Any SSDs, for caching or otherwise?
What kind of user load are we talking about with these server VMs?
Are you seeing a ton of paging, or maxed CPUs, or just complaining users?

If this is, say, a 20-user office and you are not just tearing through data, my instinct is that first and foremost you are likely choking on either lack of memory or disk access.

Can a solution be done with the blades? Sure. Is it needed? With what we know so far, I can't say.
 

Ratfink

vl1969 mentioned I could:

Now, if you provide the virtual drive to the blade to boot from, it will run on the blade and use the blade's resources
This is exactly what I want to do.

To provide further clarity on the setup:
  • This is my own personal hardware; there are NO OTHER USERS.
  • I have a ton of surplus server hardware, racks, etc. that I am setting up for my lab.
  • This is the only way I can set this up in this configuration; there are no other options. I have a bunch of diskless blades, and I need to boot them off the images on the drive-storage box.
To add some further info:
  • The NAS box with the drives has 8 cores total and 32 GB of memory, and I have one 12 TB partition. This cannot change, except for adding gigabit Ethernet and/or Fibre Channel. Since I am the only user, I can probably use Ethernet.
  • Requirements for the server VMs I have already built are:
    • SharePoint 2013 VM requires 4 vCPUs and 8 GB of memory for a single box to work minimally.
    • SQL Server VM requires 4 vCPUs and 8 GB of memory for a single box.
    • Project Server VM requires 2 vCPUs and 4 GB of memory.
    • Team Foundation Server requires 2 vCPUs and 4 GB of memory.
    • Dynamics requires 4 vCPUs and 8 GB of memory.

This is the reason I can't run all these server VMs off of one box: I don't have enough CPU and memory resources to do so.
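Totaling the stated requirements against the host makes the shortfall concrete. A quick Python sketch, using only the five per-VM figures listed in this post (the Exchange VM has no stated requirement here, so it is left out):

```python
# Per-VM requirements as listed in this post: (vCPUs, RAM in GB).
# The host described here has 8 cores and 32 GB of memory.
requirements = {
    "SharePoint 2013": (4, 8),
    "SQL Server":      (4, 8),
    "Project Server":  (2, 4),
    "Team Foundation": (2, 4),
    "Dynamics":        (4, 8),
}
total_vcpus = sum(c for c, _ in requirements.values())  # 16 vCPUs requested
total_ram   = sum(r for _, r in requirements.values())  # 32 GB requested

print(total_vcpus, total_ram)  # 16 vCPUs on 8 cores; 32 GB on a 32 GB host
```

So the static allocations ask for 16 vCPUs and every gigabyte of host RAM, leaving nothing for the host OS, before the Exchange VM is even counted.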
I don't want to have drives in every single server box. That doesn't seem to make a lot of sense to me, given the availability of VMs and thin clients. It seems I have few options:
  • If I want everything on one box, then I need a 32-core processor and 100 GB of memory? Does that even exist? If it did, it would be way out of my price range and not a sensible solution.
  • I could add a bunch of servers and put hard drives in them. This is the solution most would suggest. I don't agree with it, and I don't understand why it has to be this way. I can boot my laptop off an external USB drive! Why can't I boot off an external PC with a large drive in it, using gigabit Ethernet and/or fiber instead of a USB cable? I can't be the only one who sees this potential and advantage.
It seems vl1969's suggestion is exactly what I need; I just need to make sure it's as simple as that and that I understand the setup/config to do it that way. I really appreciate the help and the time spent reviewing and providing a solution.
 

vl1969

You're right, I did say you can do it. The problem is that the setup I referred to was, and is, meant for desktops. It was never meant to run servers, especially a setup as complicated as yours.
To answer some of the questions in the last post: yes, you can have hardware with 32 CPUs and 100 GB of RAM. In fact, many server-grade motherboards support 128 GB of RAM and more. The price for that much RAM is huge, but it can be done.
For CPU, you would need a 4-socket board that supports 8-core CPUs. However, if you are the only user, why do you need this setup? Most minimum specs are written for a company's needs, meaning at least 5-10 users hitting the server at the same time. If you are the only user, simply build the VMs with the specs you want but use dynamic RAM with, say, 4 GB at startup. Then start the VMs one at a time. Try it and see if it works.

With virtualization, you are OK with your CPU setup. Most setups will happily share CPUs, so you can overcommit CPU; RAM is what is short. Maybe you can bump up the RAM and run it all on one box.
 
Last edited:

Diavuno

Active Member
So, from what I gathered, you want a single box with everything on it that the other blade servers can pull from, so you can use their compute power.

Easy.

But first you'll need to set up the big box for the others to PXE boot from.
Also set up the big box as an iSCSI target, and you're golden.

As for those VMs, you seem to have AMPLE resources for one (or more) users. Hyper-V allows for dynamic memory (it must be enabled PER VM) on newer operating systems.

Have a VM BOOT with the minimum required by the OS; the running minimum can be far less. Do not allow any of them to max out your RAM, though!

For instance, you have 36 GB of RAM.
Windows 8 (64-bit) REQUIRES 2 GB of RAM.

That means you have a max of 16 VMs (assuming you leave 4 GB for the host).

But if you set up each VM with 2 GB to START, a minimum of 1 GB of RAM, and a max of 4 GB...

You can run twice as many VMs (assuming your load is low enough; you can even set each VM to a minimum of 512 MB or lower!).

This is pretty silly, but good to know. One of my hosted VDI clients runs at a scale like this: 2 GB startup, 256 MB minimum, 8 GB max.
But since that business runs three 8-hour shifts, it works great.
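The arithmetic above can be spelled out in a few lines of Python (the 4 GB host reserve is the figure assumed in the post; VM counts assume every guest sits at its configured floor):

```python
# Dynamic-memory headroom math from the post above: a 36 GB host,
# 4 GB reserved for the host OS, and per-VM startup/minimum settings.
host_ram_gb = 36
host_reserve_gb = 4
usable = host_ram_gb - host_reserve_gb  # 32 GB available for guests

static_2gb_vms = usable // 2    # 16 VMs at a fixed 2 GB each
dynamic_1gb_vms = usable // 1   # 32 VMs if each idles at a 1 GB minimum

print(usable, static_2gb_vms, dynamic_1gb_vms)
```

The catch, of course, is that dynamic memory only helps while the guests are actually idle; if they all balloon toward their maximums at once, the host is overcommitted.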


And yes, you can get servers with much more power; off the top of my head, I think you max out somewhere around 72 cores and 3 TB of RAM.
 

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
Um, I might have missed it somewhere, but just because you have 8 CPUs on your host does not mean you are limited to 8 CPUs for your VMs. That is what Hyper-V is for: sharing resources. I have been able to run 27 VMs with 4 vCPUs each on an 8-CPU host. Now, that was an extreme, and sometimes the wait for a CPU to become available went up, but it did not really cause much of an issue.
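For what it's worth, the overcommit ratio in that example works out as follows (quick Python, numbers taken from the post above):

```python
# CPU overcommit ratio for the example above: 27 VMs with 4 vCPUs each
# scheduled onto an 8-core host.
vms, vcpus_per_vm, host_cores = 27, 4, 8
overcommit = (vms * vcpus_per_vm) / host_cores

print(overcommit)  # 13.5 vCPUs per physical core
```

A 13.5:1 ratio is far beyond typical guidance, which is why CPU wait times crept up; the point stands that vCPUs are time-sliced, not pinned one-to-one to cores.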

Memory is a different matter; you cannot really "share" it, so yes, you would need more than 36 GB of RAM. (How do you get to 36 GB, anyway?) Just bite the bullet and get 16 GB or 32 GB sticks; it's much cheaper and less hassle than using blades, unless your compute needs really are that heavy, which I doubt. I work in monitoring, and I have very rarely seen the CPUs on application servers (Exchange aside) tax 4 CPUs. Your DB and Exchange might put a 20% load on your host; even that I doubt.

As far as using that server as a NAS: yes, you can. I would recommend making life easier on yourself by putting cheap SSD boot drives in your blades, then connecting to the shared storage via SMB (best compromise), RDMA (fastest), or iSCSI (easiest).
 