Response times in Hyper-V


Andyreas

Member
Jul 26, 2013
50
4
8
Sweden
Hi

As the drive to store my virtual hard drives in Hyper-V (Win Server 2012 R2 DC) I am using 4 SSD drives in RAID 10. I am getting some strange latency issues though. The latency itself isn't the problem here; it's that I can't make sense of the performance monitors in Windows.

When I open Resource Monitor on the host I am seeing values of 400 ms+ in the Response Time column, but when I open Performance Monitor and set up the LogicalDisk Avg. Disk sec/Read and Avg. Disk sec/Write counters, the graph jumps up and down around 10-20 ms. How can one counter say that drive D has I/Os at 400 ms while another shows 10-20 ms?
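
For reference, here is a minimal sketch of how the same counters can be sampled from the command line with typeperf (D: just stands in for whichever drive holds the VHDs), so the numbers can be lined up against what Resource Monitor shows at the same moment:

    import subprocess

    # Counters to sample; "D:" is an assumption -- use the instance that holds the VHDs.
    counters = [
        r"\LogicalDisk(D:)\Avg. Disk sec/Read",
        r"\LogicalDisk(D:)\Avg. Disk sec/Write",
        r"\LogicalDisk(D:)\Current Disk Queue Length",
    ]

    # typeperf prints one CSV line per sample; -si = sample interval (s), -sc = sample count.
    result = subprocess.run(
        ["typeperf", *counters, "-si", "1", "-sc", "30"],
        capture_output=True, text=True,
    )
    print(result.stdout)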

I have tried to find the answer but I am stuck. I am probably missing something very simple.

Thanks in advance!
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
Ivy Bridge power throttling?

Do you have applications where you can measure the latency from within the VM?
 

Andyreas

Member
Jul 26, 2013
50
4
8
Sweden
Hi MiniKnight, thanks for the answer. The VMs are mostly Win 2012 R2 DC, so I guess I can just use the built-in Performance Monitor and Resource Monitor to measure the same thing there and not just on the host. Will do that and get back to this thread (a small vacation is coming up though).
 

Andyreas

Member
Jul 26, 2013
50
4
8
Sweden
Hi MiniKnight. I've now checked the latency from within the VMs and they are equally bad (not 400 ms as on the host, but around 200 ms). I ran HD Tune from within one of the VMs too. Before I had set up Hyper-V on the host I had an average speed of 425.7 MB/sec and an access time of 0.2 ms.

When I run it inside a VM strange things happen. It starts off VERY slow for the first 30-60 seconds, with an average speed of around 50 MB/sec, then it starts to fly, ending with an average speed of 790 MB/sec and an access time of 0.7 ms (this is with a small load on the server). I can't understand why it behaves so strangely. I have nothing special set up. Normal HP DL380 G6 server with an HP P410 RAID card, 512 MB cache split 80/20 in favor of writes. I don't use any built-in disk management from Server 2012 R2 DC.
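
If it helps, here is a rough sketch of how the ramp-up could be reproduced outside HD Tune; the path is just an assumption, and the file should ideally be bigger than the VM's RAM so the Windows file cache doesn't flatter the numbers:

    import time

    PATH = r"D:\testfile.bin"   # assumed path -- any large existing file on the disk under test
    CHUNK = 4 * 1024 * 1024     # 4 MiB sequential reads

    with open(PATH, "rb", buffering=0) as f:
        read_bytes = 0
        interval_start = time.perf_counter()
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            read_bytes += len(data)
            now = time.perf_counter()
            # Print throughput roughly once per second to see whether it ramps up over time.
            if now - interval_start >= 1.0:
                print(f"{read_bytes / (now - interval_start) / 1e6:.0f} MB/s")
                read_bytes = 0
                interval_start = now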

Anyone have an idea of what to do next to get to the bottom of this? I just need to be pointed in the right direction.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
Are you using software RAID? That slow ramp-up sounds like an issue I had with software RAID in Windows. Well, I thought I was using software RAID when in fact I was using Storage Spaces. Creating a SW RAID with diskpart performed pretty well.
 

Andyreas

Member
Jul 26, 2013
50
4
8
Sweden
Thanks for the reply gigatexal, but sadly I am not. I am running it on an HP 360 G7 server with the standard P410i RAID controller card, 512 MB BBWC =(, a normal RAID 10 there, nothing fancy, and Crucial M500 1TB SSDs. Something doesn't want to play together, but I can't figure out which parts: Windows, the controller card, the disks, etc.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,827
113
@Andyreas is this only in Hyper-V? I may give something similar a shot and try to generate a lot of data.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Do you have the performance profile in the BIOS set to MAX MAX MAX performance, with the Hyper-V host/guest power profile also set to maximum performance? The G6/G7 line is very picky when it comes to power management and hypervisors (aka use none!)
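
For the Windows side of that (the host and guest power plans, not the BIOS profile), something along these lines will show and switch the active plan; SCHEME_MIN is powercfg's built-in alias for the High performance plan:

    import subprocess

    # Show which power plan is currently active on this machine (run on host and in each guest).
    subprocess.run(["powercfg", "/getactivescheme"], check=True)

    # Switch to the built-in High performance plan; SCHEME_MIN is powercfg's alias for it.
    subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)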
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
I am going to have a few shots in the dark here as I don't have a G7.

  • The use of the P410i doesn't excite me at all; there seem to be others with issues on these with SSDs.
  • There are likely issues with Link State Power Management still enabled in the host OS and/or the VM - kill it off!
  • Check that ALL device drivers are in and there are no reds or yellows.
  • Watch Perfmon and Task Manager closely to ensure you don't have DPC / system interrupt load; these indicate bad drivers and are a common cause of latency.
A little FYI that may help.
HP Support document - HP Support Center
 

Andyreas

Member
Jul 26, 2013
50
4
8
Sweden
Hi Patrick, mrkrad and Lost-Benji! I'll definitely check the max max max setting, but if I remember correctly I already did that. Not 100% sure though. Wish there was an easy way into the BIOS without rebooting. Patrick, I am double-checking everything else now so you don't waste time on some noob error I made. Lost-Benji, thanks! You don't happen to have a little explanation of how to turn off Link State Power Management? (Update: Found it, was too easy, under the normal power settings.) A thousand thanks guys!
 
Last edited:

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
LSPM is where the OS and board will try to slow down or cut PCIe lanes in the interest of saving power. This might sound fine for video cards, but boards these days use PCIe lanes to interconnect major board sub-systems like the south-bridge chipsets (drives and other external I/O), so it's not so great where storage is concerned.
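
As a side note, the same setting can also be flipped from the command line; here is a minimal sketch using powercfg's aliases for the PCI Express subgroup and the Link State Power Management setting (0 = Off), applied to the currently active plan:

    import subprocess

    # Set PCI Express Link State Power Management to Off (index 0) for AC and DC,
    # then re-apply the current scheme so the change takes effect.
    for cmd in (
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT", "SUB_PCIEXPRESS", "ASPM", "0"],
        ["powercfg", "/setdcvalueindex", "SCHEME_CURRENT", "SUB_PCIEXPRESS", "ASPM", "0"],
        ["powercfg", "/setactive", "SCHEME_CURRENT"],
    ):
        subprocess.run(cmd, check=True)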