Where’s My vNUMA?

This is a perfect case of something I saw a while back and thought to myself, "there's no way I can forget this." When I ran into a similar problem a few weeks ago, it took me a while to realize I had seen the issue before. This time I thought to myself, "I better write this down somewhere." That's been the main purpose of this blog for me: reminding myself of that which I have seen and too quickly forgotten.

For today's moment of deja vu we'll take a look at a case where you have configured a virtual machine to use vNUMA (see The CPU Scheduler in VMware vSphere 5.1 for details on vNUMA), either by manually setting the virtual machine advanced configuration option numa.vcpu.min or by creating a virtual machine with more than eight vCPUs. In my example below I did the latter, creating a virtual machine with 12 virtual sockets and one core per socket. This is the preferred way to allocate multiple vCPUs to a virtual machine, as it allows ESXi to create the optimal NUMA client configuration. There are two cases where you might manually change the number of cores per socket: when you intentionally want to create multiple NUMA clients, perhaps to increase the memory throughput of your application by splitting its workload across the physical NUMA nodes, or for licensing reasons, when the guest OS is limited in the number of sockets it recognizes (for instance, Windows Server 2008 R2 Standard is limited to four CPU sockets).
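If you'd rather script the advanced setting than edit it in the UI, here's a rough pyVmomi sketch of pushing numa.vcpu.min through extraConfig. The vCenter hostname, credentials, and the VM name "bigvm01" are placeholders, the exact SmartConnect arguments vary a bit between pyVmomi versions, and the setting only takes effect at the next power cycle.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name ("bigvm01" is a made-up example name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'bigvm01')

# numa.vcpu.min defaults to 9, which is why VMs with more than eight vCPUs
# get vNUMA automatically. Lowering it to 8 would expose vNUMA to an
# eight-vCPU guest as well (after a power cycle).
spec = vim.vm.ConfigSpec(
    extraConfig=[vim.option.OptionValue(key='numa.vcpu.min', value='8')])
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)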

This virtual machine was created on an ESXi host with two six-core processors. Once up and running, this Windows Server 2012 virtual machine sees two NUMA nodes with six logical processors each.
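If you want to confirm what the guest sees without pulling up Task Manager, a few lines of Python run inside the guest will do. This is just a sketch that calls the Win32 GetNumaHighestNodeNumber API through ctypes; on the 12-vCPU VM above it should report two nodes.

import ctypes

# Ask Windows for the highest NUMA node number it knows about; the node
# count is that value plus one.
highest = ctypes.c_ulong(0)
if ctypes.windll.kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest)):
    print('NUMA nodes visible to the guest:', highest.value + 1)
else:
    print('Could not query the NUMA topology')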

That's all good, right? So where's the problem? One issue that keeps creeping back up here and there is customers attempting to do their own benchmarking with various VM sizes. Sometimes in these tests the customer has enabled CPU hot add to make it easier to increase the number of vCPUs without needing to reboot the guest OS. In these cases they see reduced or suboptimal performance, which often leads to confusion and inconsistent results.

The reason for the suboptimal performance can be a lack of NUMA awareness in the VM as a result of CPU hot add being enabled. When a virtual machine is configured to support CPU hot add, vNUMA is effectively disabled for that virtual machine. This means that NUMA-aware software such as SQL Server, which could otherwise take advantage of knowing the underlying NUMA topology, falls back to UMA, or memory interleaving, at least from the guest OS perspective. ESXi will still create multiple NUMA clients for wide virtual machines; however, this doesn't allow the guest OS or application to make optimal scheduling decisions.
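Checking for (and clearing) the flag is straightforward with pyVmomi as well. Again, the connection details and VM name below are placeholders, and the VM has to be powered off before the CPU hot add setting can be changed.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'bigvm01')

print('CPU hot add enabled:', vm.config.cpuHotAddEnabled)

# Clearing the flag lets vNUMA come back; the VM must be powered off for the
# reconfigure, and the guest only sees the topology again after powering on.
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuHotAddEnabled=False))

Disconnect(si)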

As you can see below, once CPU hot add is enabled, Windows Server 2012 is no longer presented with a vNUMA topology.

This has mostly come up in benchmark tests, but I can see how someone might create virtual machine templates with CPU hot add enabled. How useful is hot add if you have to reboot the guest just to enable the ability to hot add in the first place? Given how rarely CPU hot add actually gets used, it may be a better idea to enable it on a case-by-case basis. Incidentally, memory hot add does not impact the availability of vNUMA for virtual machines.
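If you suspect the flag has crept into your templates, a quick audit is easy enough to script. The sketch below walks every VM and template the connection can see and reports the ones with CPU hot add turned on (same placeholder connection details as before).

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='********', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)

# Report every VM or template that has CPU hot add enabled, along with its
# vCPU layout, so each one can be weighed on its own merits.
for vm in view.view:
    cfg = vm.config
    if cfg and cfg.cpuHotAddEnabled:
        kind = 'template' if cfg.template else 'vm'
        print(f'{kind:8} {vm.name}: {cfg.hardware.numCPU} vCPUs, '
              f'{cfg.hardware.numCoresPerSocket} cores/socket, CPU hot add enabled')

Disconnect(si)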

-alex

