NVIDIA GRID VGPU support does not match desktop setting + ESXi console blank
Hi there. I have 2 x Tesla M60 cards in an ESXi 6.5 host. I have allocated the VMs etc. all fine, but I have two issues thereafter.

1. Once I go back to the Horizon Connection Server to allocate a new VM to a pool, I intermittently get the "NVIDIA GRID VGPU support does not match desktop setting" error. It seems random and I'm not sure what is happening, but the newly configured VMs don't initially show up. I left it overnight and one VM then showed up OK.
I created a new VM and restarted the server this morning, and it showed up. Is there anything I'm missing to get them to appear instantly? When it says unavailable, even the ones I have successfully created don't show up as available...

2. I have lost ESXi console access once the NVIDIA driver is installed (Windows 7 machine). RDP works OK.

Oh, and 3. I sometimes get a blank screen when using the Horizon Client to connect to a VM with Blast Extreme set?

Thanks in advance, guys...

#1
Posted 06/13/2017 03:34 PM   
Hi Peter

I'm more XenDesktop / XenApp than Horizon, but I'll see if I can help ...

Q1 & Q3 - Which version of Horizon / Horizon Client are you using?

Q2 - This is to be expected as a result of using the NVIDIA driver (once the vGPU driver takes over the VM's display, output is no longer rendered to the ESXi console), so don't worry about it.

Something that may help: when installing VMware Tools at the start of the build, I personally don't install the vSGA driver, as the NVIDIA driver will supersede it once installed. Also, once the NVIDIA driver has been installed, after a reboot I go into Windows Device Manager and disable the other display adapter, leaving the NVIDIA adapter as the only choice; then I enable "Show Hidden Devices" and remove all ghost adapters as well.
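If it helps, the usual way to surface those ghost adapters is from an elevated command prompt. This is the standard Windows devmgr_show_nonpresent_devices trick, nothing GRID-specific, so treat it as a general sketch:

set devmgr_show_nonpresent_devices=1
rem launch Device Manager from the same prompt so it inherits the variable
start devmgmt.msc

Then in Device Manager enable View > Show hidden devices, and the greyed-out ghost adapters can be uninstalled.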

See if that works for you.


Regards

Ben

#2
Posted 06/13/2017 04:44 PM   
Hi Ben, thanks for the quick reply.
Q1 and Q3:
Horizon Client: 4.4.0 build-5171611
Horizon: 7.1.0 build-5170113

Q2 - I will disable it, thanks.

I also have to tag on a 4th question (sorry) that I have unfortunately run into!

Q4. I have 10 x Windows 7 VMs. I want to allocate 6 VMs with a 4GB profile and 4 VMs with a 2GB profile (I have 2 x Tesla M60 cards in one ESXi host). I have configured 4 VMs with the profile "grid_m60-4q". I then tried to configure the next VM, but it says "The amount of graphics resource available in the parent resource pool is insufficient for the operation". I tried the profile "grid_m60-2q" on this attempt. I went out to lunch, came back, and was able to assign a 4q profile. I thought OK, I'll put it back on the 2q. It failed again.


Thanks...

#3
Posted 06/13/2017 05:42 PM   
As you're running differing vGPU profiles on the same host, you'll need to configure the VM deployment / GPU allocation differently. By default, GRID is configured for performance, so it will spread VMs across as many different physical GPUs as possible. You need to change that to the opposite, so that it consolidates as many VMs onto the same physical GPU as possible.
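To put rough numbers on it (my assumptions here: each M60 board is 2 physical GPUs with 8GB of framebuffer each, and all vGPUs on one physical GPU must use the same profile):

2 x M60 boards       -> 4 physical GPUs x 8GB
6 x grid_m60-4q VMs  -> 2 VMs per GPU -> 3 GPUs
4 x grid_m60-2q VMs  -> 4 VMs per GPU -> 1 GPU
Total                -> 4 GPUs, an exact fit

Under the default "performance" (breadth-first) policy, the first four 4q VMs land on four different GPUs, so every GPU is then holding a 4q vGPU and none of them can accept a 2q vGPU. With consolidation (depth-first) the VMs pack onto as few GPUs as possible, which is why the setting matters here.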

In vCenter, select your GPU Host and navigate to:

Configure > Hardware > Graphics

Make sure you're on the "Host Graphics" tab, click "Edit" on the right, then select "Group VMs on GPU until full (GPU Consolidation)".
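If you'd rather do it from the command line, I believe the ESXi 6.5 equivalent is the following (I'm going from memory, so verify the option names with --help on your host first):

esxcli graphics host get
esxcli graphics host set --shared-passthru-assignment-policy consolidation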

I've attached an image of my settings ...

Regards

Ben
Attachments

Consolidation.jpg

#4
Posted 06/13/2017 06:02 PM   
Ben, unfortunately I still get the same message, "The amount of graphics resource available in the parent resource pool is insufficient for the operation", when trying to power on the VM.

#5
Posted 06/13/2017 06:11 PM   
Were all VMs powered off before making that change?

Did you restart Xorg after making the change? Failing that, reboot your host, then double-check the config change has persisted after the reboot.
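For reference, restarting Xorg from the ESXi shell is normally done via its init script (a sketch from memory; check the script supports these arguments on your build):

/etc/init.d/xorg status   # confirm whether Xorg is running
/etc/init.d/xorg stop
/etc/init.d/xorg start

You can then re-check the policy with the same "esxcli graphics host get" from my earlier post.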

Use PuTTY to SSH to your ESXi host and run "nvidia-smi"; you'll see where each VM is located on the M60s. If you start all your 2GB VMs first, they should all end up on the same physical GPU.

Regards

#6
Posted 06/13/2017 06:22 PM   
Output below. Am I missing something, based on my original specification plan above, re the 10 VMs and allocation of resources?

Should I not see 8 outputs rather than 4?

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.92 Driver Version: 367.92 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M60 On | 0000:06:00.0 Off | Off |
| N/A 31C P8 24W / 150W | 4099MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla M60 On | 0000:07:00.0 Off | Off |
| N/A 29C P8 24W / 150W | 4099MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla M60 On | 0000:84:00.0 Off | Off |
| N/A 34C P8 25W / 150W | 4099MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla M60 On | 0000:85:00.0 Off | Off |
| N/A 28C P8 24W / 150W | 4099MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 697303 C+G VM1 4080MiB |
| 1 862898 C+G VM2 4080MiB |
| 2 872777 C+G VM3 4080MiB |
| 3 874792 C+G VM4 4080MiB |
+-----------------------------------------------------------------------------+



Xorg failed to restart - I may need to reboot the host.

#7
Posted 06/13/2017 06:32 PM   
That's still set for performance. They should be on 2 GPUs, not 4.

You will need a reboot. It says at the top of the .jpg I attached: "Settings will take effect after restarting the Xorg service" :-)

#8
Posted 06/13/2017 06:43 PM   
Ben, magic - thanks. Looking better after a reboot... I'll report back once I configure and spin up the others.

I'll also see if the Q1 above is resolved after rebooting.

Thanks for your speedy reply on this topic.
Peter

#9
Posted 06/13/2017 06:59 PM   
No worries mate, glad you can now make full use of the GPUs!

Keep us posted!

Regards

#10
Posted 06/13/2017 07:01 PM   
UPDATE: I have all the allocated resources as planned! :)
My issue 1 above still exists, but I rebooted the Connection Server after every VM installation and configuration was complete. It seems that when I add anything new, it does not appear automatically in the pool. Anyway, once everything is set up it's technically not an issue. Maybe it's just a bug in this version which can hopefully be sorted.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.92 Driver Version: 367.92 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M60 On | 0000:06:00.0 Off | Off |
| N/A 32C P8 25W / 150W | 8179MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla M60 On | 0000:07:00.0 Off | Off |
| N/A 30C P8 24W / 150W | 8179MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla M60 On | 0000:84:00.0 Off | Off |
| N/A 35C P8 26W / 150W | 8179MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla M60 On | 0000:85:00.0 Off | Off |
| N/A 30C P8 24W / 150W | 8179MiB / 8191MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 69266 C+G VMPC001 4080MiB |
| 0 69267 C+G VMPC004 4080MiB |
| 1 69516 C+G VMPC003 4080MiB |
| 1 72818 C+G VMPC005 4080MiB |
| 2 70461 C+G VMPC006 2040MiB |
| 2 70574 C+G VMPC007 2040MiB |
| 2 73558 C+G VMPC002 2040MiB |
| 2 85313 C+G VMPC010 2040MiB |
| 3 79437 C+G VMPC008 4080MiB |
| 3 84319 C+G VMPC009 4080MiB |
+-----------------------------------------------------------------------------+

#11
Posted 06/13/2017 08:40 PM   
Nice work.

I'll do some more digging for you on Issue 1 tomorrow.

Just out of interest, what applications are you planning to run on them?

#12
Posted 06/13/2017 10:21 PM   
The versions you mention look up to date, but for sanity, can you double-check you're running the latest components from here: https://my.vmware.com/web/vmware/details?downloadGroup=VIEW-710-ENT&productId=641&rPId=16353

After that, can you have a look at this and see if it helps: https://communities.vmware.com/thread/565660

And if neither of those options work, can you have a look at some of these and see if they help: https://www.google.co.uk/search?q=NVIDIA+GRID+VGPU+support+does+not+match+desktop+setting&oq=NVIDIA+GRID+VGPU+support+does+not+match+desktop+setting&aqs=chrome..69i57j69i60l3j0.479j0j7&sourceid=chrome&ie=UTF-8

Let me know how you get on

Regards

#13
Posted 06/14/2017 09:31 AM   
Morning Ben,

We are using it mainly for ArcGIS (including PRO). CAD, Trimble and a few others.

I seem to be all good on versions. The client and agent are fully up to date. I have not yet installed the direct agent, but I have the latest version and it's on the to-do list for today. Funnily enough, I had already clicked on most of the links you sent above yesterday - thanks for sending them. All my VMware Tools etc. are up to date.

This morning I am running into a few bugs.. some NVIDIA-related, some Horizon:

1. I seem to be able to access the VMs now via the ESXi console..
2. I can't launch the NVIDIA Control Panel or display settings on each VM? This is quite important, as I need to register the license on the VMs with my license server - it appears they're not using the NVIDIA card? (See the registry sketch after this list.)
3. The Horizon Client stalls and goes into "not responding" mode on logging off the VMs.
4. Using Blast Extreme to connect to a PC with 2 monitors gives a black screen.
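On point 2, while the control panel won't open I'm going to try pointing the VMs at the license server via the registry instead. This is an untested sketch based on my reading of the GRID licensing guide ("licserver.mydomain.local" is just a placeholder for my server, and the value names/types may differ by driver branch, so verify against the guide for 367.92):

rem set the license server address and port (7070 is the documented default)
reg add "HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing" /v ServerAddress /t REG_SZ /d licserver.mydomain.local
reg add "HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing" /v ServerPort /t REG_SZ /d 7070
rem FeatureType 1 = GRID vGPU, per the licensing guide
reg add "HKLM\SOFTWARE\NVIDIA Corporation\Global\GridLicensing" /v FeatureType /t REG_DWORD /d 1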

Hopefully not anything else...

cheers
Peter

#14
Posted 06/14/2017 12:55 PM   
It's definitely not utilizing the NVIDIA card anymore. Researching this now, but I would appreciate any feedback on what the issue may be.

Regards

#15
Posted 06/14/2017 02:36 PM   