NVIDIA
Windows 10 vGPU GRID K1 black screen on 1st login
Config:
ESXi 6.0u3
GRID K1 card
GRID K1 370.16 drivers (tried multiple drivers)
vCenter 6.0u3


VM:
Windows 10 1607
Using PCoIP
vGPU K140 profile
View Agent 7.2 (tried 6.2.4, 7.1, 7.3)
Linked clone, floating pool



I'm having a strange issue with vGPU and Windows 10. I set up vGPU per the documentation included with the ESXi VIB. When I try to log in the first time (Horizon Client or zero client), I get a black screen for about 15-20 seconds, and then the connection server closes the connection. When I reinitiate the login, I get a black screen for about 5 seconds, then a mouse cursor, and then the Windows 10 'welcome' screen. I can confirm vGPU is active with nvidia-smi.

Our test pool is set to delete on logoff, so I'm not guaranteed the same VM on every login. Something interesting I've noticed: once I've logged into the first VM, I can log into 2-3 more VMs without hitting the black screen, but on the 4th VM I'll get it again. Since the VMs delete on logoff, I know I'm not getting the same VMs on every login.

I've opened tickets with both VMware and Dell (the OEM vendor). VMware believes it's an issue with a driver or the card itself. I opened the Dell ticket in hopes of getting tech support from NVIDIA, but the Dell tech tells me he can't do that, and my understanding is that's the only real route into support under the K1 support structure, so I'm at a loss at this point.
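For anyone following along, this is roughly how I verify the vGPU assignment from the ESXi host shell (a sketch; exact output and available subcommands vary by GRID driver branch):

```shell
# On the ESXi host (SSH), list the physical GRID K1 GPUs. With the
# GRID host VIB installed, each powered-on vGPU VM appears as a
# process attached to one of the physical GPUs.
nvidia-smi

# Some GRID host driver branches also support a dedicated vGPU
# query (may not be available on all K1 / 370.x drivers):
nvidia-smi vgpu
```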

#1
Posted 11/09/2017 01:39 PM   
Hi jg123,

does this also happen with the Blast protocol? This is most likely a VMware issue. Please check the Blast protocol first. Unfortunately, VMware always points to NVIDIA as soon as a GPU is involved. I've heard about various black screen issues in conjunction with PCoIP, so I really recommend checking with Blast and also testing with the latest Horizon 7.3.1.

Regards

Simon

#2
Posted 11/10/2017 03:46 AM   
Maybe ScreenDMA? It has caught me out a few times - check this out:


https://kb.vmware.com/s/article/2144475


You have linked clones, so it should already be enabled, but it's worth a look. Otherwise it could be a firewall or driver issue.
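For reference, that KB comes down to a single VM advanced setting. A sketch of checking and setting it from the ESXi host shell, with the VM powered off (the datastore path and VM name below are examples, not from the original post):

```shell
# Check whether ScreenDMA is already set in the VM's .vmx file:
grep -i screendma /vmfs/volumes/datastore1/Win10-vGPU-01/Win10-vGPU-01.vmx

# If absent, append it (VM must be powered off; take a backup first):
echo 'svga.enableScreenDMA = "TRUE"' >> /vmfs/volumes/datastore1/Win10-vGPU-01/Win10-vGPU-01.vmx
```

For a linked-clone pool you would make this change on the parent VM (or via the pool's advanced settings) rather than on individual clones, since the clones are deleted on logoff.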

#3
Posted 11/14/2017 09:14 PM   
We are having the exact same issue on GRID 2.0 M10 cards using the 1B profile. ScreenDMA is in place and we still cannot connect once vGPU is active. If we remove the Shared PCI Device from the VM's hardware and reboot, the VM and PCoIP work without issue. Any other ideas?

#4
Posted 11/16/2017 10:49 PM   