DDA: Black screen on VM console (RDP works fine)
Hi!

I have a “problem” with the VM console after implementing DDA.
I installed the drivers on the Hyper-V host, configured DDA on the host, and assigned a GPU to the VM; that part works fine.
I then installed the drivers inside the VM for the GPU, and they install just fine.
But after the driver installation and a reboot of the VM, I can no longer manage the VM through the Hyper-V console: the screen goes black.
RDP to the VM works fine, and the drivers are correctly installed on both host and VM.
What am I doing wrong here?

Also, do I need a license?

My setup is:

Server 2016 Datacenter
Hyper-V
HP ProLiant DL380
NVIDIA Tesla M10
128 GB
Profile: Virtualisation Optimisation

I have tested NVIDIA GRID 4.7 and NVIDIA Virtual GPU Software 6.2 (both on host and VM).

I have also used these PowerShell commands for DDA:

#Configure the VM for a Discrete Device Assignment
$vm = "TS01"

#Set automatic stop action to TurnOff
Set-VM -Name $vm -AutomaticStopAction TurnOff

#Enable Write-Combining on the CPU
Set-VM -GuestControlledCacheTypes $true -VMName $vm

#Configure 32 bit MMIO space
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vm

#Configure Greater than 32 bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vm
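#(Sizing note: the Tesla M10 carries 4 GPUs x 8 GB = 32 GB of frame buffer, so 33280 MB
#presumably covers those 32 GB plus 512 MB of headroom. Microsoft's SurveyDDA.ps1
#machine-profile script can report the exact MMIO requirements for a given device.)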

#Find the Location Path and disable the Device
#Enumerate all PNP Devices on the system
$pnpdevs = Get-PnpDevice -PresentOnly

#Select only those devices that are Display devices manufactured by NVIDIA
$gpudevs = $pnpdevs | Where-Object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}

#Select the location path of the first device that's available to be dismounted by the host.
$locationPath = ($gpudevs | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]

#Disable the PNP Device
Disable-PnpDevice -InstanceId $gpudevs[0].InstanceId

#Dismount the Device from the Host
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

#Assign the device to the guest VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vm
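
For reference, here is a minimal sketch of how the assignment can be verified and, if needed, rolled back (with the VM powered off), reusing the $vm, $locationPath and $gpudevs variables from the script above:

#Verify which devices are currently assigned to the VM
Get-VMAssignableDevice -VMName $vm

#To roll back, remove the device from the VM (VM must be off)
Remove-VMAssignableDevice -LocationPath $locationPath -VMName $vm

#Remount the device on the host and re-enable it
Mount-VMHostAssignableDevice -LocationPath $locationPath
Enable-PnpDevice -InstanceId $gpudevs[0].InstanceId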


Kind regards

#1
Posted 10/05/2018 10:11 AM   
Nothing wrong. It works as designed. It is normal that you cannot use the console afterwards!

#2
Posted 10/05/2018 12:21 PM   
Hi sschaber,

Thanks for the reply.
In this video the console does work (see the 13:50 mark): https://vimeo.com/161800097

Could you explain, please? :)

Thanks!

#3
Posted 10/05/2018 12:52 PM   
Hi,

I never tested the K1 with Server 2016, but the K1 could act as the primary display device, which is not possible with the current Tesla boards.
What's the issue in using RDP?
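
If you want to double-check that the passed-through M10 is really active inside the guest, something like this over an RDP session should do (the nvidia-smi path below is the default install location for these driver releases; adjust if yours differs):

#Run inside the guest over RDP
#The Tesla M10 should show up as a working display device
Get-PnpDevice -Class Display -PresentOnly | Format-Table FriendlyName, Status

#Query the driver directly
& "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"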

regards
Simon

#4
Posted 10/06/2018 08:56 AM   
In passthrough mode, I don't believe it was any different with the K1 or K2. You still need a separate video board for the underlying server if used as a hypervisor, I'm pretty sure.
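
As a rough sanity check on the host side (a sketch, assuming the dismount from the script earlier in the thread), the dismounted M10 should no longer be listed among the usable display adapters, while the server's onboard video remains:

#On the Hyper-V host: list the remaining display adapters
Get-PnpDevice -Class Display -PresentOnly | Format-Table FriendlyName, Status

#List the devices that have been dismounted from the host for DDA
Get-VMHostAssignableDevice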

-=Tobias

#5
Posted 10/18/2018 09:23 PM   
Thanks for the help, people!

No problem in using RDP. Just thought I was doing something wrong here.

Regards!

#6
Posted 11/02/2018 07:43 PM   