Tesla V100 - Hyper-V 2019 - device cannot find enough resources (code12) - MMIO config ?
Hello,
I have an HPE ProLiant DL380 Gen10 server with 512GB RAM and a Tesla V100-PCIE-32GB inside.
Windows Server 2019 is installed.
We have configured the Hyper-V role and installed a VM with 320GB RAM.

What we have done on the host is:
- Disabled the Tesla on the host.
- Set-VM -Name $VMName -AutomaticStopAction TurnOff
- Set-VM -LowMemoryMappedIoSpace 3GB -VMName $VMName
- Set-VM -HighMemoryMappedIoSpace 33000MB -VMName $VMName
- $locationPath = "PCIROOT(36)#PCI(0000)#PCI(0000)"
- Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
- Add-VMAssignableDevice -LocationPath $locationPath -VMName $VMName
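For completeness, here is the whole sequence collected into one script. The $VMName value and the location path are of course specific to my machine, so treat this as a sketch of what I ran, not a general recipe:

```powershell
# Assumes the VM already exists and the V100 has been disabled on the host.
$VMName = "GPU-VM"                                  # adjust to your VM name
$locationPath = "PCIROOT(36)#PCI(0000)#PCI(0000)"   # from the MS survey script

# Hyper-V must power the VM off instead of saving state for DDA
Set-VM -Name $VMName -AutomaticStopAction TurnOff

# Reserve guest MMIO space for the card's BARs
Set-VM -LowMemoryMappedIoSpace 3GB -VMName $VMName
Set-VM -HighMemoryMappedIoSpace 33000MB -VMName $VMName

# Detach the device from the host and assign it to the VM
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Add-VMAssignableDevice -LocationPath $locationPath -VMName $VMName
```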

Unfortunately, each time the VM starts I get this error message in Device Manager:
This device cannot find enough free resources that it can use. (Code 12)
If you want to use this device, you will need to disable one of the other devices on this system.

I have tried changing the MMIO to:
3GB / 33000MB
2GB / 4GB (found on a blog, but for GRID cards)
176MB / 560MB, because the MS script listed the card as:
NVIDIA Tesla V100-PCIE-32GB
Express Endpoint -- more secure.
And its interrupts are message-based, assignment can work.
And it requires at least: 48 MB of MMIO gap space
PCIROOT(36)#PCI(0000)#PCI(0000)
So I added 48MB to the 128MB / 512MB default configuration.

When changing the MMIO config, do I need to remove the card from the VM, change the config, and reattach the card to the VM, or can I just change the config without doing anything to the VM?

No MMIO configuration solves the problem.
Can someone please help me?

Thanks in advance for your help.

Marc

#1
Posted 08/19/2019 01:39 PM   
Hi Marc,

I think I have exactly the same issue. I have a 32GB Tesla V100 being passed through using DDA on Hyper-V under Windows Server 2019. When the guest VM is Windows 10 it seems to work fine, but when it is any version of Linux, the kernel throws an MMIO error followed by an hv_vmbus "failed to probe the device" error.

Did you get any answers to your issue?

Thanks

Rob

#2
Posted 01/25/2020 11:07 PM   
A combination of installing the latest kernel (5.0.0-1028-azure or 5.3.0-26-generic) on Ubuntu 18.04.3 LTS, shutting down the guest, and increasing the high MMIO to 33GB with:

Set-VM -HighMemoryMappedIoSpace 33GB -VMName vm-name

And then restarting fixed the issue.
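In case it helps to see why 33GB works where smaller values fail: as far as I understand it, the high MMIO region has to be large enough to map the card's 32GB framebuffer BAR, so anything at or below 32GB leaves the guest with Code 12. A rough back-of-the-envelope check (the 1GB headroom figure is my own guess, not an official number):

```python
def min_high_mmio_mb(framebuffer_mb, headroom_mb=1024):
    """Smallest high-MMIO window that can map the GPU's large 64-bit BAR.

    framebuffer_mb: size of the framebuffer BAR (32 GB on a V100-PCIE-32GB)
    headroom_mb:    extra room for other 64-bit BARs and alignment (a guess)
    """
    return framebuffer_mb + headroom_mb

# A V100-PCIE-32GB then needs at least 33 GB of high MMIO space:
print(min_high_mmio_mb(32 * 1024))  # 33792 MB, i.e. 33 GB
```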

Hope that helps

Rob

#3
Posted 01/26/2020 11:58 PM   
said: A combination of installing the latest kernel (5.0.0-1028-azure or 5.3.0-26-generic) on Ubuntu 18.04.3 LTS, shutting down the guest, and increasing the high MMIO to 33GB with:

Set-VM -HighMemoryMappedIoSpace 33GB -VMName vm-name

And then restarting fixed the issue.

Hope that helps

Rob


This helped me a lot, and I can confirm it works for a guest VM running Ubuntu 20.04 LTS.

#4
Posted 09/16/2020 12:17 PM   