grid k1 pass through in openstack(kvm)
Hi all,

I have passed the GRID K1 through to a Windows 7 VM created by OpenStack with KVM, and installed a driver. The driver I downloaded: http://www.nvidia.com/download/driverResults.aspx/98543/en-us

But when I checked the device status, it said: "This device cannot find enough free resources that it can use. (Code 12)".

Do you guys have any good suggestions?

Arthur
Thanks & Regards

#1
Posted 03/17/2016 03:20 AM   
How can I find the right slot, bus, and domain of the GRID K1? This is the code that builds the address element:

alias_name = etree.Element("alias", name=self.name)
dev.append(alias_name)

address_with_type = etree.Element("address",
                                  type='pci',
                                  domain='0x' + self.domain,
                                  bus='0x' + self.bus,
                                  slot='0x06',  # hard-coded for now instead of '0x' + self.slot
                                  function='0x' + self.function)
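To find the right values, you can read the full PCI address of each K1 GPU from `lspci -D` on the host and split it into the four parts. A small sketch (the address and helper below are illustrative, not taken from this setup):

```python
import re

def parse_pci_address(addr):
    """Split a full PCI address 'dddd:bb:ss.f' into (domain, bus, slot, function)."""
    m = re.match(r"([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])$", addr)
    if m is None:
        raise ValueError("not a PCI address: %r" % addr)
    return m.groups()

# Example line from `lspci -D -d "10de:*"` on the host (address is hypothetical):
line = "0000:84:00.0 3D controller: NVIDIA Corporation GK107GL [GRID K1]"
domain, bus, slot, function = parse_pci_address(line.split()[0])
print(domain, bus, slot, function)  # 0000 84 00 0
```

These are the hex strings (without the '0x' prefix) that the etree code above expects in self.domain, self.bus, self.slot and self.function.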

#2
Posted 03/17/2016 03:41 AM   
Hi Arthur,

Can you check what you get from
[root@abcd]# lspci -vvvv -d "10de:*" | grep Region

#3
Posted 03/17/2016 06:50 PM   
Output from the OpenStack server (not the VM):
[root@localhost libvirt]# lspci -vvvv -d "10de:*" | grep Region
Region 0: Memory at 94000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33ff0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33ff8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 5000 [size=128]
Region 0: Memory at 93000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33fe0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33fe8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 4000 [size=128]
Region 0: Memory at 92000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33fd0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33fd8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 3000 [size=128]
Region 0: Memory at 91000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 33fc0000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 33fc8000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 2000 [size=128]

One thing I don't understand: the K1 is passed through to the VM, so should the lspci command be executed on the VM or on the OpenStack server?
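As an aside, the regions in the host output above can be summed to see how much PCI address space the four GPUs on the K1 request together; a quick sketch using the sizes copied from that output:

```python
# BAR sizes in MB per GPU, copied from the lspci output above
# (the K1 card exposes four GPUs, each with the same set of BARs)
per_gpu = {
    "Region 0 (32-bit, non-prefetchable)": 16,
    "Region 1 (64-bit, prefetchable)": 128,
    "Region 3 (64-bit, prefetchable)": 32,
}
gpus = 4
total_32bit = per_gpu["Region 0 (32-bit, non-prefetchable)"] * gpus
total_64bit = sum(v for k, v in per_gpu.items() if "64-bit" in k) * gpus
print(total_32bit, total_64bit)  # 64 640
```

So the card asks for 64 MB of 32-bit MMIO plus 640 MB of 64-bit MMIO; if the guest has to map all of that below 4 GB, it is more than a typical 32-bit PCI window holds, which would fit the Code 12 symptom.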

#4
Posted 03/18/2016 01:00 AM   
arthur said: "But when I checked the device status, it said: This device cannot find enough free resources that it can use. (Code 12)."

The PCI (x86) guest subsystem has some legacy limits on address space (memory and I/O). There is usually an insufficient "PCI MMIO hole", i.e. the reserved memory hole of roughly 512 MB at the end of the first 4 GB of physical address space, used for mapping 32-bit PCI device memory (for example the framebuffer). The size of the PCI MMIO hole depends on the emulated chipset and on parameters of KVM and/or QEMU and/or the guest BIOS (version dependent).
What to try:
- boot the guest Windows and add up the memory assigned to its PCI devices
- reduce the number of PCI devices given to the guest Windows
- reduce the RAM of the guest Windows (e.g. to ~3 GB)
- look for a KVM/QEMU parameter that extends the PCI MMIO hole for the guest (I am not KVM-aware, but under Xen/QEMU it is the "mmio_hole" guest parameter)
- avoid 32-bit PCI altogether, i.e. enable PCI mapping above 4 GB for the guest (a guest BIOS parameter). NVIDIA would have to confirm whether this is possible and supported for the K1 (i.e. whether it exposes 64-bit PCI BARs) and for KVM (maybe a VBIOS and/or PLX PCIe bridge firmware upgrade on the K1 card is needed; check versions with "nvidia-smi -q | egrep 'Version|Firmware'")
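On the KVM/QEMU side of the mmio-hole suggestion, one possible shape is to pass a `-global` property through libvirt's QEMU command-line namespace. This is a sketch under assumptions I have not verified on this setup: that the host's QEMU version exposes a `pci-hole64-size` property on the `i440FX-pcihost` (or `q35-pcihost`) object, and that the machine type matches:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the guest definition unchanged ... -->
  <qemu:commandline>
    <!-- assumption: enlarge the 64-bit PCI hole so the K1 BARs can map above 4G;
         property name and availability depend on QEMU version and machine type -->
    <qemu:arg value='-global'/>
    <qemu:arg value='i440FX-pcihost.pci-hole64-size=2048M'/>
  </qemu:commandline>
</domain>
```

With OpenStack in the picture, such XML tweaks may be overwritten when nova regenerates the domain definition, so this is only useful for a manual experiment.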

M.C>

PS:
https://en.wikipedia.org/wiki/PCI_hole
http://techfiles.de/dmelanchthon/files/memory_hole.pdf

#5
Posted 03/18/2016 08:02 AM   