NVIDIA
Could not initialize plugin /usr/lib64/vmware/plugin/libnvidia-vpx.so for vGPU
Getting the error in the title. Currently it only occurs after creating 8 VMs on a single host. Even after moving VMs off of the host, I am still getting an error that does not allow me to add a 9th VM. Is anyone else having this problem? We are using Driver/SMI version 367.92. In addition, all of the vGPUs are in Graphics Mode and are reporting in as such. This is occurring on the M10 card under vSphere 6.0.0 U2 and Horizon View 7.0.0.

#1
Posted 07/24/2017 08:38 PM   
First of all, it would be interesting to know what vGPU profile you are using. Do you have 1 or 2 M10s in the host? To me it looks like there are no further GPU resources available.

#2
Posted 07/25/2017 07:11 AM   
A single M10 card in the host. At first my hunch was that the GPU resources were all used up, so I shifted a few machines off of the host until only 50% of the resources were being used, but the issue persisted. We are using the M10-4Q profile, so I should be able to fit 8 VMs onto the card: 8 VMs x 4 GB (4Q profile) = 32 GB, which is what the card has. Have you been seeing the same issue?
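For reference, the capacity math above can be sketched as a quick sanity check. The figures below (four physical GPUs per M10 board, 8 GB framebuffer each, 4 GB per VM for the 4Q profile) are assumed from NVIDIA's published M10 specs, not taken from this thread:

```python
# Sanity-check of the vGPU capacity math for a Tesla M10 with the M10-4Q profile.
# Hardware/profile figures are assumptions based on NVIDIA's published specs.
PHYSICAL_GPUS_PER_BOARD = 4   # the M10 carries four physical GPUs
FRAMEBUFFER_PER_GPU_GB = 8    # 8 GB per physical GPU -> 32 GB per board
PROFILE_FB_GB = 4             # M10-4Q allocates a 4 GB framebuffer per VM

total_fb_gb = PHYSICAL_GPUS_PER_BOARD * FRAMEBUFFER_PER_GPU_GB
max_vms = total_fb_gb // PROFILE_FB_GB
print(total_fb_gb, max_vms)  # 32 GB total, 8 VMs max — a 9th VM has no framebuffer left
```

Note that vGPU placement is per physical GPU, so the 8-VM ceiling also assumes an even 2-VMs-per-GPU distribution; a stale allocation on one physical GPU could block a new VM even with free framebuffer elsewhere on the board.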

#3
Posted 07/25/2017 12:25 PM   
Now I'm a bit confused. First you mentioned the issue was with the 9th VM, and now the issue occurs no matter which VM you start? Did you already reboot the host? Perhaps a TDR on a running VM caused an issue on a GPU core?

Regards

Simon

#4
Posted 07/25/2017 06:58 PM   
TDR? And yes, I hopped on a call with NVIDIA support today; they are reviewing the logs. It seems there may be a process that runs to clean up the allocation, and it takes some time for it to register that the vGPU is no longer being used as a resource.

#5
Posted 07/25/2017 07:13 PM   