NVIDIA
Hyper-V 2016 RemoteFX vGPU & Tesla M10
Does anyone know when we can expect to be able to deploy a VM on Hyper-V 2016 and have the GRID M10 driver recognized inside the VM? (Not via DDA but via RemoteFX, so I can share GPUs across VMs.)

I'm looking to deploy many "slave" Windows VMs on bare-metal hosts and have graphics rendering inside those VMs offloaded to the GPU. GRID vGPU works best since I'm looking at LOTS of VMs per host.

#1
Posted 03/02/2018 01:14 AM   
Hi,

I'm not sure I fully understand your request. What is the issue with the current driver for the M10? It certainly also works for RemoteFX.
Which driver did you try? I suspect you're just using the wrong one. Even for RemoteFX you need vPC licenses for all the VMs, and therefore the driver is only available via the NVIDIA Enterprise Portal.

Regards

Simon

#2
Posted 03/02/2018 08:32 AM   
Just ran into a similar issue.
I have a Dell R730 that I personally own, with a Tesla M2090 in it.
So if I slap an AMD graphics card or an NVIDIA 1030 in it I can use RemoteFX, but if I want to use my Tesla card I have to pay for the drivers or NVIDIA won't let me?

Really?

#3
Posted 06/26/2018 10:03 PM   
Partly correct. You cannot use GeForce for virtualization. And be aware that RemoteFX is deprecated as of Server 2016 version 1709 and won't be supported in future releases.


https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-1803-removed-features


regards
Simon

#4
Posted 06/27/2018 09:59 AM   
Ahhh Okay sorry. :)

Do you know what the new solution path is for vGPU going forward with Server 2016?

I'm testing it on my R730, but I'm tasked with figuring it out on Server 2016 and VMware Horizon. I've got plenty of docs for VMware, but I'm not quite sure how I'm going to complete the task of using NVIDIA products for vGPU performance on VMs in Server 2016. Unless Microsoft has something new coming out soon, I guess I'm just pushed straight to Horizon only. :(

#5
Posted 06/28/2018 12:14 AM   
For RDSH I would always recommend the Tesla M10. You can already use DDA on Server 2016 to pass through the four GPUs to four RDSH VMs, and this should be more than sufficient for a single host.
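
For reference, DDA assignment on Server 2016 is done from an elevated PowerShell prompt on the host, roughly along these lines. This is a sketch: the VM name and the PCIe location path below are placeholders, and the MMIO sizes are just example values from Microsoft's GPU guidance — look up your own device's location path in Device Manager or via `Get-PnpDevice` before dismounting anything.

```powershell
# Placeholders - substitute your own VM name and GPU location path
$vmName       = "RDSH-01"
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# The VM must be off, and DDA requires the TurnOff stop action
Stop-VM -Name $vmName -Force
Set-VM -Name $vmName -AutomaticStopAction TurnOff

# GPUs typically need enlarged MMIO space and guest-controlled cache types
Set-VM -Name $vmName -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

# Dismount the GPU from the host, then hand it to the VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName

Start-VM -Name $vmName
```

Inside the guest you would then install the NVIDIA driver as on a physical machine. To return the GPU to the host, use `Remove-VMAssignableDevice` followed by `Mount-VMHostAssignableDevice` with the same location path.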

#6
Posted 06/29/2018 11:27 AM   
Hello all,

A follow-on question regarding DDA, the M10, and setting up passthrough. The M10 has 8 GPUs, so technically I can set up 8 VMs, each connected to an M10 GPU. Do I have that right?

I have 3 Dell R730s in a failover cluster, each with an M10 card. They are running several VM servers for our school district. I would like to run 8 Win10 VMs for student operations to provide remote access for running Adobe Premiere and Maya.

Has anyone in this community been successful in a production environment yet?

Thanks in advance for any input you may have on this topic.

#7
Posted 08/09/2018 04:14 PM   
Hi,

the M10 has 4 GPUs. You can certainly DDA the 4 GPUs to 4 Win10 VMs, but the M10 is not meant for Adobe Premiere or Maya.
You'd be better off waiting until MSFT supports "vGPU" in a future Hyper-V release.

regards
Simon

#8
Posted 08/10/2018 05:21 PM   