NVIDIA
Microsoft Server 2012R2/RemoteFX compatibility with M6/M60
Simply put, is the Tesla M6/M60 compatible with Microsoft Server 2012R2/RemoteFX for virtualization?

A Microsoft blog from a number of years ago listed compatibility with K1/K2 but it has not been updated since. We have no interest in VMWare or Citrix.

I am looking at getting a new server or two, but I can't find any information.

Thanks

#1
Posted 05/18/2016 02:08 PM   
Hi Trollkeeper,

The way RemoteFX works is a bit different (it's sometimes called vGPU). All it needs is a GPU compatible with Windows Server 2012 R2, and the RemoteFX protocol's use of the GPU is provided by the Microsoft driver. The M60/M6/M10 all support Windows Server 2012 R2, so you will be fine, as there is no direct interop (which is probably why information on it is so light on the internet). I'll try to get some formal documentation written up.

The RemoteFX "vGPU" isn't hardware vGPU (as with Citrix/VMware), and with the OS in between you won't see the same benefits as hardware vGPU from any GPU vendor.

Best wishes,
Rachel

#2
Posted 05/18/2016 08:15 PM   
Hi RachelBerry

Your information is not accurate on RemoteFX

"In Windows Server 2012, the RemoteFX vGPU allows render and compute operations in a virtual machine to be hardware accelerated using a physical GPU on the virtualization host. There are multiple components involved in delivering a hardware accelerated experience in Windows Server"

link to full article:

https://blogs.msdn.microsoft.com/rds/2014/06/06/understanding-and-evaluating-remotefx-vgpu-on-windows-server-2012-r2/


What you are saying is the case with RDP in Server 2012, but when you switch on RemoteFX it does use the hardware. I can confirm the difference on my K1 GRID cards. If I throttle RemoteFX by dropping the network pipe on the client to 100 Mbit, I get a different virtual graphics adapter installed, which operates as a software-based graphics controller. When I switch it up to gigabit, I get a different driver installed in the virtual machine, which has hardware acceleration.

I tested SolidWorks, which is compatible with RemoteFX. The difference between software and hardware support is very noticeable. The Autodesk suite unfortunately doesn't have a driver to support K1 virtualization and runs in software mode only. Many other software suites run in software mode as well due to the lack of OpenGL support, something that is supposed to arrive in Server 2016. DirectX is currently the only API supported by RemoteFX.

The issue, however, is whether the M6/M60 card is supported. Just because there are drivers for the card does not mean that the all-important checkbox to enable RemoteFX on Hyper-V will be available. And if that checkbox is not available, there is absolutely no point in spending thousands of dollars on the card, as you will get no use out of it.
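As an aside, whether that checkbox will light up can be checked from PowerShell before spending the money. This is just a sketch: it assumes the Hyper-V role (and its PowerShell module) is installed on the host, and the `"*Tesla*"` name filter is a placeholder.

```powershell
# List the physical GPUs that Hyper-V considers usable for RemoteFX.
# An empty result means the RemoteFX checkbox will stay greyed out.
Get-VMRemoteFXPhysicalVideoAdapter | Format-List Name, Enabled

# If an eligible adapter is listed but disabled, enable it for RemoteFX:
Get-VMRemoteFXPhysicalVideoAdapter -Name "*Tesla*" |
    Enable-VMRemoteFXPhysicalVideoAdapter
```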

#3
Posted 05/21/2016 01:20 PM   
Trollkeeper said:Hi RachelBerry

Your information is not accurate on RemoteFX



Rachel's information is absolutely accurate.

RemoteFX virtual graphics adapter uses an API interception mechanism that translates API calls from the Microsoft driver in the guest VM and passes those calls to the hardware driver in the Hyper-V host.

What you see with the K2 is the Microsoft driver intercepting and translating those API calls and passing them to the Nvidia driver in the Hyper-V host.

It's also worth noting a couple of things:

The RemoteFX virtual graphics adapter is only enabled in fully featured RemoteFX sessions; any other connection method disables it.

It's also limited to DirectX; OpenGL is not supported in the 2012 R2 release (Server 2016 does support it, though). So I'm surprised you're seeing SolidWorks use the GPU (it may be running on the CPU at a very low feature level).


Anyhow, you asked about M60 support for Hyper-V.


In the case of the M60 there is currently no official support for bare metal deployments, which is what this type of installation is, because it's installed into Server 2012 R2 (there is no support for core services; you must have the GUI).

There are several reasons for this which may change in the future.

However, if you're happy testing out an unsupported feature, the Tesla M60 can technically be used with Hyper-V 2012 R2 to deliver the virtual graphics functionality. You will need an additional graphics adapter in the host to install your OS, and you will still require a license for each running VM at the vPC Edition level. Once you have these licenses, and hence access to the license portal, you will be able to download the GRID for Windows drivers. Evaluation licenses are available, by the way.

With the upcoming release of Server 2016, NVIDIA plans to support the Tesla M60, M6, and the newly announced M10 boards for DDA (Discrete Device Assignment). This feature has been available for trial since Server 2016 TP4, so with the same drivers as above you can test the functionality.

If you have an M60 card and require evaluation licenses, you can request them here:

http://www.nvidia.com/object/grid-evaluation.html

Jason Southern, Regional Lead for ProVis Sales - EMEA: NVIDIA Ltd.

#4
Posted 05/21/2016 09:36 PM   
Trollkeeper said:Hi RachelBerry

Your information is not accurate on RemoteFX

"In Windows Server 2012, the RemoteFX vGPU allows render and compute operations in a virtual machine to be hardware accelerated using a physical GPU on the virtualization host. There are multiple components involved in delivering a hardware accelerated experience in Windows Server"

link to full article:

https://blogs.msdn.microsoft.com/rds/2014/06/06/understanding-and-evaluating-remotefx-vgpu-on-windows-server-2012-r2/


What you are saying is the case with RDP in Server 2012, but when you switch on RemoteFX it does use the hardware. I can confirm the difference on my K1 Grid cards. If I switch down RemoteFX by dropping the network pipe on the client to 100 MBit I get a different virtual graphics adapter installed which does operate as a software based graphics controller. When I switch it up to gigabit I get a different driver installed on the virtual which has hardware acceleration.

I tested SolidWorks out which is compatible with RemoteFX. The difference between software and hardware support is very noticeable. The Autodesk suite unfortunately doesn't have a driver to support the K1 virtualization and runs software mode only. Many other software suites only run software based mode as well due to the lack of support for OpenGL, something that is supposed to be supported in Server 2016. DirectX is currently the only supported protocol for RemoteFX.

The issue is however if the M6/M60 card is supported. Just because there are drivers for the card does not mean that the all important check box to enable RemoteFX on Hyper-V will be available. And if that checkbox is not available then there is absolutely no use in spending the thousands of dollars on the card as you will get no use of it.


RemoteFX "vGPU" will use the GPU, so hardware acceleration will occur to some extent, but because it goes via the OS there is an additional certification/support layer, and the level of acceleration you get will be below that of vSphere/XenServer vGPU. So whilst a GPU will benefit you, you won't get the same level of benefit as on the hardware vGPU stacks.

Does that help make it clearer? Sorry if it wasn't clear... please do ask more questions...

#5
Posted 05/23/2016 09:53 AM   
Hi All,

Just following up on the initial question here: We did end up purchasing a Tesla M60 in a Dell PowerEdge R730 server for testing purposes. Dell doesn't install any OS on servers purchased with the M60 GPU, so it arrived with no operating system.

The problem is we have not been able to get Hyper-V RemoteFX to recognize the M60; all options are greyed out, as if this GPU is not detected or is incompatible. We tried this on both 2012 R2 and 2016 TP5, with the same results.

The server has an on-board Matrox video card for the OS install. The GPU drivers were downloaded from the NVIDIA site; we tried multiple driver versions to no avail. Any ideas on what we're doing wrong? Any help is appreciated.

Thanks,
IT_Vancouver

#6
Posted 05/31/2016 08:13 PM   
Hi,

If you take a look at my post above, you'll see that we don't support the M60 in bare metal OS deployments, which includes Hyper-V.

JasonSouthern said:

In the case of the M60 there is currently no official support for bare metal deployments, which is what this type of installation is, because it's installed into Server 2012 R2 (there is no support for core services; you must have the GUI).

There are several reasons for this which may change in the future.


If you still want to test Hyper-V in an unsupported manner, note that the drivers you require to use the M60 for graphics in virtualisation require a license and are not available from NVIDIA.COM. You can only get them from the license portal (see below).

You will also need to configure the M60 to run in graphics mode (they ship in compute mode):

https://www.youtube.com/watch?v=VAQhiNNFXxQ
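For reference, the mode change is done with NVIDIA's gpumodeswitch tool, run on the bare host. A minimal sketch of the invocation (the tool ships separately with the GRID software, and the host must be rebooted afterwards):

```shell
# Query the current mode of all installed M60 GPUs
gpumodeswitch --listgpumodes

# Switch every supported GPU from compute mode to graphics mode,
# then reboot the host so the change takes effect
gpumodeswitch --gpumode graphics
```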



Access to evaluation licenses is via this route.

JasonSouthern said:
If you have an M60 card and require evaluation licenses, you can request them here:

http://www.nvidia.com/object/grid-evaluation.html


Although unsupported, you would still need to license it for use. In this scenario you would be comparable to vSGA from VMware, so you would require "Virtual PC" edition licenses for each running VM configured to use the GPU (i.e. the RemoteFX adapter).

Jason Southern, Regional Lead for ProVis Sales - EMEA: NVIDIA Ltd.

#7
Posted 05/31/2016 09:18 PM   
Hi guys, I have a question regarding DDA on Windows Server 2016 using a pass-through GPU. My Dell PowerEdge T630 has 4 Tesla M60 cards installed on the host, properly configured with the proper drivers.
I have installed Hyper-V normally, and the first thing I noticed is that I can't see my physical GPU in the RemoteFX section of the settings. In the end I figured out that the Tesla M60, out of the box, is not supported for RemoteFX because it ships in compute mode rather than graphics mode.
That is OK for me: since I am not going to use RemoteFX and want to test DDA instead, I didn't bother about the GPU not showing up in Hyper-V (this leads to my first question).

Even though I am not using my GPU for RemoteFX, do I still need to see the GPU in Hyper-V in order to have DDA fully functional?

I have gone through the PowerShell commands to disable, dismount, and finally mount the GPU to my Windows 10 VM, all without errors. I installed the same GPU driver used on my host in the VM; it installed properly, and I also see the GPU counters in Perfmon, so all seems to be good. The problem is, whenever I try to run a simple GPU-intensive test, the performance is terrible. It makes me believe that, even though my Tesla M60 is mounted to the VM, the VM is still using the default Hyper-V integrated GPU.
In Device Manager I see both the onboard Hyper-V video adapter and, right below it, the NVIDIA Tesla M60. That makes me think I should somehow prioritize the Tesla M60 over the integrated Hyper-V one.
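For anyone following along, the disable/dismount/mount sequence I went through is roughly the following. This is only a sketch: the location path and VM name are placeholders for my environment, and the MMIO space sizes are example values from Microsoft's DDA guidance, not tuned figures.

```powershell
# Location path of the GPU, as shown in Device Manager (placeholder value)
$loc = "PCIROOT(0)#PCI(0300)#PCI(0000)"
$vm  = "Win10-DDA-Test"

# 1. Disable the device on the host, then dismount it from the host
Disable-PnpDevice -InstanceId (Get-PnpDevice -FriendlyName "*M60*").InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force

# 2. Give the VM enough MMIO space for the GPU BARs, then assign the device
Set-VM -VMName $vm -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33GB
Add-VMAssignableDevice -LocationPath $loc -VMName $vm

# Verify the assignment from the host
Get-VMAssignableDevice -VMName $vm
```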

I would like to know what I am missing.

And regardless: must the GPU show up in my Hyper-V console in order for DDA to be 100% functional?


Thank you in advance

#8
Posted 01/17/2017 03:15 PM   
Hi edenlincoln,

I've written up some notes on Hyper-V DDA, https://virtuallyvisual.wordpress.com/2017/01/18/nvidia-grid-and-microsoft-windows-server-oss-and-hyper-v/


It includes how to get the DDA driver - have you got the right driver and eval license?

Best wishes,
Rachel

#9
Posted 01/18/2017 02:26 PM   
Hi Rachel,

Thanks for your reply. I hadn't found your post (I was definitely looking in the wrong place). I will take a look, and I'll get back to you as soon as I retry the procedure on my server.

Just a quick update on my last test.

Environment:

Dell PowerEdge T630
4 Tesla M60 cards (8 GPUs)

1) Successfully selected one GPU for DDA (dismounted it from the host and mounted it to a VM).
1.1) Successfully installed the Tesla M60 driver from the host in the VM. It installed normally; however, I noticed performance was not good (it seems it was still running on the integrated Matrox onboard card).

PS: This test was run without any GRID driver or license installed; it was a bare installation of the driver from the NVIDIA website.
I have tested both the Windows 10 x64 and Server 2016 versions of the M60 driver, with no change in performance.

Second attempt: once I ran the Linux ISO media from my eval account, the server crashed when I changed the GPUs from compute to graphics mode. Now, after any fresh Windows Server 2016 installation, the server never comes back up once I install the GPU (M60) driver.

Question:

1) Which driver should I use for my test environment: the regular Tesla M60 x64 driver for Windows 10 / Server 2016, or the GRID driver from the eval profile?

2) Speaking of Hyper-V, does it matter for DDA that I can't find the M60 GPU in my Hyper-V settings, or MUST the GPU show up in Hyper-V in order to get DDA working in my VM?
Last time I passed the GPU through to the VM and it worked, even though performance was terrible; there was no GPU available in the dropdown in the Hyper-V Manager settings.

Thank you for your help.

Eden Oliveira

RachelBerry said: Hi edenlincoln,

I've written up some notes on Hyper-V DDA, https://virtuallyvisual.wordpress.com/2017/01/18/nvidia-grid-and-microsoft-windows-server-oss-and-hyper-v/


It includes how to get the DDA driver - have you got the right driver and eval license?

Best wishes,
Rachel

#10
Posted 01/18/2017 07:07 PM   