Features missing in unlicensed mode
Currently I'm writing a design for one of my customers for new hardware for new XenApp hosts. One of the requirements is a GPU for graphics acceleration. All XenApp users are task/knowledge workers.

I'm a great fan of the GRID solutions, but the additional costs of the licensing component in GRID 2.0/3.0 don't make my customer happy.

I'm wondering which features will be missing if I configure passthrough of the four GPUs on an M10 to four XenApp VMs. As far as I can see in screenshots, there's a license mode named 'Tesla (unlicensed)'.
What are the restrictions of that mode?
I only need graphics acceleration for Office and web browsers, running on a thin client with two FHD displays...

Thanks for your reply!

#1
Posted 10/24/2016 11:54 AM   
The PDF "GRID LICENSING DU-07757-001 _v4.0 (GRID) | August 2016", Chapter 3, "LICENSING GRID VIRTUAL WORKSTATION WITH PASSTHROUGH", clearly states that unlicensed Tesla GPUs support a single virtual display head with a maximum resolution of 2560×1600.
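To make the implication for the original question concrete, here's a quick sketch (my own illustration, not from the guide) that checks a requested display configuration against those unlicensed limits:

```python
# Unlicensed Tesla passthrough limits per the GRID licensing guide:
# a single virtual display head, max resolution 2560x1600.
MAX_HEADS = 1
MAX_RES = (2560, 1600)

def fits_unlicensed(displays):
    """displays: list of (width, height) tuples requested by the client."""
    if len(displays) > MAX_HEADS:
        return False
    return all(w <= MAX_RES[0] and h <= MAX_RES[1] for (w, h) in displays)

# Two FHD monitors (the original poster's use case) exceed the head limit:
print(fits_unlicensed([(1920, 1080), (1920, 1080)]))  # False
print(fits_unlicensed([(2560, 1600)]))                # True
```

In other words, a dual-FHD thin client already falls outside what the unlicensed mode advertises.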

#2
Posted 10/24/2016 04:54 PM   
Hi asg,

the display-heads vs. number-of-monitors question is a bit different for XenApp... there are a couple of blogs that might help; let me know if there are gaps:


https://virtuallyvisual.wordpress.com/2016/09/21/nvidia-grid-more-info-on-vapps-and-vpcvws-licensing/


https://virtuallyvisual.wordpress.com/2016/09/05/nvidia-grid-rdsh-licensing-including-xenapp/


Does this help? We are looking to roll the blog information up into the documentation properly.

Best wishes,
Rachel

#3
Posted 10/24/2016 04:59 PM   
Thanks mcerveny and Rachel for your replies.

I'd already read the licensing-related guides and seen the 'Tesla (unlicensed)' mode.

Unfortunately, it isn't clear to me what this unlicensed mode means exactly, or what the consequences are.

Is an unlicensed GPU passed through to a XenApp server worthless? In other words, does the GPU have no effect at all on graphics performance?

#4
Posted 10/25/2016 06:23 AM   
Hi

Apart from the single-monitor and display-resolution limitations mentioned above, has your client considered where they will get the M10 drivers from, how they will upgrade/maintain those drivers when new ones are released, and also how they will monitor and support the GPUs?

Currently, the M10 drivers do not appear to be available from the publicly accessible NVIDIA driver download page, which obviously makes getting started a little tricky. Whether they will be at some point, I'm unsure.

I have seen countless forum threads asking where to get support for earlier-generation GPUs for a wide variety of reasons; usually these are the K1/K2 cards. The answer is always, "Go back to where you purchased them from", which is typically a partner/reseller who may not have the in-depth knowledge and skills your client needs to resolve their issue, and even if they do, are they 24x7? I know your client is obviously trying to reduce costs, but SUMS is worth having.

As these GPUs will run in passthrough, the hypervisor can't really see them, and as your client won't be installing a driver in the hypervisor, they'll have to monitor them from within the individual VMs rather than through the hypervisor. Tracking GPU utilisation for scaling/performance, although obviously possible, won't be so easy.
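As a rough sketch of what in-guest monitoring could look like, the snippet below parses the CSV output of `nvidia-smi` (the query flags are real; the sample values are made up for illustration, and this is my own sketch, not an NVIDIA-provided tool):

```python
import csv
import io
import subprocess

def parse_gpu_stats(csv_text):
    """Parse 'nvidia-smi --query-gpu=... --format=csv,noheader,nounits' output."""
    rows = []
    for line in csv.reader(io.StringIO(csv_text)):
        util, mem_used, mem_total = (field.strip() for field in line)
        rows.append({"util_pct": int(util),
                     "mem_used_mib": int(mem_used),
                     "mem_total_mib": int(mem_total)})
    return rows

def query_gpu():
    """Run inside each VM, since the hypervisor has no driver for a
    passed-through GPU and therefore can't report its utilisation."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_gpu_stats(out)

# Example with canned output (values are illustrative only):
sample = "42, 1024, 8192\n"
print(parse_gpu_stats(sample))
```

You'd then have to collect and aggregate these per-VM figures yourself, which is exactly the extra effort a hypervisor-level driver would save you.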

Just out of interest, is it the cost of the physical GPUs your client doesn't like, or the cost of the licensing model?

Regards

Ben

#5
Posted 10/26/2016 09:20 AM   
Your points make sense BJones. Thanks for that!

The customers I work for are disappointed mostly about the licensing costs. A GPU-accelerated desktop session (XenApp) is mainly used by task/knowledge workers. The hardware requires an initial investment, but the licensing model doubles the costs. Compared with the GRID K1/K2, the cost of a GPU-accelerated RDS/XA session rose substantially with the new license model.
Regarding designers/CAD/CAM and so on, it's common knowledge that a workstation with e.g. a Quadro is expensive, so it's evident that a virtual workstation is also (much) higher priced.

Customers I'm talking with expect GPU acceleration for task workers (apps like browsers and Office) to be a 'commodity' with a relatively minor cost impact.
Maybe they're waiting for Intel Iris Pro to become the mainstream option, with NVIDIA GRID reserved for state-of-the-art graphics workstations.
I think some competition will be good for customers ;)

#6
Posted 10/26/2016 06:41 PM   
Competition is good for everyone! For consumers it gives them choice and bargaining power. For vendors, it helps drive innovation to stay ahead in the market and for engineers / architects, well it just gives them more tech to play with, which is always a good thing :-)

By having the GPU sitting on the same die as the CPU, Intel definitely has a great solution there; however, it seems they sacrifice core count to do this (I can't find one with more than 4 cores?!), which massively impacts server density, and that's obviously not great as it leads to additional costs. You're limited to a 2GB frame buffer with this as well. Maybe that will change as the technology evolves.

Also, don't forget AMD have moved into this arena as well. They take a different approach to NVIDIA and call it MxGPU (Multi User GPU): they use SR-IOV to do it, rather than the GRID software alternative. However, unless things have recently changed, one of the limitations of SR-IOV is that a VM is bound to that piece of hardware and cannot be migrated to another host in the event of a failure. Yes, this migration technology (vMotion/XenMotion) is not currently available for NVIDIA either; however, by using a software layer, the hypervisor vendors at least have the opportunity to develop it, whereas with hardware, no one has done this to date. Another advantage of using a software layer is that it's easier to add new features and enhancements, whereas with hardware, obviously not so much.

I think it's safe to say that NVIDIA did (let's say) get "overly ambitious" with their initial pricing model for GRID 2.0; however, they did recognise this, which is why it was quickly refined into what it is now, which is much more pleasing.

Depending on how the solution is designed, it can be pretty cost effective. For task/knowledge workers, you can be looking at over 100 users per physical server with the right CPU/memory/GPU combination. Add to that the fact that users are no longer chained to their desks, PCs no longer need to be running and maintained, there are air-conditioning costs for all those PCs, etc.; these things all add up and have a value of some sort.

By using a GPU in the servers, the CPUs don't have to work so hard, so you get more users per server, up to the point where you actually need fewer physical servers to support the same number of users. The total cost of the solution can be reduced because you don't need to purchase, run, cool, maintain, support, and license as much hardware.

If you compare a K1 to an M10 with XenDesktop or Horizon, the K1 supports a maximum of 32 users per card. With the same use case, the M10 will support 64 and give a better experience due to the better architecture and feature set, plus what I mentioned above about SUMS for support, updates, maintenance, and feature enhancements, which you don't get with the older GPUs. Also, don't forget that's concurrent users, not total users. Adding another K1 or M10 into the same server, you're now looking at 64 users (2x K1) compared to 128 (2x M10), if you don't run into CPU contention issues, that is. That's double the density, so theoretically you'd need 50% fewer physical servers to support the same number of users. That's less hardware to rack, install/configure, power, cool, maintain, and support, and fewer hypervisors to license; if you run Citrix on top of vSphere, that's quite a saving! ...
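The density arithmetic above can be sketched in a few lines (the 256-user figure is just an example I've picked; the per-card numbers are the ones quoted in this thread, and real sizing would of course also depend on CPU and memory):

```python
import math

def servers_needed(total_users, users_per_card, cards_per_server):
    """Physical servers required, assuming the GPU is the limiting factor."""
    return math.ceil(total_users / (users_per_card * cards_per_server))

# Per-card concurrent-user figures from the post: 32 per K1, 64 per M10.
# Assume 2 cards per server and no CPU contention.
users = 256
k1_servers = servers_needed(users, 32, 2)   # 4 servers
m10_servers = servers_needed(users, 64, 2)  # 2 servers
print(k1_servers, m10_servers)
```

Under those assumptions the M10 halves the server count for the same concurrent-user target, which is where the hardware, power, cooling, and hypervisor-licensing savings come from.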

Do you mind me asking, how many users is your client looking to support on this platform?

Regards

Ben

#7
Posted 10/26/2016 09:54 PM   