any experience using Cubix expander to increase user density?
Hi all,

We're designing an environment for one of our customers. They would like to use NVIDIA M60 cards for vGPU-accelerated VDI (Citrix XenDesktop or VMware Horizon View; this is still TBD).

One thing they are very particular about is user density. They are currently planning for 90 heavy users (K260Q or K280Q profiles) and 70 light users (no GPU acceleration, or possibly vSGA).
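To show why the profile choice matters so much for density, here's a rough back-of-envelope sketch. The per-profile densities are my own assumptions based on framebuffer sizes (a 2 GB profile on an 8 GB M60 GPU, versus a full-GPU profile), not vendor sizing guidance:

```python
# Rough capacity sketch; the per-profile densities below are assumptions
# based on framebuffer sizes, not vendor sizing guidance.
HEAVY_USERS = 90
GPUS_PER_M60 = 2  # each M60 card carries two physical GPUs


def cards_needed(users, vgpus_per_gpu, gpus_per_card=GPUS_PER_M60):
    """Cards required to host `users` sessions at a given vGPU density."""
    per_card = vgpus_per_gpu * gpus_per_card
    return -(-users // per_card)  # ceiling division


# A 2 GB profile fits 4 vGPUs on an 8 GB M60 GPU -> 8 users per card.
print(cards_needed(HEAVY_USERS, 4))   # 12 cards
# A full-GPU profile fits 1 vGPU per GPU -> 2 users per card.
print(cards_needed(HEAVY_USERS, 1))   # 45 cards
```

The spread between 12 and 45 cards for the same 90 heavy users is exactly why we're trying to pack as many GPUs per server as possible.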

The servers we're likely to use are Cisco UCS C240 M4 rackmounts, which only support two GPUs per box. We're trying to figure out how to increase GPU density and are evaluating the Cubix Xpander - http://www.cubix.com/rps-tesla-m60-jagfastr. This should allow us to go up to eight NVIDIA M60 cards per single C240 M4 server.

Have any of you ever seen the Cubix Xpander used with any type of VDI solution? My concern is the bottleneck between the expander and the motherboard in the server (2 x16 PCIe links for 8 cards). Perhaps you can suggest an alternative?
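For what it's worth, here is the back-of-envelope math behind my bottleneck concern. The ~15.75 GB/s figure per PCIe 3.0 x16 link is an approximation, and real behavior will depend on the expander's switch topology:

```python
# Back-of-envelope PCIe bandwidth check. 15.75 GB/s per PCIe 3.0 x16 link
# is an approximation; actual throughput depends on the switch topology.
PCIE3_X16_GB_S = 15.75   # ~usable bandwidth of one PCIe 3.0 x16 link
HOST_LINKS = 2           # two x16 slots feed the expansion chassis
CARDS = 8

per_card_share = HOST_LINKS * PCIE3_X16_GB_S / CARDS
print(f"worst-case per-card share: {per_card_share:.2f} GB/s")
# ~3.94 GB/s per card, roughly what a native x4 link would deliver
```

So with all eight cards busy, each one would effectively see about a quarter of its native x16 bandwidth, which is what worries me.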

Thanks in advance!

#1
Posted 04/01/2016 05:37 PM   
I know of a couple of customers using the Magma expansion chassis with the Cisco servers, but you're not going to get 8 M60s per C240.

The maximum number of cards supported with vGPU in a single host is 4, and you need to be aware of the limits in the underlying hardware.

You're looking at a practical limit of 12 VGA controllers behind a PCI switch, so six M60s maximum, and that assumes the SBIOS etc. in the C240 can handle it. Generally speaking, I would expect the SBIOS to support a maximum of 8 GPUs (so 4x M60); only Cisco can confirm whether they would support this number of devices for you.

As I noted above, the maximum supported in a host for vGPU is 4 cards (8 GPUs).
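Putting those ceilings side by side (all figures taken from the numbers above, expressed in M60 cards at 2 GPUs each) shows the vGPU/SBIOS limit is the binding one:

```python
# GPU-count ceilings mentioned above, expressed in M60 cards (2 GPUs each).
GPUS_PER_M60 = 2
limits_in_gpus = {
    "PCI switch (~12 VGA controllers)": 12,
    "typical SBIOS ceiling": 8,
    "vGPU support (4 cards)": 8,
}
max_cards = min(g // GPUS_PER_M60 for g in limits_in_gpus.values())
print(max_cards)  # 4 cards is the binding limit
```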

If you take a look at the link below, you'll find server chassis certified for 4 M60.


http://www.nvidia.com/object/grid-certified-servers.html

Jason Southern, Regional Lead for ProVis Sales - EMEA: NVIDIA Ltd.

#2
Posted 04/03/2016 09:50 AM   
