NVIDIA M60: can it be used for deep learning?
I've been given access to an M60 and want to do some AI training. I know it's primarily aimed at GRID and vGPU, but I noticed the licensing PDF mentions "Tesla Unlicensed", and there are also Tesla drivers available for it.

Will I be able to use all the card's resources, both GPUs, for deep learning? I.e. will it be treated like a K80 as far as CUDA etc. is concerned?

Also, if I were to try it for its intended purpose, how limiting is it without a license? The licensing PDF says "unlicensed Tesla GPUs support a single virtual display head with maximum resolution of 2560×1600". Again, will that single virtual display have access to all the resources, just with a limited resolution?

I'm not sure how long I'll have access to it, and it's purely for personal training and to gather some experience, so any help or advice will be warmly welcomed.

#1
Posted 05/21/2017 08:16 PM   
Hi

Firstly, let's get that licensing resolved... The easiest way to deal with that is to use a 90-day evaluation. You can get that from here:

http://www.nvidia.com/object/grid-evaluation.html#utm_source=shorturl&utm_medium=referrer&utm_campaign=grid-eval

Regarding usage of both GPUs... As with a K80, they're both there to be used. Any limitation in multi-GPU utilization is down to your software, not the hardware :-)
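
As a quick sanity check (a minimal sketch, assuming a CUDA-enabled PyTorch install; any framework would show the same thing), you can confirm that both of the M60's GPUs are enumerated and usable:

import torch

# The M60 is two GPUs on one board, so CUDA should report two devices.
print(torch.cuda.device_count())

for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
    # Run a trivial matrix multiply on each device to prove it works.
    x = torch.randn(1024, 1024, device=f"cuda:{i}")
    print(i, float((x @ x).sum()))

How a training job is actually split across the two GPUs (data parallelism and so on) is then entirely down to the framework, exactly as it would be on a K80.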

You'll also more than likely want to make sure that both GPUs are in "Compute" mode, NOT "Graphics" mode, if you're playing with AI. You can do that using the Linux Boot Utility you'll get with the correct M60 driver package after you've registered for the evaluation.

Regards

Ben

#2
Posted 05/22/2017 08:17 AM   
Thanks for pointing me in the right direction, much appreciated. The 90-day trial will be perfect.

After a bit more research it appears, at least from my understanding, that the GTX 1080 Ti Strix I bought recently would be more beneficial than the M60 for AI purposes.

While I have access to the M60 I'll explore vGPU, which actually aligns more closely with my longer-term plans. However, knowing I can dual-boot and use it for 'compute' without restrictions 'after hours' is useful.

Thanks again.

#3
Posted 05/22/2017 12:31 PM   
[quote="n31l"]After a bit more research it appears, at least from my understanding, the GTX1080ti Strix I bought recently would be more beneficial then the M60 for AI purposes. [/quote] No arguments from me about that. You can't get around the simple fact that GRID is still stuck on Maxwell, and everything else NVIDIA make is now (and has been since last year?!) a generation and just recently, now 2 generational architectures ahead. Those newer Pascal and Volta architectures are just more efficient, more powerful, faster and more relevant to what you're working with, doesn't matter whether it's GeForce, Quadro or Tesla. Let us know how you get on with the AI stuff :-) Regards Ben
n31l said: After a bit more research it appears, at least from my understanding, that the GTX 1080 Ti Strix I bought recently would be more beneficial than the M60 for AI purposes.


No arguments from me about that.

You can't get around the simple fact that GRID is still stuck on Maxwell, while everything else NVIDIA makes has been a generation ahead since last year and, as of very recently, is now two architectural generations ahead. Those newer Pascal and Volta architectures are more efficient, more powerful, faster and more relevant to what you're working with, whether it's GeForce, Quadro or Tesla.

Let us know how you get on with the AI stuff :-)


Regards

Ben

#4
Posted 05/22/2017 01:43 PM   
Hi,

I wanted to ask the same question, but about the Tesla M6; since it's roughly the same question, I figured I might as well ask here. Are the M6, and the GRID architecture, also meant for or suited to deep learning research?

I kind of assume they were designed for virtual desktops, but a customer tells me they are easier for them to buy.

How would you even use a GPU for deep learning over GRID?

Thanks!
Matan

#5
Posted 06/21/2017 08:19 AM   
Just to get the terminology correct... "GRID" is the software component that layers on top of a given set of Tesla GPUs (currently the M10, M6 and M60, and previously the Quadro-based K1 / K2). In its most basic form (if you can call it that), the GRID software currently creates FrameBuffer profiles when the GPUs are used in "Graphics" mode, which allows multiple users to access the same physical GPU while each taking a share of its FrameBuffer. GRID is always being enhanced and developed to offer better performance, feature enhancements and functionality, and this is why NVIDIA have opted for a software-defined model, as opposed to a hardware model, where there are far more limitations.
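
As a rough illustration of what those FrameBuffer profiles mean in practice (a sketch only: the profile names and per-profile sizes below are the commonly quoted M60 Q-series values with an assumed 8 GB of FrameBuffer per physical GPU, so check them against the current vGPU documentation), the user density simply falls out of dividing the FrameBuffer by the profile size:

# Assumed values for illustration: M60 Q-series profiles, 8 GB FrameBuffer per physical GPU.
FRAMEBUFFER_MB = 8192
profiles = {"M60-1Q": 1024, "M60-2Q": 2048, "M60-4Q": 4096, "M60-8Q": 8192}

for name, size_mb in profiles.items():
    vms_per_gpu = FRAMEBUFFER_MB // size_mb
    # An M60 board carries two physical GPUs, so double the per-GPU figure.
    print(f"{name}: {vms_per_gpu} VM(s) per physical GPU, {vms_per_gpu * 2} per M60 board")

As far as I'm aware, all vGPUs on a single physical GPU have to use the same profile, which is why the profile you pick effectively sets your user density.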

No, the M10, M6 and M60 are not specifically suited to AI. However, they will work, just not as efficiently as other GPUs. NVIDIA creates specific GPUs for specific workloads and industry areas, as each area has different requirements.

Slightly off topic, but if you want the best (official) resources for AI, then you're looking at a DGX-1 (https://www.nvidia.com/en-us/data-center/dgx-1/) or a DGX Station (https://www.nvidia.co.uk/data-center/dgx-station/). The DGX-1 uses P100s in combination with NVLink; however, the P100s have very recently been replaced with V100s. As it is a brand-new offering, the DGX Station starts with V100s and, again, uses NVLink. Note that they use "Compute"-focused GPUs, not Graphics-focused ones.

So, in answer to your question, yes, you can use the M6, and if you're in "Graphics" mode you can share FrameBuffer between VMs and make better use of the GPU. However, for the best performance, guide them to a GPU line that is more focused on compute.

Regards

#6
Posted 06/22/2017 07:54 AM   