NVIDIA Tesla P40 dynamic GPU memory allocation possible?
Hello there,

Is the GPU memory of a P40 allocated to a VM as a fixed amount, or is it dynamic? The P40 has 24 GB of GPU RAM.

So let's take the following profile:
P40-1Q
1024 MB framebuffer
2 virtual display heads
max resolution 4096x2160
max vGPUs per GPU: 24

So can a maximum of 24 VMs run together on this GPU, or is overprovisioning possible?

#1
Posted 01/10/2018 09:04 AM   
FB allocation is always fixed, so you cannot overprovision FB. That said, your calculation is correct:
24 vGPUs with the 1Q profile is the maximum possible on a P40.
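
For illustration, the arithmetic behind that limit as a minimal Python sketch; only the 24 GB total and the 1024 MB P40-1Q size come from this thread, the 2048 MB value is purely an illustrative second profile size:

```python
# Minimal sketch of the fixed framebuffer arithmetic for vGPU profiles.
# Only the P40 total (24 GB) and the P40-1Q size (1024 MB) come from this
# thread; the 2048 MB example below is purely illustrative.

P40_TOTAL_FB_MB = 24 * 1024  # 24 GB of framebuffer on the Tesla P40

def max_vgpus(profile_fb_mb: int, total_fb_mb: int = P40_TOTAL_FB_MB) -> int:
    """FB is statically partitioned, so the limit is plain integer division."""
    return total_fb_mb // profile_fb_mb

print(max_vgpus(1024))  # 1Q profile -> 24 vGPUs, the maximum on a P40
print(max_vgpus(2048))  # a 2 GB profile -> 12 vGPUs
```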

Regards

Simon

#2
Posted 01/10/2018 10:38 AM   
Thank you very much for your fast response! That helped me a lot!

Can you explain to me what the BAR1 memory is, then?

Here is what I found at

http://developer.download.nvidia.com/compute/cuda/6_0/rel/gdk/nvidia-smi.331.38.pdf


"BAR1 Memory Usage
BAR1 is used to map the FB (device memory) so that it can be directly accessed by the CPU or by 3rd
party devices (peer-to-peer on the PCIe bus)."

But I don't quite get this one... what does "directly accessed by the CPU" mean here?
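
For reference, the section quoted above appears in nvidia-smi's detailed memory query; a minimal Python sketch that prints both the FB and BAR1 sections, assuming the NVIDIA driver is installed and nvidia-smi is on the PATH:

```python
# Minimal sketch: print the memory report from nvidia-smi, which includes
# the "FB Memory Usage" and "BAR1 Memory Usage" sections quoted above.
# Assumes the NVIDIA driver is installed and nvidia-smi is on the PATH.
import subprocess

def memory_report() -> str:
    """Run the detailed memory query (nvidia-smi -q -d MEMORY)."""
    return subprocess.run(
        ["nvidia-smi", "-q", "-d", "MEMORY"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(memory_report())
```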

#3
Posted 01/10/2018 10:51 AM   
And how many Tesla P40s can you put in one server with 4 PCIe slots? 2 or 4?

From the datasheet:
--> Form Factor: PCIe 3.0 Dual Slot (rack servers)

#4
Posted 01/10/2018 11:00 AM   
That depends on the server hardware. Most vendors support up to 3 P40s in their current 2U hosts.
See our HCL: http://www.nvidia.com/object/grid-certified-servers.html

#5
Posted 01/10/2018 09:05 PM   
Is it possible to run 1x P40 and 2x P4 in a single server at the same time?

#6
Posted 01/12/2018 02:52 PM   
Technically this might be possible, but OEMs won't support mixing different Tesla boards in the same server. Therefore you should use the same boards within a given server.

#7
Posted 01/13/2018 11:26 AM   