Capture vGPU performance data in csv format using nvidia-smi
Intent:
Capture GPU utilization, GPU memory utilization, total GPU memory, free GPU memory, and used GPU memory statistics to a CSV-formatted file.

In the XenServer dom-0 console, issue the following command to get the PCIe bus IDs of the vGPU-enabled GPUs.

root@nvgrid:~ # nvidia-smi | grep GRID
| 1 GRID K2 On | 0000:09:00.0 Off | Off |
| 2 GRID K2 On | 0000:44:00.0 Off | Off |
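The bus IDs can also be pulled in a script-friendly form. A minimal sketch (assumed, not from the original post: it relies on nvidia-smi's `--query-gpu=pci.bus_id` query; the here-doc stands in for live output so the snippet runs even on a machine without a GPU):

```shell
#!/bin/sh
# List the PCIe bus IDs and strip the ".0" function suffix so each ID
# matches the short "--id" form used in the commands below.
# On a live dom-0 console, replace the here-doc with:
#   nvidia-smi --query-gpu=pci.bus_id --format=csv,noheader
bus_ids=$(cat <<'EOF'
0000:09:00.0
0000:44:00.0
EOF
)
for id in $bus_ids; do
    short_id=${id%.*}    # drop the trailing ".0"
    echo "$short_id"
done
```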

To capture the GPU utilization for all vGPU-enabled virtual machines on PCIe ID "0000:09:00.0" and display it on the console:

root@nvgrid:~ # nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv --id=0000:09:00 -l 1

To redirect the output to a file:

root@nvgrid:~ # nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv --id=0000:09:00 -l 1 -f ./GPU-09-stats.csv

The data is captured at one-second intervals and written to the file.
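Once a capture file exists, the CSV can be post-processed with standard tools. A sketch that averages the utilization.gpu column with awk (the sample rows below are invented stand-ins for a captured ./GPU-09-stats.csv so the snippet runs anywhere; point awk at the real file on the dom-0 console):

```shell
#!/bin/sh
# Average the first column (utilization.gpu) of a --format=csv capture.
# On a live system, pipe the real file instead of the here-doc:
#   awk -F', ' '...' ./GPU-09-stats.csv
avg=$(cat <<'EOF' | awk -F', ' 'NR > 1 { gsub(/ %/, "", $1); sum += $1; n++ } END { printf "%.1f", sum / n }'
utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
10 %, 4 %, 4095 MiB, 3000 MiB, 1095 MiB
30 %, 8 %, 4095 MiB, 2900 MiB, 1195 MiB
EOF
)
echo "average GPU utilization: ${avg}%"
```

The `NR > 1` guard skips the CSV header, and `gsub` strips the " %" unit suffix before summing.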

To monitor the other vGPU-enabled GPUs, replace the "--id" value "0000:09:00" with the PCIe ID you wish to monitor.
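To log several GPUs at once, one nvidia-smi process can be started per bus ID. A dry-run sketch (assuming the two K2 bus IDs shown above; the leading `echo` only prints each command so the snippet is safe to run anywhere — remove it on the dom-0 console and append `&` to launch the loggers in the background):

```shell
#!/bin/sh
# One logger per GPU, each writing its own CSV file named after the bus.
# Remove the leading "echo" to actually start the loggers.
for id in 0000:09:00 0000:44:00; do
    out="./GPU-$(echo "$id" | cut -d: -f2)-stats.csv"
    echo nvidia-smi \
        --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.free,memory.used \
        --format=csv --id="$id" -l 1 -f "$out"
done
```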

NOTE: GPUs designated for PCI pass-through will not appear in the output from nvidia-smi, as access to those GPUs is handled by the hypervisor.

(A future post will address monitoring of GPU pass-through-enabled virtual machines.)

Jeremy Main
Lead Solution Architect - NVIDIA GRID
GRID Resources : http://www.nvidia.com/object/grid-enterprise-resources.html
GPUProfiler : http://gpuprofiler.com/

#1
Posted 05/19/2014 11:19 AM   