Example Configurations with Benchmarks

The following configuration was used to generate the PCoIP Ultra benchmarks presented below:

Host Platform: Supermicro SYS1019GP-TT (Xeon Gold 6248 2.5 GHz, 20 cores)

Hypervisor: VMware ESXi 6.7

Virtual Machine: Windows 10, 96 GB RAM, grid_rtx6000_12q, Cloud Access Software 21.07.4

Graphics: NVIDIA RTX 6000, GRID 11.1

Network: 1 Gbps LAN

Client Endpoint: Intel NUC10FNHi5, Windows 10

Display: 4K/UHD 3840 x 2160

PCoIP Agent: Version 21.07.4

PCoIP Client: Version 21.07

Video Content: Big Buck Bunny 1080p 24fps (Opening 2 minutes)

Network Bandwidth Consumption

The network bandwidth used in conjunction with PCoIP Ultra depends on several factors, including the PCoIP Ultra mode (CPU Offload, GPU Offload, or Auto-Offload), the display resolution, and the configured image quality policy. The table below shows the average bandwidth consumption (in Mbps) for the opening scene of Big Buck Bunny played back at native 1080p resolution, at 1080p scaled to 1440p, and at full-screen 2160p (4K/UHD), using the default PCoIP image quality setting of Q80.

Resolution   Auto-Offload* (YUV 4:2:0)   Auto-Offload* (YUV 4:4:4)   CPU Offload
1080p        13.0                        19.0                        34.4
1440p        17.3                        27.5                        43.1
2160p        25.0                        43.8                        61.4

*PCoIP Ultra GPU-Offload mode consumes comparable bandwidth to PCoIP Ultra Auto-Offload mode.
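To put these averages in perspective, the sketch below converts an average bitrate from the table into the approximate data transferred over the two-minute test clip. It is a back-of-the-envelope calculation only; the bitrates are the measured averages above, and the rest is plain arithmetic.

```python
# Rough data-volume estimate for the 2-minute Big Buck Bunny test clip,
# using the measured average bitrates (Mbps) from the table above.

CLIP_SECONDS = 120  # opening 2 minutes of the test content

# Average bandwidth in Mbps, keyed by (resolution, mode), copied from the table.
AVERAGE_MBPS = {
    ("1080p", "Auto-Offload YUV 4:2:0"): 13.0,
    ("1080p", "CPU Offload"): 34.4,
    ("2160p", "Auto-Offload YUV 4:4:4"): 43.8,
    ("2160p", "CPU Offload"): 61.4,
}

def megabytes_transferred(mbps: float, seconds: int = CLIP_SECONDS) -> float:
    """Convert an average bitrate (Mbps) into megabytes moved over `seconds`."""
    return mbps * seconds / 8  # 8 bits per byte

for (resolution, mode), mbps in AVERAGE_MBPS.items():
    print(f"{resolution} {mode}: ~{megabytes_transferred(mbps):.0f} MB over the clip")
```

For example, 13.0 Mbps sustained for 120 seconds works out to roughly 195 MB for the clip.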

On constrained networks, the PCoIP protocol dynamically adjusts image quality and frame rate to suit the available network bandwidth. Where it is preferable to proactively constrain bandwidth, for example to limit service provider network charges, PCoIP image quality and frame rate can be capped. Refer to the PCoIP settings section for policies that enforce bandwidth and/or frame rate limits.
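As a sketch of what such a policy might look like, the fragment below caps the session bandwidth and frame rate in a PCoIP agent configuration file. The setting names (pcoip.max_link_rate, pcoip.maximum_frame_rate), units, and file location are assumptions based on common PCoIP session variables; confirm the exact names and supported platforms in the PCoIP settings section before applying them.

```
# Illustrative /etc/pcoip-agent/pcoip-agent.conf fragment (assumed names/units;
# verify against the PCoIP settings section before use)

# Cap the PCoIP session bandwidth (kilobits per second)
pcoip.max_link_rate = 20000

# Cap the PCoIP session frame rate (frames per second)
pcoip.maximum_frame_rate = 30
```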

The following table provides an example of the network bandwidth savings available (consumption in Mbps) when using PCoIP Ultra Auto-Offload at different image quality settings. The rates below were measured at 1080p with Auto-Offload (YUV 4:2:0).

Image Quality      Q60    Q70    Q80     Q90
Bandwidth (Mbps)   3.7    6.6    13.0    34.8
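For instance, lowering the image quality cap from the default Q80 to Q60 reduces the average bandwidth for this content from 13.0 Mbps to 3.7 Mbps, roughly a 72% reduction. The short sketch below computes the change for each setting relative to the default, using only the measured averages above.

```python
# Bandwidth change relative to the default image quality (Q80),
# using the measured 1080p Auto-Offload (YUV 4:2:0) averages above.

BANDWIDTH_MBPS = {"Q60": 3.7, "Q70": 6.6, "Q80": 13.0, "Q90": 34.8}
DEFAULT_QUALITY = "Q80"

baseline = BANDWIDTH_MBPS[DEFAULT_QUALITY]
for quality, mbps in sorted(BANDWIDTH_MBPS.items()):
    delta = (mbps - baseline) / baseline * 100
    print(f"{quality}: {mbps:4.1f} Mbps ({delta:+.0f}% vs {DEFAULT_QUALITY})")
```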

Host CPU Utilization

PCoIP Ultra Auto-Offload mode offers the highest host CPU efficiency by leveraging NVIDIA NVENC technology for display processing whenever the encoded pixel rate exceeds a programmed threshold.
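Conceptually, the Auto-Offload decision can be thought of as a comparison of the encoded pixel rate against a threshold, as in the illustrative sketch below. The threshold value and names here are hypothetical; the actual heuristics are internal to the PCoIP agent and are not configurable in this form.

```python
# Illustrative sketch of the Auto-Offload decision described above.
# The threshold value and function names are hypothetical.

HYPOTHETICAL_PIXEL_RATE_THRESHOLD = 100_000_000  # encoded pixels per second (assumed value)

def encoded_pixel_rate(width: int, height: int, frames_per_second: float) -> float:
    """Pixels that must be encoded each second for a given display and frame rate."""
    return width * height * frames_per_second

def choose_encoder(width: int, height: int, fps: float) -> str:
    """Use NVENC (GPU) encoding once the pixel rate crosses the threshold, else CPU encoding."""
    if encoded_pixel_rate(width, height, fps) > HYPOTHETICAL_PIXEL_RATE_THRESHOLD:
        return "NVENC (GPU offload)"
    return "CPU encode"

# Example: a 4K/UHD display updating at 24 fps
print(choose_encoder(3840, 2160, 24))
```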

The table below shows the average host virtual machine CPU utilization when using PCoIP Ultra Auto-Offload and PCoIP Ultra CPU-Offload, as measured by the VMware ESXi Performance Monitor at default image quality settings, for various virtual machine vCPU allocations.

Resolution (Mode)        4 vCPU   8 vCPU   16 vCPU   24 vCPU
1080p (Auto-Offload)     17%      10%      4%        2%
4K/UHD (Auto-Offload)    19%      11%      4%        3%
1080p (CPU-Offload)      38%      18%      9%        2%
4K/UHD (CPU-Offload)     58%      28%      14%       3%

PCoIP Hyper-Threading

PCoIP Ultra CPU Offload does not distinguish between physical and logical (hyper-threaded) cores. PCoIP will take advantage of hyper-threading, but N hyper-threaded cores do not provide the same performance as N physical cores.

For troubleshooting information around implementing the PCoIP Ultra protocol enhancements, see the knowledge base article: Troubleshooting PCoIP Ultra.