
Hard Facts - Benchmarking GRID-Accelerated Remote Desktop User Experience



  1. Hard Facts - Benchmarking GRID-Accelerated Remote Desktop User Experience. Ruben Spruijt, Field CTO @ Frame (@rspruijt, ruben@fra.me); Benny Tritsch, Principal Consultant @ DrTritsch.com (@drtritsch, benny@rdsgurus.com)

  2. VDI Design, Benchmarking, Communities, Workspace Analytics, REX Analytics, Community Advisor, Investor, Advisory Board

  3. Executive, Field CTO, Communities, Workspace, Public Cloud Computing, Community Advisor 2008-2017

  4. www.teamRGE.com

  5. “Sharing your knowledge doesn't put your job at risk. It empowers you to perform at a higher level. Iron sharpens iron.” #CommunityPower

  6. Session topics 1. Windows, GPUs and GPU options 2. How to benchmark, tooling and lab setup 3. Benchmark results

  7. Is Windows remoting still relevant? Why do we need GPUs?

  8. Windows everywhere is dead!?

  9. Windows isn’t dead!

  10. Cloud Adoption Rate and the “Long Tail” [chart: acceptance of web/mobile platforms grows over time while the share of #Windows apps falls from 100% past 50% toward a ~15% “long tail”]

  11. “After a nuclear war, it'll be cockroaches and Windows apps.” Shawn Bass, Team Remote Graphics Experts, TeamRGE.com

  12. WWW.VDILIKEAPRO.COM

  13. Virtual Client Computing: an upstart in a Cloud and Mobile 1st world

  14. How interested are you in DaaS or Remote Application as a Service offerings? [chart: percentage of respondents per year (2014, 2015, 2017) for “Investigating”, “Not interested at all” and “Already using”; N=584]

  15. Popular Graphics Applications

  16. How much video framebuffer (memory) do you use for OS and Applications? Why is this important?!
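One way to answer the framebuffer question on an NVIDIA-equipped machine is to poll the driver directly. A minimal sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH (the script itself is not part of the deck's tooling):

```python
import subprocess

def framebuffer_usage():
    """Print used vs. total GPU framebuffer memory (MiB) per GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    for i, line in enumerate(out.strip().splitlines()):
        used, total = (int(v) for v in line.split(", "))
        print(f"GPU {i}: {used} MiB of {total} MiB framebuffer in use")

if __name__ == "__main__":
    framebuffer_usage()
```

Sampling this while opening typical applications shows how much framebuffer the OS and each app actually consume, which is what drives vGPU profile sizing.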

  17. GPU usage for normal user - Ruben

  18. GPU usage for normal user - Ruben

  19. Virtual Desktop vs. Virtual Workstation
      Virtual Desktop (Task and Knowledge Workers): Office, CRM, ERP, Unified Comm.; basic compute resources; 1-4 GB RAM | 256-512 GB SSD; Core i5/Core i7; GeForce GT(X) to Quadro 420/620/K1200; $700-$1,500 desktop range
      Virtual Workstation (Power Users and Designers): 2D/3D graphics, CAD/PLM/BIM; high-end compute resources; 4-64+ GB RAM | Xeon MP; multiple SSDs, PCIe Flash 512 GB+; NVIDIA Quadro K2000 to M6000; $1,500-$10K+ workstation range

  20. GPU options

  21. GPU Manufacturers
      NVIDIA GRID vGPU: Tesla M6/M60/M10 (“Maxwell”); “software stack”; dedicated vRAM, shared GPU
      AMD Multiuser GPU: SR-IOV + pass-through GPU; “pure hardware”; dedicated vRAM + GPU
      Intel Iris Pro Graphics + GVT: Xeon E3-1200 v4 CPU + Iris Pro (“Broadwell”, “Skylake”); CPU + GPU = APU
      Glossary: GPU = Graphics Processing Unit; GVT = Graphics Virtualization Technology (Intel); SR-IOV = Single Root I/O Virtualization; APU = Accelerated Processing Unit
      Download the whitepaper at http://www.teamrge.com

  22. Do you use offloading technology or GPUs in your Virtual Workspace environment? [chart: percentage of respondents per year (2013, 2014, 2015, 2017) for “No”, “NVIDIA GRID” and “AMD”; N=584; www.VDILIKEAPRO.com]

  23. “We need more GPU options in public cloud(s); competition is important & healthy!” Ruben Spruijt, Field CTO, Frame

  24. Benchmarking

  25. GPU-Accelerated Remoting

  26. Relevant Remote End User Experience Factors
      • Remoting protocol (codec, protocol stack, streaming)
      • Application type (GDI, DirectX, OpenGL, video, …)
      • Host (server hardware & hypervisor, GPU support)
      • Guest VM (Windows version, remoting components)
      • Endpoint (client hardware & software, screen resolution)
      • Network (TCP/UDP, bandwidth, latency, packet loss, VPN)
      • Control plane (connection broker, gateway, …)
      • In shared environments: other users (noisy neighbors)

  27. Benchmarking Workflow: Build, Measure, Analyze

  28. “EUC platform testing is great in the on-premises world from a sizing and best-practices perspective, but doesn’t add much value in public clouds”

  29. Building a Remote Desktop Benchmarking Lab [diagram: an endpoint device connects to the host and guest VM under test through a WAN emulator and firewall; a lab controller (“REX Tracker”) drives both sides, records telemetry from endpoint and guest, and archives the recorded data]

  30. Producing Synthetic User Workloads
      Primary workload sequences (45-90 sec): start the application, run one media-format sequence, save telemetry data. Media formats: GDI, video, Flash, HTML5, DirectX, OpenGL.
      Secondary workload sequences (60-90 min): start applications, run a persona workload, save telemetry data. Personas: Task Worker, Info Worker, Power User, Office User, Knowledge Worker, Media Designer.
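As a rough illustration, a primary workload sequence reduces to a timed script: start the app, drive a fixed interaction, stamp every step, and write the telemetry out. A hypothetical sketch (the app command, action list and telemetry format are invented for illustration, not the deck's actual REX tooling):

```python
import json
import subprocess
import time

def run_primary_sequence(app_cmd, actions, telemetry_file):
    """Run one short (45-90 s) workload sequence and save timing telemetry."""
    telemetry = {"app": app_cmd, "events": [{"event": "app_start", "t": 0.0}]}
    t0 = time.monotonic()
    proc = subprocess.Popen(app_cmd)          # start the application under test
    for name, action in actions:
        action()                              # scripted user interaction
        telemetry["events"].append(
            {"event": name, "t": round(time.monotonic() - t0, 3)})
    proc.terminate()
    with open(telemetry_file, "w") as f:      # save telemetry data
        json.dump(telemetry, f, indent=2)

# Usage: a stand-in GDI sequence; real tooling would inject keystrokes and
# clicks instead of sleeping.
run_primary_sequence(
    ["notepad.exe"],
    [("type_text", lambda: time.sleep(5)),
     ("scroll", lambda: time.sleep(5))],
    "telemetry_gdi.json",
)
```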

  31. Measuring Remote End User Experience (REX)
      • Perceived user experience: user interface response times (click to noticeable action) and application start times
      • Graphical output performance: screen refresh cycles (frame rates, flicker)
      • Supported graphics and media formats
      • Dropouts, blurriness and artefacts (media quality)
      Remote end user experience cannot be represented by a single score. Solution: REX Analytics = screen videos + correlated telemetry data.
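The "click to noticeable action" metric can be approximated on the client by polling a screen region until its pixels change after an injected input. A simplified sketch assuming Pillow (pip install pillow) on a Windows or macOS endpoint; the actual REX Analytics approach records full screen videos and correlates them with telemetry instead:

```python
import time
from PIL import ImageGrab  # Pillow; grabs client-side screenshots

def click_to_change(region, trigger, timeout=5.0, poll=0.01):
    """Seconds from triggering an input until pixels in `region` change.

    region:  (left, top, right, bottom) screen box to watch
    trigger: callable that injects the input (e.g. a mouse click)
    """
    before = ImageGrab.grab(bbox=region).tobytes()
    t0 = time.monotonic()
    trigger()
    while time.monotonic() - t0 < timeout:
        if ImageGrab.grab(bbox=region).tobytes() != before:
            return time.monotonic() - t0   # first noticeable screen update
        time.sleep(poll)
    return None                            # no visible change within timeout
```

Polling granularity and screen-grab cost put a floor on accuracy here, which is one reason a single score misleads and video-plus-telemetry is preferred.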

  32. Comparison – REX Analyzer

  33. RDANALYZER v2.0

  34. Remote Display Analyzer 2.0 - preview

  35. Remote Display Analyzer 2.0 - preview

  36. PROJECT CIRRUS

  37. “EUC platform testing and UX benchmarking aren’t the same: different goal and different end-result”

  38. Latency

  39. Relevant Network Factors
      Remoting protocols: RDP, ICA/HDX, PCoIP, Blast, …
      Bandwidth: data transfer rate of a network connection
      Latency: delay; the amount of time it takes to traverse a system; bounded by the speed of light (>300 ms halfway around the globe, >500 ms for VSAT satellite links)
      Packet loss: discarding of data packets (in percent)

  40. It’s Einstein’s Fault…
      Speed of light c ≈ 300,000 km/s, so 40,000 km around the globe takes ~130 ms even in a vacuum. Minimum velocity factors (VF, % of c) for network cables:
      Cat-7 twisted pair: 74-79 | RG-8/U: 77 | optical fiber: 67 | RG-58A/U: 65 | Cat-6A twisted pair: 65 | Cat-5e twisted pair: 64 | Cat-3 twisted pair: 58.5
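The slide's numbers follow directly from distance divided by signal speed. A small sketch reproducing the ~130 ms vacuum figure and the penalty a cable's velocity factor adds:

```python
C_KM_PER_S = 300_000  # speed of light, approximately

# Velocity factors (fraction of c) taken from the table above.
VELOCITY_FACTOR = {
    "vacuum / radio": 1.00,
    "RG-8/U coax": 0.77,
    "optical fiber": 0.67,
    "Cat-5e twisted pair": 0.64,
}

def one_way_delay_ms(distance_km, vf):
    """Propagation-only delay in ms; routing and queuing come on top."""
    return distance_km / (C_KM_PER_S * vf) * 1000

for medium, vf in VELOCITY_FACTOR.items():
    print(f"{medium:20s} 40,000 km: {one_way_delay_ms(40_000, vf):6.1f} ms")
# vacuum ~133 ms, optical fiber ~199 ms: physics alone, before any router
```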

  41. Typical Mobile Network

  42. HDX 3D Pro - Win7 VM and Win7 Client Device (NVIDIA Click-to-Photon benchmark, latency in milliseconds)
      Pipeline stages: Input (mouse click), Network (send), Render Pipeline (app), Sync Server (DWM/vSync), Capture/Encode, Network (receive), Decode, Sync Client (DWM/vSync), Display Lag (TFT/LCD)
      Bare metal client | DWM client on:               13 + 18 + 25 + 10 = 66
      w/ NVenc | w/ DXVA | DWM server & client off:    13 + 37 + 18 + 6 + 37 + 6 + 10 = 127
      w/ NVenc | no DXVA | DWM server & client off:    13 + 37 + 18 + 6 + 37 + 18 + 10 = 139
      w/ NVenc | no DXVA | DWM server off, client on:  13 + 37 + 18 + 6 + 37 + 18 + 25 + 10 = 164
      w/ NVenc | no DXVA | DWM server on, client off:  13 + 37 + 18 + 25 + 6 + 37 + 18 + 10 = 164
      no NVenc | no DXVA | DWM server on, client off:  13 + 37 + 18 + 25 + 35 + 37 + 18 + 10 = 193
      no NVenc | no DXVA | DWM server & client on:     13 + 37 + 18 + 25 + 35 + 37 + 18 + 25 + 10 = 218
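The stages in this pipeline are additive, so the end-to-end click-to-photon figure for any configuration is just the sum of its stage latencies. A minimal sketch reproducing the best and worst rows above:

```python
# Stage latencies in ms, read from the chart above.
BARE_METAL = {"input": 13, "render": 18, "sync_client_dwm": 25, "display_lag": 10}

WORST_CASE_REMOTE = {  # no NVenc, no DXVA, DWM on server and client
    "input": 13, "network_send": 37, "render": 18, "sync_server_dwm": 25,
    "capture_encode_cpu": 35, "network_receive": 37, "decode": 18,
    "sync_client_dwm": 25, "display_lag": 10,
}

for name, stages in [("bare metal", BARE_METAL),
                     ("worst-case remote", WORST_CASE_REMOTE)]:
    print(f"{name}: {sum(stages.values())} ms click-to-photon")
# bare metal: 66 ms; worst-case remote: 218 ms
```

This also makes the optimization story visible: NVenc cuts capture/encode from 35 to 6 ms, DXVA cuts decode from 18 to 6 ms, and disabling DWM removes a 25 ms vSync wait per side.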

  43. Testing DDA - Azure N-Series VMs
                                NV6                   NV12                 NV24
      CPU cores (E5-2690v3)     6                     12                   24
      RAM (GB)                  56                    112                  224
      Local SSD (GB)            340                   680                  1,440
      Network                   Azure                 Azure                Azure
      GPU resources             1 x M60 GPU           2 x M60 GPUs         4 x M60 GPUs
                                (1/2 physical card)   (1 physical card)    (2 physical cards)
      Price (West Europe)       $1.60/hr              $3.19/hr             $6.38/hr
                                $1,190.40/mo          $2,373.36/mo         $4,746.72/mo
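The monthly prices are simply the hourly rate times a 744-hour month (31 days x 24 h), which the slide's own numbers confirm:

```python
HOURS_PER_MONTH = 31 * 24  # 744; matches the quoted monthly figures exactly

for vm, hourly in [("NV6", 1.60), ("NV12", 3.19), ("NV24", 6.38)]:
    print(f"{vm}: ${hourly:.2f}/hr -> ${hourly * HOURS_PER_MONTH:,.2f}/mo")
# NV6: $1,190.40/mo, NV12: $2,373.36/mo, NV24: $4,746.72/mo
```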

  44. Benchmarking results

  45. What is the most important thing we should benchmark from a UX perspective?! WE WANT YOUR INPUT!

  46. SUMMARY

  47. Session topics 1. Windows, GPUs and GPU options 2. How to benchmark, tooling and lab setup 3. Benchmark results

  48. THANKS! Ruben Spruijt, Field CTO @ Frame (@rspruijt, ruben@fra.me); Benny Tritsch, Principal Consultant @ DrTritsch.com (@drtritsch, benny@rdsgurus.com)
