
Performance Enhancement in OpenStack for Elastic Hadoop Cluster



  1. Performance Enhancement in OpenStack for Elastic Hadoop Cluster. CMSoft Cloud Computing Products Department: Lei Xu, Lina Hu, Weizhong Yu

  2. The Open Infrastructure Summit. PART ONE: Problems in High Performance Computing

  3. Run a Hadoop cluster on OpenStack?
  BigData Team: The Datanode service (HDFS) cannot run properly; computing performance is poor; network throughput is not up to standard.
  Cloud Team: VM… I think BM (bare metal) is more suitable for you.

  4. Resource demands of big data services:
  ⚫ Yarn and JStorm need the most computing resources (CPU/RAM).
  ⚫ All big data services need high-throughput network resources (NETWORK).
  ⚫ HDFS and HBase need stable storage resources (DISK).

  5. Traditional ways in OpenStack? Feature I: CPU Pinning. Feature II: HugePages. Feature III: SR-IOV Port. Feature IV: PCI Passthrough.
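The first two of these features are normally enabled per flavor. A minimal sketch of the flavor extra specs involved (the property names are standard Nova extra specs; the values shown are illustrative):

```ini
# Flavor extra specs sketch for a compute-heavy big data flavor
hw:cpu_policy=dedicated      # Feature I: pin each vCPU to a dedicated host core
hw:mem_page_size=large       # Feature II: back guest RAM with huge pages
```

SR-IOV ports are requested on the Neutron port, and PCI passthrough is configured via `pci_passthrough_whitelist` on the compute node, not through flavor extra specs.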

  6. How to find the best way to run big data services on VMs?

  7. PART TWO: Performance Enhancement in Disk

  8. ① Use high-performance cloud storage. High-performance cloud storage, such as SSD-backed distributed storage or FC-SAN storage, can meet read & write IOPS and bandwidth requirements. But when the number of mounted disks grows too large, performance degrades. Moreover, cloud storage is affected by network quality: network jitter can disturb disk reads and writes, and can even leave the filesystem read-only. ② Use local disk storage. Use local disks on the compute node as data disks for instances. This usually meets IOPS and bandwidth requirements and is more stable. The drawback is that the instance cannot be migrated.

  9. How to mount a local disk to VMs in OpenStack?
  1. Use Ephemeral Storage
  Mount the local disk under the nova instances directory (/var/lib/nova/instances, which holds disk.local, disk.swap, disk.config, disk.info, and console.log per instance) and use the Ephemeral field in the flavor to allocate local disk space:

  +----+-------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name  | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+-------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1  | Hdfs1 | 8192      | 200  | 500       | 0    | 4     | 1.0         | True      |
  +----+-------+-----------+------+-----------+------+-------+-------------+-----------+

  Problems:
  ⚫ The ephemeral disk is a qcow2 file, and the file backend compromises performance.
  ⚫ Only one ephemeral disk can be attached per instance, so LVM has to aggregate the local disks (PVs /dev/sdl, /dev/sdm → VG nova → one LV per instance) to meet large disk space needs.
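The flavor in the table above can be created with the standard client; a sketch, taking all values from the table:

```shell
openstack flavor create --id 1 --ram 8192 --disk 200 \
    --ephemeral 500 --vcpus 4 --public Hdfs1
```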

  10. How to mount a local disk to VMs in OpenStack?
  2. Use the Cinder BlockDeviceDriver
  Report local disk device info and use virtio (virtio-blk) to mount disks to instances; the guest sees /dev/vdX devices backed by /dev/sdb, /dev/sdc, … on the compute node.

  | cinder-volume | compute1@local-disk | nova | enabled | up |
  | cinder-volume | compute2@local-disk | nova | enabled | up |

  Problems:
  ⚫ The number of cinder-volume services grows with the number of compute nodes, which puts pressure on Cinder and the MQ.
  ⚫ Although multiple block devices can be mounted, there are no optimization measures, so virtio performance degrades.

  11. How to mount a local disk to VMs in OpenStack?
  3. Use PCI Passthrough in OpenStack
  Pass the RAID device, including its disks, through to the instance.

  04:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] [1000:005b]
  04:00.1 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] [1000:005b]

  pci_passthrough_whitelist = [{"vendor_id": "1000", "product_id": "005b"}]

  Problems:
  ⚫ PCI passthrough of a RAID device cannot mount storage per disk unit: the whole controller and all of its disks (/dev/sdb … /dev/sdm) go to a single instance.

  12. How to enhance the local disk mounting function?
  1. Use SCSI LUN passthrough instead of PCI passthrough

  SCSI LUN mounting:
  <disk type='block' device='lun'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/sdq'/>
    <target dev='sdd' bus='scsi'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </disk>

  Advantages:
  ⚫ SCSI LUN passthrough mode uses virtio-scsi on the virtual machine frontend; the backend passes SCSI commands directly to the corresponding LUN device, so the IO path does not change.
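The `<disk>` element above has to be generated per attached disk; a small sketch that builds it programmatically (device paths are the slide's examples, and the fixed drive address is a simplification):

```python
# Sketch: build the libvirt <disk device='lun'> element for SCSI LUN
# passthrough, mirroring the XML shown on this slide.
import xml.etree.ElementTree as ET

def lun_disk_xml(host_dev, guest_dev):
    disk = ET.Element('disk', type='block', device='lun')
    ET.SubElement(disk, 'driver', name='qemu', type='raw', cache='none')
    ET.SubElement(disk, 'source', dev=host_dev)
    ET.SubElement(disk, 'target', dev=guest_dev, bus='scsi')
    ET.SubElement(disk, 'address', type='drive', controller='0',
                  bus='0', target='0', unit='0')
    return ET.tostring(disk, encoding='unicode')

print(lun_disk_xml('/dev/sdq', 'sdd'))
```

In a real deployment the `unit` attribute would be incremented per disk so several LUNs can hang off one virtio-scsi controller.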

  13. How to enhance the local disk mounting function?
  2. Use iothread pinning for multiple device mounting

  Iothread setting:
  <iothreads>4</iothreads>
  <iothreadids>
    <iothread id='1'/>
    <iothread id='2'/>
    <iothread id='3'/>
    <iothread id='4'/>
  </iothreadids>

  Virtio-scsi controller pin:
  <controller type='scsi' index='0' model='virtio-scsi'>
    <driver iothread='1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </controller>

  <cputune>
    <shares>32768</shares>
    <iothreadpin iothread='1' cpuset='1'/>
    <iothreadpin iothread='2' cpuset='2'/>
    <iothreadpin iothread='3' cpuset='3'/>
    <iothreadpin iothread='4' cpuset='4'/>
  </cputune>

  Advantages:
  ⚫ An iothread is a thread that handles IO separately, independent of the QEMU main event loop. This reduces lock contention and interference from other emulated devices, letting the thread focus on responding to virtual machine IO events.
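The `<cputune>` fragment above is repetitive enough that it is worth generating; a sketch, assuming iothread i is pinned to host CPU i (a real deployment would pick dedicated cores):

```python
# Sketch: emit the <cputune> iothread-pinning fragment for n iothreads,
# matching the XML shown on this slide.
def iothread_pin_xml(n, shares=32768):
    lines = ['<cputune>', f'  <shares>{shares}</shares>']
    for i in range(1, n + 1):
        lines.append(f"  <iothreadpin iothread='{i}' cpuset='{i}'/>")
    lines.append('</cputune>')
    return '\n'.join(lines)

print(iothread_pin_xml(4))
```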

  14. Local disk mounting in Nova
  The user requests to add SSD/HDD disks → Nova-API sends the disk numbers to the scheduler → Nova-Scheduler, using the reported local disk information, chooses a proper compute node → Nova-Compute updates the instance information and, through the libvirt driver, sets the iothread number, adds the virtio-scsi driver, and sets the controller iothread pin on the guest domain.
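The scheduling step in this flow can be sketched as a simple filter; the data shape and helper name below are hypothetical, not Nova's real scheduler API:

```python
# Sketch of the "choose proper compute nodes" step: pick the first node
# that still has enough unclaimed local disks for the request.
def pick_host(hosts, disks_needed):
    """hosts: list of (host_name, free_local_disks) tuples."""
    for name, free_disks in hosts:
        if free_disks >= disks_needed:
            return name
    return None  # no node can satisfy the request

print(pick_host([('compute1', 1), ('compute2', 3)], 2))  # compute2
```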

  15. Random Write & Read Test (IOPS) and Sequential Write & Read Test (Bandwidth): charts comparing Bare Disk, PCI Passthrough, SCSI LUN + Iothread Pin, and Virtio-blk + Iothread Pin on 4K/512K workloads (chart data omitted).
  ⚫ PCI Passthrough: 512K/4K random and sequential read & write are close to bare-disk performance.
  ⚫ SCSI LUN + Iothread: large-block (512K) performance is good; small-block (4K) sequential read & write performance is not bad.
  ⚫ Virtio-blk + Iothread: random read & write is relatively good; sequential read & write is poor.

  16. PART THREE: Performance Enhancement in CPU/RAM

  17. Elastic expansion (auto-scaling):
  ⚫ Scale up / vertical scaling: when the physical or virtual machine still has abundant free resources, add CPU/RAM to the existing VM.
  ⚫ Scale out / horizontal scaling: when the physical or virtual machine has no free resources, increase the number of virtual or physical machines.
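The choice between the two modes can be sketched as a simple decision function; the field names and the greedy prefer-scale-up policy are illustrative assumptions:

```python
# Sketch of the scaling decision on this slide: scale up while the host
# still has free CPU/RAM for the VM, otherwise scale out by adding a node.
def scaling_action(host_free_vcpus, host_free_ram_mb, add_vcpus, add_ram_mb):
    if host_free_vcpus >= add_vcpus and host_free_ram_mb >= add_ram_mb:
        return 'scale-up'    # live-resize the existing VM
    return 'scale-out'       # add another virtual/physical machine

print(scaling_action(8, 16384, 2, 4096))  # scale-up
print(scaling_action(1, 2048, 2, 4096))   # scale-out
```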

  18. Live vertical scaling of a VM: vertical scaling live-resizes the VM, adding CPU/RAM to it while it keeps running.

  19. Implementation method of vertical scaling

  REST API:
  POST /servers/<id>/action
  {
    "live-resize": {
      "flavorRef": "2"
    }
  }

  python-novaclient:
  nova live-resize <server> <flavor>

  Libvirt domain XML:
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='1'>4</vcpu>
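A client would build the request body above before POSTing it; a sketch (note that "live-resize" is the custom action this talk describes, not a stock Nova server action):

```python
# Sketch: build the JSON body for the custom live-resize server action
# shown on this slide.
import json

def live_resize_body(flavor_ref):
    return json.dumps({"live-resize": {"flavorRef": flavor_ref}})

print(live_resize_body("2"))  # {"live-resize": {"flavorRef": "2"}}
```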

  20. Live resize in Nova
  The user requests to add RAM/CPUs → Nova-API sends the resizable flavor to the conductor → Nova-Conductor gets the instance and flavor from the database, checks the state of the instance, and checks the live-resize constraints → Nova-Compute, through the libvirt driver, sets the vCPU number, adds memory, and updates the instance metadata on the guest domain.
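The "check on live-resize constraints" step can be sketched as follows: the new flavor must not shrink the instance, and it must fit under the hot-plug ceilings declared in the domain XML on slide 19 (maxMemory and the vcpu maximum). The function signature is illustrative:

```python
# Sketch of the live-resize constraint check: only growing is allowed,
# and the target must stay within the domain's hot-plug limits.
def can_live_resize(cur_vcpus, cur_ram_kib, new_vcpus, new_ram_kib,
                    max_vcpus, max_ram_kib):
    if new_vcpus < cur_vcpus or new_ram_kib < cur_ram_kib:
        return False  # shrinking a live instance is not supported
    return new_vcpus <= max_vcpus and new_ram_kib <= max_ram_kib

# 1 vCPU / 1 GiB -> 2 vCPUs / 2 GiB, under the 4 vCPU / 4 GiB ceiling
print(can_live_resize(1, 1048576, 2, 2097152, 4, 4194304))  # True
```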

  21. PART FOUR: Performance Enhancement in Network

  22. How to enhance network performance in OpenStack?
  1. OVS / OVS-DPDK
  Use OVS-DPDK acceleration in Neutron: OVS-DPDK moves network traffic forwarding from kernel space to user space.
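Switching a bridge to the userspace datapath is done in OVS itself; a minimal sketch (the bridge name is illustrative, and the Neutron OVS agent must separately be configured for DPDK):

```shell
ovs-vsctl set Bridge br-int datapath_type=netdev
```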

  23. How to enhance network performance in OpenStack?
  2. SR-IOV
  Use SR-IOV ports in Neutron to give child NICs of the bare-metal NIC to instances: create several VFs from the PF (the physical NIC) and pass them through to VMs with Intel VT-d technology.
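An SR-IOV port is requested through the standard client with a direct vNIC type; a sketch (the network and port names are illustrative):

```shell
openstack port create --network net1 --vnic-type direct sriov-port
```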
