HYPER COOL INFRASTRUCTURE
OpenStack Summit Boston | May 2017
Randy Rubins, Sr. Cloud Consultant, Red Hat
May 10, 2017
WHY HYPER "COOL" INFRA?
- Did not like "CONVERGED"
- Needed to preserve a "C-word"
- Could have used "COMPLEX" or "CRAMMED"
- Ended up with a four-letter word that helped get my presentation accepted
AGENDA
- What is HCI?
- Drivers and Use Cases
- Red Hat Hyperconverged Solutions
- Architectural Considerations
- Implementation Details
- Performance and Scale Considerations
- Futures
- Q & A
HYPER-CONVERGED INFRASTRUCTURE "Hyperconvergence moves away from multiple discrete systems that are packaged together and evolve into software-defined intelligent environments that all run in commodity, off-the-shelf x86 rack servers. Its infrastructures are made up of conforming x86 server systems equipped with direct-attached storage. It includes the ability to plug and play into a data center pool of like systems." - Wikipedia
HYPER-CONVERGED INFRASTRUCTURE "Hyperconvergence delivers simplification and savings by consolidating all required functionality into a single infrastructure stack running on an efficient, elastic pool of x86 resources." and "... hyperconverged infrastructure delivers on the promise of the SDDC at the technological level." - Hyperconverged.org
HYPER-CONVERGED INFRASTRUCTURE "The “hyper” in hyperconvergence comes from the hypervisor, or more generically, virtualization technology. Hyperconvergence means to bring the virtual aspects of infrastructure together with the physical, resulting in a single solution... The servers, storage and virtualization stack are not only bundled together, but are completely integrated and transparent to the administrator.... Hyper(visor) + Convergence = Hyperconvergence." - Scale Computing
HYPER-CONVERGED INFRASTRUCTURE

DRIVERS                          USE CASES
Smaller hardware footprint       D-NFV
Lower cost of entry              vCPE
Standardization                  ROBO
Maximized capacity utilization   Lab/Sandbox
HYPERCONVERGED SOLUTIONS
- Private Cloud
- Traditional Virtualization
- Containerized Cloud Apps
RHV-S (GRAFTON)

[Diagram: three-node RHV-S cluster (grafton-0, grafton-1, grafton-2) running the hosted-engine VM]
REQUIREMENTS/LIMITATIONS: RHV-S

Currently in Beta/LA, subject to change in the GA version. Full details can be found here: http://red.ht/2qKwMKY

1. Valid subscriptions for RHV 4.1 & RHGS 3.2
2. Exactly 3 physical nodes with adequate memory and storage
3. 2 network interfaces (gluster back-end and ovirtmgmt)
4. RAID 10/5/6 supported/recommended
5. 1 hot-spare drive recommended per node
6. RAID cards must use flash-backed write cache
7. 3-4 gluster volumes (engine, vmstore, data, shared_storage geo-replicated volume)
8. 29-40 VMs supported
9. 4 vCPUs / 2 TB max per VM supported
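Once deployed, the volume set from item 7 can be sanity-checked from any node with the stock gluster CLI; a minimal sketch:

# All volumes should be replica 3 across the grafton nodes,
# and both peers should show as connected.
gluster volume list
gluster volume info engine
gluster peer status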
OSP-HCI

[Diagram: undercloud (director) plus three "overcloud-ctrl-mon" controller/MON nodes (0-2) and three "overcloud-comp-osd" compute/OSD nodes (0-2)]
REQUIREMENTS/LIMITATIONS: OSP-HCI

Currently in Tech Preview, soon to reach fully-supported status; GA is being evaluated. Full details can be found here: http://red.ht/2jXvxkB

1. Valid subscriptions for RHOSP 10 & RHCS 2.0
2. (1) OSP undercloud (aka "director"), which can be a VM
3. (3) "OSP controller + Ceph MON" nodes
4. (3+) "OSP compute + Ceph OSD" nodes with adequate memory and storage
5. 10Gbps network interfaces for Ceph storage and OpenStack tenant networks
6. Up to 1 datacenter rack (42 nodes) of "OSP compute + Ceph OSD"
RHV-S + OSP-HCI

[Diagram: the grafton-0/1/2 RHV-S cluster hosting the hosted-engine, cloudforms, ansible-tower, and undercloud VMs, alongside the overcloud-ctrl-mon-0/1/2 and overcloud-comp-osd-0/1/2 overcloud nodes]
HYPER COOL INFRA

[Same topology diagram as the previous slide, now labeled as the combined "Hyper Cool" infrastructure]
IMPLEMENTATION DETAILS: RHV-S

1. Install RHEL 7.3 and RHV 4.1 on (3) grafton nodes
2. Configure public-key-authenticated SSH (see the sketch after this list)
3. Deploy gluster via cockpit plugin / gdeploy
4. Deploy hosted-engine via cockpit plugin
5. Enable gluster functionality on hosted-engine
6. Create networks for gluster storage, provisioning, and the rest of the OSP isolated networks
7. Create master storage domain
8. Add remaining (2) hypervisors to hosted-engine
9. Upload RHEL 7.3 guest image
10. Create RHEL 7.3 template
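A minimal sketch of step 2, run from the first node, assuming the grafton hostnames resolve:

# Generate a passphrase-less root key so gdeploy/cockpit can drive all three nodes.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Push the public key to every node, including the local one.
for node in grafton-0 grafton-1 grafton-2; do
    ssh-copy-id root@"${node}"
done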
IMPLEMENTATION DETAILS: OSP-HCI

Deploy director (undercloud) on RHV-S as a VM created from the RHEL 7.3 template.
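One possible way to create that VM, using the since-deprecated ovirt-shell CLI; the FQDN, template, and cluster names here are assumptions:

# Connect to the hosted-engine API (prompts for the admin password).
ovirt-shell -c -l https://hosted-engine.example.com/ovirt-engine/api -u admin@internal
# Then, inside the shell:
#   add vm --name undercloud --template-name rhel73 --cluster-name Default
#   action vm undercloud start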
IMPLEMENTATION DETAILS: OSP-HCI

Install and configure director via the ansible-undercloud playbook:

...
├── undercloud
│   ├── files
│   │   ├── certs
│   │   │   ├── build_undercloud_cert.sh
│   │   │   ├── cacert.pem
│   │   │   ├── openssl-undercloud.cnf
│   │   │   └── privkey.pem
│   │   ├── stack.sudo
│   │   └── undercloud.pem
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       ├── hosts.j2
│       ├── instackenv.json.j2
│       ├── resolv.conf.j2
│       └── undercloud.conf.j2
└── undercloud.yml
...
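A hypothetical invocation, assuming the inventory rendered from hosts.j2 is written to ./hosts (both names taken from the tree above):

# Drive the undercloud role against the director VM.
ansible-playbook -i hosts undercloud.yml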
IMPLEMENTATION DETAILS: OSP-HCI

Prepare and upload overcloud images:

- name: extract overcloud images
  become_user: stack
  unarchive:
    copy: false
    src: /usr/share/rhosp-director-images/overcloud-full-latest-{{ osp_version }}.tar
    dest: /home/stack/images/

- name: extract ironic python agent images
  become_user: stack
  unarchive:
    copy: false
    src: /usr/share/rhosp-director-images/ironic-python-agent-latest-{{ osp_version }}.tar
    dest: /home/stack/images/

- name: set root password on overcloud image
  shell: export LIBGUESTFS_BACKEND=direct && virt-customize -a /home/stack/images/overcloud-full.qcow2 --root-password password:{{ admin_password }}

- name: upload overcloud images
  become_user: stack
  shell: source ~/stackrc && openstack overcloud image upload --image-path /home/stack/images --update-existing
  ignore_errors: true
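A quick way to confirm the upload succeeded, using the standard OSP 10 CLI:

# The overcloud-full, bm-deploy-kernel, and bm-deploy-ramdisk images should appear.
source ~/stackrc
openstack image list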
IMPLEMENTATION DETAILS: OSP-HCI

Customize tripleo heat templates based on the Reference Architecture doc.
NOTE: Use Github repo https://github.com/RHsyseng/hci

[stack@director ~]$ tree custom-templates/
custom-templates/
├── ceph.yaml
├── certs
│   ├── build_overcloud_cert.sh
│   ├── cacert-oc.pem
│   ├── openssl-oc.cnf
│   ├── overcloud.pem
│   └── privkey-oc.pem
├── compute.yaml
├── custom-roles.yaml
├── enable-tls.yaml
├── first-boot-template.yaml
├── inject-trust-anchor.yaml
├── layout.yaml
├── network.yaml
├── nic-configs
│   ├── compute-nics.yaml
│   └── controller-nics.yaml
├── numa-systemd-osd.sh
├── post-deploy-template.yaml
├── rhel-registration
│   ├── environment-rhel-registration.yaml
│   ├── rhel-registration-resource-registry.yaml
│   ├── rhel-registration.yaml
│   └── scripts
│       ├── rhel-registration
│       └── rhel-unregistration
└── scripts
    ├── configure_fence.sh
    ├── deploy.sh
    ├── ironic-assign.sh
    ├── nova_mem_cpu_calc.py
    ├── nova_mem_cpu_calc_results.txt
    └── wipe-disk.sh
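For orientation, scripts/deploy.sh in that repo wraps an overcloud deploy roughly along these lines; the environment-file list here is abridged and illustrative, not the repo's exact set:

#!/bin/bash
# Sketch of the overcloud deploy using the custom templates above.
source ~/stackrc
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/custom-templates/network.yaml \
  -e ~/custom-templates/ceph.yaml \
  -e ~/custom-templates/compute.yaml \
  -e ~/custom-templates/layout.yaml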
IMPLEMENTATION DETAILS: OSP-HCI

Add resource isolation and tuning to the custom templates.
NOTE: Follow Chapter 7 of the OSP10/RHCS2 Reference Architecture Guide(!)
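As an illustration of what Chapter 7 produces: nova_mem_cpu_calc.py emits values that land in the compute environment file as hieradata overrides. The numbers below are placeholders, not recommendations:

# Append Nova resource-isolation tuning to the compute environment file.
# Derive real values with nova_mem_cpu_calc.py; these are placeholders.
cat >> ~/custom-templates/compute.yaml <<'EOF'
parameter_defaults:
  ExtraConfig:
    nova::compute::reserved_host_memory: 75000  # MB held back for Ceph OSDs
    nova::cpu_allocation_ratio: 8.2             # lowered so OSDs keep whole cores
EOF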
IMPLEMENTATION DETAILS: OSP-HCI

Forced to use a KVM host and a virtual-bmc IPMI-to-libvirt proxy due to the lack of an oVirt/RHV ironic driver.
RFE: https://bugs.launchpad.net/ironic-staging-drivers/+bug/1564841

[Diagram: RHV 4.1 alongside a host running KVM instances with vbmc]
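A minimal virtualbmc setup on that KVM host; the libvirt domain names below are assumptions, while the port and credentials match the instackenv.json entries on the next slide:

# Give each controller VM its own IPMI endpoint, one port per libvirt domain.
port=6230
for dom in hci-ctrl0 hci-ctrl1 hci-ctrl2; do
    vbmc add "${dom}" --port "${port}" --username root --password calvin
    vbmc start "${dom}"
    port=$((port + 1))
done

# Sanity-check one endpoint the same way ironic will reach it.
ipmitool -I lanplus -H 192.168.2.10 -p 6230 -U root -P calvin power status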
IMPLEMENTATION DETAILS: OSP-HCI

Create the instackenv.json file, register the (3) KVM instances and (3) OSP baremetal nodes, and run introspection.

osp-ctrl/ceph-mon (KVM):
{
  "name": "hci-ctrl0",
  "pm_type": "pxe_ipmitool",
  "mac": [
    "52:54:00:b7:c2:7d"
  ],
  "cpu": "1",
  "memory": "4096",
  "disk": "50",
  "arch": "x86_64",
  "pm_user": "root",
  "pm_password": "calvin",
  "pm_addr": "192.168.2.10",
  "pm_port": "6230",
  "capabilities": "node:ctrl0,boot_option:local"
}

osp-comp/ceph-osd (BM):
{
  "name": "hci-comp0",
  "pm_type": "pxe_ipmitool",
  "mac": [
    "84:2b:2b:4a:0c:3f"
  ],
  "cpu": "1",
  "memory": "4096",
  "disk": "50",
  "arch": "x86_64",
  "pm_user": "root",
  "pm_password": "calvin",
  "pm_addr": "192.168.0.104",
  "capabilities": "node:comp0,boot_option:local"
}
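With instackenv.json assembled, registration and introspection follow the stock OSP 10 workflow:

# Register all six nodes with ironic, assign deploy images, and introspect.
source ~/stackrc
openstack baremetal import --json ~/instackenv.json
openstack baremetal configure boot
openstack baremetal introspection bulk start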