Octavian Ciuhandu (Cloudbase Solutions), Peter Pouliot (Microsoft)
+ Built and maintained by a small team of highly skilled engineers.
+ Executes a Tempest run for every upstream commit.
+ Automation and scripts:
  • https://github.com/openstack-hyper-v
  • https://github.com/cloudbase
+ Undercloud of KVM on CentOS providing virtualized Ubuntu DevStack controllers.
+ Two physical Hyper-V nodes per controller, dynamically assembled.
+ All layers automated through native operating system tools, Puppet, or shell scripting.
2012
+ Began recycling hardware originally used for the first Hyper-V/OpenStack cloud deployment from 2011: a combination of HP and Dell servers, blades and storage.
+ Acquired 3 racks located in a corporate lab environment within the Microsoft NERD Center.
+ Acquired dedicated physical network infrastructure, including internet connectivity, allowing for complete isolation of the infrastructure and full control of all network infrastructure dedicated to OpenStack CI efforts.
2013
+ 6 additional racks, located 5 blocks from the original site, acquired for expansion of additional compute capacity.
+ Metropolitan Area Network deployed between the NERD site and a nearby Microsoft colocation facility using previously dark fiber, establishing a 1G connection between the sites.
+ 120 HP 1U nodes recycled from the new facility and added as Hyper-V/KVM compute capacity.
+ OpenStack Hyper-V CI begins operation mid-year.
+ First official votes: "Icehouse".
2014 - 2015
+ Additional server and networking resources acquired, allowing for expansion into the remaining 4 racks and an increase in capacity to 17 racks of server equipment.
+ Network resources were exhausted, preventing use of all equipment.
+ Microsoft purchased new network infrastructure, allowing us to upgrade site-to-site communication to a 10G backplane, add per-rack physical network segmentation, and consume the compute resources.
+ Some newer hardware acquired through recycling, allowing for replacement of some of the older infrastructure.
+ Mid 2015: began acquiring 10G HBA interfaces, allowing for testing of OVS on Windows and 10G to the host.
2016
+ Acquired Quanta high-density compute and storage, as well as more 10/40G network infrastructure.
+ Began rebuild of the CI in parallel with the operation of the existing CI.
+ Network backplane upgraded to 40G, with 10G to the host for CI operations.
+ Undercloud KVM compute operating on all-SSD storage; Hyper-V compute upgraded to single-disk 4TB high-capacity storage.
+ Needed to be able to automate multiple Linux distributions, Windows flavors, hardware models, and supporting applications and processes.
+ Because Windows lacked nested virtualization, physical compute was required.
+ Because of the old hardware, we needed to be able to rebuild supporting components from scratch immediately if hardware failed.
+ Workload platforms: CentOS (physical KVM) + Ubuntu (DevStack VM) + Windows Server 2012 (two physical Hyper-V nodes).
+ Began automating deployment and configuration of Hyper-V using Puppet 2.7+ on Windows Server 2012.
+ Created a basic framework for deploying OpenStack Nova using public Python binaries, with locally served files on Windows, vSwitch and NTP configuration, as well as installation of supporting binaries.
+ After the initial configuration, moved to building the supporting Linux deployment automation.
+ Created a dynamic PXE toolkit by which any Linux distribution, across versions, can be automated using simple ERB templates (illustrated in the sketch below).
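To illustrate the template-driven approach: the toolkit rendered per-distribution PXE boot entries from one generic template plus a small set of parameters. The original used ERB templates; the sketch below is a hypothetical Python stand-in using only the standard library, and all paths and parameter names are invented.

```python
# Hypothetical sketch of template-driven PXE config generation.
# The real toolkit used ERB templates; this Python stand-in only
# illustrates the pattern: one template, many distros/versions.
from string import Template

PXELINUX_TEMPLATE = Template("""\
DEFAULT install
LABEL install
  KERNEL $kernel
  APPEND initrd=$initrd $boot_args
""")

# Example per-distro parameters (illustrative values, not the CI's).
DISTROS = {
    ("centos", "6"): {
        "kernel": "images/centos/6/vmlinuz",
        "initrd": "images/centos/6/initrd.img",
        "boot_args": "ks=http://deploy/ks/centos6.cfg",
    },
    ("ubuntu", "12.04"): {
        "kernel": "images/ubuntu/12.04/linux",
        "initrd": "images/ubuntu/12.04/initrd.gz",
        "boot_args": "auto url=http://deploy/preseed/ubuntu1204.cfg",
    },
}

def render_pxe_config(distro: str, version: str) -> str:
    """Render a pxelinux boot entry for the requested distro/version."""
    return PXELINUX_TEMPLATE.substitute(DISTROS[(distro, version)])

if __name__ == "__main__":
    print(render_pxe_config("centos", "6"))
```

Adding a new distribution or version then only means adding a parameter set, not new automation code.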
+ Automated the IPAM infrastructure to allow bulk loading of MAC address information for hosts via YAML, and to dynamically create network segments, static leases and names for all host network interfaces, and IP subnet ranges (see the sketch after this list).
+ Created a Jenkins infrastructure to allow for automation of operational tasks within the infrastructure, such as switch/PDU manipulation and hardware/device firmware upgrades.
+ Added formatting/sanity/deployment testing of the existing automation, using containers on Linux/Windows, to help with maintaining the Puppet automation.
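As an illustration of the bulk-load step, the hypothetical sketch below reads per-host MAC addresses from YAML and emits static DHCP leases; the YAML schema, the dnsmasq-style output format and the address-assignment policy are all assumptions, not the CI's actual ones.

```python
# Hypothetical sketch of the YAML-to-IPAM bulk load: read per-host MAC
# addresses and emit static DHCP leases. The YAML schema and output
# format here are illustrative, not the CI's actual ones.
import ipaddress
import yaml  # PyYAML

HOSTS_YAML = """
subnet: 10.10.0.0/24
hosts:
  - name: hyperv-01
    mac: "00:15:5d:00:01:01"
  - name: hyperv-02
    mac: "00:15:5d:00:01:02"
"""

def static_leases(doc: str, first_ip: int = 10) -> list[str]:
    """Assign sequential addresses from the subnet and format
    dnsmasq-style static lease lines (mac,ip,hostname)."""
    data = yaml.safe_load(doc)
    net = ipaddress.ip_network(data["subnet"])
    leases = []
    for offset, host in enumerate(data["hosts"], start=first_ip):
        ip = net.network_address + offset
        leases.append(f"{host['mac']},{ip},{host['name']}")
    return leases

if __name__ == "__main__":
    print("\n".join(static_leases(HOSTS_YAML)))
```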
+ 11 CIs: nova, neutron, compute-hyperv, networking-hyperv, cinder (iSCSI, SMB-Windows, SMB-Linux), manila, os-win, ovs
+ We currently maintain a production OpenStack CI facility with 700+ servers.
[Chart: share of CI results as of 27 Jan. 2016 – OpenStack Community 44%, Cloudbase 14%, EMC 9%, VMware 6%, Big Switch 6%, Mellanox 5%, Nuage Networks 5%, Dell, Freescale and Cisco 3-4% each]
http://stackalytics.com/?release=all&project_type=all&metric=ci
Release:    Icehouse  Juno    Kilo    Liberty  Mitaka  Newton  Ocata   Pike
Community:  3381      5446    6049    5701     6296    6988    3200    1892
Cloudbase:  810       1498    1599    2050     3205    3537    1236    1145
Percentage: 23.96%    27.51%  26.43%  35.96%   50.91%  50.62%  38.63%  60.52%
+ Nova
+ Compute-HyperV
+ Neutron
  • Networking-HyperV agent
  • OVS (on Windows) – silent runs to validate the CI
+ Networking-HyperV
+ Manila
+ OS-Win
+ OS-Brick
+ Cinder
  • iSCSI
  • SMB3 Windows
+ Cloudbase-init
+ Oslo projects – unit tests – silent runs to validate the CI
+ Upgraded to:
  • Quanta servers
  • 10 Gbps networking
  • 40 Gbps backplane
  • Dedicated dataplane, completely isolated
+ Redesigned the CI functionality:
  • MAAS for bare-metal deployments
  • Juju for orchestrating the application deployments
+ Separate CI accounts and distributed Zuul
+ Part of the CI runs on the upgraded undercloud-overcloud model, while other parts are direct bare-metal deployments
Juju
+ Deployed in HA mode using 3 VMs
+ Separate users for different teams managing CIs
+ Separate models for different infrastructure services and CIs
MAAS
+ Deployed in HA mode using 2 bare-metal servers
+ HAProxy with a floating IP, used for both the MAAS services and the PostgreSQL backend
+ 3 KVM nodes to provide VMs for services that do not require a lot of resources
Jumpbox
+ VM with separate users; each user has access granted only to the Juju model they require
User admin
+ controller
+ default
+ infra
+ undercloud
User ovs
+ neutron-ovs
+ ovs
User ci-deployer
+ zuul
+ ci-deployer
User cloudbaseinit
+ cloudbaseinit
+ cloudbaseinit-dev
+ Extended the existing Zuul charm from the charm store
+ One Zuul instance as the Gearman server:
  • deployed on bare-metal
+ Two Zuul instances as Zuul mergers, in LXD containers
+ One Zuul server instance per CI, in LXD containers
+ https://github.com/cloudbase/zuul-charm
Charms used to deploy:
+ Jenkins
+ Logs server
+ Ubuntu repository cache
+ PyPI cache using devpi, in HA mode on 2 nodes
+ Windows Server 2016 instances
+ Jenkins slaves for each Windows Server 2016 instance
+ Single OpenStack controller
+ Controller services deployed in LXD containers
+ 25 KVM compute nodes
+ Updated the existing charms to support linux-bridge with flat networking for the undercloud
+ MySQL DB for managing the VLAN range allocation for all overcloud tests (sketched below)
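A minimal sketch of the VLAN-allocation pattern, assuming a simple pool table of ranges: each overcloud test run atomically claims a free range and releases it when done. The CI used MySQL; sqlite3 is used here only so the example is self-contained, and all table and column names are invented.

```python
# Hypothetical sketch of VLAN-range reservation for overcloud test runs.
# The CI used a MySQL database; sqlite3 keeps this example runnable
# without external services. Schema and names are illustrative.
import sqlite3

def init_pool(db: sqlite3.Connection, ranges: list[tuple[int, int]]) -> None:
    db.execute("""CREATE TABLE IF NOT EXISTS vlan_ranges (
                      vlan_start INTEGER, vlan_end INTEGER,
                      job_id TEXT DEFAULT NULL)""")
    db.executemany(
        "INSERT INTO vlan_ranges (vlan_start, vlan_end) VALUES (?, ?)",
        ranges)

def reserve_range(db: sqlite3.Connection, job_id: str) -> tuple[int, int]:
    """Atomically claim a free VLAN range for a test run."""
    with db:  # one transaction: claim a free row, then read it back
        cur = db.execute(
            """UPDATE vlan_ranges SET job_id = ?
               WHERE rowid = (SELECT rowid FROM vlan_ranges
                              WHERE job_id IS NULL LIMIT 1)""", (job_id,))
        if cur.rowcount == 0:
            raise RuntimeError("no free VLAN ranges")
        return db.execute("SELECT vlan_start, vlan_end FROM vlan_ranges "
                          "WHERE job_id = ?", (job_id,)).fetchone()

def release_range(db: sqlite3.Connection, job_id: str) -> None:
    db.execute("UPDATE vlan_ranges SET job_id = NULL WHERE job_id = ?",
               (job_id,))

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    init_pool(db, [(100, 109), (110, 119)])
    print(reserve_range(db, "nova-check-1234"))  # e.g. (100, 109)
```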
+ Active for most CIs' GitHub repo changes
+ Apt and PyPI caches are used
+ Workflow:
  • VLAN range is reserved
  • A VM is launched for the DevStack controller
  • 2 bare-metal Hyper-V nodes are allocated
  • Cinder also uses a third Windows node for the cinder-volume service
  • Parallel build for all 3 nodes
+ Hyper-V nodes are re-used (see the cleanup sketch below):
  • The Python installation is fully removed
  • All builds, logs, VMs and other possible artifacts are cleaned up
  • OpenStack Windows services are set to manual, and created only if not already present
+ The image is periodically updated with the latest …
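The node re-use steps above amount to deleting per-run state so the next run starts clean. Below is a minimal, hypothetical sketch of that cleanup; all paths are invented placeholders, not the CI's actual layout.

```python
# Hypothetical sketch of the Hyper-V node cleanup between runs: remove
# the Python installation and any build/log/VM artifacts. Paths are
# invented placeholders, not the CI's real layout.
import shutil
from pathlib import Path

CLEANUP_PATHS = [
    Path(r"C:\Python27"),             # Python installation, fully removed
    Path(r"C:\OpenStack\build"),      # build artifacts
    Path(r"C:\OpenStack\logs"),       # logs from the previous run
    Path(r"C:\OpenStack\Instances"),  # leftover VM disks
]

def clean_node() -> None:
    """Delete per-run artifacts; missing paths are simply ignored."""
    for path in CLEANUP_PATHS:
        shutil.rmtree(path, ignore_errors=True)

# Setting the OpenStack Windows services to manual start would be done
# separately (e.g. via "sc config <service> start= demand"); the service
# names are CI-specific, so that step is omitted here.

if __name__ == "__main__":
    clean_node()
```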
+ Full bare-metal deployments for each run
+ Active Directory node re-used
+ No other nodes are re-used
+ Nodepool – ensures that a minimum number of nodes have the OS already deployed and are ready to be consumed by the charms deployment workflow
+ CI-Deployer – adds resiliency and node-removal features to the charms
+ Custom charms for maximum flexibility:
  • Hyper-V CI charm
  • Devstack CI charm
+ Polling the GitHub repo; will implement a GitHub hook for job triggering (a polling sketch follows below)
+ Contains 2 steps:
  • Unit testing
    Builds the project and the installer
    Runs the unit tests
    Uploads the unit test logs
    If successful, uploads the installer and triggers the integration tests
  • Integration tests
    Deploys a controller and 2 Windows Server 2016 compute nodes using the OVS installer
    Runs the full Tempest suite
    Uploads the integration test logs
+ Currently the unit tests report upstream; the integration tests are run silently for CI reliability testing
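A hypothetical sketch of the polling trigger: compare the repository's latest head commit against the last one seen and fire the two-step job on change. The repository name and interval are illustrative, not the CI's configuration.

```python
# Hypothetical sketch of the commit-polling trigger: watch a GitHub
# repo's head commit and report when a new CI run (unit tests, then
# integration tests) should start. Repo and interval are illustrative.
import json
import time
import urllib.request

API = "https://api.github.com/repos/{repo}/commits?per_page=1"

def latest_sha(repo: str) -> str:
    with urllib.request.urlopen(API.format(repo=repo)) as resp:
        return json.load(resp)[0]["sha"]

def poll(repo: str, interval: int = 60):
    """Yield each newly observed head commit SHA.
    Unauthenticated GitHub API calls are rate-limited, hence the
    coarse polling interval."""
    seen = None
    while True:
        sha = latest_sha(repo)
        if sha != seen:
            seen = sha
            yield sha
        time.sleep(interval)

if __name__ == "__main__":
    for sha in poll("cloudbase/cloudbase-init"):
        # Step 1: build + unit tests; step 2 (integration tests) is
        # triggered only if step 1 succeeds, as described above.
        print("would trigger CI for commit", sha)
```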
+ Zabbix:
  • OS level
  • OpenStack services
  • Hyper-V nodes
  • Networking
+ RackTables for CI hardware inventory and status, integrated with Zabbix
+ In the process of being deployed using charms, in a dedicated model
+ Support tools:
  • https://github.com/cloudbase/ci-deployer
+ Charms:
  • https://github.com/cloudbase/zuul-charm
  • https://github.com/cloudbase/active-directory-charm
  • https://github.com/cloudbase/devstack-charm
  • https://github.com/cloudbase/hyperv-charm
+ CI code:
  • https://github.com/cloudbase/?utf8=%E2%9C%93&q=-ci
  • https://github.com/cloudbase/common-ci/pull/11/files
www.cloudbase.it