OpenStack on Kubernetes: Make OpenStack and Kubernetes Fail-Safe


  1. OpenStack on Kubernetes: Make OpenStack and Kubernetes Fail-Safe
Seungkyu Ahn (ahnsk@sk.com), Jaesuk Ahn (jay.ahn@sk.com), Open System Lab, Network IT Convergence R&D Center, SK Telecom
Wil Reichert (wil@solinea.com), Solinea

  2. What will happen • Introduction • Kubernetes/OpenStack • Demo Starts • CI • Demo Ends

  3. Introducing Our Company
SKT
• No. 1 mobile service provider in Korea with 50% market share
• We have been at the forefront of developing and commercializing new wireless technologies (recently, 4G LTE 5-band CA with max 700 Mbps)
• We are exploring more than the network, especially around AI and media
• We actively participate in open source projects: OCP, TIP, ONOS, Ceph, OpenStack, etc.
Solinea
• Professional services partner that accelerates enterprise cloud adoption
• Technology agnostic, always working in the best interest of our clients
• Our clients are primarily Global Fortune 1000 organizations in multiple industry verticals

  4. This is Totally Community Effort
• Wil Reichert: CI/CD & K8s, Containers!
• Seungkyu Ahn: OpenStack & K8s
• Jawon Choo: OpenStack & Kolla, Large Contributing
• Jaesuk Ahn: OpenStack & K8s, OpenStack Operator WG, Cloud Native!
• Dan Kim: OpenStack & K8s
• Robert Choi: OpenStack & Automation

  5. Current (previous) Way
[Diagram: the OpenStack lifecycle across three phases]
• Development: requirements, SPEC + deployment architecture, DEV, TEST, upstream patch, community code
• Deployment: hardware/appliance purchase, OpenStack package deployment + configuration, QA, deployment automation
• Operation (production OpenStack): triage, monitoring, troubleshooting (network, storage, appliance, configuration), analysis, integration, tuning, upgrade, capacity mgmt., flexible configuration management, scale-out

  6. Previous Product Pain-Points
• Even an update (patch) is challenging
• Upgrade: gosh, what can I say
• Deployment issues: snowflake environments vs. cattle
• Not one single huge-scale OpenStack, but many small/medium OpenStacks
• Lack of flexible configuration management capability in a “standardized manner”
• Very difficult to integrate with our own components (Ceph, SDN controller, datacenter operation platform, etc.)

  7. More to Do
[Diagram: the same lifecycle as slide 5, with an additional TEST stage highlighted across development, deployment, and operation]

  8. Continuous Loop
[Diagram: the same lifecycle as slide 5, redrawn as a continuous loop between development, deployment, and production operation]

  9. Why OpenStack on Kubernetes?
A better way to deliver OpenStack and manage its lifecycle:
• Reducing overhead: dependency management
• “Easy and fast” multiple deployments in a “standardized” way
• Upgrade/update/rollback
• Easy scaling and healing
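Helm is what makes the upgrade/update/rollback bullet concrete: every deployed chart is a versioned release. A minimal sketch of the lifecycle operations (the release name, chart path, and image value are illustrative, not from the deck):

# install a release of an OpenStack chart (Helm 2 syntax, matching the Tiller setup shown later)
helm install local/keystone --name keystone --namespace openstack
# roll out a new image or configuration as revision 2
helm upgrade keystone local/keystone --set images.api=kolla/ubuntu-source-keystone:4.0.1
# something broke? return to the previous revision
helm rollback keystone 1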

  10. Key Technologies
• Kubernetes (control plane orchestration)
• Helm (application lifecycle management automation)
• CI/CD pipeline leveraging Jenkins
• OpenStack-Helm (managing OpenStack on Kubernetes)
• Kolla (containerizing OpenStack)
• ONOS/SONA (OpenStack network management)
• Ceph (storage)
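A sketch of how two of these pieces meet (image set and chart name are illustrative): Kolla builds the per-service container images, and Helm deploys the OpenStack-Helm charts that reference them.

# build source-based OpenStack service images with Kolla
kolla-build --base ubuntu --type source keystone glance nova
# deploy one of the OpenStack-Helm charts against the cluster
helm install local/glance --name glance --namespace openstack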

  11. Our Plan
• First production within 2017 (IT infrastructure)
• Production-ready by the end of 2017
• Expanding to more deployments (Media, NFV) in 2018, and putting more apps on this “streamline”

  12. Overall Architecture
[Architecture diagram; the components are broken out on the following slides]

  13. Today’s Demo System
[Diagram: a Jenkins master and Jenkins slave drive the pipeline; the Helm CLI and kubectl talk to Tiller on the Kubernetes master; charts come from git and a Helm repo; workloads land on the Kubernetes nodes]
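The deck shows the components but not the pipeline script itself; a minimal sketch of what the Jenkins job could run per stage (the chart name and Helm repo path are assumptions):

# build stage: rebuild the service image with Kolla
kolla-build --base ubuntu --type source keystone
# publish stage: package the chart and drop it into the Helm repo
helm package keystone/
mv keystone-*.tgz /var/helm-repo/     # assumed repo location
# deploy stage: upgrade the running release to the new build
helm upgrade keystone local/keystone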

  14. What is the HA target?
Kubernetes master:
• etcd
• API server (load balance)
• Scheduler (leader election)
• Controller manager (leader election)
OpenStack controller (Keystone, Glance, Nova, Cinder, Neutron):
• API server (load balance)
• Scheduler (Nova, Cinder)
• MariaDB
• RabbitMQ
• Neutron network node (SONA)
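For the two components marked “leader election”, only one replica is active at a time. In Kubernetes of this era the lock is an annotation on an Endpoints object in kube-system, so checking who currently leads looks like this (a sketch, output abridged):

kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
# "holderIdentity":"kube-master02" — stop that master and one of the
# surviving replicas takes the lock over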

  15. Kubernetes 3-Masters
[Diagram: kube-master01/02/03 each run kubelet, flanneld, kube-proxy, etcd, apiserver, scheduler, and controller manager; worker nodes node00 and node01 run kubelet, flanneld, and kube-proxy; Ceph backs the cluster storage]
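A quick way to confirm the control plane pieces on a master are up (a sketch; the output shown is illustrative):

kubectl get componentstatuses
# NAME                 STATUS    MESSAGE
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   {"health": "true"}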

  16. Kubelet
[Diagram: kubelet and flanneld run directly on each master host]
KUBELET_OPTS="--kubeconfig=/etc/kubernetes/kubelet.conf \
  --require-kubeconfig=true \
  --hostname-override=kube-master01 \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged=true \
  --v=0 \
  --register-schedulable=false \
  --cluster-dns=10.96.0.10 \
  --cluster-domain=cluster.local"
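The key flag here is --pod-manifest-path: the control plane components on each master run as static pods that the kubelet launches from manifests in /etc/kubernetes/manifests, so they restart automatically if they die (--register-schedulable=false keeps ordinary workloads off the masters). A skeleton of such a manifest, reduced to the essentials (image tag is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true          # bind directly on the master's interfaces
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.1   # illustrative tag
    command: ["/usr/local/bin/kube-apiserver", "--etcd-servers=http://127.0.0.1:2379"]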

  17. etcd
[Diagram: etcd runs alongside the kubelet and flanneld on each master]
etcd yaml:
--name kube-master01
--initial-advertise-peer-urls http://192.168.30.13:2380
--listen-peer-urls http://192.168.30.13:2380
--advertise-client-urls http://192.168.30.13:4001
--listen-client-urls http://192.168.30.13:2379,http://127.0.0.1:2379,http://192.168.30.13:4001,http://127.0.0.1:4001
--data-dir /var/etcd/data
--initial-cluster-token 5d3903915c2cda30174970d784075f0a
--initial-cluster kube-master01=http://192.168.30.13:2380,kube-master02=http://192.168.30.14:2380,kube-master03=http://192.168.30.15:2380
--initial-cluster-state new
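Once all three members are up, quorum can be verified from any master against the client ports above (a sketch; etcdctl here speaks the v2 API, output abridged):

etcdctl --endpoints http://127.0.0.1:2379 cluster-health
# member ... is healthy: got healthy result from http://192.168.30.13:4001
# member ... is healthy: got healthy result from http://192.168.30.14:4001
# member ... is healthy: got healthy result from http://192.168.30.15:4001
# cluster is healthy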

  18. kube-apiserver
[Diagram: the apiserver static pod joins kubelet, etcd, and flanneld on each master]
kube-apiserver yaml:
- /usr/local/bin/kube-apiserver
--etcd-servers=http://127.0.0.1:2379
--storage-backend=etcd3
--insecure-bind-address=127.0.0.1
--insecure-port=8080
--secure-port=6443
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/16
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/kube-token
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--allow-privileged
--anonymous-auth=false
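The HA slide called the API server out as load balanced: each apiserver binds securely on 6443 (the insecure 8080 port is localhost-only, for the co-located scheduler and controller manager), so something in front must spread external clients across the three masters. A minimal haproxy sketch (the frontend/backend names are assumptions, not from the deck):

frontend kube-api
    bind *:6443
    mode tcp
    default_backend kube-masters
backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check          # drop a master from rotation when its apiserver dies
    server kube-master01 192.168.30.13:6443 check
    server kube-master02 192.168.30.14:6443 check
    server kube-master03 192.168.30.15:6443 check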

  19. kube-controller-manager
[Diagram: the controller manager static pod joins the rest of the control plane on each master]
kube-controller-manager yaml:
- kube-controller-manager
--master=127.0.0.1:8080
--cluster-cidr=172.16.0.0/16
--cluster-name=kubernetes
--allocate-node-cidrs=true
--service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--root-ca-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem
--v=0
--leader-elect=true

  20. kube-scheduler
[Diagram: the scheduler static pod completes the control plane set on each master]
kube-scheduler yaml:
- /usr/local/bin/kube-scheduler
--master=127.0.0.1:8080
--v=0
--leader-elect=true

  21. kube-proxy
[Diagram: kube-proxy runs on every node alongside the control plane pods]
kube-proxy yaml:
securityContext:
  privileged: true
command:
- /bin/sh
- -c
- /usr/local/bin/kube-proxy --kubeconfig=/run/kubeconfig --cluster-cidr=10.96.0.0/16 --v=0

  22. OpenStack Controller & Compute
[Diagram: Kubernetes worker nodes labeled “controller” run RabbitMQ, MariaDB, Keystone, Glance, Cinder, Nova, and Neutron; worker nodes labeled “compute” run Nova, Neutron, and the VMs]
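Placement is driven by ordinary Kubernetes node labels; a sketch of how pods could be pinned (the label key and node names are illustrative, not from the deck):

kubectl label node worker01 openstack-role=controller
kubectl label node worker02 openstack-role=compute

# in the pod template of a controller-plane chart
nodeSelector:
  openstack-role: controller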

  23. OpenStack Controller & Compute
[Diagram: the same node layout as slide 22, with the nova-api service scaled out to three OpenStack processes (process 1–3) on the controller-labeled worker nodes, while Nova and Neutron serve the VMs on the compute-labeled nodes]

  24. OpenStack Controller & Compute
[Diagram: the nova-api processes replicated across the controller-labeled worker nodes, so the API keeps answering if any single process or node dies]
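In Kubernetes terms that fan-out is just a replica count on the nova-api workload; a reduced sketch (names and image are illustrative, and the label key matches the earlier sketch):

apiVersion: apps/v1beta1        # the apps API of this Kubernetes era
kind: Deployment
metadata:
  name: nova-api
spec:
  replicas: 3                   # the three processes in the diagram
  template:
    metadata:
      labels:
        application: nova-api
    spec:
      nodeSelector:
        openstack-role: controller
      containers:
      - name: nova-api
        image: kolla/ubuntu-source-nova-api:4.0.1   # illustrative Kolla image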

  25. Database clustering (3 node)
jobs/mariadb-seed
po/mariadb-0
po/mariadb-1
po/mariadb-2
When the third MariaDB pod starts, it comes up as a Galera joiner:
--wsrep_cluster_address=gcomm://172.16.56.7,172.16.75.5,172.16.8.15
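The ordered pod names (mariadb-0/1/2) indicate a StatefulSet-style workload, so each Galera member keeps a stable identity. Whether all three have joined can be checked from any member (a sketch; namespace and credentials are assumptions):

kubectl exec mariadb-0 -- mysql -u root -p"$PASSWORD" \
  -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# wsrep_cluster_size    3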

  26. Neutron network (1 NIC)
[Diagram of the single-NIC layout: eth0 (192.168.30.33/.34) is enslaved to a Linux bridge br-data; a veth pair (veth0/veth1) is created, veth1 joins br-data, and veth0 is added as a port of the OVS bridge br-ex; br-ex and br-int are joined by the phy-br-ex/int-br-ex patch pair, br-int and br-tun by patch-tun/patch-int; br-tun carries the vxlan-xxx ports, with local IP, remote IP, and VNI set by flows; on the network node the qrouter-xxx namespace holds qg-xxx/qr-xxx and qdhcp-xxx holds tapxxx; on the compute node each VM attaches through tapxxx, qbrxxx, and the qvbxxx/qvoxxx pair into br-int]
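Host-side, the single-NIC plumbing in the diagram boils down to a few commands (a sketch; interface names follow the diagram):

# put the physical NIC into a Linux bridge and hang a veth pair off it
brctl addbr br-data
brctl addif br-data eth0
ip link add veth0 type veth peer name veth1
brctl addif br-data veth1
ip link set veth0 up && ip link set veth1 up
# hand the other end of the pair to Open vSwitch as the external bridge
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex veth0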

  27. OpenStack-Helm Neutron chart
…
network:
  interface:
    external: veth0
    default: br-data
  ml2:
    agent:
      tunnel_types: vxlan
    type_drivers:
    - flat
    - vxlan
  ovs:
    auto_bridge_add: null
    bridge_mappings: null
  neutron:
    default:
      l3_ha_network_type: vxlan
      debug: 'True'
…
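These values would then be applied when the chart is installed or upgraded (release and file names are illustrative):

helm install local/neutron --name neutron --namespace openstack \
  --values neutron-overrides.yaml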
