

  1. Docker Orchestration: Beyond the Basics Aaron Lehmann Software Engineer, Docker

  2. About me • Software engineer at Docker • Maintainer on SwarmKit and Docker Engine open source projects • Focusing on distributed state, task scheduling, and rolling updates

  3. Swarm mode

  4. Swarm mode is Docker’s built-in orchestration • Docker can orchestrate containers over multiple machines without extra software • Example: running instances of a web service across several machines

  5. Getting started with swarm mode
     • Initialize a new swarm:
       mgr-1$ docker swarm init
     • Join an existing swarm:
       worker-1$ docker swarm join --token <token> 192.168.65.2:2377

  6. Swarm mode: Services
     • Swarm mode deals with services, not individual containers
     • Each service creates one or more replica tasks, which are run as containers
     • On a manager, create a new service for a search microservice application:
       mgr-1$ docker service create -p 8080:8080 --name search \
                --replicas 4 searchsvc:v1.0
       mgr-1$ docker service ls
       ID            NAME    REPLICAS  IMAGE           COMMAND
       2xtw9qipmbe9  search  4/4       searchsvc:v1.0
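     A quick sketch of two common follow-up commands, assuming the search service created above (not shown on the original slide):
       mgr-1$ docker service scale search=6   # change the number of replica tasks
       mgr-1$ docker service ps search        # list the service's tasks and the nodes they run on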

  7. Swarm mode: Nodes
     • Worker nodes just run service tasks
     • Manager nodes manage the swarm
       mgr-1$ docker node ls
       ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
       drwxwi4h2fb0tcrwgmpmma2x0 * mgr-1     Ready   Active        Leader
       1mhtdwhvsgr3c26xxbnzdc3yp   mgr-2     Ready   Active        Reachable
       516pacagkqp2xc3fk9t1dhjor   mgr-3     Ready   Active        Reachable
       9j68exjopxe7wfl6yuxml7a7j   worker-1  Ready   Active
       03g1y59jwfg7cf99w4lt0f662   worker-2  Ready   Active
       dxn1zf6l61qsb1josjja83ngz   worker-3  Ready   Active
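     Roles are not fixed at join time; as a sketch (node names taken from the listing above), a manager can promote or demote nodes later:
       mgr-1$ docker node promote worker-1   # turn a worker into a manager
       mgr-1$ docker node demote mgr-3       # turn a manager back into a worker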

  8. Swarm mode topology [Diagram: three manager nodes and six worker nodes; the workers run search and billing service containers]

  9. Swarm mode topology [Same diagram repeated]

  10. Swarm mode topology [Same diagram with one fewer worker node]

  11. High availability

  12. High availability • Survive failures of some portion of workers and managers • If a worker fails, its assigned tasks are rescheduled elsewhere
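     One way to observe this rescheduling, assuming the search service created earlier, is to list the service's task history from a manager; replacement tasks appear alongside the shut-down ones, with the node each landed on:
       mgr-1$ docker service ps search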

  13. High availability • What about manager failures? • Managers are part of a Raft cluster that replicates the state of the swarm

  14. Raft • Raft is a protocol for maintaining a strongly consistent distributed log • A way to avoid a single point of failure

  15. Raft concepts • Quorum: A majority of managers • Leader: Randomly chosen manager that can add information to the distributed log • Election: The process of choosing a new leader

  16. High availability • The leader is the manager that: • Makes the scheduling decisions • Keeps track of node health • Handles API calls

  17. High availability • If the leader fails, another manager is elected in its place • For Raft to function, more than half the managers (a quorum) must be reachable

  18. How many managers for a swarm? • A single manager is fine in some scenarios • Any swarm meant to survive a manager failure should have 3 or 5 managers • No scaling benefit to adding additional managers • Each one replicates a full copy of the swarm's state

  19. Manager fault tolerance (slides 19–25 build up this table incrementally; shown here in full)

       Number of managers   Majority   Tolerated failures
       1                    1          0
       2                    2          0
       3                    2          1
       4                    3          1
       5                    3          2
       6                    4          2
       7                    4          3
       8                    5          3
       9                    5          4
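     The table is just Raft's quorum arithmetic written out; as a sketch, for $N$ managers:

       $\text{majority} = \lfloor N/2 \rfloor + 1, \qquad \text{tolerated failures} = N - \lfloor N/2 \rfloor - 1$

     which is why an even number of managers tolerates no more failures than the odd number just below it.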

  26. Where to deploy the managers • Managers must have static IP addresses • Managers should have very reliable connectivity to each other • Swarms that span a big geographic area aren't recommended • Looking at federation as an eventual solution for multi-region • Spreading managers across a cloud provider's "availability zones" in one region may make sense

  27. Advertised IP addresses • All managers must be reachable by all other managers • Managers need to know their own IP addresses so they can tell other managers how to reach them • The address is autodetected if there is only one network device, or when joining an existing swarm

  28. Advertised IP addresses • If the address can't be autodetected, provide --advertise-addr when running docker swarm init • Many swarm instability issues are actually caused by managers not being able to communicate

  29. What to do if quorum is lost • Suppose two out of three managers fail • The swarm won't be able to schedule tasks or perform administrative functions • You will see timeouts from commands like docker node ls if this happens

  30. What to do if quorum is lost • What if these managers are gone forever? • docker swarm init --force-new-cluster on the surviving manager recovers from this state • This modifies the swarm so that it only has a single manager • From that point, new managers can be added
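     A sketch of the recovery sequence on the surviving manager:
       mgr-1$ docker swarm init --force-new-cluster   # rebuild a single-manager swarm from the local state
       mgr-1$ docker swarm join-token manager         # then print a token so replacement managers can be added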

  31. Protecting managers from accidental overloading • By default, managers will be assigned tasks just like workers • This makes sense on a laptop-scale deployment • Best practice for serious deployments: avoid running container workloads on managers

  32. Protecting managers from accidental overloading
     • Drain the managers to prevent them from running service tasks:
       mgr-1$ docker node update --availability=drain <manager id>
     • Alternatively, set the node.role == worker constraint on all services
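     A sketch of the constraint alternative, applied to the search service from the earlier examples (it can also be passed as --constraint at creation time):
       mgr-1$ docker service update --constraint-add 'node.role == worker' search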

  33. Rolling updates
     • Important to avoid downtime during updates
     • docker service update is a rolling update by default
     • Parameters:
       • Update delay (--update-delay)
       • Update failure action: pause or continue (--update-failure-action)
       • Parallelism (--update-parallelism)
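     Putting the parameters together; a sketch that rolls the earlier search service to an assumed v1.1 image tag, two tasks at a time, pausing if the update fails:
       mgr-1$ docker service update \
                --image searchsvc:v1.1 \
                --update-parallelism 2 \
                --update-delay 10s \
                --update-failure-action pause \
                search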

  34. Rolling updates [Timeline diagram: tasks are updated in batches of --update-parallelism; for each task the old container is stopped, the new one is prepared and started, health checks run, and the update delay elapses before the next batch begins]

  35. Security

  36. Security model • All swarm connections are encrypted and authenticated with mutual TLS • Each node is identified by its certificate (CN = node ID) • The certificate authorizes the node to act as either a worker or manager (OU = swarm-manager or OU = swarm-worker) • By default, each manager operates as a certificate authority with the same CA key
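     As a hedged aside for Docker releases newer than the one demonstrated here, the swarm root CA can be inspected and rotated from a manager:
       mgr-1$ docker swarm ca            # print the current root CA certificate
       mgr-1$ docker swarm ca --rotate   # generate a new root CA and reissue node certificates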

  37. Security around adding nodes • How does a new node authenticate itself before having a certificate? • It presents a join token which is provided to docker swarm join

  38. Security around adding nodes • The join token contains a secret that authorizes the new node to receive either a worker or manager certificate • It also contains a digest of the root CA certificate, for protection against man-in-the-middle attacks • The node does not use or store the join token after joining

  39. Node joining example: adding a new worker
     • On a manager, retrieve the join token:
       mgr-1$ docker swarm join-token worker
       To add a worker to this swarm, run the following command:
           docker swarm join \
           --token SWMTKN-1-5f7umqonkff6je2l1kqpxdsok3bwipn73hlr5dxtvx4lusy809-5yn6jy5zqqq3tnummvq365y7m \
           172.17.0.2:2377

  40. Node joining example: adding a new worker
     • Run the command on the new worker:
       worker-1$ docker swarm join \
           --token SWMTKN-1-5f7umqonkff6je2l1kqpxdsok3bwipn73hlr5dxtvx4lusy809-5yn6jy5zqqq3tnummvq365y7m \
           172.17.0.2:2377
       This node joined a swarm as a worker.

  41. Node joining flow [Diagram: the joining node sends the join token and a certificate request to a manager over TLS with no client certificate and receives back a signed certificate; subsequent node registration and task assignments use mutually authenticated TLS]

  42. Rotating join tokens • The join tokens remain valid until they are rotated • It is good practice to periodically rotate them • docker swarm join-token --rotate worker generates a new worker token to replace the old one • docker swarm join-token --rotate manager generates a new manager token to replace the old one
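     A small sketch; rotation does not affect nodes that have already joined, since they never store the token:
       mgr-1$ docker swarm join-token --rotate worker   # invalidate the old worker token and print a new join command
       mgr-1$ docker swarm join-token -q worker         # print just the current worker token (useful in scripts)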
