Tales Of The Kubernetes Ingress Networking: Deployment Patterns For External Load Balancers (PowerPoint PPT Presentation)



SLIDE 1

Tales Of The Kubernetes Ingress Networking: Deployment Patterns For External Load Balancers

1
SLIDE 2

How To Access The Slides?

Slides (HTML): https://containous.github.io/slides/webinar-cncf-jan-2020
Slides (PDF): https://containous.github.io/slides/webinar-cncf-jan-2020/slides.pdf
Source on GitHub: https://github.com/containous/slides/tree/webinar-cncf-jan-2020

2
SLIDE 3

How To Use The Slides?

Browse the slides: use the arrows
Change chapter: left/right arrows
Next or previous slide: up/down arrows
Overview of the slides: keyboard shortcut "o"
Speaker mode (and notes): keyboard shortcut "s"

3
SLIDE 4

Whoami

Manuel Zapf: Head of Product Open Source @ Containous (@mZapfDE / SantoDE)

4
SLIDE 5

Containous

We believe in Open Source
We deliver Traefik, Traefik Enterprise Edition, and Maesh
Commercial support
30 people, distributed, 90% tech
https://containo.us

5
SLIDE 6

Once Upon A Time

There was a Kubernetes cluster.

6 . 1
SLIDE 7

This Cluster Had Nodes And Pods

[Diagram: three nodes hosting four pods]

6 . 2
SLIDE 8

But Pods Had Private IPs

How to route traffic to these pods? And between pods on different nodes?

[Diagram: three nodes with pods; routing between them is an open question]

6 . 3
SLIDE 9

Services Came To The Rescue

Their goal: expose Pods to allow incoming traffic

[Diagram: a Service exposing pods spread across three nodes]

6 . 4
SLIDE 10

Services Are Load-Balancers

Services have 1-N Endpoints
Endpoints are determined by the Kubernetes API

[Diagram: a Service load-balancing across two Endpoints, each backed by a Pod]

One exception: Services of type ExternalName

6 . 5
SLIDE 11

Different Kinds Of Services

For different communication use cases:

  • From inside to inside: type "ClusterIP" (default).
  • From outside to inside: types "NodePort" and "LoadBalancer".
6 . 6
SLIDE 12

Services: ClusterIP

Virtual IP, private to the cluster, cluster-wide (e.g. works from any node to any other node)

[Diagram: a ClusterIP Service reachable from pods on any of the three nodes]
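A minimal sketch of such a ClusterIP Service (app name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical
spec:
  type: ClusterIP       # the default; may be omitted
  selector:
    app: my-app         # matches the pods' labels
  ports:
    - port: 80          # the cluster-wide virtual IP listens here
      targetPort: 8080  # traffic is forwarded to this container port
```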

6 . 7
SLIDE 13

Services: NodePort

Uses public IPs and ports of the nodes; a kind of "routing grid"

[Diagram: a client reaching a NodePort Service through port 30500, opened on every node]
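The NodePort variant, matching the port 30500 shown on this slide (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30500   # opened on every node (default range 30000-32767)
```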

6 . 8
SLIDE 14

Services: LoadBalancer

Same as NodePort, except it requires (and uses) an external load balancer.

[Diagram: a client reaching pods through an external load balancer placed in front of the nodes]
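The LoadBalancer type only changes one field; the platform (cloud provider, or an implementation such as MetalLB) provisions the external LB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical
spec:
  type: LoadBalancer    # asks the platform for an external load balancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```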

6 . 9
SLIDE 15

Services Are Not Enough

Context: exposing a bunch of applications externally
Challenge: allocation overhead for LBs. For each application:
  • One LB resource (either a machine or a dedicated appliance)
  • At least one public IP
  • DNS nightmare (think about the CNAMEs to create… )
  • No centralization of certificates, logs, etc.

6 . 10
SLIDE 16

And Then Came The Ingress

Example with Traefik as Ingress Controller:
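The slide's original example did not survive the export; a comparable Ingress (written against the modern networking.k8s.io/v1 API, with a hypothetical host and Service name) could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: traefik  # route through the Traefik controller
spec:
  rules:
    - host: app.example.com               # hypothetical virtual host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app              # the ClusterIP Service to target
                port:
                  number: 80
```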

6 . 11
SLIDE 17

Notes About The Ingresses

7 . 1
SLIDE 18

Ingresses Are Standard Kubernetes Applications

Deployed as Pods (Deployment or DaemonSet)
Exposed with a Service: you still need access from the outside, but only one Service to deal with (ideally)

7 . 2
SLIDE 19

Ingress Have Services Too

[Diagram: a public domain/IP pointing at a LoadBalancer Service in front of the Ingress Controller pod, which reaches application pods through ClusterIP Services]

7 . 3
SLIDE 20

Why Should I Care?

Simplified setup: single entrypoint, less configuration, better metrics
Fewer resources used
Separation of concerns: different algorithms for load balancing, etc.

7 . 4
SLIDE 21

What Challenges Does It Create?

Designed for (simple) HTTP/HTTPS cases
TCP/UDP can be used, but are not first-class citizens
"Virtual host first" centric
Feels like you must carefully select your (only) Ingress Controller

7 . 5
SLIDE 22

So What?

Kubernetes gives you freedom: you can use multiple Ingress Controllers!
Kubernetes gives you choices: so many deployment patterns that you can do almost anything

7 . 6
SLIDE 23

External Load Balancers

8 . 1
SLIDE 24

Did You Just Say "External"?

Outside the "borders" of Kubernetes: depends on your "platform" (as in infrastructure/cloud)
Still managed by Kubernetes (automation), but requires "plugins" (operators/modules) per load balancer provider
No API or no Kubernetes support: requires switching to NodePort

8 . 2
SLIDE 25

Tell Me Your Kubernetes Distribution

… and I’ll tell you which LB to use…

8 . 3
SLIDE 26

Cloud Managed Kubernetes

Cloud providers provide their own external LBs:
  • Fully automated management with APIs
  • Great UX due to the integration: works out of the box
  • Benefits from the cloud provider's HA and performance
But:
  • You have to pay for this :)
  • Configuration is cloud-specific (using annotations)
  • Bound by the LB implementation's limits
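As an illustration of cloud-specific annotations, this AWS example requests a Network Load Balancer; other clouds use their own annotation keys (Service name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"  # AWS-specific tuning
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```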

8 . 4
SLIDE 27

Bare-Metal Kubernetes

  • Aka. "Run it on your boxes"

Best approach: MetalLB, a load balancer implementation for Kubernetes, hosted inside Kubernetes
  • Uses all Kubernetes primitives (HA, deployment, etc.)
  • Allows Layer 2 routing as well as BGP
  • But… still not considered production-ready
Otherwise: external static (or legacy) LB; requires switching to a NodePort Service
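For reference, MetalLB's (pre-CRD) Layer 2 setup is a ConfigMap with an address pool; the address range below is hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250   # IPs MetalLB may assign to LoadBalancer Services
```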

8 . 5
SLIDE 28

Cloud "Semi-Managed" Kubernetes

Depends on the compute provider: cloud or bare-metal
You need a tool for managing clusters: kubeadm, kops, etc.
Most of these tools already manage LBs if the provider does.

8 . 6
SLIDE 29

Source IP On The Kingdom Of Kubernetes

9 . 1
SLIDE 30

Business Case: Source IP

As a business manager, I need my system to know the IP of the emitters of the requests, to track usage, write access logs for legal reasons, and limit traffic in some cases.

9 . 2
SLIDE 31

NAT/DNAT/SNAT

NAT stands for "Network Address Translation"
IPv4 world: routers "masquerade" IPs to allow routing between different networks
DNAT stands for "Destination NAT": masquerades the destination IP with the internal pod IP
SNAT stands for "Source NAT": masquerades the source IP with the router's IP

9 . 3
SLIDE 32

NAT/DNAT/SNAT

DNAT: a packet from the client (93.25.25.25) to the public IP 85.12.12.12 has its destination rewritten by the router (10.0.0.1) to the server's internal IP 10.0.2.10; the source stays 93.25.25.25.

SNAT: the same inbound packet instead has its source rewritten to the router's IP 10.0.0.1; the destination stays 85.12.12.12.
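The two rewrites can be modeled as a toy sketch (plain dictionaries standing in for packet headers, not real networking):

```python
def dnat(pkt, new_dst):
    """Destination NAT: rewrite the destination address, keep the source."""
    return {**pkt, "dst": new_dst}

def snat(pkt, new_src):
    """Source NAT: rewrite the source address, keep the destination."""
    return {**pkt, "src": new_src}

# The inbound packet from the diagram:
pkt = {"src": "93.25.25.25", "dst": "85.12.12.12"}

print(dnat(pkt, "10.0.2.10"))  # {'src': '93.25.25.25', 'dst': '10.0.2.10'}
print(snat(pkt, "10.0.0.1"))   # {'src': '10.0.0.1', 'dst': '85.12.12.12'}
```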

9 . 4
SLIDE 33

Preserve Source IP

Rule: we do NOT want SNAT to happen
Challenge: many intermediate components can interfere and SNAT the packets behind our back!

9 . 5
SLIDE 34

Inside Kubernetes: Kube-Proxy

kube-proxy is a Kubernetes component running on each worker node
Role: manage the virtual IPs used for Services
Challenge with source IP: kube-proxy might SNAT requests
Whether kube-proxy SNATs depends on the Service type: let's do a tour of the Service types!

9 . 6
SLIDE 35

Source IP With Service ClusterIP

When kube-proxy is in "iptables" mode: no SNAT ✅
This is the default mode
No intermediate component

[Diagram: kube-proxy configures iptables so a client pod reaches backend pods through the ClusterIP Service]

9 . 7
SLIDE 36

Source IP With Service NodePort (Default)

SNAT is done ❌ (when routing to the node where the pod is):
First, node-to-node routing through the node network
Then, node-to-pod routing through the pod network

[Diagram: the request enters one node's NodePort, is SNATed by kube-proxy, then routed to the pod on another node]

9 . 8
SLIDE 37

Source IP With Service NodePort (Local Endpoint)

No SNAT ✅ with externalTrafficPolicy set to Local
Downside: dropped request if no pod on the receiving node

[Diagram: with externalTrafficPolicy=Local, a request to a node with a local endpoint succeeds; a request to a node without one is dropped]
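The setting from this slide, sketched as a manifest (names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  externalTrafficPolicy: Local  # preserve the source IP; only nodes with local endpoints answer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```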

9 . 9
SLIDE 38

Source IP With Service LoadBalancer (Default)

Default: SNAT is done ❌, same as NodePort
The external load balancer can route to any node
If no local endpoint: node-to-node routing with SNAT

9 . 10
SLIDE 39

Source IP With Service LoadBalancer (Local Endpoint)

However, no SNAT ✅ for load balancers implementing the Local externalTrafficPolicy: GKE/GCE LB, Amazon NLB, etc.

Nodes without local endpoints are removed from the LB by failing healthchecks
Pros: no dropped requests from the client's point of view, because only nodes with ready endpoints receive traffic
Cons: relies on healthcheck timeouts

9 . 11
SLIDE 40

Alternatives When SNAT Happen

Sometimes, SNAT is mandatory:
  • External LB network constraint
  • Ingress Controller in the middle
"The network is based on layers": let's use another layer:
  • If using HTTP, retrieve the source IP from headers
  • If using TCP/UDP, use the "Proxy Protocol"
  • Or use distributed logging and tracing

9 . 12
SLIDE 41

HTTP Protocol Headers

X-Forwarded-For holds a comma-separated list of all the source IPs seen (and SNATed) across the network hops.
✅ if you have an external load balancer or an Ingress Controller supporting this header.
⚠ Not standard (header starting with X-), so not every HTTP appliance might support it.
Upcoming official HTTP header: Forwarded
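A minimal sketch of reading the header server-side (the function and headers dict are illustrative, not tied to any framework):

```python
from typing import Optional

def client_ip(headers: dict) -> Optional[str]:
    """Return the original client IP from X-Forwarded-For, if present.

    The left-most entry is the original client; each subsequent proxy
    appends the source IP it saw for the previous hop.
    """
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return None
    return xff.split(",")[0].strip()

print(client_ip({"X-Forwarded-For": "93.25.25.25, 10.0.0.1"}))  # 93.25.25.25
```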

9 . 13
SLIDE 42

Proxy Protocol

Introduced by HAProxy
Happens at Layer 4 (Transport) for TCP/UDP
Goal: "chain proxies / reverse-proxies without losing the client information"
Supported by a lot of appliances in 2019: AWS ELB, Traefik, Apache, Nginx, Varnish, etc.
Use case: when SNAT happens AND there is no way to use HTTP.
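Version 1 of the protocol is a single human-readable line sent before the application data; a sketch for TCP over IPv4:

```python
def proxy_v1_header(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    """Build a PROXY protocol v1 header (TCP over IPv4).

    The proxy sends this line first on the connection, so the backend
    learns the original client address even though the TCP connection
    itself was SNATed.
    """
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

hdr = proxy_v1_header("93.25.25.25", 51234, "10.0.2.10", 80)
print(hdr.strip())  # PROXY TCP4 93.25.25.25 10.0.2.10 51234 80
```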

9 . 14
SLIDE 43

Distributed Logging And Tracing

Idea:
  • Collect the source IP as soon as possible in distributed logging
  • Use distributed tracing to track the request through the system
Pros: no more complex network setups; distributed logging and tracing stacks are already on your Kubernetes cluster (or soon will be)
Cons: relies on the distributed logging/tracing stacks

9 . 15
SLIDE 44

Use Cases

10 . 1
SLIDE 45

External Load Balancer With Traffic Policy

Pros: full automation
Cons: depends on the actual LB implementation

[Diagram: the external load balancer automatically configures itself and routes only to nodes whose kube-proxy reports a local endpoint (externalTrafficPolicy=Local)]

10 . 2
SLIDE 46

Capturing Source IP From HTTP Headers

Pros: simplified setup
Cons: only works with HTTP

[Diagram: Client → HTTP reverse proxy → Ingress Controller; the reverse proxy sets X-Forwarded-For to the client's real IP]

10 . 3
SLIDE 47

Sources

https://kubernetes.io/docs/tutorials/services/source-ip/
https://en.wikipedia.org/wiki/Network_address_translation
https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies

11
SLIDE 48

Read More

https://info.containo.us/request-white-paper-routing-in-the-cloud

12
SLIDE 49

Thank You!

@mZapfDE / SantoDE

Slides (HTML): https://containous.github.io/slides/webinar-cncf-jan-2020
Slides (PDF): https://containous.github.io/slides/webinar-cncf-jan-2020/slides.pdf
Source on GitHub: https://github.com/containous/slides/tree/webinar-cncf-jan-2020

13