How to Re-Architect without Breaking Stuff (too much)
Owen Garrett, NGINX | March 2018 | owen@nginx.com
All problems in computer science can be solved by another layer of indirection --- David Wheeler, FRS
“This giant piece of software that made our company successful... is now a problem.”
The transition to a Modern Application Architecture: from Monolith to Microservices
• A giant piece of software → small, loosely connected services
• Siloed teams (Dev, Test, Ops) → DevOps culture
• Big-bang releases → continuous delivery
• Persistent deployments → VMs, containers, functions
• Fixed, static infrastructure → infrastructure as code
• Complex protocols (HTML, SOAP) → lightweight, programmable (REST, JSON)
Disruption is happening at the speed of software
NGINX as a Shock Absorber
[Diagram: Internet traffic flows through NGINX to App A and App B (static clusters) and to other web and application services]
NGINX as an Insulator
[Diagram: the same topology, with NGINX insulating App A, App B, and other web and application services from the Internet]
The busiest sites in the world use NGINX open source
[Chart: NGINX open source share among the busiest sites, rising steadily from 06/2012 to 12/2017; y-axis 0% to 60%]
Four steps to non-disruptive rearchitecting
Be a hero – make changes… all while keeping the lights on!
Change the tyres while the car is moving
Roadmap to rearchitecting:
1. Plan
2. Prepare
3. Package
4. Proceed
1. Your global architecture will be fluid
[Diagram: on-prem and cloud datacenters, each with a datacenter load balancer, per-application load balancers, and per-service load balancers]
• Distribute traffic using DNS and redirects
• Funnel traffic through concentrators
• Distribute these stateless concentrators
Plan your global architecture for change
Plan how you route traffic to the correct datacenter:
1. Segment with DNS
2. Use external redirects
   ◦ Clients connect directly to the location of the service they are using
   ◦ Use the proxy to push out redirects (a sketch follows below)
3. Route traffic internally
4. Use X-Accel-Redirect
   ◦ All traffic is handled through the same NGINX cluster and internally routed to the cloud
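As a minimal sketch of option 2, NGINX can push out an external redirect so that clients reconnect directly to the datacenter that now hosts a migrated service. The hostnames, paths, and addresses below are illustrative, not from the original deck:

    # Hypothetical example (inside the http context): /photos/ has moved to a
    # cloud datacenter, so NGINX answers with a 302 and the client re-requests
    # it from the new host. Everything else stays on the on-prem application.
    upstream onprem_app {
        server 10.0.0.10:8080;   # existing on-prem application server (illustrative)
    }

    server {
        listen 80;
        server_name www.example.com;

        location /photos/ {
            # Redirect the client to the cloud-hosted copy of the service.
            return 302 https://photos.cloud.example.com$request_uri;
        }

        location / {
            proxy_pass http://onprem_app;
        }
    }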
Get started with X-Accel-Redirect
• A more sophisticated alternative to a simple proxy_pass
• The request (GET /resource) goes to the local server
• The local server responds with an X-Accel-Redirect header, and NGINX internally redirects the request to the remote server (a configuration sketch follows below)
Ideal for moving content to cloud storage or serverless, while retaining NGINX-based authentication and logging. The client can never access the remote server directly.
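A minimal configuration sketch of this pattern, assuming a local application on port 8080 and a cloud storage origin at storage.example.com (both illustrative):

    # The local application authenticates and logs the request and, instead of
    # serving the content itself, replies with:  X-Accel-Redirect: /remote/resource
    # NGINX then performs an internal redirect into the protected location below.
    server {
        listen 80;

        location /resource {
            proxy_pass http://127.0.0.1:8080;    # local app: auth, logging, X-Accel-Redirect
        }

        location /remote/ {
            internal;                             # never reachable directly by clients
            proxy_pass https://storage.example.com/;
        }
    }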
2. Prepare to execute the change
• Remove or streamline dependencies outside the core DevOps pipeline
  ◦ Hardware replace
  ◦ External business or technical processes
• Don't underestimate the strength of "we've always done it this way"
Example – Hardware Replace
[Diagram: existing datacenter load balancer and application-specific proxy, the hardware to be replaced]
Example – Hardware Replace with NGINX
• Cost savings: save more than 80% and run on commodity hardware
• Modernize: get the flexibility to move to the cloud, microservices, DevOps, and more
• No limits: no artificial bandwidth or throughput caps to slow you down
3. Package your Applications
• Package as VMs or containers; full-stack CI and CD should be your goal
Agile Methodology
[Diagram: the Dev/Ops loop (plan, code, build, test, release, deploy, operate, monitor) spanning continuous integration, continuous delivery, and continuous testing]
Automation Tools
[Diagram: the same Dev/Ops loop annotated with automation tools, including Bazel, Buck, and Bamboo]
4. Proceed to operate the deployment
• Blue-green deployments
• Split clients / A|B testing
• Auto-scaling
• Canary releases
• Health checks and slow start
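One simple way to run a canary release from the list above is weighted load balancing in an upstream group. This is a sketch only; the version names and weights are illustrative, and the split_clients approach on the next slide is an alternative with finer control:

    # Hypothetical canary (inside the http context): roughly 1 in 10 requests
    # is sent to the new version of the application.
    upstream app {
        server app-v1.example.com:8080 weight=9;   # current version
        server app-v2.example.com:8080 weight=1;   # canary of the new version
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }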
Split Clients configuration
• Split traffic to multiple servers based on, for example, source IP address
• Just one example of the many ways to route traffic in NGINX:
  ◦ By user cookie or authentication token
  ◦ By source geography
• Forms the basis of blue-green deployments
• Monitor NGINX access logs or extended status to measure health of the new, green servers

    http {
        upstream blue_servers {
            server 10.0.0.100:3001;
            server 10.0.0.101:3001;
        }
        upstream green_servers {
            server 10.0.0.104:6002;
            server 10.0.0.105:6002;
        }
        split_clients "${remote_addr}" $appversion {
            5%  green_servers;
            *   blue_servers;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://$appversion;
            }
        }
    }
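The bullets above also mention routing by user cookie. A minimal sketch of that variant, assuming a cookie named "canary" that is not part of the original slide, could replace the split_clients block while reusing the same upstream groups:

    # Hypothetical (inside the http context): requests carrying "Cookie: canary=true"
    # go to the green group; everything else stays on blue.
    map $cookie_canary $appversion {
        default  blue_servers;
        "true"   green_servers;
    }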
Service Discovery with Consul
• NGINX open source can be configured using an agent that is triggered by changes to the service database: https://github.com/nginxinc/NGINX-Demos/tree/master/consul-template-demo
• NGINX Plus will look up consul in the /etc/hosts file if using links, or via the Docker embedded DNS server
• By default, Consul uses this format for services: [tag.]<service>.service[.datacenter].<domain>

    resolver consul:53 valid=10s;

    upstream service1 {
        zone service1 64k;
        server service1.service.consul service=http resolve;
    }
Active Health Checks
NGINX open source passively detects application failures. NGINX Plus provides "Active Health Checks":
• Polls the URI every 5 seconds
• If the response is not 200, the server is marked as failed
• If the response body does not contain "Server N is alive", the server is marked as failed
• Recovered or new servers slowly ramp up traffic over 30 seconds

    upstream my_upstream {
        zone my_upstream 64k;
        server server1.example.com slow_start=30s;
    }

    server {
        # ...
        location /health {
            internal;
            health_check interval=5s uri=/test.php match=statusok mandatory;
            proxy_set_header HOST www.example.com;
            proxy_pass http://my_upstream;
        }
    }

    match statusok {
        # Used for /test.php health check
        status 200;
        header Content-Type = text/html;
        body ~ "Server[0-9]+ is alive";
    }
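For NGINX open source, where the active health_check directive above is not available, the passive detection mentioned at the top of this slide is configured with per-server parameters. A minimal sketch with illustrative thresholds:

    # Hypothetical passive health check: after 3 failed attempts within 30 seconds,
    # the server is considered unavailable for the next 30 seconds, then retried.
    upstream my_upstream {
        server server1.example.com max_fails=3 fail_timeout=30s;
        server server2.example.com max_fails=3 fail_timeout=30s;
    }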
Move to Microservices “As we moved to microservices we realized that we needed a much smarter way of routing pages to our applications. The big benefits of NGINX Plus were firstly the support, the DNS configuration which allowed us to use sophisticated services in AWS, and the metrics told us which servers were failing.” - John Cleveley, Senior Engineering Manager
Two proven microservices delivery patterns
1. Managing north-south traffic with an Ingress Controller
Starting from your Monolith…
1. Containerise your Monolith
[Diagram: a load balancer in front of the containerised monolith]
2. Decompose your Monolith
[Diagram: a load balancer routing to pods: Photo Uploader, Photo Resizer, User Data, Content Service, Orders]
3. Rearchitect your Monolith
[Diagram: a load balancer routing to pods: Auth Proxy, Pages, User Manager, Photo Uploader, Photo Resizer, Content Service, Album Manager]
Deploy on, for example, Kubernetes
[Diagram: the K8s API Server alongside pods: Auth Proxy, Content Service, User Manager, Photo Uploader]
Kubernetes Ingress Resource
Ingress:
• Built-in Kubernetes resource
• Automates configuration for an edge load balancer (or ADC)
Ingress features:
• L7 routing based on the host header and URL
• TLS termination

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: hello-ingress
    spec:
      tls:
      - hosts:
        - hello.example.com
        secretName: hello-secret
      rules:
      - host: hello.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: hello-svc
              servicePort: 80
Application Delivery on Kubernetes
[Diagram: the Ingress Controller subscribes to Ingress resources via the K8s API Server and routes traffic to the pods: Auth Proxy, Content Service, User Manager, Photo Uploader]
Limitations of the Kubernetes Ingress Resource
Only does:
• Routing on the host header and URL
• TLS termination
What about:
• Session persistence
• JWT validation
• Rewriting the URL of a request
• Enabling HTTP/2
• Choosing a load balancing method
• Changing the SSL parameters
• …
Extending the Kubernetes Ingress Resource
Annotations
• Vendor-specific configuration settings

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: hello-ingress
      annotations:
        nginx.org/lb-method: "ip_hash"

Configuration Snippets
• Embed NGINX configuration directives directly into config contexts

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: hello-ingress
      annotations:
        nginx.org/location-snippets: |
          proxy_set_header X-Custom-Header-1 foo;
          proxy_set_header X-Custom-Header-2 bar;

or… edit the Ingress Controller template directly.