

  1. Exploring the Impact of Workload Distribution in a Hybrid Edge and Cloud Application for Smart Grids
Otávio Carvalho, Manuel Garcia, Eduardo Roloff, Philippe O. A. Navaux
Federal University of Rio Grande do Sul, Parallel and Distributed Processing Group (GPPD)

  2. Table of contents
1. Introduction
2. Architecture and Implementation
3. Evaluation
4. Conclusion and Future Work

  3. Introduction - Motivation
• Smart Grids have the potential to save billions of dollars in energy spending for both producers and consumers.
• The Internet of Things has a large potential economic impact.
• Technologies created for IoT are driving computing toward dispersion:
  • Edge Computing
  • Cloudlets
  • Micro-datacenters
  • Fog Nodes

  4. Introduction - Main goals
• Explore the potential performance improvements of moving computation from the cloud to the edge in a Smart Grid application.
1. What are the boundaries of our application architecture in terms of latency and throughput?
2. To what extent is it possible to move our workload from cloud to edge nodes?
3. Which strategies can be used to reduce the amount of data that is sent to the cloud?

  5. Architecture and Implementation
• Three-layered architecture:
• Cloud-layer
  • High-latency processing.
  • Receives aggregated data from multiple edge nodes.
  • Composed of applications running on Linux VMs on Windows Azure.
• Edge-layer
  • Low-latency processing.
  • Receives data from multiple sensors and performs local processing.
  • Reduces the amount of data that needs to be sent to the Cloud-layer.
  • Composed of ARM nodes (Raspberry Pi Zero W) connected over Wi-Fi.
• Sensor-layer
  • Measurements only.
  • Produces a high volume of measurements that are sent to the Edge-layer for aggregation.
  • For evaluation purposes, sensor measurements are pre-loaded onto the Edge-layer nodes.
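The Edge-layer's data-reduction role can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Reading` type, sensor IDs, and the sum-per-sensor aggregation are assumptions made for the example.

```go
package main

import "fmt"

// Reading is a single sensor measurement received by an edge node
// (hypothetical type for illustration).
type Reading struct {
	SensorID string
	Watts    float64
}

// aggregate reduces a batch of raw readings to one summary value per
// sensor, so the edge node forwards one message upstream to the cloud
// instead of one message per measurement.
func aggregate(batch []Reading) map[string]float64 {
	sums := make(map[string]float64)
	for _, r := range batch {
		sums[r.SensorID] += r.Watts
	}
	return sums
}

func main() {
	batch := []Reading{
		{"meter-1", 120.0},
		{"meter-1", 80.0},
		{"meter-2", 60.0},
	}
	// Three raw readings collapse into two aggregated values.
	fmt.Println(aggregate(batch))
}
```

Any reduction that is associative per sensor (sum, count, min/max) can be computed incrementally at the edge like this, which is what makes the layer effective at cutting upstream traffic.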

  6. Architecture and Implementation
Figure 1: Architecture overview: Three-layered architecture

  7. Evaluation - Communication
Figure 2: PingPong: Latency Percentiles by Message Sizes (32KB to 1MB) [bar chart: latency (ms) at the 50th, 90th, and 99th percentiles for message sizes from 32KB to 1024KB]

  8. Evaluation - Communication
Figure 3: PingPong: Maximum Throughput by Message Size (32KB to 1MB) [bar chart: throughput (QPS) by message size, 32KB to 1024KB]

  9. Evaluation - Application concurrency
Figure 4: Concurrency Analysis: Impact of Goroutines usage on throughput (Edge and Cloud nodes) [chart: throughput (QPS) vs. concurrency, 1 to 100 goroutines]
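Varying the number of goroutines as in this analysis amounts to fanning messages out to a worker pool of configurable size. A minimal sketch of that pattern is below; the `process` function and message type are placeholders, not the deck's actual workload.

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for the per-message work done on a node
// (placeholder for illustration).
func process(msg int) int { return msg * 2 }

// run fans msgs out to n worker goroutines and returns how many
// messages were processed, mirroring a throughput measurement at a
// given concurrency level.
func run(n int, msgs []int) int {
	jobs := make(chan int)
	var wg sync.WaitGroup
	var mu sync.Mutex
	done := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for m := range jobs {
				process(m)
				mu.Lock()
				done++
				mu.Unlock()
			}
		}()
	}
	for _, m := range msgs {
		jobs <- m
	}
	close(jobs)
	wg.Wait()
	return done
}

func main() {
	msgs := make([]int, 1000)
	// All 1000 messages are processed regardless of worker count;
	// only the elapsed time (and hence QPS) changes with n.
	fmt.Println(run(10, msgs))
}
```

Sweeping `n` over 1, 10, and 100 while timing `run` reproduces the shape of the experiment: throughput grows with concurrency until the node's cores (or the network) saturate.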

  10. Evaluation - Application scalability
Figure 5: Scalability Analysis: Throughput with multiple consumers (1 to 4 edge nodes) [chart: throughput (QPS) vs. number of edge nodes]

  11. Evaluation - Workload windowing
Figure 6: Windowing Analysis: Windowing impact on throughput (1 to 1000 messages per request) [chart: throughput (QPS) vs. number of edge nodes, for window sizes of 1 to 1000 messages per request]
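Windowing here means grouping messages into batches so that each upstream request carries up to 1000 messages instead of one, amortizing per-request overhead. A minimal batching sketch, under the assumption that messages are simple values collected into fixed-size windows:

```go
package main

import "fmt"

// window splits a stream of messages into batches of at most size
// messages, so each request to the cloud carries one batch instead of
// one message.
func window(msgs []float64, size int) [][]float64 {
	var batches [][]float64
	for start := 0; start < len(msgs); start += size {
		end := start + size
		if end > len(msgs) {
			end = len(msgs)
		}
		batches = append(batches, msgs[start:end])
	}
	return batches
}

func main() {
	msgs := make([]float64, 2500)
	// With 1000 messages per request, 2500 messages need only
	// 3 requests instead of 2500.
	fmt.Println(len(window(msgs, 1000)))
}
```

This is why throughput in Figure 6 grows with window size: the fixed cost of a request (latency, headers, serialization) is paid once per window rather than once per message.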

  12. Conclusion and Future Work
• Conclusion
  • The application achieved higher throughput by leveraging processing on edge nodes.
  • We were able to reduce communication with the cloud by aggregating data at the edge.
• Future Work
  • Study how other communication protocols (such as MQTT) would behave in this application context.
  • Explore techniques and models for adaptive workload scheduling.
  • Evolve the application architecture into a general framework for IoT.

  13. Thanks! Questions?
