Gym: A VNF Testing Framework - Design and Prototype Insights
  1. Gym: A VNF Testing Framework - Design and Prototype Insights. Prof. Christian Rothenberg, Raphael Vicente Rosa, Claudio Bertoldo. Jan-2017, University of Campinas (UNICAMP), Brazil, in technical collaboration with Ericsson Research.

  2. Imagine … an olympic athlete
  ● Training
    ○ Indices: blood sugar levels, hemoglobin %, transpiration
    ○ When?.. [stretching | heating], steady (long run), final sprint
    ○ Why? High performance, no injuries, sport analytics, athlete profile
  ● Competing
    ○ Goals: deliver the best, conscious actions and body reactions
    ○ But… weather conditions may interfere, start over...
    ○ In the end… break records, keep competing
  ● Continuous improvement

  3. We have VNFs, execution environments … and services! Extracted from: http://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/001/01.01.01_60/gs_NFV-IFA001v010101p.pdf and http://www.etsi.org/deliver/etsi_gs/NFV-EVE/001_099/004/01.01.01_60/gs_NFV-EVE004v010101p.pdf

  4. Let’s not forget: some Actors … and requirements

  5. Bring your VNF to the Gym (... and vice-versa)
  ● Old story: VNF Benchmark-as-a-Service (VBaaS)
  ● Nowadays: Gym - VNF Testing Framework
    ○ Modular architecture with stand-alone programmable components
    ○ Simple messaging system among them following generic RPC guidelines
    ○ Extensible set of tools and associated metrics
    ○ Programmability of tests in dynamic compositions of modules
    ○ And respective flexible methods for the interpretation/processing of evaluations' output

  6. Why Gym?
  ● VNF Developers
    ○ Develop tests for profiling VNF performance aligned with the agile Continuous Development/Integration of DevOps methodologies, speeding their time-to-market
  ● Service Providers
    ○ Enhance the QoS of offered services with tested deployable scenarios (e.g., varying workloads in multiple sites), containing transparent sets of required VNF metrics
  ● Cloud/Infrastructure Providers
    ○ Via VNF testing in their execution environments, increase reliability with compliance methodologies (e.g., energy consumption)

  7. Design Principles for Experimentation Methodologies
  ➢ Comparability:
    ○ the output of tests must be simple to understand and process, in a human-readable format, easily usable (e.g., imported by big data applications).
  ➢ Repeatability:
    ○ the testing setup must be defined by a simple, flexible design model, able to be interpreted and executed by the testing platform repeatedly and with customization.
  ➢ Interoperability:
    ○ tests should be portable across different environments, with lightweight technologies providing the means for it.
  ➢ Configurability:
    ○ open interfaces and extensible messaging models among testing components must exist to provide flexibility when composing test descriptions and configurations.

  8. Architecture
  ● Core components as microservices
    ○ e.g., containers
  ● Flexible messaging
    ○ e.g., JSON and REST APIs
  ● Dynamically coupled tools
    ○ Extensible classes as tools' interfaces (see the sketch below)
  ● Freedom for design
    ○ Outlines and Profiles
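  A minimal sketch of what such an extensible tool interface could look like, assuming the Python components expose probers/listeners through a common base class; the class and method names below are illustrative assumptions, not Gym's actual API:

      # Hypothetical tool-interface sketch; names are assumptions, not Gym's API.
      import abc
      import json

      class Prober(abc.ABC):
          """Base class for stimulus tools plugged into an Agent."""
          name = "generic-prober"
          metrics = ()  # metric names this prober can report

          @abc.abstractmethod
          def run(self, parameters):
              """Execute the tool with the given parameters and return a metrics dict."""

          def report(self, parameters):
              """Run the tool and wrap its output in a JSON-serializable message."""
              return json.dumps({"tool": self.name, "metrics": self.run(parameters)})

  A listener could follow the same pattern, so an Agent/Monitor only needs to know the common run/report contract rather than each tool's details.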

  9. Some Concepts
  ★ Outline: specifies how to test a VNF; it may be specific to one VNF or applicable to several VNF types. It includes structural (e.g., agents/monitors) and functional sketches with variable parameters (e.g., probers/listeners properties), used as inputs by Gym to perform VNF tests.
  ★ Profile: is composed by the outputs of an Outline execution, and represents a mapping between virtualized resources (e.g., vCPU, memory) and performance (e.g., throughput, latency between in/out ports) at a given environment. It abstracts a VNF allocation with desired resources to deliver a given (predictable/measured) performance quality.
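  As a concrete illustration, a Profile could be represented as a small mapping from allocated resources to measured performance; the field names and values below are hypothetical placeholders, not Gym's actual schema or real measurements:

      # Hypothetical, illustrative Profile structure; fields and values are
      # placeholders, not Gym's schema or real measurement data.
      profile = {
          "outline": "example-outline-id",
          "environment": "example-execution-environment",
          "resources": {"vcpu": 2, "memory_mb": 4096},
          "performance": {"throughput_mbps": 950.0, "latency_ms": 1.2},
      }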

  10. I want to benchmark a vIMS, now what?..
  Service Provider questions:
  ➔ In which cloud would it be cheaper to deploy my vIMS? Why?
  ➔ I want to parameterize my service, so I can:
    ● Extract the vIMS service performance opportunity cost
    ● Understand its quality, and extract some SLA
    ● Offer/negotiate it with other providers

  11. Testing with Gym...
  1. Select some cloud providers: e.g., Microsoft, Google, Amazon
  2. Get a VNF: vIMS (Project Clearwater)
  3. Have a stimulus: sipp prober
  4. Have a monitor for the environment: linux host listener
  5. Target some metrics: CPU vs. Calls/s
  6. OK, write your Outline and submit it to the Player; then:
    a. wait for the target Profile,
    b. select your metrics and apply algorithms for analysis,
    c. make graphics, etc.
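  To make step 6 concrete, a hypothetical Outline for this vIMS test could look roughly like the structure below; the field names and parameter values are assumptions for illustration, not Gym's actual Outline syntax:

      # Hypothetical Outline sketch for the vIMS test; fields and values are
      # assumptions, not Gym's actual syntax.
      vims_outline = {
          "target": "vIMS (Project Clearwater)",
          "agents": [
              {"prober": "sipp", "parameters": {"call_rate": [10, 50, 100]}},  # calls/s steps
          ],
          "monitors": [
              {"listener": "linux-host", "metrics": ["cpu", "memory", "disk_io", "network"]},
          ],
          "metrics_of_interest": ["cpu_usage", "calls_per_second"],
      }
      # The Player would consume such a description, coordinate agents/monitors,
      # and return the resulting Profile for analysis and plotting.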

  12. Not so fast … I mean, we missed some steps
  ➔ Deploying Gym's components
    ● How?
      ◆ Gym's components are Python modules (VM, container, baremetal)
    ● Where?
      ◆ Depends on the testing topology, e.g.,
        ○ Are your customers in Australia?
        ○ Should you group agents: one-to-many, many-to-many?
  ➔ How to compose the Outline?
    ◆ Gym provides a flexible syntax
      ● Under development: "It would be interesting to have some intents"
    ◆ Need to understand what you want
      ● i.e., agents/monitors requirements and structure (e.g., probers/listeners and some params)
      ● e.g., a specific prober metric vs. another listener metric

  13. Scenario
  ● vIMS components decomposed in OpenStack VMs (different flavors)
  ● Monitors installed in each vIMS VM -> listening to host metrics (cpu, memory, disk i/o, network)
  ● Agent with sipp prober in a VM
  ● Manager/Player in another VM
  ● All components in the same network

  14. Partial Results: on-going activity in our lab

  15. What could we discuss here?
  ❖ Consistency:
    ➢ Will such a VNF, when deployed in a tested execution environment, deliver the performance described in its extracted Profile? Especially when tested and then put into production making use of multiple virtualization technologies, for instance.
  ❖ Stability:
    ➢ VNF testing measurements need to present consistent results when applied over different scenarios. So, do testing descriptions transparently handle service/resource definitions and metrics of VNFs placed in heterogeneous execution environments?
  ❖ Goodness:
    ➢ A VNF might be tested with different allocated resources and stimulus, unlike its eventual production environment. Crucially, how well do testing results, and the associated stimulus, correspond to measured VNF performance when running in execution environments under actual workloads?

  16. Gym: a bird's-eye view
  ➢ We leave the development of VNF-specific testing tools to users (VNF Developers)
    ○ They know best a VNF's state-machine testing procedure (e.g., sipp)
    ○ e.g., it is possible to code a prober to interface a NetFPGA card or DPDK pktgen (see the sketch below)
  ➢ Metrics and their interpretation are an open field for research
    ○ e.g., docker containers present dozens of features among cpu, memory, disk, etc.
  ➢ Framework in early childhood
    ○ Many tests, errors, and debugging going on
    ○ Improving the design of models, messages, behaviors
  ➢ Important related work and foundations
    ○ OPNFV (yardstick, bottlenecks, etc.), ToDD, many more (please, tell us about yours)
    ○ IETF BMWG, IRTF NFVRG, ETSI NFV (TST)
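  A hedged sketch of what such a user-contributed prober could look like, wrapping an external traffic generator behind a simple run contract; the command line, metric names, and class shape are assumptions, and the real DPDK pktgen interface is not modeled here:

      # Hypothetical user-contributed prober wrapping an external generator CLI.
      # The command and metric names are placeholders, not the real pktgen interface.
      import subprocess

      class PktgenProber:
          """Illustrative prober shelling out to an external packet generator."""
          name = "pktgen-prober"
          metrics = ("tx_rate_pps",)

          def run(self, parameters):
              # Stand-in command that mimics a tool printing "key=value" metrics.
              cmd = ["echo", "tx_rate_pps=1000000"]
              output = subprocess.run(cmd, capture_output=True, text=True).stdout
              # Parse "key=value" pairs from the tool's output into a metrics dict.
              return dict(kv.split("=") for kv in output.split() if "=" in kv)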

  17. In a not so far future...
  ➔ Gym will be open source
  ➔ Open repository of VNF Profiles
  ➔ New probers/listeners
  ➔ New scenarios coming from SDN/NFV outcomes for 5G realization

  18. ...with some questions...
  Can we outperform continuous monitoring? When? How?
  Are we modeling tests correctly? Are tools able to express actual service workloads and heterogeneous behaviors?
  "Trust, but verify" or verify, then trust? Maybe it depends on our findings around consistency, stability, and goodness.
  For sure, we cannot understand and evolve something without first measuring it.

  19. Thanks!
