7th Int. Conf. on Internet and Distributed Computing Systems (IDCS 2014), September 24th, 2014, Calabria, Italy

Using a History-Based Approach to Predict Topology Control Information in Mobile Ad Hoc Networks

Pere Millán (1), C. Molina (1), R. Meseguer (2), S. F. Ochoa (3), R. Santos (4)
(1) Universitat Rovira i Virgili, Tarragona, Spain
(2) Universitat Politècnica de Catalunya, Barcelona, Spain
(3) Universidad de Chile, Santiago, Chile
(4) Universidad Nacional del Sur, Bahía Blanca, Argentina
Outline
• Motivation
• Predicting Topology Control Information (TCI)
• Experimental Framework & Results
• Conclusions & Future Work
Motivation
Several social computing participation strategies use mobile ad hoc or opportunistic networks.
Motivation
• Routing protocols in mobile collaboration scenarios:
  – Must be simple, efficient, reliable, and quickly adapt to changes in the network topology.
  – Should minimize the delivery of topology control information (TCI) to avoid consuming too much of the devices' energy.
• Link-state proactive-routing protocols:
  – Low latency (using an optimized and known data path).
  – Cost: periodically flooding the network with TCI.
… and when the number of nodes is high …
… can overload the network!!!
… can we address the problem of delivering so much control information through the network?
We present and evaluate a new strategy for predicting TCI in mobile ad hoc and opportunistic networks:
– It extends previous work on TCI prediction [6-8].
– It uses a time window with historical node TCI information to predict the next TCI.
– It is named "History-Based Prediction" (HBP).
– HBP performance is determined by simulations in several mobile scenarios.
Predicting TCI
Predicting TCI using Past Information
• Idea:
  – Use historical Topology Control Information (TCI) to predict the next control packets (CP).
• Questions to answer:
  – What are the performance and the limits of this approach?
  – In which mobile computing scenarios can this proposal provide a real benefit?
HBP Assumptions
• Each node keeps locally updated (in a table) the recent TCI history received from its neighbors.
• Prediction at each node:
  – Input: recent TCI history.
  – Output: a prediction of the TCI for each neighbor (guessing the network topology without delivering control information).
• A prediction can be made when the TCI recently received matches TCI previously stored.
• HBP predicts a state that already appeared in the past (see the sketch below).
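A minimal sketch of the per-node state these assumptions describe, written here in Python for illustration only (the class, attribute, and method names are assumptions, not the authors' implementation):

```python
from collections import deque

class NeighborState:
    """Recent TCI history and pattern table kept locally for one neighbor."""
    def __init__(self, history_depth=2):
        self.window = deque(maxlen=history_depth)  # recent TCI packets received
        self.table = {}                            # pattern -> packets seen after it

    def observe(self, packet):
        """Record a TCI packet received from this neighbor."""
        self.window.append(packet)

    def can_predict(self):
        """Predict only when the recent history matches a pattern stored in the past."""
        pattern = tuple(self.window)
        return len(pattern) == self.window.maxlen and pattern in self.table
```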
HBP table example
• Control packet sequence: AAAABABAACBAABBAB
• Table contents (patterns with 2 control packets; "#" marks the last packet seen after the pattern):

  Pattern | Next | Count | Last
  --------+------+-------+-----
    AA    |  A   |   2   |
          |  B   |   2   |  #
          |  C   |   1   |
    AB    |  A   |   2   |
          |  B   |   1   |  #
    AC    |  B   |   1   |  #
    BA    |  A   |   2   |
          |  B   |   2   |  #
    BB    |  A   |   1   |  #
    CB    |  A   |   1   |  #

(A short sketch that rebuilds this table follows.)
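The table above can be rebuilt with a few lines of code. The following sketch is not taken from the paper: it assumes single-letter packets and a history depth of 2, and its "counts" and "last" fields correspond to the Count and Last columns.

```python
from collections import defaultdict

def build_table(sequence, history_depth=2):
    """Map each observed pattern to the packets seen right after it."""
    table = defaultdict(lambda: {"counts": defaultdict(int), "last": None})
    for i in range(history_depth, len(sequence)):
        pattern = sequence[i - history_depth:i]
        nxt = sequence[i]
        table[pattern]["counts"][nxt] += 1   # Count column
        table[pattern]["last"] = nxt         # Last column ('#' in the table)
    return table

table = build_table("AAAABABAACBAABBAB")
# table["AA"] -> counts {'A': 2, 'B': 2, 'C': 1}, last 'B', as in the table above
```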
HBP tables with historic information
• Unbounded: more flexibility to identify movement patterns.
• One table per node.
• Movement pattern (stored in the table):
  – A sequence of 1+ TCI packets seen in the past.
• Attached to every pattern stored in the table:
  – A list of all packets that appeared after the pattern.
  – Statistical information: last packet, most frequent packet.
Experimental Framework
• NS-3 (4 hours) + BonnMotion.
• Mobility models: Random Walk, Nomadic, SLAW.
• OLSR protocol (HELLO: 2 s / TC: 3 s).
• 300x300 m open area (beach, park); nodes free to move/interact.
• Node devices:
  – All similar (capabilities ≈ iPhone 4).
  – Wi-Fi (to detect others & exchange control information); range: 80 m; BW ≥ 50 kbps.
• 10, 20, 30, 40 nodes, randomly deployed.
• Speeds: 1 m/s (walking), 2 m/s (trotting), 4 m/s (running), and 6 m/s (bicycling).
Results
We quantify TCI repetition over time
• To help us understand the predictability and prediction-opportunity limits of our proposal.
• Maximum reachable prediction accuracy:
  – Count whether a certain TCI packet has ever appeared in the past.
  – If it appeared at least once, we assume it could be predicted (see the sketch below).
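A sketch of this predictability-limit metric, under the assumption that packets can be compared for exact equality (the function name is ours, not the paper's):

```python
def predictability_limit(packets):
    """Fraction of packets that already appeared earlier in the trace."""
    seen, repeated = set(), 0
    for p in packets:
        if p in seen:
            repeated += 1      # an identical packet was seen before: potentially predictable
        seen.add(p)
    return repeated / len(packets) if packets else 0.0

# On the toy sequence from the earlier slide, 14 of 17 packets repeat (~0.82)
print(predictability_limit(list("AAAABABAACBAABBAB")))
```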
Results: Predictability Limits
• % of TCI packets that already appeared in the past, for 3 mobility models (1 m/s) and 10-40 node densities.
• About 80% for 10 nodes: high prediction potential.
• Prediction limits decrease when node density increases.
• Prediction capability does not depend on the mobility model.
Packet representativeness
• We also analyze the representativeness of the most-frequent packets with respect to the whole set of packets received by a node over time.
• This will give us:
  – A first understanding of how difficult it is to make right predictions.
  – The amount of historical data that must be tracked to make these predictions.
Frequency of Observed Control Packets
• Which control packets appear most frequently?
• A small subset of packets represents most of those delivered: 30% of the control packets represent 70% of the total observed.
• This does not depend on node density nor on the mobility model.
• Many opportunities to predict with a small subset of packets.
Types of wrong predictions
• In case of a wrong prediction (miss), we classify it as:
  – missPred: it could have been correctly predicted, because the right control packet was in the list of this pattern.
  – missNoPred: otherwise, it could not have been predicted.
• This identifies the limits of HBP and how far an approach is from the best (a small classification sketch follows).
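A hedged sketch of this classification, reusing the table layout from the earlier build_table sketch (so the check below follows our assumed data structure, not the authors' code):

```python
def classify_miss(table, pattern, actual):
    """Classify a wrong prediction for `pattern` when the real packet was `actual`."""
    entry = table.get(pattern)
    if entry is not None and actual in entry["counts"]:
        return "missPred"    # the right packet was in this pattern's list
    return "missNoPred"      # never seen after this pattern: could not be predicted
```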
History Depth
• History Depth (HD) metric: number of TCI packets in the movement patterns.
• HD range considered: 0 to 5 TCI packets.
• High HD values (long sequences): more accurate predictions, but few opportunities to predict.
• Low HD values (short sequences): less accurate predictions, but more opportunities to predict.
(A small sketch of this trade-off follows.)
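An illustrative sketch of this trade-off (our own, using a most-frequent policy and HD ≥ 1 only): for each depth it counts how often the current pattern was already in the table (opportunities) and how often the guess was right (hits).

```python
def sweep_history_depth(sequence, depths=range(1, 6)):
    """Count prediction opportunities and hits of a most-frequent policy per HD."""
    for hd in depths:
        table, opportunities, hits = {}, 0, 0
        for i in range(hd, len(sequence)):
            pattern, actual = sequence[i - hd:i], sequence[i]
            if pattern in table:
                opportunities += 1
                guess = max(table[pattern], key=table[pattern].get)  # most frequent so far
                hits += (guess == actual)
            counts = table.setdefault(pattern, {})
            counts[actual] = counts.get(actual, 0) + 1
        print(f"HD={hd}: opportunities={opportunities}, hits={hits}")

sweep_history_depth("AAAABABAACBAABBAB")
```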
History-based prediction (varying HD)
• noPred increases with the number of nodes (predictability limits), but the %misses is significant too.
• Large HD: fewer predictions, but more accurate.
• HD=0: largest %hits.
HBP flavors
• Last value: the last packet seen after this pattern.
• Most-frequent: the highest-count packet.
• History-based random: any past packet seen after this pattern.
• Previous example (sequence AAAABABAACBAABBAB, pattern AB in the table above):
  – Most frequent: A
  – Last value: B
  – History-based random: any of A/B
  – Pure random: any of A/B/C
(A small sketch of these policies follows.)
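A minimal sketch of the three history-based policies, assuming the table-entry layout from the earlier build_table sketch ({"counts": ..., "last": ...}); the names and signature are illustrative.

```python
import random

def predict(entry, policy):
    """Pick the next packet for a pattern entry according to the chosen policy."""
    if policy == "last_value":       # last packet seen after this pattern
        return entry["last"]
    if policy == "most_frequent":    # highest-count packet
        return max(entry["counts"], key=entry["counts"].get)
    if policy == "history_random":   # any past packet seen after this pattern
        return random.choice(list(entry["counts"]))
    raise ValueError(f"unknown policy: {policy}")

# For pattern "AB" in the example table (counts {'A': 2, 'B': 1}, last 'B'):
#   most_frequent -> 'A', last_value -> 'B', history_random -> 'A' or 'B'
```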
History-based prediction (different policies)
• History information provides more accurate predictions.
• Much better results when using history (even with the history-based random policy).
• The non-history baseline always predicts, but often wrongly.
History-based prediction (different mobility)
• Mobility models do not present significant differences in prediction capability.
• Similar behavior (difference < 10%).
Prediction confidence
• HBP flavors must succeed in their predictions, but must also avoid predicting when success is not guaranteed.
  – Success reduces network traffic and saves energy.
  – Wrong predictions can skew the network topology map and decrease the reliability of the process.
• We include a confidence mechanism to determine the likelihood that a prediction is correct.
  – Aim: maximize right predictions and minimize wrong ones.
Implementing confidence
• Simple confidence mechanism for HBP:
  – A saturating counter for each pattern in the history table.
  – 2-bit counter (value range: 0 to 3).
  – The counter is incremented when the prediction is right.
  – The counter is decremented when the prediction is wrong.
  – A prediction is confident when the counter is ≥ 2.
  – The counter is initialized to 1 (no confidence).
(A minimal sketch of this counter follows.)
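A minimal sketch of such a 2-bit saturating counter (the class name is illustrative):

```python
class ConfidenceCounter:
    """2-bit saturating counter: values 0..3, prediction is confident when >= 2."""
    def __init__(self):
        self.value = 1                              # initialized without confidence

    def is_confident(self):
        return self.value >= 2

    def update(self, prediction_was_right):
        if prediction_was_right:
            self.value = min(3, self.value + 1)     # saturate at 3
        else:
            self.value = max(0, self.value - 1)     # saturate at 0
```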
History-based prediction (2-bit confidence)
• Using a confidence mechanism we can minimize prediction errors.
• Our goal: maximize noConf/miss and minimize noConf/hit.
• Using confidence: fewer predictions, but mainly hits and few misses.
Dynamic history depth (tree)
• The previous HBP flavors use a fixed History Depth (HD).
• We analyze an additional HBP flavor where the History Depth is dynamic (a prediction tree):
  – Start the prediction with the largest HD.
  – If the prediction is not possible (not confident, or the movement pattern is missing), decrease the HD value (shorter pattern) and try again.
  – Repeat until a prediction is possible or HD reaches 0.
(A small sketch of this fallback follows.)
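A sketch of this fallback, under the assumption that one pattern table is kept per history depth (`tables[hd]`) and one ConfidenceCounter per (depth, pattern); these structures and names are ours, not the paper's.

```python
def predict_dynamic(history, tables, confidence, max_hd, predict_fn):
    """Try the longest pattern first, fall back to shorter ones; None = no prediction."""
    for hd in range(max_hd, 0, -1):
        if len(history) < hd:
            continue
        pattern = tuple(history[-hd:])
        entry = tables[hd].get(pattern)
        counter = confidence.get((hd, pattern))
        if entry is not None and counter is not None and counter.is_confident():
            return predict_fn(entry)   # confident prediction at this depth
    return None                        # HD reached 0: do not predict
```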