Costas Busch Louisiana State University CCW’08
Energy becomes an issue when designing algorithms
◦ The output of an algorithm may affect its energy efficiency
Computation power at each node is abundant
◦ Unlimited energy for computations at each node
◦ Computation time at each node does not affect total time complexity
Point-to-point communication
◦ Messages in a local neighborhood can be sent simultaneously
◦ Message delivery dominates time complexity
The network is reliable
◦ The network topology does not change
◦ Messages are delivered as expected
Global synchronization
◦ All nodes can synchronize
◦ A special node initiates the algorithm
The algorithm runs only once
◦ One-shot problems
Computation power is limited
Communication is not point-to-point
◦ Requires more energy due to channel interference
The network is unreliable (ad hoc, mobility)
◦ More energy to transfer messages
Global synchronization is not easy
◦ More messages and energy needed to achieve synchronization
An algorithm may run forever
◦ It continuously consumes energy
Consider energy consumption when designing algorithms
Do not make strong assumptions
Design algorithms with:
◦ Small computation at each node
◦ Low message complexity
◦ Self-stabilization
◦ Locality
◦ Online operation
Classic metrics:
◦ Number of messages
◦ Total time
New metrics:
◦ Maximum and average utilization of the nodes
◦ Combinations of the above metrics, e.g. number of messages × total time?
What are realistic metrics of performance?
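As an illustration, the classic and new metrics can be computed side by side from a simulated execution. The trace format below (sender, receiver, send round, delivery round) and the notion of "utilization" as messages handled per node are assumptions made for this sketch, not definitions from the slides.

```python
# Sketch (assumed trace format): comparing classic metrics
# (message count, total time) with per-node utilization metrics.
from collections import Counter

def evaluate(trace):
    """trace: list of (sender, receiver, send_round, deliver_round)."""
    num_messages = len(trace)                      # classic: message complexity
    total_time = max(d for _, _, _, d in trace)    # classic: total rounds
    # New: utilization = number of messages a node sends or receives
    load = Counter()
    for s, r, _, _ in trace:
        load[s] += 1
        load[r] += 1
    max_util = max(load.values())
    avg_util = sum(load.values()) / len(load)
    # Combined metric: number of messages x total time
    return num_messages, total_time, max_util, avg_util, num_messages * total_time

trace = [("a", "b", 1, 1), ("a", "c", 1, 2), ("b", "c", 2, 3)]
print(evaluate(trace))  # -> (3, 3, 2, 2.0, 9)
```

Two executions with the same message count and running time can still differ sharply in maximum node utilization, which is exactly what the classic metrics miss.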
Topology control
◦ Focuses on obtaining sparse connected spanners,
◦ but what is the effect on load balancing?
Routing
◦ Focuses on just obtaining routing paths,
◦ but what is the effect on congestion?
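The congestion question above can be made concrete: given a set of routing paths, count how many paths cross each edge. The node names and paths below are a made-up toy example for illustration.

```python
# Illustrative sketch (toy input): the congestion induced by a set of
# routing paths is the maximum number of paths crossing any one edge.
from collections import Counter

def edge_congestion(paths):
    """paths: list of node sequences, e.g. ['u', 'v', 'w']."""
    use = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            use[frozenset((u, v))] += 1   # count each undirected edge
    return max(use.values())

# Three individually short paths that all squeeze through edge (b, c):
paths = [["a", "b", "c"], ["d", "b", "c"], ["e", "b", "c", "f"]]
print(edge_congestion(paths))  # -> 3
```

Each path is short on its own, yet the shared edge carries all of them; optimizing path length alone says nothing about this bottleneck.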
Peer-to-peer
◦ Focuses on uniformly distributing and accessing the data,
◦ but what about the actual node utilization and actual network paths?
Data aggregation
◦ Focuses on minimizing the total aggregation cost,
◦ but how does this affect the maximum cost at a node?
Facility location
◦ Focuses on path distances,
◦ but what about the load on each facility?
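The data-aggregation tension can be sketched on a toy aggregation tree: the tree below and the definition of "cost" as messages handled per node are assumptions for illustration only.

```python
# Hedged sketch: total aggregation cost vs. the maximum cost at a node
# on an aggregation tree, given as a child -> parent map (toy example).
from collections import Counter

def aggregation_costs(parent):
    """parent: dict mapping each non-root node to its parent."""
    total = len(parent)          # one message per tree edge
    load = Counter()
    for child, par in parent.items():
        load[child] += 1         # child sends its aggregate upward
        load[par] += 1           # parent receives and merges it
    return total, max(load.values())

# A star rooted at 'r': the total cost is minimal (one hop per node),
# but the root handles every single message.
star = {f"v{i}": "r" for i in range(5)}
print(aggregation_costs(star))  # -> (5, 5)
```

Minimizing the total cost can concentrate almost all of it on one node, which in an energy-constrained network drains that node first.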
How frequently do unit disk graphs appear in practice? Can we afford to ignore maximum node utilization? Is the computation power at each node abundant?