An optimal local approximation algorithm for max-min linear programs


  1. An optimal local approximation algorithm for max-min linear programs (maximise ω subject to A x ≤ 1, C x ≥ ω 1, x ≥ 0)
Patrik Floréen, Joel Kaasinen, Petteri Kaski, Jukka Suomela
Helsinki Institute for Information Technology HIIT, University of Helsinki, Finland
SPAA, Calgary, Canada, 13 August 2009

  2. Result on one slide
Problem: maximise ω subject to A x ≤ 1, C x ≥ ω 1, x ≥ 0, where A and C are nonnegative matrices.
Distributed setting: constraints a_1 x ≤ 1, a_2 x ≤ 1, ..., each of degree ≤ ∆_I; agents x_1, x_2, x_3, x_4, ..., each of degree O(1); objectives c_1 x ≥ ω, c_2 x ≥ ω, ..., each of degree ≤ ∆_K; an edge corresponds to a positive coefficient.
Approximability with constant-time distributed algorithms:
– new positive result: ∆_I (1 − 1/∆_K) + ε
– earlier negative result: ∆_I (1 − 1/∆_K)
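As a quick numerical sanity check on the tight factor stated on this slide, the bound can be evaluated for small degree parameters (ε omitted; the degree values below are just illustrative):

```python
# The tight approximation factor Delta_I * (1 - 1/Delta_K) from the
# slide, evaluated for a few small degree bounds (epsilon omitted).
def factor(delta_i, delta_k):
    return delta_i * (1 - 1 / delta_k)

print(factor(2, 2))  # degree bounds 2 and 2 give factor 1
print(factor(3, 3))  # degree bounds 3 and 3 give factor 2 (up to rounding)
```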

  3. Max-min linear programs
General form: maximise min_{k ∈ K} c_k x subject to A x ≤ 1, x ≥ 0.
Equivalent form: maximise ω subject to A x ≤ 1, C x ≥ ω 1, x ≥ 0.
A and C are nonnegative matrices.
Intuition: a solution x uses a_i x units of resource i ∈ I, and provides c_k x units of service to customer k ∈ K.
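The equivalence of the two forms can be sketched in a few lines: for any x that is feasible for the packing constraints, min_k c_k x is exactly the best ω that x supports in the equivalent form. The 2×2 instance below is hypothetical, not from the talk.

```python
# Minimal illustration of the two equivalent forms on a small
# hypothetical instance (the matrices below are made up).
A = [[0.5, 1.0],   # resource constraints: a_i x <= 1
     [1.0, 0.5]]
C = [[1.0, 0.0],   # customer objectives: c_k x
     [0.0, 1.0]]

def dot(row, x):
    return sum(r * v for r, v in zip(row, x))

def feasible(A, x):
    """Check the packing constraints A x <= 1, x >= 0."""
    return all(v >= 0 for v in x) and all(dot(a, x) <= 1 + 1e-9 for a in A)

def objective(C, x):
    """General form: min_k c_k x -- the largest omega this x supports."""
    return min(dot(c, x) for c in C)

x = [2/3, 2/3]           # a feasible point for this instance
assert feasible(A, x)
omega = objective(C, x)  # the pair (x, omega) is feasible in the
print(omega)             # equivalent form as well
```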

  4. Max-min LPs vs. packing LPs
Max-min LP: maximise min_{k ∈ K} c_k x subject to A x ≤ 1, x ≥ 0.
Packing LP: maximise c x subject to A x ≤ 1, x ≥ 0.
A, C, and c are nonnegative.

  5. Applications of max-min LPs
Maximising the lifetime of a wireless sensor network: data flows from the sensors (objectives) through battery-powered relays (constraints) to a sink; the task is to choose the optimal flows.
[Figure: sensor network with a sink, battery-powered relays, and sensors]

  6. Applications of max-min LPs
Maximising the lifetime of a wireless sensor network.
Abstraction that we study here: constraints i ∈ I with deg ≤ ∆_I, agents v ∈ V, and objectives k ∈ K with deg ≤ ∆_K.

  7. Applications of max-min LPs
Max-min linear program: maximise ω subject to A x ≤ 1, C x ≥ ω 1, x ≥ 0.
Mixed packing and covering problem: find x such that A x ≤ 1, C x ≥ 1, x ≥ 0.
A near-optimal solution to the max-min LP ⇒ a near-feasible solution to the mixed packing and covering problem (or a proof that there is no feasible solution).
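One way to see the implication on this slide: given a max-min solution (x, ω) with ω > 0, scaling x by 1/ω satisfies the covering constraints exactly, while the packing constraints are violated by at most a factor 1/ω. A minimal sketch on a hypothetical instance (the numbers are made up):

```python
# Sketch of the reduction on slide 7: a solution (x, omega) of the
# max-min LP yields a near-feasible point for the mixed packing and
# covering problem.  Hypothetical values, not from the talk.

def scale_to_cover(x, omega):
    """Given A x <= 1 and C x >= omega * 1 with omega > 0, return x'
    with C x' >= 1 and A x' <= (1/omega) * 1 (near-feasible packing)."""
    return [v / omega for v in x]

x, omega = [0.4, 0.4], 0.8           # assume A x <= 1 and C x >= 0.8 * 1
x_cover = scale_to_cover(x, omega)   # C x_cover >= 1, A x_cover <= 1.25
print(x_cover)
```

If even the optimal ω is below 1, the same scaling argument shows that the covering problem has no feasible solution at all.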

  8. Problem
Maximise ω subject to A x ≤ 1, C x ≥ ω 1, x ≥ 0.
Focus: distributed algorithms that run in constant time (local algorithms).
The running time may depend on the parameters ∆_I, ∆_K, etc., but must be independent of the number of nodes.
[Figure: constraint i with a_i x ≤ 1 and deg(i) ≤ ∆_I; agent v with variable x_v; objective k with c_k x ≥ ω and deg(k) ≤ ∆_K]

  9. Old results
Old negative result:
• Approximation factor ∆_I (1 − 1/∆_K) is impossible.
Old positive results:
• Approximation factor ∆_I is easy (Papadimitriou–Yannakakis 1993).
• Factor ∆_I (1 − 1/∆_K) + ε is possible in some special cases.
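The "easy" factor-∆_I result can be realised by a purely local safe scheme in which every agent looks only at its adjacent constraints and picks a value that can never violate any of them. The sketch below illustrates the idea on a made-up instance; it is not necessarily the exact construction of Papadimitriou and Yannakakis.

```python
# A purely local "safe" solution: agent v caps x_v so that its share
# of every adjacent constraint i is at most 1/deg(i); summing over the
# deg(i) agents of constraint i then gives a_i x <= 1.
# Hypothetical instance, not from the talk.

A = [[1.0, 2.0, 0.0],    # constraint rows a_i
     [0.0, 1.0, 1.0]]

def local_safe(A):
    n = len(A[0])
    deg = [sum(1 for a in row if a > 0) for row in A]  # deg(i)
    x = []
    for v in range(n):
        # agent v only sees the constraints i with a_iv > 0
        bounds = [1.0 / (row[v] * deg[i])
                  for i, row in enumerate(A) if row[v] > 0]
        x.append(min(bounds))
    return x

x = local_safe(A)
# every constraint is satisfied: each of its deg(i) terms is <= 1/deg(i)
assert all(sum(a * v for a, v in zip(row, x)) <= 1 + 1e-9 for row in A)
print(x)
```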

  10. New results
Old negative result:
• Approximation factor ∆_I (1 − 1/∆_K) is impossible.
New positive result:
• Approximation factor ∆_I (1 − 1/∆_K) + ε is possible for any constant ε > 0.
Matching upper and lower bounds!

  11. New results
The tight bound ∆_I (1 − 1/∆_K) + ε holds for any combination of these assumptions:
• anonymous networks or unique identifiers
• 0/1 coefficients in A, C or arbitrary nonnegative numbers
• one nonzero per column in A, C or arbitrary structure

  12. Local reductions
It is enough to solve the following special case:
• The communication graph is an (infinite) tree.
• The degree of each constraint is 2.
• The degree of each objective is at least 2.
• Each agent is adjacent to at least one constraint.
• Each agent is adjacent to exactly one objective.
The general result then follows by a series of local reductions.

  13. Local reductions
Hence we focus on instances with the following structure:
[Figure: alternating layers of constraints, agents, and objectives]

  14. Local reductions
An example:
[Figure: an example instance with constraints, agents, and objectives]

  15. Algorithm
How to solve it? We begin with a thought experiment...
[Figure: an instance with constraints, agents, and objectives]

  16. Two roles: “up” and “down”
What if we could partition the agents into two sets so that exactly one up-agent is adjacent to each constraint or objective?
[Figure: constraints, up-agents, down-agents, and objectives]

  17. Layers
Then we could also organise the graph into layers: constraint, up-agent, objective, down-agent, constraint, ...

  18. Layers
Solve by using the layers:
• Message propagation upwards.
• Use the shifting strategy.
• Remove slack: down-agents choose large values, up-agents choose small values.

  19. Layers
Result: a globally consistent solution, a (1 + ε)-approximation.
But we had to assume that the agents are partitioned into two sets, “up” and “down”!

  20. Trick
Useful property: the output of a node depends only on its own role (up or down).
Consider both roles and take the average!
A lucky coincidence: the approximation guarantee weakens only by the factor ∆_I (1 − 1/∆_K).
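The averaging step is safe because the feasible region of the packing constraints is convex: the average of two feasible vectors is again feasible. A minimal sketch with made-up numbers (the instance and role outputs below are hypothetical):

```python
# The averaging trick relies on convexity: if A x_up <= 1 and
# A x_down <= 1, then A ((x_up + x_down) / 2) <= 1 as well, so each
# agent can safely average the outputs of its two possible roles.
# Hypothetical instance for illustration only.

A = [[1.0, 1.0],
     [0.5, 1.5]]
x_up   = [1.0, 0.0]   # output when the agent plays the "up" role
x_down = [0.0, 0.5]   # output when the agent plays the "down" role

x_avg = [(u + d) / 2 for u, d in zip(x_up, x_down)]

for row in A:
    assert sum(a * v for a, v in zip(row, x_avg)) <= 1 + 1e-9
print(x_avg)
```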

  21. Summary
Problem: maximise ω subject to A x ≤ 1, C x ≥ ω 1, x ≥ 0, where A and C are nonnegative matrices.
Distributed setting: constraints with deg ≤ ∆_I, agents with deg = O(1), objectives with deg ≤ ∆_K; an edge corresponds to a positive coefficient.
Approximability with constant-time distributed algorithms:
– new positive result: ∆_I (1 − 1/∆_K) + ε
– earlier negative result: ∆_I (1 − 1/∆_K)
