Walrasian tatonnement • This process always terminates, otherwise prices would grow to infinity. • When it ends, in the limit ε → 0 we have S_i ∈ D(v_i; p). • What else? The only condition left is that ∪_i S_i = [n]. • For that we need each tentative set to survive inside some demanded set: S_i ⊆ X_i ∈ D(v_i; p). • Definition: a valuation v satisfies gross substitutes if for all prices p ≤ p′ and S ∈ D(v; p) there is X ∈ D(v; p′) s.t. S ∩ {i : p_i = p′_i} ⊆ X. • With this definition, the algorithm always maintains a partition.
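As a concrete illustration, here is a minimal sketch of the ascending-price process for unit-demand buyers (one of the simplest GS classes). The integer valuations, the price increment eps, and all names are illustrative assumptions, not part of the slides.

```python
# Sketch of Walrasian tatonnement for unit-demand buyers (a simple GS class).
# Assumptions: integer valuations and price increment eps = 1 (illustrative).

def demand(v, prices):
    """Unit-demand buyer: most profitable single item, or None if none is."""
    best, best_u = None, 0
    for j, vj in enumerate(v):
        u = vj - prices[j]
        if u > best_u:
            best, best_u = j, u
    return best

def tatonnement(valuations, n, eps=1):
    prices = [0] * n
    while True:
        chosen = [demand(v, prices) for v in valuations]
        # items demanded by more than one buyer are over-demanded
        over = [j for j in range(n)
                if sum(1 for c in chosen if c == j) > 1]
        if not over:
            return prices, chosen
        for j in over:
            prices[j] += eps

# Two buyers both prefer item 0; its price rises until one switches.
valuations = [[10, 3], [8, 6]]
prices, alloc = tatonnement(valuations, 2)  # -> prices [3, 0], alloc [0, 1]
```

The returned prices support the final assignment: each buyer holds a utility-maximizing item, so the process stops with a partition of the demanded items.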
Walrasian equilibrium • Theorem [Kelso-Crawford]: If all agents have GS valuations, then a Walrasian equilibrium always exists. • Some examples of GS valuations: • additive functions: v(S) = Σ_{i∈S} v(i) • unit-demand: v(S) = max_{i∈S} v(i) • matching valuations: v(S) = max matching from S • matroid-matching • Open: GS ?= matroid-matching
Walrasian equilibrium • Theorem [Kelso-Crawford]: If all agents have GS valuations, then a Walrasian equilibrium always exists. • Theorem [Gul-Stacchetti]: If a class of valuations C contains all unit-demand valuations and a Walrasian equilibrium always exists for C, then C ⊆ GS.
Valuated Matroids • Given vectors v_1, …, v_m ∈ Q^n, define ψ_p(v_1, …, v_n) = k if det(v_1, …, v_n) = p^{−k} · a/b with a, b prime to p. • Question in algebra: minimize ψ_p(v_1, …, v_n) over v_i ∈ V subject to det(v_1, …, v_n) ≠ 0. • Solution is a greedy algorithm: start with any non-degenerate set, go over each item, and replace it by the one that minimizes ψ_p(v_1, …, v_n). • [Dress-Wenzel]: the Grassmann-Plücker relations look like the matroid exchange condition.
Valuated Matroids • Definition: a function v : ([n] choose k) → R is a valuated matroid if “greedy is optimal”.
Matroidal maps • Definition: a function v : 2^[n] → R is a matroidal map if for every p ∈ R^n a set in D(v; p) can be obtained by the greedy algorithm: S_0 = ∅ and S_t = S_{t−1} ∪ {i_t} for i_t ∈ argmax_i v_p(i | S_{t−1}). • Definition: a subset system M ⊆ 2^[n] is a matroid if for every p ∈ R^n the problem max_{S∈M} p(S) can be solved by the greedy algorithm.
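The greedy rule in the definition can be sketched directly. Below, the valuation is taken to be additive (one of the GS examples), and the function names are illustrative; for a genuine matroidal map this greedy reaches a set in D(v; p).

```python
# Greedy computation of a demand set for a matroidal map (sketch).
# v is a set function; at each step we add the item with the largest
# price-adjusted marginal v_p(i | S) = v(S + i) - v(S) - p_i, while positive.

def greedy_demand(v, p, n):
    S = set()
    while True:
        gains = {i: v(S | {i}) - v(S) - p[i] for i in range(n) if i not in S}
        if not gains:
            return S
        i_best = max(gains, key=gains.get)
        if gains[i_best] <= 0:      # no item has positive marginal: stop
            return S
        S = S | {i_best}

# Additive valuation (a GS example): v(S) = sum of item values.
vals = [5, 2, 4]
v = lambda S: sum(vals[i] for i in S)
p = [3, 3, 1]
# greedy keeps exactly the items priced below their value: items 0 and 2
```

For additive valuations this reduces to "buy every item whose price is below its value", which is the correct demand set.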
Discrete Concavity • A function f : R^n → R is convex if for all p ∈ R^n, every local minimum of f_p(x) = f(x) − ⟨p, x⟩ is a global minimum. • Also, gradient descent converges for convex functions. • We want to extend this notion to functions on the hypercube v : 2^[n] → R (or on the lattice v : Z^n → R, or on other discrete sets such as the bases of a matroid).
Discrete Concavity • A function v : 2^[n] → R is discrete concave if for all p ∈ R^n all local maxima of v_p are global maxima. I.e., if v_p(S) ≥ v_p(S ∪ i), ∀ i ∉ S; v_p(S) ≥ v_p(S \ j), ∀ j ∈ S; v_p(S) ≥ v_p(S ∪ i \ j), ∀ i ∉ S, j ∈ S; then v_p(S) ≥ v_p(T), ∀ T ⊆ [n]. In particular, local search always converges. • [Murota ’96]: M-concave functions (generalize valuated matroids); [Murota-Shioura ’99]: M♮-concave functions.
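The add/remove/swap moves above are exactly what local search needs. A hedged sketch, with illustrative names and a unit-demand test case:

```python
# Local search over the hypercube using add/remove/swap moves.
# For discrete concave (= GS) valuations, any local maximum of
# v_p(S) = v(S) - p(S) is a global maximum, so this finds a demand set.

def local_search(v, p, n, S=frozenset()):
    def vp(T):
        return v(T) - sum(p[j] for j in T)
    improved = True
    while improved:
        improved = False
        outside = [i for i in range(n) if i not in S]
        moves = ([S | {i} for i in outside] +                    # add
                 [S - {j} for j in S] +                          # remove
                 [(S - {j}) | {i} for i in outside for j in S])  # swap
        for T in moves:
            if vp(T) > vp(S):
                S, improved = frozenset(T), True
                break
    return S

# Unit-demand example: the best single item net of its price is item 1.
vals = [3, 7, 5]
unit = lambda S: max((vals[i] for i in S), default=0)
```

Starting from the empty set, the search adds item 0, then discovers the strictly better set {1} via an add followed by a remove, and stops there.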
Equivalence • [Fujishige-Yang]: A function v : 2^[n] → R is gross substitutes iff it is a matroidal map iff it is discrete concave. • valuated matroids [Dress-Wenzel ’91]: generalize the Grassmann-Plücker relations • gross substitutes [Kelso-Crawford ’82]: necessary/“sufficient” condition for price adjustment to converge • M-concave functions / matroidal maps [Murota-Shioura ’99]: generalize convexity to discrete domains • In particular, some S ∈ D(v; p) can be computed in poly-time. • Proof through discrete differential equations.
Discrete Differential Equations • Given a function v : 2^[n] → R we define the discrete derivative with respect to i ∈ [n] as the function ∂_i v : 2^{[n]\i} → R given by ∂_i v(S) = v(S ∪ i) − v(S) (another name for the marginal). • If we apply it twice we get: ∂_{ij} v(S) := ∂_j ∂_i v(S) = v(S ∪ ij) − v(S ∪ i) − v(S ∪ j) + v(S). • Submodularity: ∂_{ij} v(S) ≤ 0.
Discrete Differential Equations • [Reijnierse, van Gellekom, Potters]: A function v : 2^[n] → R is gross substitutes iff it satisfies ∂_{ij} v(S) ≤ max(∂_{ik} v(S), ∂_{kj} v(S)) ≤ 0, a condition on the discrete Hessian. • Idea: a function is in GS iff there is no price p such that D(v; p) = {S, S ∪ ij} or D(v; p) = {S ∪ k, S ∪ ij}. If v is not submodular, we can construct a price of the first type. If ∂_{ij} v(S) > max(∂_{ik} v(S), ∂_{kj} v(S)), then we can find a certificate of the second type.
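Since the discrete-Hessian condition quantifies over finitely many (S, i, j, k), it yields a brute-force GS test. This is an exponential-time sketch for intuition only; the unit-demand and complements examples are illustrative.

```python
# Brute-force test of the [Reijnierse-van Gellekom-Potters] condition:
# v is GS iff  d_ij v(S) <= max(d_ik v(S), d_kj v(S)) <= 0
# for all S and distinct i, j, k outside S.  Exponential; intuition only.
from itertools import combinations

def d2(v, S, i, j):
    """Second discrete derivative of v at S in directions i, j."""
    return v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)

def is_gross_substitutes(v, n):
    items = range(n)
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(items, r)]
    for S in subsets:
        out = [i for i in items if i not in S]
        # submodularity: d_ij v(S) <= 0
        for i, j in combinations(out, 2):
            if d2(v, S, i, j) > 1e-9:
                return False
        # triple condition on the discrete Hessian (d2 is symmetric,
        # so each unordered pair vs. the third element suffices)
        for i, j, k in combinations(out, 3):
            for a, b, c in [(i, j, k), (i, k, j), (j, k, i)]:
                if d2(v, S, a, b) > max(d2(v, S, a, c), d2(v, S, c, b)) + 1e-9:
                    return False
    return True

# Unit-demand valuations are GS; a pure-complements valuation is not.
vals = [4, 2, 7]
unit = lambda S: max((vals[i] for i in S), default=0)
comp = lambda S: 1.0 if {0, 1} <= set(S) else 0.0
```

The complements valuation already fails submodularity at S = ∅, which matches the slide's "price of the first type" argument.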
Algorithmic Problems • Welfare problem: given m agents with v_1, …, v_m : 2^[n] → R, find a partition S_1, …, S_m of [n] maximizing Σ_i v_i(S_i). • Verification problem: given a partition S_1, …, S_m, decide whether it is optimal. • Walrasian prices: given the optimal partition S*_1, …, S*_m, find a price vector p such that S*_i ∈ argmax_S v_i(S) − p(S).
Algorithmic Problems • Techniques: • Tatonnement • Linear Programming • Gradient Descent • Cutting Plane Methods • Combinatorial Algorithms
Linear Programming • [Nisan-Segal]: Formulate the welfare problem as an LP. Primal: max Σ_{i,S} v_i(S) x_{iS} s.t. Σ_S x_{iS} = 1 ∀ i ∈ [m]; Σ_i Σ_{S∋j} x_{iS} = 1 ∀ j ∈ [n]; x_{iS} ∈ [0, 1]. Dual: min Σ_i u_i + Σ_j p_j s.t. u_i ≥ v_i(S) − Σ_{j∈S} p_j ∀ i, S; p_j ≥ 0, u_i ≥ 0. • For GS, the IP is integral: W_IP ≤ W_LP = W_D-LP. Consider a Walrasian equilibrium, with p the Walrasian prices and u the agent utilities. It is a feasible solution to the dual, so W_D-LP ≤ W_eq = W_IP. • In general, a Walrasian equilibrium exists iff the LP has an integral optimal solution. • Separation oracle for the dual: checking u_i ≥ max_S v_i(S) − p(S) is exactly the demand-oracle problem. • Walrasian equilibrium exists + demand oracle in poly-time ⇒ welfare problem solvable in poly-time. • [Roughgarden, Talgam-Cohen]: Use complexity theory to show non-existence of equilibrium, e.g., for budget-additive valuations.
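The configuration LP can be written down explicitly on a toy instance. This sketch assumes SciPy is available; variables enumerate all 2^n bundles per agent, so it only scales to tiny n, and all names are illustrative.

```python
# The configuration LP for the welfare problem (sketch, tiny n only).
# For GS valuations the LP optimum is integral and equals the optimal welfare.
from itertools import combinations
from scipy.optimize import linprog

def welfare_lp(valuations, n):
    m = len(valuations)
    bundles = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    nb = len(bundles)
    nv = m * nb                              # variable x[i,S] at index i*nb + s
    c = [-valuations[i](S) for i in range(m) for S in bundles]  # maximize
    A_eq, b_eq = [], []
    for i in range(m):                       # each agent gets one bundle
        A_eq.append([1.0 if k // nb == i else 0.0 for k in range(nv)])
        b_eq.append(1.0)
    for j in range(n):                       # each item assigned exactly once
        A_eq.append([1.0 if j in bundles[k % nb] else 0.0 for k in range(nv)])
        b_eq.append(1.0)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return -res.fun

# Two additive (hence GS) agents over two items:
v1 = lambda S: sum({0: 3, 1: 1}[j] for j in S)
v2 = lambda S: sum({0: 1, 1: 2}[j] for j in S)
# optimal welfare: item 0 to agent 1, item 1 to agent 2 -> welfare 5
```

Because both valuations are GS, the fractional optimum already sits at an integral vertex, matching the integrality claim above.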
Gradient Descent • We can Lagrangify the dual constraints and obtain the following convex potential function: φ(p) = Σ_i max_S [v_i(S) − p(S)] + Σ_j p_j. • Theorem: the Walrasian prices (when they exist) are exactly the minimizers of φ. • Subgradient: ∂_j φ(p) = 1 − Σ_i 1[j ∈ S_i], where S_i ∈ D(v_i; p). • Gradient descent: increase the price of over-demanded items and decrease the price of under-demanded items. • Tatonnement: p_j ← p_j − ε · sgn ∂_j φ(p).
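The sign-subgradient update can be sketched as follows. The demand oracle here is brute-force enumeration, prices are clamped at zero to respect the dual constraint p ≥ 0, and the step size and round count are illustrative choices.

```python
# Minimizing  phi(p) = sum_i max_S [v_i(S) - p(S)] + sum_j p_j
# by sign-subgradient (tatonnement) steps.  Sketch with brute-force demand.
from itertools import combinations

def a_demand(v, p, n):
    """Some maximizer of v(S) - p(S), by enumeration."""
    best, best_u = frozenset(), v(frozenset())
    for r in range(1, n + 1):
        for c in combinations(range(n), r):
            S = frozenset(c)
            u = v(S) - sum(p[j] for j in S)
            if u > best_u:
                best, best_u = S, u
    return best

def sgd_tatonnement(valuations, n, eps=0.5, rounds=50):
    p = [0.0] * n
    for _ in range(rounds):
        demands = [a_demand(v, p, n) for v in valuations]
        for j in range(n):
            g = 1 - sum(1 for S in demands if j in S)   # subgradient of phi
            if g != 0:
                p[j] = max(0.0, p[j] - eps * (1 if g > 0 else -1))
    return p

# Two unit-demand buyers competing for item 0:
b1 = lambda S: max(([10, 3][i] for i in S), default=0)
b2 = lambda S: max(([8, 6][i] for i in S), default=0)
```

Item 0's price rises in eps-steps until buyer 2 switches to item 1, after which the subgradient vanishes and the prices stay put.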
How to access the input • Value oracle: given i and S, query v_i(S). • Demand oracle: given i and p, query some S ∈ D(v_i, p). • Aggregate demand oracle: given p, query Σ_i S_i with S_i ∈ D(v_i, p).
Comparing Methods

method          | oracle           | running-time
tatonnement/GD  | aggregate demand | pseudo-poly
linear program  | demand/value     | weakly-poly
cutting plane   | aggregate demand | weakly-poly
combinatorial   | value            | strongly-poly

• [PL-Wong]: We can compute an exact equilibrium with Õ(n) calls to an aggregate demand oracle. • [Murota]: We can compute an exact equilibrium for gross substitutes in Õ((mn + n³) T_V) time, where T_V is the cost of a value query.
Computing Walrasian prices • Given a partition S_1, …, S_m we want to find prices such that S_i ∈ argmax_S v_i(S) − p(S). • For GS, we only need to check that no buyer wants to add, remove, or swap items. Build the exchange graph with edge weights: w_{jk} = v_i(S_i) − v_i(S_i ∪ k \ j) (swap j ∈ S_i for k); w_{φ_i k} = v_i(S_i) − v_i(S_i ∪ k) (add k); w_{j φ_{i0}} = v_i(S_i) − v_i(S_i \ j) (remove j); w_{φ_i φ_{i0}} = 0.
Computing Walrasian prices • Theorem: the allocation is optimal iff the exchange graph has no negative cycle. • Proof: if there is no negative cycle, the distance is well defined. Let p_j = − dist(φ, j); then dist(φ, k) ≤ dist(φ, j) + w_{jk}, i.e., v_i(S_i) ≥ v_i(S_i ∪ k \ j) − p_k + p_j. Since S_i is locally optimal, it is globally optimal. Conversely, Walrasian prices are a dual certificate showing that no negative cycle exists. • Nice consequence: Walrasian prices form a lattice.
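The prices-from-distances argument runs on plain Bellman-Ford. A hedged sketch; the toy graph below is illustrative and not built from an actual allocation.

```python
# Verifying optimality and extracting prices via shortest paths (sketch):
# run Bellman-Ford from the source node of the exchange graph; if there is
# no negative cycle, p_j = -dist(source, j) are supporting (Walrasian) prices.

def bellman_ford(nodes, edges, src):
    """edges: list of (u, v, w).  Returns dist dict, or None on a negative cycle."""
    dist = {u: float('inf') for u in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v] - 1e-9:
            return None          # negative cycle: allocation not optimal
    return dist

# Toy graph: source 's' and items 1, 2; no negative cycle.
nodes = ['s', 1, 2]
edges = [('s', 1, -2.0), ('s', 2, -1.0), (1, 2, 1.5), (2, 1, 2.0)]
dist = bellman_ford(nodes, edges, 's')
prices = {j: -dist[j] for j in (1, 2)}   # -> {1: 2.0, 2: 1.0}
```

A `None` return is exactly the negative-cycle certificate of the theorem: the cycle spells out a profitable chain of swaps, so the allocation was not optimal.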
Incremental Algorithm • For each t = 1..n we solve problem W_t: find the optimal allocation of the items [t] = {1..t} to the m buyers. • Problem W_1 is easy. • Assume now that we solved W_t, getting an allocation S_1, …, S_m and a certificate p = the maximal Walrasian prices. The price-adjusted exchange-graph weights are: w_{jk} = v_i(S_i) − v_i(S_i ∪ k \ j) + p_k − p_j; w_{j φ_{i0}} = v_i(S_i) − v_i(S_i \ j) − p_j; w_{φ_i k} = v_i(S_i) − v_i(S_i ∪ k) + p_k.
Incremental Algorithm • Algorithm: compute a shortest path from φ to item t + 1 in the exchange graph. • Update the allocation by implementing the swaps along this path.