Multi-Item Mechanisms without Item-Independence: Learnability via Robustness

How to sell this rock to optimize revenue? Bidders' values v₁, …, vₙ ∼ D.
❑ Bayesian assumption: the seller knows D.
❑ [Myerson'81]: characterizes the optimal single-item mechanism when v₁, …, vₙ are independent.
❑ Where exactly did the priors come from?
  § A: from market research or from observing bidder behavior in prior auctions.
❑ Sample-Based Mechanism Design: learn approximately optimal or up-to-ε optimal auctions given sample access to D.
  § Single-item auctions: [Elkind'07, Cole-Roughgarden'14, Mohri-Medina'14, Huang et al.'14, Morgenstern-Roughgarden'15, Devanur et al.'16, Roughgarden-Schrijvers'16, Gonczarowski-Nisan'17, Guo et al.'19].
  § Multi-item auctions: positive results known only under the item-independence assumption.
    • PAC-learning based: [Morgenstern-Roughgarden'15, Syrgkanis'17, C.-Daskalakis'17, Gonczarowski-Weinberg'18].
    • [Goldner-Karlin'16]: a direct sample-based approach, but tailored to Yao's mechanism [Yao'16].
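The Myerson'81 benchmark above is easiest to see in the simplest case. A minimal sketch (all numbers illustrative, not from the slides): for a single bidder with value drawn from Uniform[0, 1], the optimal mechanism is a posted price p maximizing p · (1 − F(p)), found here by grid search.

```python
# Single bidder, single item, value v ~ Uniform[0, 1] (illustrative choice).
# The optimal mechanism is a posted price maximizing p * (1 - F(p)) = p * (1 - p).

def revenue(p, cdf):
    """Expected revenue from posting price p to a bidder with value CDF `cdf`."""
    return p * (1.0 - cdf(p))

def uniform_cdf(v):
    # CDF of Uniform[0, 1], clamped outside the support.
    return min(max(v, 0.0), 1.0)

# Grid search over candidate prices.
prices = [i / 1000 for i in range(1001)]
best_p = max(prices, key=lambda p: revenue(p, uniform_cdf))
# Optimal price is 1/2, with expected revenue 1/4.
```

For Uniform[0, 1] this recovers the textbook monopoly reserve p = 1/2; the same grid search applies to any one-dimensional CDF.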
Goals of Our Paper

❑ Three goals of this paper:
1. Robustness: beyond sample access; suppose we only know D̂ ≈ D.
   (Figure: within the space of distributions, the true D lies in a ball around the known D̂; in the space of mechanisms, the goal is a single ℳ that is near-optimal for every distribution in that ball.)
2. Modular approach: decouple the Inference and the Mechanism Design components.
   § The PAC-learning approach requires joint consideration of Inference and Mechanism Design.
   § Meta-Theorem [This paper]: Robustness + Learning D̂ ⟹ sample complexity for learning an up-to-ε optimal, approximately BIC mechanism.
3. Item dependence: up-to-ε optimal and approximately Bayesian Incentive Compatible mechanisms for dependent items captured by a Bayesian network or a Markov Random Field.
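One way the "Learning D̂" half of such a modular pipeline can be instantiated for a one-dimensional marginal (a sketch, not the paper's algorithm): by the Dvoretzky-Kiefer-Wolfowitz inequality, O(log(1/δ)/ε²) samples make the empirical CDF ε-close to the true CDF in Kolmogorov distance with probability at least 1 − δ.

```python
import bisect
import math
import random

def empirical_cdf(samples):
    """Return the empirical CDF of a list of real-valued samples."""
    s = sorted(samples)
    n = len(s)
    return lambda x: bisect.bisect_right(s, x) / n  # fraction of samples <= x

def dkw_sample_size(eps, delta):
    """Samples sufficient for sup_x |F_hat(x) - F(x)| <= eps w.p. >= 1 - delta,
    by the Dvoretzky-Kiefer-Wolfowitz inequality."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

random.seed(0)
eps, delta = 0.05, 0.01
n = dkw_sample_size(eps, delta)                  # ~1060 samples
samples = [random.random() for _ in range(n)]    # true D = Uniform[0, 1] (illustrative)
F_hat = empirical_cdf(samples)

# Kolmogorov distance to the true uniform CDF, checked on a grid.
dist = max(abs(F_hat(x / 100) - x / 100) for x in range(101))
# dist <= eps holds with probability >= 1 - delta.
```

The learned D̂ (here, F_hat) is then handed to the mechanism-design component, which only needs the Kolmogorov-distance guarantee, not the samples themselves.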
Max-Min Robustness Model

Given an approximate distribution D̂ᵢ for each bidder i:
• True world: ∀i: d(Dᵢ, D̂ᵢ) ≤ ε (e.g. Kolmogorov, Lévy, or Prokhorov distance).
• Goal: find a mechanism ℳ such that, for all D with d(Dᵢ, D̂ᵢ) ≤ ε for every i:
  Rev(ℳ, ×ᵢ Dᵢ) ≥ OPT(×ᵢ Dᵢ) − poly(n, m, ε) · H.
❑ [This paper]: such an ℳ exists if you allow approximately-BIC mechanisms!
• Allows arbitrary dependency between the items.

Notation: D̂ is the given distribution and D is the true but unknown distribution; M is the mechanism designed based only on D̂; there are n bidders and m items, and any bidder's value for any item is in [0, H].

Setting         | Distance d | Robustness                                  | Continuity                                | Mechanism
Single item     | Kolmogorov | Rev(M, D) ≥ OPT(D̂) − O(nε)·H               | OPT(D) − OPT(D̂) ≤ O(nε)·H                | M is IR and DSIC
Single item     | Lévy       | Rev(M, D) ≥ OPT(D̂) − O(nε)·H               | OPT(D) − OPT(D̂) ≤ O(nε)·H                | M is IR and DSIC
Multiple items  | TV         | Rev(M, D) ≥ OPT(D̂) − O(nmε)·H              | OPT(D) − OPT(D̂) ≤ O(nmε)·H               | M is IR and O(mεH)-BIC
Multiple items  | Prokhorov  | Rev(M, D) ≥ OPT(D̂) − O(n(mε + √(mε)))·H    | OPT(D) − OPT(D̂) ≤ O(n(mε + √(mε)))·H     | M is IR and ηH-BIC, where η = O(mε + √(mε))
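A minimal single-bidder, single-item (n = m = 1) sketch of the Kolmogorov-distance guarantee, with illustrative distributions of my own choosing: if d_K(D, D̂) ≤ ε and values lie in [0, H], then for any posted price p, |p·(1 − F(p)) − p·(1 − F̂(p))| = p·|F(p) − F̂(p)| ≤ εH, so the price designed from D̂ loses at most εH on the true D.

```python
def clamp(x):
    return min(max(x, 0.0), 1.0)

H, eps = 1.0, 0.05
F_hat = lambda v: clamp(v / H)        # believed prior: Uniform[0, H]
F_true = lambda v: clamp(v / H + eps) # true prior, d_K(D, D_hat) = eps

def rev(p, F):
    """Posted-price revenue against a bidder with value CDF F."""
    return p * (1.0 - F(p))

grid = [i * H / 1000 for i in range(1001)]
p_hat = max(grid, key=lambda p: rev(p, F_hat))   # price designed from D_hat only
opt_hat = rev(p_hat, F_hat)                      # OPT(D_hat)
opt_true = max(rev(p, F_true) for p in grid)     # OPT(D)

# Robustness: the D_hat-optimal price is near-optimal on the true D.
assert rev(p_hat, F_true) >= opt_hat - eps * H
# Continuity: the two optimal revenues differ by at most eps * H.
assert abs(opt_true - opt_hat) <= eps * H
```

With these numbers, p_hat = 0.5 earns 0.225 on the true distribution versus OPT(D) ≈ 0.2256, well within the εH = 0.05 slack.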
Learning Auctions with Dependent Items

❑ Arbitrary dependence requires exponentially many samples [Dughmi et al.'14].
❑ Parametrized sample complexity that degrades gracefully with the degree of dependence?
❑ Two most prominent graphical models: Bayesian Networks (Bayesnets) and Markov Random Fields (MRFs). Note that they are fully general if the graphs on which they are defined are sufficiently dense.
  § Degree of dependence of these models: the maximum size of a hyperedge in an MRF, and the largest in-degree in a Bayesnet.
  § We allow latent variables, i.e. unobserved variables in the distribution.
  § Bayesnet: a directed acyclic graph; Pr[x₁, …, x_k] = ∏ᵢ Pr[xᵢ | x_pa(i)].
    (Figure: "State of Residence" as the parent of the values for umbrella, sunglasses, skis, and surfboard. Image credit: internet.)
  § MRF: an undirected graph; Pr[x] ∝ exp(∑_{e∈E} ψ_e(x_e)).
  § Sample complexity: poly(n, m, 1/ε) · |Σ|^O(d) for both models, where Σ is the alphabet and d is the degree of dependence.
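A sketch of the slide's Bayesnet example, with entirely made-up probabilities: a latent "state of residence" is the lone parent of every item value, so the degree of dependence (largest in-degree) is 1, and item values are independent only conditionally on the latent state.

```python
import random

random.seed(1)

STATES = {"WA": 0.5, "AZ": 0.5}  # Pr[state]; illustrative numbers
# Pr[high value for item | state]; illustrative numbers, not from the paper.
COND = {
    "WA": {"umbrella": 0.9, "sunglasses": 0.2, "skis": 0.6, "surfboard": 0.1},
    "AZ": {"umbrella": 0.1, "sunglasses": 0.9, "skis": 0.1, "surfboard": 0.3},
}

def sample_valuation():
    """Ancestral sampling: draw the root, then each child given its parent."""
    state = random.choices(list(STATES), weights=list(STATES.values()))[0]
    return {item: (1.0 if random.random() < p else 0.0)
            for item, p in COND[state].items()}

draws = [sample_valuation() for _ in range(10000)]
# Marginally, umbrella and sunglasses values are strongly negatively
# correlated, even though they are independent given the latent state.
```

This is exactly the kind of structured dependence the slide's parametrized sample complexity targets: the joint distribution over 4 items is described by one root and 4 conditional tables rather than a full 2⁴-entry table.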