Algorithms and Limits in Statistical Inference
Jayadev Acharya, Massachusetts Institute of Technology
Statistical Inference

Given samples from an unknown source p:
• Test if p has a property C? (Is p monotone, a product distribution, etc.?)
• Learn p? (mixtures of Gaussians, log-concave distributions, etc.)

Traditional statistics: samples → ∞
- Pearson (1894), …, Redner, Walker (1984), …, Dasgupta (1999), …, Moitra, Valiant (2010), …
- Pearson's chi-squared test, Hoeffding's test, GLRT, …
- Devroye, Lugosi (2001), Bagnoli, Bergstrom (2005), …; error rates: Wellner, Samworth, et al.
- Batu et al. (2000, '01, '04), Paninski (2008), …: sample and computational efficiency

Open questions:
• Density estimation of mixtures of Gaussians with information-theoretically optimal samples and linear run time?
• Sample-optimal and efficient testers for monotonicity and independence over [n] × [n] × [n]?
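The distance underlying these testing questions is total variation: p either has the property or is ε-far from every distribution that does. As a minimal illustration (this helper is not from the talk), d_TV between two discrete distributions is half their ℓ₁ distance:

```python
def total_variation(p, q):
    """d_TV(p, q) = (1/2) * sum_i |p_i - q_i| for discrete distributions
    given as probability vectors over the same ordered support."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# A distribution at distance 0.25 from uniform over two symbols:
print(total_variation([0.5, 0.5], [0.75, 0.25]))  # 0.25
```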
Illustrative Results: Learning [Acharya-Diakonikolas-Li-Schmidt '15]

Agnostic univariate density estimation with t-piece, degree-d polynomials:
O(t(d+1)/ε²) samples, O(t·poly(d)/ε²) run time.

First near-sample-optimal, linear-time algorithms for learning:
• Piecewise-flat distributions
• Mixtures of Gaussians
• Mixtures of log-concave distributions
• Densities in Besov spaces, …
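To make the model class concrete, the d = 0 case of the framework above is a t-piece piecewise-constant (histogram) density. The sketch below fits one by simple binning; it is only an illustration of the hypothesis class, not the ADLS '15 algorithm (which chooses the pieces adaptively), and the equal-width bins are an assumption for brevity:

```python
import random

def fit_piecewise_constant(samples, t, lo=0.0, hi=1.0):
    """Fit a t-piece piecewise-constant density on [lo, hi] via histogram
    counts. Illustrative only: equal-width bins, not adaptively chosen."""
    width = (hi - lo) / t
    counts = [0] * t
    for x in samples:
        i = min(int((x - lo) / width), t - 1)  # clamp x == hi into last bin
        counts[i] += 1
    n = len(samples)
    # Normalize counts to density heights so the pieces integrate to 1.
    return [c / (n * width) for c in counts]

random.seed(0)
# Samples from X = U^2, whose density 1/(2*sqrt(x)) is decreasing on (0, 1).
samples = [random.random() ** 2 for _ in range(10000)]
heights = fit_piecewise_constant(samples, t=5)
total = sum(h * (1.0 / 5) for h in heights)
print(total)  # the fitted pieces integrate to 1
```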
Illustrative Results: Testing [Acharya-Daskalakis-Kamath '15]

Sample complexity to test whether p ∈ C, or d_TV(p, C) > ε.
For many classes, the optimal complexity is √|domain|/ε².

• Applications:
  • Independence, monotonicity over [n]^d: Θ(n^{d/2}/ε²)
  • Log-concavity, unimodality over [n]: Θ(√n/ε²)
• Based on:
  • a new χ²-ℓ₂ test
  • a modified Pearson chi-squared statistic
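A modified chi-squared statistic of the kind mentioned above can be sketched as follows: subtracting the count N_i from the squared deviation makes each term mean-zero when p = q (under Poisson sampling), which tames the variance of small-probability symbols. The specific distributions and the threshold-free comparison below are illustrative assumptions, not details from the ADK '15 paper:

```python
import random

def modified_chi2(counts, q, m):
    """Modified chi-squared statistic:
        sum_i ((N_i - m*q_i)^2 - N_i) / (m*q_i)
    where N_i is the observed count of symbol i among m samples and q is
    the hypothesized distribution. The -N_i correction centers the
    statistic at ~0 when the samples truly come from q."""
    return sum(((n_i - m * q_i) ** 2 - n_i) / (m * q_i)
               for n_i, q_i in zip(counts, q))

random.seed(1)
k, m = 10, 20000
q = [1.0 / k] * k  # null hypothesis: uniform over k symbols

# Counts for m samples drawn from the null itself.
uniform_counts = [0] * k
for _ in range(m):
    uniform_counts[random.randrange(k)] += 1

# Counts for m samples from a distribution at TV distance 0.25 from uniform.
far = [1.5 / k] * (k // 2) + [0.5 / k] * (k // 2)
far_counts = [0] * k
for _ in range(m):
    r, acc = random.random(), 0.0
    for i, p_i in enumerate(far):
        acc += p_i
        if r < acc:
            far_counts[i] += 1
            break

z_null = modified_chi2(uniform_counts, q, m)
z_far = modified_chi2(far_counts, q, m)
print(z_null < z_far)  # the far distribution yields a much larger statistic
```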