15-388/688 - Practical Data Science: Anomaly detection and mixture of Gaussians
J. Zico Kolter
Carnegie Mellon University
Spring 2018
Outline
Anomalies and outliers
Multivariate Gaussian
Mixture of Gaussians
What is an "anomaly"?
Two views of anomaly detection:
Supervised view: anomalies are whatever some user labels as anomalies
Unsupervised view: anomalies are outliers (points of low probability) in the data
In reality, you want a combination of both viewpoints: not all outliers are anomalies, but all anomalies should be outliers
This lecture focuses on the unsupervised view, but this is only part of the full picture
What is an outlier?
Outliers are points of low probability
Given a collection of data points $x^{(1)}, \ldots, x^{(m)}$, describe the points using some distribution, then find the points with the lowest $p(x^{(i)})$
Since we are considering points with no labels, this is an unsupervised learning algorithm (we could formulate it in terms of hypothesis, loss, and optimization, but for this lecture we'll focus on the probabilistic notation)
Outline
Anomalies and outliers
Multivariate Gaussian
Mixture of Gaussians
Multivariate Gaussian distributions
We have seen Gaussian distributions previously, but mainly focused on distributions over scalar-valued data $x \in \mathbb{R}$:
$$p(x; \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
Gaussian distributions generalize nicely to distributions over vector-valued random variables $X$ taking values in $\mathbb{R}^n$:
$$p(x; \mu, \Sigma) = |2\pi\Sigma|^{-1/2} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right) \equiv \mathcal{N}(x; \mu, \Sigma)$$
with parameters $\mu \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$, where $|\cdot|$ denotes the determinant of a matrix (also written $X \sim \mathcal{N}(\mu, \Sigma)$)
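To make the density formula concrete, here is a minimal NumPy sketch (my own addition, not from the slides) that evaluates $\mathcal{N}(x; \mu, \Sigma)$ directly; the particular $\mu$ and $\Sigma$ values simply match the example parameters used in the plots that follow.

# Minimal sketch of evaluating the multivariate Gaussian density with NumPy.
# The specific mu and Sigma values are illustrative, not course-provided code.
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Evaluate N(x; mu, Sigma) for a single point x in R^n."""
    diff = x - mu
    # |2*pi*Sigma|^{-1/2} * exp(-1/2 (x - mu)^T Sigma^{-1} (x - mu))
    norm_const = np.linalg.det(2 * np.pi * Sigma) ** -0.5
    quad = diff @ np.linalg.solve(Sigma, diff)
    return norm_const * np.exp(-0.5 * quad)

mu = np.array([3.0, -4.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
print(gaussian_pdf(np.array([3.0, -4.0]), mu, Sigma))   # density at the mean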
Properties of multivariate Gaussians
Mean and variance:
$$\mathbf{E}[X] = \int x \, \mathcal{N}(x; \mu, \Sigma) \, dx = \mu$$
$$\mathbf{Cov}[X] = \int (x - \mu)(x - \mu)^T \, \mathcal{N}(x; \mu, \Sigma) \, dx = \Sigma$$
(these are not obvious)
Creation from univariate Gaussians: for $x \in \mathbb{R}^n$, if $p(x_i) = \mathcal{N}(x_i; 0, 1)$ (i.e., each element $x_i$ is an independent univariate Gaussian), then $y = Ax + b$ is also normal, with distribution $Y \sim \mathcal{N}(\mu = b, \Sigma = AA^T)$
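The "creation from univariate Gaussians" property is easy to sanity-check numerically; the sketch below (an illustration I am adding, not course code, with made-up values for $A$ and $b$) draws standard normal vectors, applies $y = Ax + b$, and compares the empirical mean and covariance to $b$ and $AA^T$.

# Sketch: map standard normal samples through y = A x + b and check that the
# sample mean and covariance approach b and A A^T. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0],
              [0.7, 0.5]])
b = np.array([3.0, -4.0])

x = rng.standard_normal((100000, 2))   # rows are i.i.d. N(0, I) samples
y = x @ A.T + b                        # y = A x + b applied to each row

print(y.mean(axis=0))                  # approximately b
print(np.cov(y, rowvar=False))         # approximately A @ A.T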
Multivariate Gaussians, graphically
[Figures: plots of the Gaussian density for $\mu = (3, -4)$ and several covariance matrices:
$\Sigma = \begin{pmatrix} 2.0 & 0.5 \\ 0.5 & 1.0 \end{pmatrix}$,
$\Sigma = \begin{pmatrix} 2.0 & 0 \\ 0 & 1.0 \end{pmatrix}$,
$\Sigma = \begin{pmatrix} 2.0 & 1.0 \\ 1.0 & 1.0 \end{pmatrix}$,
$\Sigma = \begin{pmatrix} 2.0 & 1.4 \\ 1.4 & 1.0 \end{pmatrix}$,
$\Sigma = \begin{pmatrix} 2.0 & -1.0 \\ -1.0 & 1.0 \end{pmatrix}$]
Maximum likelihood estimation
The maximum likelihood estimates of $\mu, \Sigma$ are what you would "expect", but the derivation is non-obvious:
$$\operatorname*{minimize}_{\mu, \Sigma} \;\; \ell(\mu, \Sigma) = -\sum_{i=1}^m \log p(x^{(i)}; \mu, \Sigma) = -\sum_{i=1}^m \left( -\frac{1}{2}\log|2\pi\Sigma| - \frac{1}{2}(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) \right)$$
Taking gradients with respect to $\mu$ and $\Sigma$ and setting them equal to zero gives the closed-form solutions
$$\mu = \frac{1}{m}\sum_{i=1}^m x^{(i)}, \qquad \Sigma = \frac{1}{m}\sum_{i=1}^m (x^{(i)} - \mu)(x^{(i)} - \mu)^T$$
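As a quick illustration of these closed-form estimates, a possible NumPy sketch (placeholder data, not from the course materials):

# Sketch of the closed-form MLE: the sample mean and the (1/m-normalized)
# sample covariance. X is a made-up m x n data matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))          # placeholder data, m = 500, n = 3

m = X.shape[0]
mu_hat = X.mean(axis=0)                    # mu = (1/m) sum_i x^(i)
diff = X - mu_hat
Sigma_hat = (diff.T @ diff) / m            # Sigma = (1/m) sum_i (x^(i)-mu)(x^(i)-mu)^T

print(mu_hat, Sigma_hat, sep="\n")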
Fitting a Gaussian to MNIST
[Figures showing the estimated $\mu$ and $\Sigma$]
MNIST outliers
[Figure: example MNIST outliers under the fitted Gaussian]
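One plausible way to produce this kind of result, sketched below under the assumption that the images are stored as rows of an array X: fit the Gaussian by maximum likelihood and report the points with the lowest log-density. The small ridge term added to $\Sigma$ is my own addition to keep the covariance invertible for high-dimensional image data; it is not stated on the slides.

# Hedged sketch: rank points by log-density under a fitted Gaussian and flag
# the lowest-scoring ones as outliers. X is assumed to be an m x n array
# (e.g., flattened images); eps is an assumed ridge term, not from the slides.
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_outliers(X, k=10, eps=1e-3):
    n = X.shape[1]
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False, bias=True) + eps * np.eye(n)
    logp = multivariate_normal(mean=mu, cov=Sigma).logpdf(X)
    return np.argsort(logp)[:k]            # indices of the k lowest-density points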
Outline
Anomalies and outliers
Multivariate Gaussian
Mixture of Gaussians
Limits of Gaussians
Though useful, multivariate Gaussians are limited in the types of distributions they can represent
Mixture models
A more powerful model to consider is a mixture of Gaussian distributions, a distribution where we first consider a categorical variable
$$z \sim \text{Categorical}(\phi), \quad \phi \in [0,1]^k, \quad \sum_{i=1}^k \phi_i = 1$$
i.e., $z$ takes on values $1, \ldots, k$
For each potential value of $z$, we consider a separate Gaussian distribution:
$$x \mid z = j \sim \mathcal{N}(\mu_j, \Sigma_j), \qquad \mu_j \in \mathbb{R}^n, \; \Sigma_j \in \mathbb{R}^{n \times n}$$
We can write the distribution of $x$ using marginalization:
$$p(x) = \sum_j p(x \mid z = j)\, p(z = j) = \sum_j \mathcal{N}(x; \mu_j, \Sigma_j)\, \phi_j$$
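A small sketch of evaluating this mixture density (the component parameters below are made up purely for illustration):

# Sketch of the marginalization formula: p(x) = sum_j N(x; mu_j, Sigma_j) * phi_j,
# here with k = 2 made-up components.
import numpy as np
from scipy.stats import multivariate_normal

phi = np.array([0.3, 0.7])
mus = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
Sigmas = [np.eye(2), 2.0 * np.eye(2)]

def mixture_pdf(x):
    return sum(phi_j * multivariate_normal(mean=mu_j, cov=S_j).pdf(x)
               for phi_j, mu_j, S_j in zip(phi, mus, Sigmas))

print(mixture_pdf(np.array([1.0, 1.0])))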
Learning mixture models
To estimate the parameters, suppose first that we can observe both $x$ and $z$, i.e., our data set is of the form $(x^{(i)}, z^{(i)})$, $i = 1, \ldots, m$
In this case, we can maximize the log-likelihood of the parameters:
$$\ell(\mu, \Sigma, \phi) = \sum_{i=1}^m \log p(x^{(i)}, z^{(i)}; \mu, \Sigma, \phi)$$
Without getting into the full details, it hopefully should not be too surprising that the solutions here are given by:
$$\phi_j = \frac{1}{m}\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}, \qquad \mu_j = \frac{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}}, \qquad \Sigma_j = \frac{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}}$$
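In code, the fully observed case amounts to per-cluster frequencies, means, and covariances; a possible sketch (function and variable names are my own, and it assumes every cluster is non-empty):

# Sketch of the fully observed case: labels z take values 0..k-1, X is m x n.
import numpy as np

def fit_labeled_mixture(X, z, k):
    m, n = X.shape
    phi = np.zeros(k)
    mus = np.zeros((k, n))
    Sigmas = np.zeros((k, n, n))
    for j in range(k):
        Xj = X[z == j]                             # points assigned to cluster j
        phi[j] = len(Xj) / m                       # (1/m) sum_i 1{z^(i) = j}
        mus[j] = Xj.mean(axis=0)                   # cluster mean
        diff = Xj - mus[j]
        Sigmas[j] = (diff.T @ diff) / len(Xj)      # cluster covariance
    return phi, mus, Sigmas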
Latent variables and expectation maximization
In the unsupervised setting, the $z^{(i)}$ terms will not be known; these are referred to as hidden or latent random variables
This means that to estimate the parameters, we can't use the indicator function $\mathbf{1}\{z^{(i)} = j\}$ anymore
Expectation maximization (EM) algorithm (at a high level): replace the indicators $\mathbf{1}\{z^{(i)} = j\}$ with probability estimates $p(z^{(i)} = j \mid x^{(i)}; \mu, \Sigma, \phi)$
When we re-estimate the parameters, these probabilities change, so repeat:
E (expectation) step: compute $p(z^{(i)} = j \mid x^{(i)}; \mu, \Sigma, \phi)$ for all $i, j$
M (maximization) step: re-estimate $\mu, \Sigma, \phi$
EM for Gaussian mixture models
E step: using Bayes' rule, compute the probabilities
$$p_{i,j} = p(z^{(i)} = j \mid x^{(i)}; \mu, \Sigma, \phi) = \frac{p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)\, p(z^{(i)} = j; \phi)}{\sum_{j'} p(x^{(i)} \mid z^{(i)} = j'; \mu, \Sigma)\, p(z^{(i)} = j'; \phi)} = \frac{\mathcal{N}(x^{(i)}; \mu_j, \Sigma_j)\, \phi_j}{\sum_{j'} \mathcal{N}(x^{(i)}; \mu_{j'}, \Sigma_{j'})\, \phi_{j'}}$$
M step: re-estimate the parameters using these probabilities
$$\phi_j \leftarrow \frac{1}{m}\sum_{i=1}^m p_{i,j}, \qquad \mu_j \leftarrow \frac{\sum_{i=1}^m p_{i,j}\, x^{(i)}}{\sum_{i=1}^m p_{i,j}}, \qquad \Sigma_j \leftarrow \frac{\sum_{i=1}^m p_{i,j}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^m p_{i,j}}$$
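A hedged sketch of one full EM iteration implementing these updates (initialization, convergence checks, and numerical safeguards are omitted; all function and variable names are my own):

# One EM pass for a Gaussian mixture, following the E and M step equations above.
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, phi, mus, Sigmas):
    m, k = X.shape[0], len(phi)

    # E step: P[i, j] proportional to N(x^(i); mu_j, Sigma_j) * phi_j,
    # normalized over j so each row sums to 1
    P = np.column_stack([
        phi[j] * multivariate_normal(mean=mus[j], cov=Sigmas[j]).pdf(X)
        for j in range(k)
    ])
    P /= P.sum(axis=1, keepdims=True)

    # M step: re-estimate parameters using the soft assignments
    Nj = P.sum(axis=0)                              # effective counts per component
    phi_new = Nj / m
    mus_new = (P.T @ X) / Nj[:, None]
    Sigmas_new = np.stack([
        ((X - mus_new[j]).T * P[:, j]) @ (X - mus_new[j]) / Nj[j]
        for j in range(k)
    ])
    return phi_new, mus_new, Sigmas_new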
Local optima
Like k-means, EM is effectively optimizing a non-convex problem
There is a very real possibility of local optima (seemingly more so than for k-means, in practice)
The same heuristics work as for k-means (in fact, it is common to initialize EM with clusters from k-means)
Illustration of EM algorithm
[Sequence of figures illustrating successive EM iterations]
Possibility of local optima
[Sequence of figures illustrating EM converging to a local optimum]
Poll: outliers in mixture of Gaussians
Consider the following cartoon dataset:
If we fit a mixture of two Gaussians to this data via the EM algorithm, which group of points is likely to contain more "outliers" (points with the lowest $p(x)$)?
1. Left group
2. Right group
3. Equal chance of each, depending on initialization
EM and k-means
As you may have noticed, EM for mixtures of Gaussians and k-means seem to be doing very similar things
Primary differences: EM computes "distances" based upon the inverse covariance matrix, and allows for "soft" assignments instead of hard assignments