  1. Covered Topics! v Big Graph Data Mining § Sampling § Ranking v Big Data Management § Indexing v Big Data Preprocessing/Cleaning v Big Data Acquisition/Measurement J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org

  2. Welcome to DS504/CS586: Big Data Analytics Big Data Clustering Prof. Yanhua Li Time: 6:00pm–8:50pm Thu Location: AK 233 Spring 2018

  3. Quiz 1 next Thursday on Clustering v 2 questions on clustering. v 15 minutes. v At the beginning of the class. v Counts 5% towards the final grade. v The written part (30%) includes the quizzes. v Try to show intermediate results in your quiz answers.

  4. High Dimensional Data v Given a cloud of data points we want to understand its structure

  5. The Problem of Clustering v Given a set of points, with a notion of distance between points, group the points into some number of clusters, so that § Members of a cluster are close/similar to each other § Members of different clusters are dissimilar v Usually: § Points are in a high-dimensional space § Similarity is defined using a distance measure • Euclidean, Cosine, Jaccard distance, …

  6. Clustering Approaches v Hierarchical clustering v Point assignment § Distance based clustering § Density based clustering v Clustering on big data

  7. Example: Clusters & Outliers [Figure: scatter plot of points forming several dense clusters, plus one isolated point labeled as an outlier]

  8. Clustering is a hard problem!

  9. Why is it hard? v Clustering in two dimensions looks easy v Clustering small amounts of data looks easy v And in most cases, looks are not deceiving v Many applications involve not 2, but 10 or 10,000 dimensions v High-dimensional spaces look different: § Almost all pairs of points are at about the same distance
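
The distance-concentration effect on the last bullet is easy to demonstrate. Below is a minimal Python sketch (not from the slides; the function name is my own) that measures how the spread of pairwise Euclidean distances shrinks as the dimension grows:

```python
import math
import random

def pairwise_distance_spread(dim, n_points=50, seed=0):
    """(max - min) pairwise Euclidean distance divided by the mean
    distance, for random points in the unit hypercube of dimension dim."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]
    return (max(dists) - min(dists)) / (sum(dists) / len(dists))

# The relative spread shrinks as the dimension grows: almost all pairs
# end up at about the same distance.
for d in (2, 10, 1000):
    print(d, round(pairwise_distance_spread(d), 3))
```

In low dimensions the farthest pair is several times the mean distance apart; by 1,000 dimensions the spread collapses, so the contrast between "near" and "far" points largely disappears.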

  10. Clustering Problem: Music CDs v Intuitively: Music divides into categories, and customers prefer a few categories § But what are categories really? v Represent a CD by a set of customers who bought it: v Similar CDs have similar sets of customers, and vice-versa

  11. Clustering Problem: Music CDs Space of all CDs: v One dimension per customer § Values in a dimension may be 0 or 1 only § A CD is a point in this space (x1, x2, …, xk), where xi = 1 iff the ith customer bought the CD v For Amazon, the dimension is tens of millions v Task: Find clusters of similar CDs

  12. Cosine, Jaccard, and Euclidean v As with CDs we have a choice when we think of documents as sets of words: § Sets as vectors: Measure similarity by the cosine distance § Sets as sets: Measure similarity by the Jaccard distance § Sets as points: Measure similarity by Euclidean distance
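
The three representations above can be sketched in Python (an illustrative sketch; the toy documents and function names are my own):

```python
import math

def euclidean(x, y):
    """Sets as points: straight-line distance between 0/1 vectors."""
    return math.dist(x, y)

def cosine_distance(x, y):
    """Sets as vectors: 1 minus the cosine of the angle between them."""
    dot = sum(a * b for a, b in zip(x, y))
    return 1 - dot / (math.hypot(*x) * math.hypot(*y))

def jaccard_distance(a, b):
    """Sets as sets: 1 minus |intersection| / |union|."""
    return 1 - len(a & b) / len(a | b)

# Two toy documents as word sets, and as 0/1 vectors over the vocabulary.
doc1, doc2 = {"big", "data", "mining"}, {"big", "data", "clustering"}
vocab = sorted(doc1 | doc2)
v1 = [1 if w in doc1 else 0 for w in vocab]
v2 = [1 if w in doc2 else 0 for w in vocab]

print(jaccard_distance(doc1, doc2))        # 1 - 2/4 = 0.5
print(round(cosine_distance(v1, v2), 3))   # 1 - 2/3 ≈ 0.333
print(round(euclidean(v1, v2), 3))         # sqrt(2) ≈ 1.414
```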

  13. Overview: Methods of Clustering v Hierarchical: § (bottom up): • Initially, each point is a cluster • Repeatedly combine the two “nearest” clusters into one § (top down): • Start with one cluster and recursively split it v Point assignment: § Maintain a set of clusters § Points belong to “nearest” cluster

  14. Hierarchical Clustering v Key operation: Repeatedly combine two nearest clusters v Three important questions: § 1) How do you represent a cluster of more than one point? § 2) How do you determine the “nearness” of clusters? § 3) When to stop combining clusters?

  15. Example: Hierarchical clustering [Figure: data points (o) at (0,0), (1,2), (2,1), (4,1), (5,0), (5,3); centroids (x) at (1,1), (1.5,1.5), (4.5,0.5), (4.7,1.3); shown with the corresponding dendrogram]

  16. “Closest” Point? v (1) How to represent a cluster of many points? § clustroid = point “closest” to other points v Possible meanings of “closest”: § Smallest maximum distance to other points § Smallest average distance to other points § Smallest sum of squares of distances to other points • For distance metric d, the clustroid c of cluster C is: argmin_{c ∈ C} Σ_{x ∈ C} d(x, c)² v The centroid is the average of all (data)points in the cluster; it is an “artificial” point. v The clustroid is an existing (data)point that is “closest” to all other points in the cluster.
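
The centroid/clustroid distinction can be sketched in a few lines of Python (an illustrative sketch; function names are my own):

```python
import math

def centroid(points):
    """Coordinate-wise mean: an 'artificial' point, usually not in the data."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def clustroid(points):
    """The existing data point minimizing the sum of squared distances
    to all other points in the cluster."""
    return min(points, key=lambda c: sum(math.dist(x, c) ** 2 for x in points))

cluster = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(centroid(cluster))   # (1/3, 1/3): not one of the data points
print(clustroid(cluster))  # (0.0, 0.0): the "most central" existing point
```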

  17. Defining “Nearness” of Clusters v (2) How do you determine the “nearness” of clusters? § Approach 1: Intercluster distance = minimum of the distances between any two points, one from each cluster § Approach 2: Pick a notion of “cohesion” of clusters, e.g., maximum distance in the cluster • Merge clusters whose union is most cohesive

  18. Cohesion v Approach 2.1: Use the diameter of the merged cluster = maximum distance between points in the cluster v Approach 2.2: Use the average distance between points in the cluster v Approach 2.3: Use a density-based approach § Take the diameter or avg. distance, e.g., and divide by the number of points in the cluster
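
The three cohesion notions can be sketched as follows (illustrative Python; names are my own). A merge step would then pick the pair of clusters whose union scores best:

```python
import math
from itertools import combinations

def diameter(cluster):
    """Approach 2.1: maximum distance between any two points in the cluster."""
    return max(math.dist(p, q) for p, q in combinations(cluster, 2))

def avg_distance(cluster):
    """Approach 2.2: average pairwise distance within the cluster."""
    d = [math.dist(p, q) for p, q in combinations(cluster, 2)]
    return sum(d) / len(d)

def density(cluster):
    """Approach 2.3: a density-style score -- diameter divided by the
    number of points; smaller means more cohesive per point."""
    return diameter(cluster) / len(cluster)

merged = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(round(diameter(merged), 3))      # sqrt(2) ≈ 1.414
print(round(avg_distance(merged), 3))  # (1 + 1 + sqrt(2)) / 3 ≈ 1.138
```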

  19. Implementation v Naïve implementation of hierarchical clustering: § At each step, compute pairwise distances between all pairs of clusters: O(N²) § With up to N merge steps, the total cost is O(N³) § Too expensive for really big datasets that do not fit in memory
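
A naïve bottom-up implementation along these lines might look as follows (a Python sketch, not the book's code; it merges by centroid distance and stops once k clusters remain):

```python
import math

def hierarchical_cluster(points, k):
    """Naive bottom-up clustering: start from singleton clusters and
    repeatedly merge the two clusters with the nearest centroids until
    k clusters remain.  Each step scans all pairs (O(N^2)); with up to
    N steps the total is O(N^3), as noted in the slide."""
    clusters = [[p] for p in points]

    def cent(c):
        return tuple(sum(x[d] for x in c) / len(c) for d in range(len(c[0])))

    while len(clusters) > k:
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: math.dist(cent(clusters[ab[0]]), cent(clusters[ab[1]])),
        )
        clusters[i] += clusters.pop(j)
    return clusters

pts = [(0, 0), (1, 1), (2, 1), (9, 9), (10, 9)]
print(hierarchical_cluster(pts, 2))  # two clusters: low group and high group
```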

  20. k-means clustering

  21. k-means Algorithm(s) v Assumes Euclidean space/distance v Start by picking k, the number of clusters v Initialize clusters by picking one point per cluster § Example: Pick one point at random, then k-1 other points, each as far away as possible from the previous points
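
The farthest-point initialization in the example can be sketched as (illustrative Python; the function name is my own):

```python
import math
import random

def init_centroids(points, k, seed=0):
    """Pick one point at random, then k-1 more, each chosen to be as far
    as possible (max of min Euclidean distance) from those picked so far."""
    rng = random.Random(seed)
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        farthest = max(points, key=lambda p: min(math.dist(p, c) for c in centroids))
        centroids.append(farthest)
    return centroids

# On a line, the second pick lands at the far extreme of the data.
print(init_centroids([(0.0,), (1.0,), (10.0,)], 2))
```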

  22. Populating Clusters v 1) For each point, place it in the cluster whose current centroid it is nearest v 2) After all points are assigned, update the locations of centroids of the k clusters v 3) Reassign all points to their closest centroid § Sometimes moves points between clusters v Repeat 2 and 3 until convergence § Convergence: Points don’t move between clusters and centroids stabilize
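
The assign/update loop above can be sketched as a minimal Python version of Lloyd's iteration (names are my own):

```python
import math

def kmeans(points, centroids, max_iters=100):
    """Lloyd-style iteration: assign each point to its nearest centroid,
    recompute centroids, and repeat until no point changes cluster."""
    assignment = None
    for _ in range(max_iters):
        # 1)/3) place each point in the cluster of its nearest centroid
        new_assignment = [
            min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
            for p in points
        ]
        if new_assignment == assignment:  # convergence: nothing moved
            break
        assignment = new_assignment
        # 2) move each centroid to the mean of its assigned points
        for i in range(len(centroids)):
            members = [p for p, a in zip(points, assignment) if a == i]
            if members:
                centroids[i] = tuple(
                    sum(m[d] for m in members) / len(members)
                    for d in range(len(members[0]))
                )
    return centroids, assignment

pts = [(0.0, 0.0), (0.0, 1.0), (9.0, 9.0), (10.0, 9.0)]
cents, labels = kmeans(pts, [(0.0, 0.0), (10.0, 10.0)])
print(cents)   # [(0.0, 0.5), (9.5, 9.0)]
print(labels)  # [0, 0, 1, 1]
```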

  23. Example: Assigning Clusters [Figure: data points and centroids (x); clusters after round 1]

  24. Example: Assigning Clusters [Figure: same data; clusters after round 2]

  25. Example: Assigning Clusters [Figure: same data; clusters at the end]

  26. Getting the k right How to select k? v Try different k, looking at the change in the average distance to centroid as k increases v Average falls rapidly until the right k, then changes little [Plot: average distance to centroid vs. k; the curve falls steeply, then flattens at the best value of k]
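
This elbow heuristic can be sketched as follows (illustrative Python with a crude random-init k-means inside; a rough sketch, not tuned, and all names are my own):

```python
import math
import random

def kmeans_cost(points, k, iters=20, seed=0):
    """Average distance from each point to its nearest centroid after a
    few Lloyd iterations with random initialization."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: math.dist(p, cents[i]))].append(p)
        cents = [
            tuple(sum(q[d] for q in g) / len(g) for d in range(len(g[0])))
            if g else cents[i]
            for i, g in enumerate(groups)
        ]
    return sum(min(math.dist(p, c) for c in cents) for p in points) / len(points)

# Three well-separated blobs of four points each: the cost typically
# drops sharply until k = 3, then flattens -- the "elbow".
blobs = [(bx + dx, by + dy)
         for bx, by in [(0, 0), (10, 0), (5, 9)]
         for dx in (0, 1) for dy in (0, 1)]
for k in (1, 2, 3, 4):
    print(k, round(kmeans_cost(blobs, k), 2))
```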

  27. Example: Picking k=2 [Figure: too few clusters; many long distances to centroid]

  28. Example: Picking k=3 [Figure: just right; distances rather short]

  29. Example: Picking k [Figure: too many clusters; little improvement in average distance]
