Time series data mining


  1. Time series data mining

  2. Outline: Basic Knowledge; Multivariate association; States association

  3. What is time series data? Formally, a time series is defined as a sequence of pairs T = [(p1, t1), (p2, t2), . . . , (pi, ti), . . . , (pn, tn)] (t1 < t2 < · · · < ti < · · · < tn), where each pi is a data point in a d-dimensional data space and each ti is the time stamp at which the corresponding pi occurs. [Figure: example plot of a time series]
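As a minimal illustration of this definition, the pairs can be held directly as a Python list (all values here are hypothetical):

```python
# A time series T as a sequence of (data point, timestamp) pairs:
# each data point lies in a d-dimensional space (here d = 2) and the
# timestamps are strictly increasing.
T = [
    ((10.0, 0.5), 1),  # (p1, t1)
    ((20.0, 0.7), 2),  # (p2, t2)
    ((30.0, 0.6), 3),  # (p3, t3)
]

# t1 < t2 < ... < tn must hold for a valid series.
assert all(T[i][1] < T[i + 1][1] for i in range(len(T) - 1))
```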

  4. Time Series Data Characteristics: 1. High dimensionality. 2. Hierarchical nature: a time series can be analyzed through its underlying time hierarchy, such as hourly, weekly, monthly, and yearly. 3. Multivariate: time series analysis often studies a single variable, but sometimes deals with data consisting of multiple related variables. For example, weather data consists of well-known measurements such as temperature, dew point, and humidity.

  5. Our work. Multivariate association: A and B are highly correlated. States association: A = 2 → B = 3.

  6. Multivariate association: extract features → cluster the features → analyze the clustering result.

  7. Why extract features? Time series are essentially high-dimensional data, and working directly with such data in raw format is very expensive in terms of both processing and storage cost. It is thus highly desirable to develop representation techniques that reduce the dimensionality of a time series while still preserving its fundamental characteristics.

  8. How to extract features. Principle: reduce the dimensionality while preserving the fundamental characteristics. Split the data into fixed-size windows → extract a feature vector [relative time, standard deviation] from each window, as sketched below.
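A hedged Python sketch of this windowing step; the window size, the normalization of "relative time", and all names are our assumptions, since the slides only name the two feature components:

```python
import statistics

def extract_features(values, window_size):
    """Split a univariate series into fixed-size windows and return one
    [relative time, standard deviation] feature per window. 'Relative
    time' is taken here as the window's start position normalized by the
    series length -- an assumption; the slides do not define it."""
    n = len(values)
    features = []
    for start in range(0, n - window_size + 1, window_size):
        window = values[start:start + window_size]
        features.append([start / n, statistics.stdev(window)])
    return features

# Example: 60 points, windows of 10 -> 6 feature vectors.
series = [((i % 7) - 3) * 0.5 for i in range(60)]
print(extract_features(series, window_size=10))
```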

  9. Clustering. The objective is to find the most homogeneous clusters that are as distinct as possible from other clusters. More formally, the grouping should maximize intercluster variance while minimizing intracluster variance.

  10. Clustering procedure (flowchart, translated from Chinese): (1) Take the first data point as a cluster of its own, with itself as the cluster center. (2) While unprocessed data remain, take the next data point and compare it with all cluster centers. (3) If its similarity to every center is below the threshold, the point forms a new cluster by itself; if at least one similarity exceeds the threshold, the point is placed into the most similar cluster and that cluster's center is updated. (4) When all data have been processed, check whether any cluster center changed; if so, reassign the data until the centers stabilize. (5) Finally, pick the two most similar clusters; if their similarity exceeds a given threshold, merge them and repeat, otherwise exit.
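A minimal Python sketch of this procedure, under stated assumptions: the similarity measure, the mean-center update, and both threshold values are our choices (the flowchart only gives the control flow), and the reassign-until-stable loop of step 4 is omitted for brevity:

```python
import math

def similarity(a, b):
    # Inverse-distance similarity in (0, 1]; the slides do not name the
    # measure, so this choice is an assumption of the sketch.
    return 1.0 / (1.0 + math.dist(a, b))

def mean_center(cluster):
    dim = len(cluster[0])
    return tuple(sum(p[d] for p in cluster) / len(cluster) for d in range(dim))

def threshold_cluster(points, sim_threshold, merge_threshold):
    # Step 1: the first point forms its own cluster, with itself as center.
    clusters, centers = [[points[0]]], [points[0]]

    # Steps 2-3: assign every remaining point.
    for p in points[1:]:
        sims = [similarity(p, c) for c in centers]
        best = max(range(len(sims)), key=sims.__getitem__)
        if sims[best] > sim_threshold:
            # At least one center exceeds the similarity threshold:
            # join the most similar cluster and update its center.
            clusters[best].append(p)
            centers[best] = mean_center(clusters[best])
        else:
            # Every center is below the threshold: start a new cluster.
            clusters.append([p])
            centers.append(p)

    # Step 5: merge the two most similar clusters while their similarity
    # still exceeds the merge threshold.
    while len(clusters) > 1:
        s, i, j = max((similarity(centers[i], centers[j]), i, j)
                      for i in range(len(centers))
                      for j in range(i + 1, len(centers)))
        if s <= merge_threshold:
            break
        clusters[i] += clusters[j]
        centers[i] = mean_center(clusters[i])
        del clusters[j], centers[j]
    return clusters

# Hypothetical 2-D points; the thresholds are assumptions.
pts = [(0, 0), (0.1, 0.1), (5, 5), (5.1, 4.9), (0.2, 0)]
print(threshold_cluster(pts, sim_threshold=0.5, merge_threshold=0.5))
```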

  11. The analysis of the clustering result. [Figure: scatter plot of clustered points with noise.] The support of a cluster is $\mathrm{support} = \frac{\sum_{i=1}^{n} L_i}{\sum_{j=1}^{k}\sum_{i=1}^{n} L_{ij}}$.
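Read as a ratio of lengths, this support could be computed as below; the slide does not define L_i, so treating it as the length of the i-th member of a cluster (making a cluster's support its share of the total over all k clusters) is an assumption:

```python
def cluster_support(cluster_lengths, all_cluster_lengths):
    # support = (sum_i L_i) / (sum_j sum_i L_ij), with L_i interpreted
    # (as an assumption) as the length of a cluster member.
    return sum(cluster_lengths) / sum(sum(ls) for ls in all_cluster_lengths)

# Hypothetical member lengths for k = 3 clusters:
clusters = [[5, 7, 6], [3, 2], [10]]
supports = [cluster_support(c, clusters) for c in clusters]
print(supports)  # the supports sum to 1 (up to rounding)
```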

  12. Experiment result

  13. States Association. Our goal: not only to find that variables A, B, C, and D are related, but also to discover the association among their values, e.g., A = 2, D = 4 → B = 3, C = 7.

  14. States Association: • Transform the data into symbols • Apply the Apriori algorithm

  15. Data preprocessing: split the data into fixed-length windows → extract the feature of each window → cluster the windows → symbolize each cluster.

  16. Feature extraction. [Diagram: a wheel of feature-extraction methods (DFT, DWT, APCA, PAA, and others) seen from multiple views: frequency domain, time domain, statistics, ...]

  17. Clustering and Symbolization. Cluster the features extracted from each window, then represent each cluster by a character, so that every window within a cluster is marked with the corresponding character; a sketch follows.
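A minimal sketch of this symbolization step, assuming cluster membership is decided by the nearest center (names and values are hypothetical):

```python
import math

def nearest_center(feature, centers):
    # Index of the closest cluster center (Euclidean distance).
    return min(range(len(centers)), key=lambda i: math.dist(feature, centers[i]))

def symbolize(window_features, centers):
    # One character per cluster; every window in a cluster gets
    # that cluster's character.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return "".join(alphabet[nearest_center(f, centers)] for f in window_features)

# Hypothetical centers from the clustering step and three window features:
centers = [(0.1, 0.5), (0.6, 2.0)]
print(symbolize([(0.0, 0.4), (0.7, 1.9), (0.2, 0.6)], centers))  # -> "aba"
```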

  18. [Figure: raw time series data and its transformation into symbols]

  19. Symbolized data, one character per variable per window:

  time  A1  A2  A3  A4  A5  A6
  t1    a   d   a   b   c   a
  t2    1   a   a   a   d   a
  t3    2   b   b   b   c   b

  20. Association mining. The Apriori algorithm is an influential algorithm for mining frequent itemsets and association rules. Association rule generation is usually split into two separate steps: 1. First, a minimum support threshold is applied to find all frequent itemsets in the database. 2. Second, these frequent itemsets and a minimum confidence constraint are used to form rules. While the second step is straightforward, the first step needs more attention; a sketch of it follows.
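A minimal, self-contained sketch of that first (frequent-itemset) step; the transactions here are hypothetical symbolized windows, and rule generation is omitted:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Find all frequent itemsets. `transactions` is a list of sets of
    symbols (e.g. 'A1=a'); support is the fraction of transactions
    containing an itemset."""
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    # L1: frequent single items.
    items = {i for t in transactions for i in t}
    frequent = [{frozenset([i]) for i in items if support({i}) >= min_support}]

    k = 2
    while frequent[-1]:
        # Candidate generation: join frequent (k-1)-itemsets, then prune
        # candidates with any infrequent (k-1)-subset (the Apriori property).
        prev = frequent[-1]
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        frequent.append({c for c in candidates if support(c) >= min_support})
        k += 1

    return [s for level in frequent for s in level]

# Hypothetical symbolized windows as transactions:
txns = [{"A1=a", "A2=d", "A3=a"},
        {"A1=a", "A2=a", "A3=a"},
        {"A1=b", "A2=b", "A3=b"}]
print(apriori(txns, min_support=2 / 3))
```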

  21. Association mining

  22. Experiment result. A1 = {0, 1, 2}, and the dependent series are generated as
  $A_4 = \begin{cases} \sin(t) + \delta, & A_1 = 0 \\ 1, & A_1 = 1 \\ -\sin(t) + \delta, & A_1 = 2 \end{cases}$ and
  $A_3 = \begin{cases} 1000 \times (t - t_0)/T, & A_1 = 0 \\ 1, & A_1 = 1 \\ -1000 \times (t - t_0)/T, & A_1 = 2 \end{cases}$

  23. Experiment result. A2 = {6, 7, 8, 9}, with
  $A_5 = \begin{cases} 2000 \times (t - t_0)/T, & A_2 = 6 \\ 1500 \times (t - t_0)/T, & A_2 = 7 \\ 1, & A_2 = 8 \\ -1000 \times (t - t_0)/T, & A_2 = 9 \end{cases}$ and
  $A_6 = \begin{cases} \sin(t) + \delta, & A_2 = 6 \\ 1, & A_2 = 7 \\ -\sin(t) + \delta, & A_2 = 8 \\ \cos(t) + \delta, & A_2 = 9 \end{cases}$
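Assuming the reconstruction of these piecewise rules is right, a minimal generator for A5 might look like this (t0, T, and the trace of A2 states are hypothetical):

```python
def a5(a2_state, t, t0, T):
    # Generate A5 from the current state of A2, following the piecewise
    # rules reconstructed from the slide; t0 and T are experiment
    # parameters whose actual values the slides do not give.
    rules = {6: 2000 * (t - t0) / T,
             7: 1500 * (t - t0) / T,
             8: 1,
             9: -1000 * (t - t0) / T}
    return rules[a2_state]

# Hypothetical trace: A2 steps through its four states.
t0, T = 0, 100
print([round(a5(s, t, t0, T), 2) for t, s in enumerate([6, 7, 8, 9])])
```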

  24. Experiment result

  25. Experiment result

  26. Thanks
