Scale-Free Networks
Complex Networks, Course 303A, Spring, 2009
Prof. Peter Dodds
Department of Mathematics & Statistics, University of Vermont
Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
Frame 1/57
Outline

Original model
  Introduction
  Model details
  Analysis
  A more plausible mechanism
  Robustness

Redner & Krapivsky's model
  Generalized model
  Analysis
  Universality?
  Sublinear attachment kernels
  Superlinear attachment kernels

References
Frame 2/57
Scale-free networks

◮ Networks with power-law degree distributions have become known as scale-free networks.
◮ Scale-free refers specifically to the degree distribution having a power-law decay in its tail:

    P_k ∼ k^{−γ} for 'large' k

◮ One of the seminal works in complex networks: László Barabási and Réka Albert, Science, 1999: "Emergence of scaling in random networks" [2]
◮ Somewhat misleading nomenclature...
Frame 4/57
Scale-free networks

◮ Scale-free networks are not fractal in any sense.
◮ Usually talking about networks whose links are abstract, relational, informational, ... (non-physical)
◮ Primary example: hyperlink network of the Web
◮ Much arguing about whether or not networks are 'scale-free'...
Frame 5/57
Random networks: largest components

[Figure: largest components of eight random networks, all with γ = 2.5 and mean degrees ⟨k⟩ ranging from roughly 1.51 to 2.05.]
Frame 6/57
Scale-free networks

The big deal:
◮ We move beyond describing networks to finding mechanisms for why certain networks are the way they are.

A big deal for scale-free networks:
◮ How does the exponent γ depend on the mechanism?
◮ Do the mechanism details matter?
Frame 7/57
Heritage

Work that presaged scale-free networks:
◮ 1924: G. Udny Yule [9]: # species per genus
◮ 1926: Lotka [4]: # scientific papers per author
◮ 1953: Mandelbrot [5]: Zipf's law for word frequency through optimization
◮ 1955: Herbert Simon [8, 10]: Zipf's law for city size, income, publications, and species per genus
◮ 1965/1976: Derek de Solla Price [6, 7]: network of scientific citations
Frame 8/57
BA model

◮ Barabási-Albert model = BA model.
◮ Key ingredients: Growth and Preferential Attachment (PA).
◮ Step 1: start with m_0 disconnected nodes.
◮ Step 2:
  1. Growth—a new node appears at each time step t = 0, 1, 2, ....
  2. Each new node makes m links to nodes already present.
  3. Preferential attachment—probability of connecting to the ith node is ∝ k_i.
◮ In essence, we have a rich-gets-richer scheme.
Frame 10/57
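The growth-plus-preferential-attachment steps above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the authors' code: the endpoint-list trick makes uniform sampling equivalent to degree-proportional attachment, and the m_0 seed nodes are bootstrapped with one endpoint-list entry each so the first arrival has targets to choose from (an assumption; the slides start from disconnected nodes without specifying this detail).

```python
import random

def ba_network(n, m, m0=None, seed=0):
    """Grow a BA-style network to n nodes; each new node attaches m edges
    to existing nodes with probability proportional to degree.
    Returns the edge list."""
    rng = random.Random(seed)
    m0 = m0 if m0 is not None else m
    # endpoint list: node j appears once per unit of degree, so a uniform
    # draw from this list is a degree-proportional draw over nodes.
    # Assumption: seed nodes get one entry each to bootstrap the process.
    targets = list(range(m0))
    edges = []
    for new in range(m0, n):
        chosen = set()
        while len(chosen) < m:          # m distinct targets per new node
            chosen.add(rng.choice(targets))
        for tgt in chosen:
            edges.append((new, tgt))
            targets.extend([new, tgt])  # both endpoints gain one degree unit
    return edges
```

Uniform sampling from the endpoint list is the standard O(1)-per-draw way to implement the kernel A_k = k without recomputing the normalization Σ_j k_j at every step.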
BA model

◮ Definition: A_k is the attachment kernel for a node with degree k.
◮ For the original model:

    A_k = k

◮ Definition: P_attach(k, t) is the attachment probability.
◮ For the original model:

    P_attach(node i, t) = k_i(t) / Σ_{j=1}^{N(t)} k_j(t) = k_i(t) / Σ_{k=m}^{k_max(t)} k N_k(t)

where N(t) = m_0 + t is the number of nodes at time t and N_k(t) is the number of degree-k nodes at time t.
Frame 12/57
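The two denominators above (a sum over nodes versus a sum over degree classes) are the same quantity; a quick numerical check makes this concrete. The degree sequence here is made up purely for illustration.

```python
from collections import Counter

def attach_prob(degrees, i):
    """Original BA kernel A_k = k: P_attach(node i) = k_i / sum_j k_j."""
    return degrees[i] / sum(degrees)

# hypothetical degree sequence, chosen only to illustrate the identity
degrees = [1, 3, 3, 2, 5, 1, 1]

# same normalization computed over degree classes: sum_k k * N_k(t)
Nk = Counter(degrees)          # N_k: number of nodes with degree k
by_classes = sum(k * n for k, n in Nk.items())

assert by_classes == sum(degrees)
```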
Approximate analysis

◮ When the (N + 1)th node is added, the expected increase in the degree of node i is

    E(k_{i,N+1} − k_{i,N}) ≃ m k_{i,N} / Σ_{j=1}^{N(t)} k_j(t).

◮ Assumes the probability of being connected to is small.
◮ Dispense with the expectation by assuming (hoping) that over longer time frames, degree growth will be smooth and stable.
◮ Approximate k_{i,N+1} − k_{i,N} with d/dt k_{i,t}:

    d/dt k_{i,t} = m k_i(t) / Σ_{j=1}^{N(t)} k_j(t)

where t = N(t) − m_0.
Frame 13/57
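The expected-increment formula can be sanity-checked by Monte Carlo: simulate one arrival many times and average node i's degree gain. As a simplification (valid when the attachment probability is small, per the assumption above), the m choices are made with replacement; the degree sequence is made up for illustration.

```python
import random

def expected_increment(degrees, i, m, trials=200_000, seed=1):
    """Monte-Carlo estimate of E[k_{i,N+1} - k_{i,N}] when one new node
    makes m degree-proportional choices (with replacement, for simplicity)."""
    rng = random.Random(seed)
    # endpoint list: node j appears degrees[j] times
    pool = [j for j, d in enumerate(degrees) for _ in range(d)]
    hits = 0
    for _ in range(trials):
        hits += sum(1 for _ in range(m) if rng.choice(pool) == i)
    return hits / trials

degrees = [5, 3, 2, 2]                 # hypothetical degrees; sum = 12
est = expected_increment(degrees, 0, m=2)
theory = 2 * 5 / 12                    # m * k_i / sum_j k_j
```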
Approximate analysis

◮ Deal with the denominator: each added node brings m new edges.

    ∴ Σ_{j=1}^{N(t)} k_j(t) = 2tm

◮ The node degree equation now simplifies:

    d/dt k_{i,t} = m k_i(t) / Σ_{j=1}^{N(t)} k_j(t) = m k_i(t) / (2mt) = k_i(t) / (2t)

◮ Rearrange and solve:

    dk_i(t) / k_i(t) = dt / (2t)  ⇒  k_i(t) = c_i t^{1/2}.

◮ Next find c_i...
Frame 14/57
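The t^{1/2} solution can be verified by integrating dk/dt = k/(2t) numerically and comparing against the closed form. This is a simple forward-Euler sketch; the step size and start values are arbitrary choices for illustration.

```python
def degree_trajectory(t_start, k_start, t_end, dt=0.001):
    """Forward-Euler integration of dk/dt = k / (2t)."""
    t, k = float(t_start), float(k_start)
    while t < t_end:
        k += k / (2 * t) * dt
        t += dt
    return k

# closed form predicts k(t) = k_start * (t / t_start)**0.5
k_num = degree_trajectory(1.0, 3.0, 100.0)
k_exact = 3.0 * (100.0 / 1.0) ** 0.5   # 30.0
```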
Approximate analysis

◮ Know the ith node appears at time

    t_{i,start} = { i − m_0  for i > m_0
                  { 0        for i ≤ m_0

◮ So for i > m_0 (exclude initial nodes), we must have

    k_i(t) = m (t / t_{i,start})^{1/2}  for t ≥ t_{i,start}.

◮ All node degrees grow as t^{1/2}, but later nodes have larger t_{i,start}, which flattens out the growth curve.
◮ Early nodes do best (first-mover advantage).
Frame 15/57
Approximate analysis

[Figure: degree growth curves k_i(t) for m = 3 and t_{i,start} = 1, 2, 5, and 10, plotted for t up to 50; earlier-arriving nodes reach higher degrees.]
Frame 16/57
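The first-mover advantage in the figure can be regenerated directly from the closed form; the values below assume m = 3 as in the plot.

```python
def k_i(t, t_start, m=3):
    """Closed-form degree growth: k_i(t) = m * (t / t_start)**0.5."""
    return m * (t / t_start) ** 0.5

# at t = 50, earlier start times give strictly larger degrees
curves = {ts: k_i(50, ts) for ts in (1, 2, 5, 10)}
```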
Degree distribution

◮ So what's the degree distribution at time t?
◮ Use the fact that the birth times of added nodes are distributed uniformly:

    P(t_{i,start}) dt_{i,start} ≃ dt_{i,start} / (t + m_0)

◮ Using

    k_i(t) = m (t / t_{i,start})^{1/2}  ⇒  t_{i,start} = m² t / k_i(t)²,

and by understanding that later-arriving nodes have lower degrees, we can say this:

    Pr(k_i < k) = Pr(t_{i,start} > m² t / k²).
Frame 17/57
Degree distribution

◮ Using the uniformity of start times:

    Pr(k_i < k) = Pr(t_{i,start} > m² t / k²) ≃ (t − m² t / k²) / (t + m_0).

◮ Differentiate to find Pr(k):

    Pr(k) = d/dk Pr(k_i < k) = 2 m² t / ((t + m_0) k³) ∼ 2 m² k^{−3} as t → ∞.
Frame 18/57
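Both the differentiation step and the k^{−3} tail can be sanity-checked numerically: a central finite difference of the cumulative form should match the stated density, and doubling k should divide the density by 2³ = 8. The parameter values m = 3, t = 10⁴, m_0 = 3 are arbitrary illustrative choices.

```python
def cdf(k, m=3, t=10_000, m0=3):
    """Pr(k_i < k) = (t - m^2 t / k^2) / (t + m0)."""
    return (t - m**2 * t / k**2) / (t + m0)

def pdf(k, m=3, t=10_000, m0=3):
    """Pr(k) = 2 m^2 t / ((t + m0) k^3)."""
    return 2 * m**2 * t / ((t + m0) * k**3)

# central finite difference of the CDF should match the stated density
h = 1e-6
fd = (cdf(10.0 + h) - cdf(10.0 - h)) / (2 * h)

# pure power-law tail: doubling k divides Pr(k) by 2^3 = 8
ratio = pdf(10.0) / pdf(20.0)
```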