Algorithms in Nature: Network robustness
Slides adapted from Carl Kingsford
Network robustness
Many complex systems show a surprising degree of tolerance to errors:
- Biological networks persist despite environmental noise, failures, and attacks
- Communication networks routinely cope with malfunctions, attacks, and construction (local failures usually do not lead to catastrophic global failures)
What network structures enable such a robust response?
Random vs. scale-free network
What is the effect of these two structures on the network’s robustness?
[Figure: network diameter vs. fraction of nodes removed]
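A minimal simulation sketch of the experiment behind this plot, assuming the Python libraries networkx and random; the graph sizes, average degree, and removal fractions below are illustrative choices, not values from the slides. A fraction f of nodes is removed either uniformly at random (failures) or highest-degree first (attacks), and the "diameter" is approximated by the average shortest path length of the largest remaining component.

```python
import random
import networkx as nx

def diameter_after_removal(G, fraction, targeted=False):
    """Remove `fraction` of nodes (randomly or highest-degree first) and return
    the average shortest path length of the largest remaining component,
    used here as a proxy for the diameter."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        # attack: remove the k highest-degree nodes (the hubs)
        victims = [n for n, _ in sorted(H.degree, key=lambda x: x[1], reverse=True)[:k]]
    else:
        # random failure: remove k nodes chosen uniformly at random
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    giant = max(nx.connected_components(H), key=len)
    return nx.average_shortest_path_length(H.subgraph(giant))

n, m = 2000, 3
ER = nx.gnm_random_graph(n, n * m)    # random graph with comparable average degree
SF = nx.barabasi_albert_graph(n, m)   # scale-free graph
for f in (0.0, 0.01, 0.05):
    print(f"f={f:.2f}  ER random={diameter_after_removal(ER, f):.2f}  "
          f"SF random={diameter_after_removal(SF, f):.2f}  "
          f"SF attack={diameter_after_removal(SF, f, targeted=True):.2f}")
```

Under random failures the scale-free graph's diameter barely changes, while targeting its hubs inflates the diameter much faster, matching the behavior the plot illustrates.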
Network fragmentation
S = fractional size of the largest cluster (important for connectivity)
<s> = average size of the isolated clusters
Initially, removing nodes produces only small clusters, each with a single isolated node. At a critical point, the main cluster breaks into many smaller pieces and <s> peaks. Removing still more nodes continues to isolate individual nodes, so <s> decreases again.
Network fragmentation in scale-free networks (as f increases)
- Random failures: S decreases slowly while <s> stays near 1 (nodes break off one by one)
- Attacks: once the hubs are removed, the network falls apart
(S = fractional size of the largest cluster, <s> = average size of isolated clusters)
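A minimal sketch of the two fragmentation measures, again assuming networkx; the Barabási–Albert graph and the fractions f below are illustrative assumptions. After deleting a fraction f of nodes, S is taken as the size of the largest connected component divided by the original network size, and <s> as the average size of the remaining, smaller clusters.

```python
import random
import networkx as nx

def fragmentation(G, fraction, targeted=False):
    """Return (S, <s>) after removing `fraction` of nodes from G."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(H.degree, key=lambda x: x[1], reverse=True)[:k]]
    else:
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True)
    S = sizes[0] / G.number_of_nodes()              # fractional size of the largest cluster
    small = sizes[1:]                               # all remaining isolated clusters
    avg_s = sum(small) / len(small) if small else 0.0   # <s>
    return S, avg_s

SF = nx.barabasi_albert_graph(2000, 3)
for f in (0.0, 0.2, 0.5, 0.8):
    S_fail, s_fail = fragmentation(SF, f, targeted=False)
    S_att, s_att = fragmentation(SF, f, targeted=True)
    print(f"f={f:.1f}  failures: S={S_fail:.2f} <s>={s_fail:.2f}   "
          f"attack: S={S_att:.2f} <s>={s_att:.2f}")
```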
[Jeong et al., Nature 2000]
[Figure: average number of incoming and outgoing links per node vs. number of substrates removed]
Conclusions
Real-world networks are robust to random failures, but less able to cope with targeted attacks on high-degree nodes.
Assumptions made along the way:
1. Every node is equally likely to fail or be targeted (is this realistic?)
2. The entire network is required at all times (are all proteins involved in every biological process?)
3. Attacks affect only a single node (what about a propagating virus?)