https://doi.org/10.1140/epjb/e2009-00344-7
Computational complexity of impact size estimation for spreading processes on networks
Institute for Operations Research, ETH Zurich, 8092 Zurich, Switzerland
Corresponding author: laumanns@ifor.math.ethz.ch
Received: 7 January 2009
Revised: 28 August 2009
Published online: 17 October 2009
Spreading processes on networks are often analyzed to understand how the outcome of the process (e.g. the number of affected nodes) depends on structural properties of the underlying network. Most available results are ensemble averages over certain interesting graph classes, such as random graphs or graphs with a particular degree distribution. In this paper, we focus instead on determining the expected spreading size and the probability of large spreadings for a single (but arbitrary) given network, and we study the computational complexity of these problems using reductions from well-known network reliability problems. We show that computing both quantities exactly is intractable, but that the expected spreading size can be efficiently approximated with Monte Carlo sampling. When nodes are weighted to reflect their importance, the problem becomes as hard as the s-t reliability problem, for which no efficient randomized approximation scheme is known to date. Finally, we give a formal complexity-theoretic argument why there is most likely no randomized constant-factor approximation for the probability of large spreadings, even in the unweighted case. A hybrid Monte Carlo sampling algorithm is proposed that resorts to specialized s-t reliability algorithms to accurately estimate the infection probability of those nodes that are rarely affected by the spreading process.
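To make the Monte Carlo estimator for the expected spreading size concrete, the following minimal Python sketch may help. It assumes an independent-cascade (bond-percolation) spreading model with a uniform transmission probability p and a fixed seed node; the function names, the model details, and the toy graph are illustrative assumptions, not the paper's implementation.

```python
import random

def spreading_size(adj, seed, p, rng):
    """One realization of an (assumed) independent-cascade spreading:
    each edge out of a newly affected node transmits independently
    with probability p. Returns the number of affected nodes."""
    affected = {seed}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        for nbr in adj[node]:
            if nbr not in affected and rng.random() < p:
                affected.add(nbr)
                frontier.append(nbr)
    return len(affected)

def estimate_expected_size(adj, seed, p, n_samples=10_000, rng=None):
    """Plain Monte Carlo estimate of the expected spreading size:
    average the realized sizes over independent simulations."""
    rng = rng or random.Random()
    total = sum(spreading_size(adj, seed, p, rng) for _ in range(n_samples))
    return total / n_samples

if __name__ == "__main__":
    # Toy network: a path graph 0-1-2-3, given as an adjacency dict.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(estimate_expected_size(adj, seed=0, p=0.5))
```

Since each realized size lies between 1 and n, a polynomial number of such samples suffices for a good relative estimate of the expectation, which is the intuition behind the efficiency claim above; estimating the small per-node infection probabilities that motivate the hybrid algorithm would require far more samples.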
PACS: 02.70.Tt – Justifications or modifications of Monte Carlo methods / 64.60.aq – Networks / 89.70.Eg – Computational complexity / 89.75.-k – Complex systems
© EDP Sciences, Società Italiana di Fisica, Springer-Verlag, 2009