
Scaling IP multicast on datacenter topologies

Author(s): Li, X; Freedman, Michael J

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1x38g
Abstract: IP multicast would significantly reduce both network and server overhead for many datacenter applications' communication. Unfortunately, traditional protocols for managing IP multicast, designed for arbitrary network topologies, do not scale with aggregate hardware resources in the number of supported multicast groups. Prior attempts to scale multicast in general settings are all bottlenecked by the forwarding table capacity of a single switch. This paper shows how to leverage the unique topological structure of modern datacenter networks to build the first scale-out multicast architecture. In our architecture, a network controller carefully partitions the multicast address space and assigns the partitions across switches in datacenters' multi-rooted tree networks. Our approach further improves scalability by locally aggregating multicast addresses at bottleneck switches that are running out of forwarding table space, at the cost of slightly inflating downstream traffic. We evaluate the system's scalability, traffic overhead, and fault tolerance through a mix of simulation and analysis. For example, experiments show that a datacenter with 27,648 servers and commodity switches with 1,000-entry multicast tables can support up to 100,000 multicast groups, allowing each server to subscribe to nearly 200 multicast groups concurrently.
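The two mechanisms the abstract describes — partitioning the multicast address space across switches, and aggregating addresses when a switch's forwarding table fills — can be illustrated with a minimal sketch. All names, table sizes, and the hash-based partition function below are hypothetical, not the paper's actual design:

```python
# Illustrative sketch (not the paper's implementation) of:
#  (1) partitioning multicast groups across core switches, and
#  (2) prefix aggregation at a switch whose forwarding table is full.

TABLE_CAPACITY = 1000       # per-switch multicast entries, as in the evaluation
NUM_CORE_SWITCHES = 16      # hypothetical core count in a multi-rooted tree

def assign_partition(group_addr: int) -> int:
    """Map a multicast group to the core switch responsible for its partition
    (a simple modulo stands in for the controller's partitioning policy)."""
    return group_addr % NUM_CORE_SWITCHES

class SwitchTable:
    """Forwarding state for one switch, falling back to aggregation on overflow."""
    def __init__(self, capacity: int = TABLE_CAPACITY):
        self.capacity = capacity
        self.exact = {}      # group address -> set of output ports
        self.prefixes = {}   # shortened prefix -> union of output ports

    def install(self, group: int, ports: set, shift: int = 8) -> None:
        if len(self.exact) < self.capacity:
            self.exact[group] = set(ports)
        else:
            # Table full: merge this group into a shorter prefix entry.
            # Every group sharing the prefix now forwards to the port union,
            # slightly inflating downstream traffic (the stated trade-off).
            prefix = group >> shift
            self.prefixes.setdefault(prefix, set()).update(ports)

    def lookup(self, group: int, shift: int = 8) -> set:
        if group in self.exact:
            return self.exact[group]
        return self.prefixes.get(group >> shift, set())
```

For instance, with `capacity=1`, installing a second group under the same 8-bit-shifted prefix lands in an aggregate entry, and any third group sharing that prefix would also match it — the source of the extra downstream traffic.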
Publication Date: 9-Dec-2013
Electronic Publication Date: 2013
Citation: Li, X., Freedman, M. J. (2013). Scaling IP multicast on datacenter topologies. 61 - 72. doi:10.1145/2535372.2535380
DOI: 10.1145/2535372.2535380
Pages: 61 - 72
Type of Material: Conference Article
Journal/Proceeding Title: 2013 9th ACM International Conference on Emerging Networking Experiments and Technologies
Version: Final published version. This is an open access article.



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.