If you’ve been keeping up with data center network news—and if you’re reading this blog I assume you have been—you’ve probably read about Facebook’s new data center fabric. The company recently deployed this new architecture in a data center in Iowa with the goal of increasing scalability and flexibility, both of which are critical for an organization that handles a tremendous amount of network traffic.
Essentially, Facebook’s new architecture was designed to break away from the aggregation cycle (for more on that, click here) and create a more elegant, efficient network. Rather than continuing to rely on clusters of hundreds of server cabinets with top-of-rack (ToR) switches aggregated into large core switches, Facebook built a distributed network, spreading core switching functionality across many smaller spine switches and making the company less reliant on massive hardware from incumbent switch vendors.
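To see why many small switches can stand in for a few big cores, here is a back-of-the-envelope sketch of how a two-tier leaf-spine (Clos) fabric scales. The port counts and the `fabric_capacity` helper are illustrative assumptions for this post, not Facebook's actual hardware or numbers:

```python
# Rough sketch of leaf-spine fabric sizing. Port counts below are
# hypothetical examples, not Facebook's real design.

def fabric_capacity(leaf_ports: int, spine_ports: int, uplinks_per_leaf: int):
    """Return (num_leaves, num_spines, servers) for a two-tier fabric."""
    # Each spine port connects to one leaf, so a spine can serve
    # at most spine_ports leaf switches.
    num_leaves = spine_ports
    # Each leaf dedicates `uplinks_per_leaf` ports to spines,
    # one uplink per spine switch.
    num_spines = uplinks_per_leaf
    # The remaining leaf ports face servers.
    servers_per_leaf = leaf_ports - uplinks_per_leaf
    return num_leaves, num_spines, num_leaves * servers_per_leaf

# 48-port leaves with 16 uplinks, 48-port spines:
print(fabric_capacity(48, 48, 16))  # -> (48, 16, 1536)
```

The point of the arithmetic: swapping in bigger spines or more uplinks grows the fabric incrementally, instead of forklift-upgrading a monolithic core.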
The company built this new architecture from 48-node pods, each served by a set of spine switches. It also built its own management software that automatically configures white box switches: if Facebook wants to scale by adding a new device to the data center, the software recognizes the new machine and configures it to match Facebook's specs. (Click here or here if you're interested in a more in-depth look at Facebook's new design.)
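The auto-configuration idea above can be sketched roughly as role-based provisioning: when a new switch announces itself, software derives its config from its role and location rather than from hand-written per-device files. Everything here, the `Switch` record, `render_config`, and the pseudo-config strings, is hypothetical; Facebook's actual tooling is not public:

```python
# Hypothetical sketch of role-based auto-configuration for white box
# switches. Names and config format are invented for illustration.

from dataclasses import dataclass

@dataclass
class Switch:
    serial: str
    role: str   # e.g. "tor" or "spine"
    pod: int

def render_config(sw: Switch) -> str:
    """Generate a role-based config for a newly discovered switch."""
    lines = [f"hostname {sw.role}-pod{sw.pod}-{sw.serial[-4:]}"]
    if sw.role == "tor":
        lines.append("uplinks: connect to all fabric switches in pod")
    elif sw.role == "spine":
        lines.append("downlinks: connect to one fabric switch per pod")
    return "\n".join(lines)

inventory: dict = {}

def on_discovery(sw: Switch) -> str:
    """Called when a white box switch boots and announces itself."""
    inventory[sw.serial] = sw      # record the new device
    return render_config(sw)       # hand back its generated config

print(on_discovery(Switch("FB001234", "tor", pod=7)))
```

The design point is that the device itself carries no precious state: its identity implies its configuration, so scaling out is just racking hardware and letting the software catch up.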
What Facebook has done with its new topology is demonstrate that you can build a large, scalable architecture using smaller switches to do the same work as larger devices. By distributing the switching, Facebook is essentially telling us that the world no longer needs the unwieldy core switch hardware at the heart of the network that incumbent vendors have had so much success selling in recent years. Taken a step further, Facebook's reliance on white box switches also proves that you can build a large, efficient network without using any switches from the big incumbent vendors.