With the ever-growing demand for big data, innovative data center technology is as important as ever. Last week, I discussed the future of software-defined wide-area networks (SD-WANs) in 2018, which are expected to skyrocket in popularity. This week, we’ll cover another noteworthy topic: hyperscale data centers.
Whether you call them hyperscale or webscale, or split the very largest data center owners into two “scale” types, companies like Microsoft, Facebook, Google, Apple, and even Netflix and LinkedIn are doing data centers at a different level from enterprise organizations. How different are they, and do any of their strategies translate over? I recommend checking out the articles below to find out!
Whether you call them hyperscale or webscale, Apple, Facebook, Google and Microsoft are global powerhouses, each with massive data centers pushing the cutting edge of network technology. Generally, I focus on the innovative designs of the data centers themselves, or the technology innovations within, but this week I found a few articles looking at these facilities from less familiar, and in some cases more beautiful, angles.
Read on for more detail, and let me know what you think in the comments below.
News of the Week - 1/8/16
Wrapping up the first week of 2016, it’s interesting to look at the ecosystem of people driving the transformation of networks and technology in general. Researchers are constantly pushing the envelope of what is possible, finding ways to make our technology faster, easier to manufacture, more powerful and more energy efficient. Manufacturers make those advances available to the wider world as new products and new approaches to the architecture of complex systems. Then integrators and service providers tie it all together for customers in enterprise and even government to gradually (or sometimes not-so-gradually) transform the world we live in.
Happy New Year!
It’s Monday, but that doesn’t mean you can’t have fun – and I can’t be the only one who enjoys reading predictions for the year ahead! There has been a lot of change in the technology landscape over the past year, and all signs point to 2016 being another year full of new and exciting developments.
What’s driving so much change, so fast? In my opinion, it’s a combination of the push of customer demand – big data, video streaming and the explosion of mobile devices in business, to name three – and the pull of new technological possibilities such as cloud computing, SDN, NFV and new network architectures. But let’s see what the experts say:
Over the past several years, server, network, storage and application virtualization has revolutionized the way hyperscale data centers are built, consolidating workloads onto far fewer physical machines. The trend has simplified network architecture significantly and resulted in huge cost savings as well. In fact, according to a recent report from the U.S. Government Accountability Office, U.S. governmental agencies alone saved $1.1 billion from 2011-2013 with virtualization.
Still, as hyperscale data centers grow, so do their costs for real estate, power, cooling and racks. To combat this growing problem, these data centers need more than a tweaking of traditional network architectures—they need a complete re-imagining.
Instead of continuing to expand the network with increasingly large and expensive equipment—like core and aggregation switches, for example—what if hyperscale data centers could visualize and control every part of the network, including the physical layer, from a single pane of glass? What if they could connect any two points on the network using intelligent, software-controlled optical fiber? And what if every packet and flow could be optimized on this software-controlled fiber infrastructure via a central orchestration system?
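To make the idea of software-controlled fiber a little more concrete, here is a minimal sketch of what a central orchestration model might look like. Everything here is an illustrative assumption, not any vendor’s actual API: the `OpticalCrossConnect` and `Orchestrator` classes simply model a patch fabric whose connections are set in software instead of by physically re-cabling.

```python
# Hypothetical sketch of "connectivity virtualization": a central
# orchestrator managing a software-controlled optical cross-connect,
# so any two endpoints can be patched together from one place.
# All class and method names are illustrative assumptions.

class OpticalCrossConnect:
    """Models a fiber patch fabric whose connections are set in software."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.links = {}  # port -> port, stored in both directions

    def connect(self, a, b):
        if a not in self.ports or b not in self.ports:
            raise ValueError("unknown port")
        if a in self.links or b in self.links:
            raise ValueError("port already in use")
        self.links[a] = b
        self.links[b] = a

    def disconnect(self, a):
        b = self.links.pop(a)
        del self.links[b]

    def peer(self, a):
        return self.links.get(a)


class Orchestrator:
    """Single pane of glass: one view of every physical-layer connection."""

    def __init__(self, fabric):
        self.fabric = fabric

    def provision_path(self, src, dst):
        # A real system would also program the optical hardware here;
        # this sketch only updates the software model.
        self.fabric.connect(src, dst)
        return (src, dst)

    def topology(self):
        # De-duplicate the bidirectional link map into unordered pairs.
        return {tuple(sorted(p)) for p in self.fabric.links.items()}


if __name__ == "__main__":
    fabric = OpticalCrossConnect(ports=["rack1", "rack2", "spine1", "spine2"])
    orch = Orchestrator(fabric)
    orch.provision_path("rack1", "spine1")
    orch.provision_path("rack2", "spine2")
    print(orch.topology())
```

The point of the sketch is the design choice, not the code: once the physical layer itself is programmable, adding capacity between two points becomes an API call rather than a purchase of larger core or aggregation switches.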
Achieving this connectivity virtualization would completely change the data center architecture landscape for hyperscale environments. It would relieve many of the inevitable problems that will emerge if these data centers continue to use traditional network architectures in this age of information expansion.
The bleeding-edge solutions mentioned above are brand new, and as such, are sure to provoke deeper discussions and a host of questions from network engineers and data center managers. If your data center is struggling to scale efficiently and you’d like to learn more about how tomorrow’s solutions can help solve these problems, check out the Fiber Mountain: Scaling Hyperscale white paper from Intellyx.