Data center efficiency isn’t a new challenge, but the methods used to improve PUE and address other power consumption concerns are constantly evolving. It’s easy to focus on how hyperscalers like Facebook and Google are pushing the envelope, but this week’s featured articles showcase some different angles.
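For readers new to the metric, PUE (Power Usage Effectiveness) is just total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean every watt goes to compute. A minimal sketch (the function name and sample figures are illustrative, not from any of the articles below):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 is the theoretical ideal (no overhead for cooling,
    power distribution, lighting, etc.); typical modern facilities
    land somewhere between 1.1 and 2.0.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh


# Example: a facility drawing 1,500 kWh total while its IT gear uses 1,000 kWh
print(pue(1500, 1000))  # → 1.5
```

Lower is better: every reduction in cooling or distribution overhead moves the ratio closer to 1.0.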
GreenBiz | Michael Rohwer discusses the connection between corporations looking to reduce their emissions footprints and colocation providers’ appetite for renewable energy. In addition to increased efficiency and the corresponding reduction in energy use, colocation and cloud providers that can verify their use of renewable energy sources can pass the benefit of that carbon reduction on to their customers.
MIT News | Larry Hardesty reports on an effort to expand the use of flash memory in cache systems. While flash memory is commonly considered too slow for cache servers, MIT researchers have developed a new system that is competitive in speed with RAM while consuming around 5% of the energy.
The Next Platform | Nicole Hemsoth goes in depth on NASA’s approach to building out new supercomputing capacity. Instead of constructing massive new data center complexes, NASA is taking a modular route. This not only allows greater flexibility in choosing the best equipment for a task, but also increases cooling efficiency several times over.
Of course, energy efficiency is only one piece of the data center puzzle. If you’re interested in the evolution of managed and dynamic physical layer infrastructure, click through below to watch the joint Fiber Mountain & Legrand webinar!