What is the Open Compute Project, and What's Needed to Make it Happen

30th Oct 2014

A few years back, Facebook engineers wanted to build their data centers with the most optimized equipment. They had the advantage of a clean slate: they could design and build their own compute, storage and network devices from cheap, commodity components, creating their own hyperscale data center. In 2011, they shared their specs with the industry, sparking an open hardware movement and a foundation named the Open Compute Project.

With a rapidly growing community of collaborating engineers around the world, the Open Compute Project Foundation designs and enables the delivery of the most efficient server, storage and data center hardware for scalable computing. Members openly share ideas, specifications and other intellectual property to advance networking innovation and reduce operational complexity in the scalable computing space. Through collaboration between data center users, who know what they need and want, and technology developers, they are working to openly develop the most efficient servers, storage and data center infrastructure, bringing computing to the lowest cost and the widest distribution. The goal is to give network operators a range of options for buying and deploying standards-based operating systems and hardware that curb both capital and operating expenditures.

Today, enterprises outfitting very large data centers face the same problems that Facebook and other hyperscale operators faced a few years ago. They want to strip away unnecessary complexity and get down to the bare bones of the hardware. Many are now committed to the Open Compute Project, proving that this approach has the potential to lower capital expenditures and operating costs for large-scale IT deployments. The Open Compute networking approach gives customers a flexible, open way to manage their data centers.

Facebook has reported that using Open Compute hardware has saved it billions of dollars. The theory behind Open Compute is that hardware becomes a commodity: identical, interchangeable equipment is available from several vendors that compete only on price. Ideally, Open Compute hardware is also simpler and more standardized, which should mean fewer compatibility problems between hardware and drivers. Servers are the most popular Open Compute implementation, as enterprise-grade white-box servers are available at competitive prices.

Although some industry experts maintain that it's too early to say whether the broader IT industry will adopt the Open Compute Project, there is certainly an increasing need for more computing resources. Energy costs, and the corresponding impact on the environment, add to the pressure for a shift in how the IT industry builds data centers. While most enterprises can't replicate the success that Facebook has had with the Open Compute Project, they can still reap benefits from the industry-led project over time.

For the Open Compute Project to succeed, enterprises need an operating system that offers modularity, a high degree of scalability, and the robustness of an established platform. The Open Compute Project has a great future ahead of it, and it will be even stronger with an operating system designed for Open Compute Project users.