NGD’s Steve Davis considers the critical areas that will make HPC data centres fit for purpose
The IoT is no longer a fanciful glimpse of how things will be in the future. It is already here, and booming business according to Gartner’s latest IoT forecast, which says total IoT services spending this year will reach $273bn across professional, consumer and connectivity services.
Investment in endpoints and services alone is likely to reach $1,689bn this year, up from $1,379bn in 2016, with as much as $2,094bn forecast by the end of 2018. At the same time, analysts expect the number of connected IoT units to jump from 6.4 billion in 2016 to 20.4 billion by 2020.
Data centres will be pivotal to enabling the delivery of the IoT: a torrent of IoT-driven Big Data is heading their way. Beyond simply storing it all, the ability to access and interpret it quickly as meaningful, actionable information will be vitally important, and will give a huge competitive advantage to those organisations that do it well.
Retailers, for example, can monitor customer web clicks to identify buying trends; insurance firms can run complex risk-management calculations; utilities can capture industrial and household energy usage more precisely to forecast supply and predict outages; meteorologists can forecast storms more accurately and issue alerts; and oil and gas companies can make more efficient and safer drilling decisions. The list goes on.
The Big Data landscape is dominated by operational and analytical systems. Operational systems are real-time, interactive systems where data is primarily captured and stored; latency for these applications must be very low and availability very high to meet SLAs and user expectations of modern application performance. Analytical systems, on the other hand, demand high throughput to support complex analysis of large batches of retrospective data. Quite often both approaches are deployed together, and in any event they demand clusters of multiple servers, comprising thousands of terabytes, to store and process billions of files.
All of this is accelerating the need for high performance computing (HPC), but organisations with true HPC needs may find the public cloud ill-suited to delivering the right platform. Not only that, they may struggle to find colocation providers able to meet their specific needs for powering and cooling such highly dense and complex platforms.
However, the answer is not to design and build a highly expensive owned data centre that will age rapidly and fail to remain fit for purpose. At the same time, there are few colocation providers in the market who understand the specialised needs of HPC.
Enter the HPC-ready data centre, with the space, power, cooling and connectivity necessary to support clusters of very high density server racks, some pulling as much as 60 kW. By choosing the right colocation provider, an organisation can grow or shrink its HPC platform as required, in the knowledge that the facility provider will not be a constraint on its needs. This solution must not only offer a future-proofed data centre infrastructure to accommodate further expansion, but also provide the essential engineering skills necessary for the design and build of highly bespoke environments, with energy efficiency a top priority.
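To put those rack densities in context, a minimal back-of-the-envelope sketch (the cluster size here is an illustrative assumption, not a figure from the article) shows how quickly the IT load of an HPC deployment adds up:

```python
# Illustrative sketch: total IT load of a small HPC cluster
# built from high-density racks. Figures are assumptions.
racks = 10
kw_per_rack = 60              # upper end of the density cited above

it_load_kw = racks * kw_per_rack
print(it_load_kw)             # 600 kW of IT load, before cooling overhead
```

Ten such racks draw more power than many entire conventional server rooms, which is why an ordinary colocation hall often cannot accommodate them.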
Food for thought
The main areas of concern when choosing a data centre provider are power and cooling. HPC, by its very nature, requires very large amounts of power, and a suitable facility provider should be able to draw from the grid not only what it needs now, but also what it predicts will be required for the foreseeable future. However, grid power is not a guaranteed service: outages will occur. The facility provider must therefore be able to demonstrate how it will bridge the gap between a grid outage and auxiliary power kicking in, on the basis that all workloads managed within the facility are maintained.
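The bridging requirement can be sanity-checked with simple arithmetic. This is a hedged sketch, with all figures assumed for illustration: the UPS battery autonomy must comfortably exceed the time the standby generators need to start and take the load.

```python
# Hedged sketch: can the UPS bridge the gap until generators take over?
# All figures are illustrative assumptions, not a real facility's specs.
it_load_kw = 600            # critical IT load to be carried through the outage
battery_kwh = 150           # usable UPS battery capacity
generator_start_s = 60      # assumed time for gensets to start and synchronise

bridge_minutes = battery_kwh / it_load_kw * 60
print(bridge_minutes)       # 15.0 minutes of autonomy at full load

# The autonomy must exceed the generator start time by a wide margin.
assert bridge_minutes * 60 > generator_start_s
```

In practice a provider would also factor in battery ageing, partial cell failures and repeated start attempts, but the basic check is the same.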
As part of this power management, the systems in place must also act as power conditioners, keeping the power fed to the IT equipment firmly within defined voltage and current parameters at all times, with spikes, surges and brown-out fades all dealt with by the in-line power management systems.
Cooling is also an issue, as HPC requires more targeted approaches. Simple computer room air conditioning (CRAC) or free air cooling systems, such as swamp or adiabatic coolers, are unlikely to have the capabilities required. The facility provider must either supply sufficient cooling capacity for all the HPC platforms under its roof, or be able to effectively reject the excess heat extracted from the HPC systems by built-in in-row coolers, or both.
Reliable, low latency connectivity is a prerequisite. Many connectivity problems come down to physical damage, such as cables being cut during roadworks, so ensuring that connectivity enters the facility along multiple diverse routes is crucial. Such connectivity should also be of the right quality: basic public connectivity will generally not be sufficient for HPC systems. Look for providers with specialised connectivity solutions, such as BT Wholesale Optical Nodes and Cloud Connect.
Finally, with the amount of power drawn and heat generated by HPC rack clusters, ensuring energy efficiency and a low PUE is a must. Data centres have historically used disparate monitoring systems, which are considerably less efficient than an integrated monitoring and management approach. An advanced system will save many thousands of pounds through reduced power costs.
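PUE (power usage effectiveness) is simply total facility power divided by the power delivered to the IT equipment, so the closer it is to 1.0 the better. A minimal sketch, with assumed loads, shows the calculation:

```python
# Hedged sketch: PUE = total facility power / IT power.
# Loads below are illustrative assumptions, not a real facility's figures.
it_power_kw = 600                 # power reaching the IT equipment
cooling_and_overhead_kw = 180     # cooling, power losses, lighting, etc.

pue = (it_power_kw + cooling_and_overhead_kw) / it_power_kw
print(pue)                        # 1.3
```

Every 0.1 shaved off the PUE of a cluster this size avoids tens of kilowatts of continuous overhead, which is where an integrated monitoring and management approach pays for itself.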