
In the data centre

Tate Cantrell, chief technology officer at Verne Global

'Data centres in general are becoming a focal point like never before as the staggering growth of the internet, and especially the growth in the mobile arena, has pushed more and more cycles to the data centre. Mobile really has changed everything – not just in the last 10 years, but in the last two or three years. With mobile internet scaling from one per cent of the total internet in 2009 to more than 10 per cent this year, the amount of data being added is growing at an impressive rate. This scale is pushing companies to search for efficient solutions.

'The key objective of the data centre is to produce an information product efficiently. Just as in factories, data centre operations are centred on keeping efficiency high and total costs down. In the case of the data centre, the goal is true computational efficiency, and in Iceland we extend this beyond energy; we offer carbon-efficient solutions without a price premium.

'The concept of efficiency is, in many ways, a new focus of the data centre industry. Expectations for efficiency in the mechanical and electrical systems have grown dramatically over the last decade. However, attention is now turning toward the applications themselves and ways of tuning them to reduce energy consumption and, ultimately, cost in the data centre asset. At Verne Global, we use modular deployment techniques to make Iceland’s dual-sourced renewable power grid available for powering the most intense computational requirements. Additionally, we have prepared our site in Iceland for web-scale growth by harnessing large, redundant transmission connections to Iceland’s robust energy network.

'The first step in establishing an efficient operation is business alignment. The objectives must be understood before selecting the products and tools that will create efficiencies for the business. Ultimately, business leaders must understand their business objectives and translate them into IT asset requirements. Data centres enable top-line growth for enterprises when properly leveraged, but costs will potentially outweigh benefits if the deployment strategy is left to linger. Businesses can now choose industry-standard products that allow them to deploy data centre assets more quickly than ever before without stranding capital.

'Recently, we completed the deployment of a fully outfitted, redundant data centre installation in four months, thanks to our industry relationships and the modular solution that we chose in order to take advantage of Iceland’s favourable environment. The facility was fabricated and fully tested in a purpose-built factory before being delivered directly to our site, where all components were reconnected and then fully tested for site integration.

'Every solution that we adopt must be flexible enough to meet the needs of data centre customers from anywhere in the world, so we search for platforms that allow customers to deploy quickly and thereby rapidly realise the cost savings and carbon efficiency that Iceland offers.'

Jim Hearnden, IEEE member and enterprise technologist at Dell

'Europe is leading the green charge and, while eco-consciousness is certainly a part of that, it’s mainly a result of the fact that utility prices have risen over the years. The drive to economise has become a necessity rather than an ideal. Energy prices are also now much higher in the United States, and our American colleagues are embracing the trend and looking to EMEA for a steer. Another defining factor is that hardware now runs at higher temperatures. Traditionally, IT hardware would run at around 19°C, but x86 hardware has stretched that by quite a margin and, even by generic standards, data centre operators are looking at 25-27°C. That said, a vast number of operators out there have hardware that doesn’t run at those temperatures; they’re running at 21°C and that’s costing a considerable amount of money. There are companies, such as Dell, that have hardware rated to the telecoms NEBS (Network Equipment-Building System) standard, which means it can run at 35°C all the time. This is a significant change, and various influencing factors – some legislative, some behavioural – also seem to be pushing the market in this direction.

'In northern Europe, this will mean many data centres will be able to run without any cooling infrastructure – they simply need to move enough cool air around the facility. This ability to run without chillers enables organisations to save not only on capital expenditure for the infrastructure, but on operating expense as well. I suspect that data centres in the US will follow this example: not only does it make financial sense, it reduces the major issue of downtime as a result of cooling failure.

'Power is another determining factor for data centres. Power and cooling are very strongly interrelated, because every watt of power put into a server ends up as heat that the cooling infrastructure must remove. Most companies are looking very closely at their power budgets and are painfully aware of how much power is needed and what it will take to provide adequate cooling. From that point of view, the power race has been replaced by a focus on performance per watt, as efficiency has become the main concern. The situation is very similar to what has occurred in the automotive industry; manufacturers previously aimed to make the most powerful car, but now the focus is on developing the most economical, in keeping with market demands. The same is true of data centres.
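To make the performance-per-watt metric concrete, here is a minimal sketch that works the comparison for two hypothetical servers; the server names, throughput and power figures are illustrative assumptions, not numbers supplied by Dell or anyone quoted here.

```python
# Hypothetical comparison of two servers on performance per watt.
# Throughput and power figures are made-up illustrations, not
# measurements quoted by anyone in this article.

servers = {
    "older 1U server":   {"throughput_ops_per_s": 40_000, "power_w": 450},
    "current 1U server": {"throughput_ops_per_s": 90_000, "power_w": 380},
}

for name, spec in servers.items():
    perf_per_watt = spec["throughput_ops_per_s"] / spec["power_w"]
    print(f"{name}: {perf_per_watt:.0f} ops/s per watt")
```

Under these assumed figures, the newer box delivers well over twice the work per watt even though its headline power draw is only modestly lower, which is the shift from raw power to efficiency that the automotive analogy describes.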

'If you can come up with some sort of hot and cold air separation, there are some very quick wins in the data centre, such as blanking panels, and all of them offer massive leaps in efficiency for a relatively small cost. These solutions have yet to be implemented on a wide scale because, while many people are aware of them, they aren’t aware of their true implications in terms of the improvement to efficiency. The gains are viewed as marginal at best and, with mounting pressure simply to get the job done, these considerations are very often pushed to the side. The sheer pressure of keeping the kit running should not be underestimated and sufficient resource planning is absolutely vital. Maintaining good operational and data centre practices can offer a way of making steady improvements by ensuring that it’s a concurrent process – for example, every time a piece of hardware is taken out of a rack, a blanking panel should be put in.

'Of course, another issue is that operators can become entrenched in a ‘small data centre’ attitude and will continue to do things in a certain way simply because it’s how they have always been done. Essentially, they believe that if everything worked when there were one or two racks, it should keep working when there are more. The best advice is to take a step back and take advantage of all the available information. But be warned: there is a considerable amount of what I like to call ‘greenwash’ out there – things sound convincing, but when you get down to the nuts and bolts of the suggestions, they aren’t valid by any stretch of the imagination. Some of the advice would also have very limited payback. The best way to differentiate between the good and the bad is to talk with industry peers and find out what’s working for them. Reducing energy costs is fundamental to everyone, and ISPs (internet service providers) have this down to a fine art. Most organisations are proud of what they have been able to achieve in terms of increasing the efficiency of their facilities and will be more than willing to discuss it.'

Andy Watson, VP of Life Sciences EMEA at IntraLinks

'There has been a real mood shift within pharmaceutical companies away from the view that all information should be stored and managed internally, towards the realisation that software-as-a-service (SaaS) is a good solution. Companies don’t want to sink lots of capital into servers and hardware or spend time running validation routines, and so are turning to reputable vendors whom they can audit and rely upon to do all of that for them. The outsourcing approach can not only reduce costs, but also give companies the security of knowing that their systems are moving forward in a validated environment. Some organisations may be concerned about a lack of control, but the current change of perception has led to the word ‘vendor’ quite often being replaced with the word ‘partner’.

'The main obstacle for organisations is that the distinction between a public cloud and SaaS, or private cloud, is quite often misunderstood. The key is that we’re not talking about something like a proprietary solution where movies and photos are hosted; we’re talking about somewhere that data is safe and secure on an ongoing basis. We host in two locations in the US and two in Europe to offer our customers peace of mind in terms of where the data resides, and we use SunGard to host the servers that we manage and maintain. It’s important that we offer hosting in Europe, as many European organisations want the reassurance of knowing that their data is being retained here. We also have a robust process for encrypting the data and undergo penetration tests on a regular basis to maintain security.'

Chad Harrington, VP of marketing at Adaptive Computing

'Within the data centre, everything comes down to cost, be it power consumption, cooling, hardware or space. The overall price of servers – certainly per unit of compute – is declining, yet hardware remains the biggest expense. In time, however, the ancillary costs may take over as the main investment. Power costs, for example, continue to rise on a global scale, and much of a data centre’s power is consumed by the cooling equipment. Data centres produce a lot of heat, and many are implementing cost-reduction strategies such as using free air cooling instead of air conditioning.

'Other operations are running higher voltages, which results in lower power losses. Power is also lost each time a voltage or type of power is converted, and so larger data centres are investigating running DC only instead of AC. There is also a range of other technologies that reduce the number of conversion steps, which results in higher efficiencies. On the space side, it really comes down to having the flexibility to find a cheaper location. Unfortunately, some scientific centres need to locate their compute power in a specific area and are therefore faced with paying the going rate. Others have more freedom – the US National Security Agency (NSA), for example, is building a $2 billion facility in Utah, which is much cheaper than Virginia or Maryland, where most of its facilities are located.
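As a rough illustration of why cutting conversion steps helps, the sketch below multiplies the stage efficiencies of a longer AC distribution chain against a shorter DC chain; every stage and every efficiency figure is an assumption chosen for illustration, not data from Adaptive Computing.

```python
# Rough illustration of cascaded conversion losses in power delivery.
# Every stage and efficiency figure below is an assumed, illustrative
# value; real chains and efficiencies vary by facility.

from functools import reduce

ac_chain = {                      # multi-step AC path (assumed efficiencies)
    "UPS (double conversion)": 0.92,
    "PDU transformer":         0.97,
    "server PSU (AC-DC)":      0.90,
}
dc_chain = {                      # shorter DC path with fewer conversions (assumed)
    "rectifier to DC bus":     0.96,
    "server DC-DC input":      0.95,
}

def end_to_end(chain):
    # Multiply the stage efficiencies to get the overall delivery efficiency.
    return reduce(lambda acc, eff: acc * eff, chain.values(), 1.0)

for label, chain in (("AC chain", ac_chain), ("DC chain", dc_chain)):
    print(f"{label}: {end_to_end(chain):.1%} of input power reaches the IT load")
```

With these assumed figures the longer chain delivers roughly 80 per cent of the input power to the IT load and the shorter one roughly 91 per cent, which is why removing conversion stages is attractive at scale.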

'Historically, high-performance computing (HPC) data centres have had the largest footprint, but they have been outstripped by those operated by internet giants such as Google and Facebook. It’s interesting that, as an industry, we are now learning from these companies. Facebook, for instance, is using evaporative cooling in its data centres and locating them in dry climates, while Google has selected a site in Europe where sea water can be used for cooling. In the future, we believe that HPC data centres will be able to learn a lot from what these companies are doing and that the largest will closely resemble their operations.'

David Power, head of HPC at Boston

'The biggest issue our data centre customers have faced in recent years is the consistent rise in costs – both in energy consumption and in the increasing demand for space and power. As energy costs rise, along with heightened concern for the environment and carbon footprint, IT decision makers have a new responsibility to address energy efficiency in the IT infrastructure.

'Companies have always been focused on business growth, but what if their data centre can’t support expansion because of power and cooling limitations or spiralling energy costs? The double impact of rising data centre energy consumption and rising energy prices has dramatically heightened the importance of data centre efficiency as a means of reducing costs, managing capacity and promoting environmental responsibility.

'Data centre energy consumption is largely driven by demand for greater compute power and increased IT centralisation. While this demand was growing, global electricity prices also increased – by a staggering 56 per cent between 2002 and 2006. The financial implications, as you would expect, are significant, with estimates of annual power costs for US data centres, for example, ranging as high as $3.3 billion.

'So, in terms of energy consumption within the data centre, what accounts for such a large demand for power? A recent study by Emerson Network Power, which looked at energy consumption within typical 5,000 square-foot data centres, found that power-hungry x86-based servers and storage systems account for nearly 60 per cent of total consumption. And a major factor behind the massive appetite of these hardware platforms is the power draw of the processors.

'With the latest generation Intel Xeon E5-2600 series processors averaging a 95W TDP and the fastest models peaking at 150W, the CPU is a significant contributor to a server’s power consumption within the data centre, for the simple reason that processor architects continue to add more features and performance to each new design. However, multiple studies, from the likes of Microsoft and McKinsey, have consistently shown that most server CPUs typically run at only around 10 per cent utilisation in daily use.
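A back-of-the-envelope sketch of what that low utilisation can cost is given below, assuming for simplicity that CPU power scales linearly between an idle floor and the TDP; the idle fraction and the electricity price are illustrative assumptions, not figures from Boston or from the studies cited.

```python
# Back-of-the-envelope estimate of energy spent on an underused CPU.
# Assumes power scales linearly between an idle floor and TDP; the
# idle fraction and electricity price are illustrative assumptions.

TDP_W = 95             # quoted average TDP for an Intel Xeon E5-2600
UTILISATION = 0.10     # ~10 per cent average utilisation, as cited above
IDLE_FRACTION = 0.4    # assumed: idle draw as a fraction of TDP
PRICE_PER_KWH = 0.12   # assumed electricity price in USD per kWh
HOURS_PER_YEAR = 24 * 365

avg_draw_w = TDP_W * (IDLE_FRACTION + (1 - IDLE_FRACTION) * UTILISATION)
useful_w = TDP_W * UTILISATION          # crude proxy for power doing useful work
wasted_kwh = (avg_draw_w - useful_w) * HOURS_PER_YEAR / 1000

print(f"Average draw: {avg_draw_w:.0f} W, of which ~{useful_w:.0f} W is doing useful work")
print(f"Roughly {wasted_kwh:.0f} kWh, about ${wasted_kwh * PRICE_PER_KWH:.0f}, per CPU per year on overhead")
```

Even under these deliberately crude assumptions, most of the energy a lightly loaded CPU draws is overhead rather than useful work, which is the inefficiency the next two paragraphs address.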

'This means that data centre administrators are currently wasting phenomenal amounts of their limited fiscal and power budgets on inefficiently processed transactions. Over the last few years, numerous IT companies have proposed virtualisation as a potential solution to this problem, based on the idea that you can increase efficiency by running more than one application on a single server.

'Boston, however, looked to develop a completely new server – one designed to run a single application as efficiently as possible. It uses an alternative processor architecture to x86 as a way of reducing power consumption and heat dissipation. The Viridis platform features low-power, general-purpose system-on-chips (SoCs) – used in Calxeda’s ARM-based EnergyCard – that deliver an order-of-magnitude improvement in performance per watt. With each SoC using as little as 5W, a fully populated server within a 2U enclosure consumes 240W plus the overhead of disks (so roughly 300W for 48 quad-core servers).'
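The enclosure arithmetic in that last figure can be checked directly; the per-SoC power comes from the quote, the disk overhead is implied by the gap between the quoted 240W and ~300W, and the rest is simple multiplication in the sketch below.

```python
# Reproducing the power arithmetic quoted for the Viridis platform.
# Per-SoC power is taken from the quote; the ~60W disk overhead is
# implied by the difference between the quoted 240W and ~300W totals.

SOC_COUNT = 48          # quad-core ARM SoCs in a fully populated 2U enclosure
WATTS_PER_SOC = 5       # "as little as 5W" per SoC
DISK_OVERHEAD_W = 60    # implied overhead that brings ~240W up to ~300W

compute_w = SOC_COUNT * WATTS_PER_SOC
total_w = compute_w + DISK_OVERHEAD_W

print(f"Compute: {compute_w} W, total with disks: ~{total_w} W "
      f"({total_w / SOC_COUNT:.2f} W per quad-core server)")
```

That works out to just over 6W per quad-core server including the shared disk overhead, which is the contrast the quote is drawing against a 95W-TDP x86 CPU.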


