Uptime Institute says Datacom data centres are among world’s best

By Tom Jacob

N+1 1.8 MW Caterpillar generators

Over the past 12 months, our improvement initiatives at our Orbit and Kapua data centres have focused on challenging ourselves to be better and giving our customers confidence in the running, availability, security and sustainability of our data centres over their lifetime.

This program has already seen us secure a certificate of compliance in support of customer PCI requirements, and complete a successful Risk and Control audit against International Standard on Assurance Engagements 3402 “Assurance Reports at a Service Organisation”.

We are delighted that the culmination of the 2014 program has been the Management & Operations (M&O) Stamp of Approval from the Uptime Institute, the acknowledged world authority on mission-critical data centre infrastructure. The Uptime Institute is an unbiased, third-party data centre research, education and consulting organisation.

In July, two representatives from the Uptime Institute were in New Zealand for a week to rigorously assess the management and operations practices at the Orbit and Kapua data centres.

The achievements by both the Orbit and Kapua data centres were outstanding; rated by the Uptime Institute as being amongst the very best run data centres in the world. Orbit and Kapua are the first and only data centres in Australasia to have received these Stamps of Approval.

Kapua Exterior

The Kapua data centre achieved a score of 95.6/100. To put that score in context: the pass mark is 80/100, the average score is just 79.2, and the highest score ever awarded is 96/100. Only just over 50 percent of data centres pass the review the first time.

Methods of Procedure (MOPs), staff training, shift rotation methods, and our tablet-based approach to system checks were singled out by the Uptime Institute as particular strengths.

Orbit and Kapua – Tier 3+ Certified

An independent TIA-942 audit, also completed during 2014, rated Kapua and Orbit as meeting Tier 4 standards for Mechanical and Telecommunications, and Tier 3+ standards for Architecture and Electrical.

Datacom data centres classed among the top 5 in ANZ

By Tom Jacob

IT infrastructure operations and data centres were on the agenda at the Gartner Summit in Sydney this year. The two-day event was centred on maximising value and managing change in a cloud-driven world.

During the summit, Gartner provided its perspective on data centre and infrastructure utility providers, ranking Datacom by size and category amongst the top five providers across ANZ. The chart, organised alphabetically, is based on Gartner’s estimate of each provider’s IT outsourcing (data centre and infrastructure outsourcing) revenue in 2013, in US dollars.

[Chart: ANZ data centre provider rankings]

Competition is fierce, but the data centre market is fragmented with many organisations providing a variety of infrastructure services.

Rolf Jester, VP Distinguished Analyst at Gartner, explains:

“The Asia-Pacific data centre market is more complex and difficult to compete in due to a number of market pressures, ranging from inconsistent offerings and pricing terms to increased hyper-competition from cloud, telco, hosting and Indian/Japanese providers.”

Here are some key questions asked and takeaways from the summit.

The theme for this year’s summit was maximising value in a cloud-driven market. What are your views and key takeaways?

In the past when I have attended data centre conferences the content has primarily focused on the facility services side of the data centre: power, cooling, design concepts and management practices. This year’s summit was very different as the time spent on the physical facility was less than five percent and the remainder was heavily focused on cloud, networking and global data centre connectivity.

It was interesting to note that Gartner’s definition of a data centre has evolved: it is now considered more a network of places from which IT services are delivered, rather than a purpose-built facility providing power, cooling and facilities management to support customer IT workloads. Gartner regularly referred to a data centre as a place that cloud-based services are delivered from, whether its location is known or unknown.

The relationship between the customer and the data centre will now, more than ever, be governed by contracts, rather than by the customer having a say in how the facility is run and managed, as has often been the case in the past.

What do you see as the main considerations or constraints for organisations reviewing their data centre strategies?

In the past, the typical constraints were capacity (power, cooling and space), specialist data centre/server room management skills and ongoing funding. We have observed a change in the last 18 months: these issues are diminishing, mainly due to consolidation with the aid of improved IT infrastructure and virtualisation technologies. Cloud services (IaaS and SaaS) are also maturing, and customers are seriously considering how these services will fit inside their organisations. Early adopters are already consuming services such as email, digital image storage, and test and development environments. We only need to look at the success of our own Datacom Cloud Services (DCS) and Datacom Cloud Services Government (DCSG), along with the global success of AWS, Azure and Office 365.

We see customers’ own facility constraints becoming less and less of an issue and we are already observing customers repurposing their old server rooms back to productive office space. And when organisations relocate premises it’s clear that moving the IT equipment to a data centre makes more sense than reconstructing a server room.

How do you see the data centre market evolving in ANZ, considering the analyst view on market pressures, consolidation, competition and partnerships?

The future is uncertain and depends on where customers are comfortable having their services and data stored and delivered from. There are current customer concerns about data sovereignty, network access and the high availability and locality of these facilities. But we don’t expect to see many more new data centres being built and we’re certain we’ll see a number of the older data centres empty out and close down. We’re confident that if customers choose New Zealand-based cloud service providers then there’ll be a healthy local market and, in time, additional Tier 3 data centres may be commissioned. Datacom is well-placed for this growth as both of our Tier 3 Data Centres (Orbit in Auckland and Kapua in Hamilton) have plenty of capacity. Datacom also actively promotes the use of these data centres to competing cloud and service providers with the aim of giving customers plenty of choice and retaining them.

What criteria do you think we have that places us in that class of providers as mentioned by Gartner?

Firstly it’s because Datacom covers all the bases. It has high-quality, innovative data centres, and the right policies to encourage service providers and customers to host there. And Datacom has a wide range of cloud offerings that give customers convenient access to services.

The choices Datacom made in the initial design 6-7 years ago have proven to be winners. The use of outdoor air to cool the IT equipment has consistently improved the data centre’s energy efficiency, making a significant contribution to customers’ sustainability goals. And the flexibility of Datacom’s solutions means we can always find a way to make it work for a customer—it’s not one size fits all.

Tom Jacob is Datacom’s General Manager of Data Centres.

The relevance of PUE

By Tom Jacob

The rapid expansion of the internet, together with the declining cost of computation (in energy terms, performance per watt), has resulted in exploding demand for servers. Servers are becoming denser, with each rack requiring more energy and creating more heat. Meeting the power and cooling requirements of a modern rack while managing rising energy costs demands a more efficient data centre.

This efficiency can be measured and tracked using power usage effectiveness (PUE), a measure of how efficiently a data centre uses energy; specifically, how much of the energy it draws actually reaches the computing equipment (as opposed to cooling and other overhead).

PUE is much like the energy star rating you see on whiteware appliances, except that lower numbers are better: the lower the PUE, the more efficient the data centre. A low PUE translates to direct savings for customers, and this measure should be at the top of the list when evaluating data centres.
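As a concrete illustration, PUE is simply total facility energy divided by the energy delivered to the IT equipment. The meter readings below are invented for the sketch, not figures from any Datacom facility:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy divided by the
    energy used by the IT equipment alone. 1.0 is the theoretical ideal
    (zero energy spent on cooling and other overhead)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings (kWh), illustrative only.
total_kwh = 1_300_000  # IT load plus cooling, lighting and electrical losses
it_kwh = 1_000_000     # IT equipment only

print(round(pue(total_kwh, it_kwh), 2))  # 1.3
```

Tracking this ratio month by month (and as a rolling 12-month figure) is what lets a facility demonstrate its efficiency to customers rather than simply assert it.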

The definition of PUE was established by The Green Grid, a non-profit organisation founded in 2007 whose mission is to become the global authority on resource efficiency in information technology and data centres, collaborating with companies and specialists all over the world.

Ten years ago a PUE of 2.0 was fairly typical, but by today’s standards that is quite inefficient. The goal is to get the PUE to 1.5 or lower. In addition, the figure should be auditable and readily available to show customers.

At Orbit, our Auckland-based data centre facility, we designed for a PUE of 1.5, and today we are achieving monthly PUEs as low as 1.3. The current rolling 12-month PUE is below 1.5, exceeding the design specification.

We invite you to tour Orbit, or Kapua, our Hamilton-based data centre facility, and our team will show you what to look for when evaluating data centres.

Tom Jacob is Datacom’s General Manager of Data Centres.

Auckland and Hamilton data centres become PCI Compliant

By Tom Jacob and Darryl Roots

Data integrity and security remain key responsibilities for many organisations dealing with highly sensitive business information, especially where that includes data such as a customer’s payment details and, in particular, cardholder information.

In 2006, five global payment brands established the PCI Security Standards Council (SSC). Through their operating regulations, they require that any merchant or service provider that accepts scheme-branded credit or debit cards for payment validate its compliance against a number of specific test points outlined in the PCI Data Security Standard (DSS).

Specifically, the PCI DSS is a set of requirements designed to ensure that all companies that process, store or transmit credit card information maintain a secure environment. As many of Datacom’s Orbit (Auckland-based data centre facility) customers deal with financial transactions involving card schemes such as VISA or MasterCard, they must meet various degrees of compliance mandated by the PCI DSS.

Over the past few months the Orbit and Kapua (Hamilton-based data centre facility) teams have been working towards achieving this, and in December and January we received notification that both facilities are now fully PCI compliant and meet the standard for a Level 1 Service Provider in restricting physical access to cardholder data.

Achieving PCI compliance in this way supports the requirements of our clients to be PCI accredited with various banking and financial institutions who provide them credit card merchant facilities and enable them to accept those cards as payment for goods and services.

Being able to provide data centre services that have already met PCI requirements reduces the overall cost of compliance for our clients and saves them from having to implement those stringent security measures within their own facilities, something that is often very difficult to achieve.

Datacom is committed to assisting clients in reducing the impact and costs of PCI compliance with our security consulting services, our own PCI compliant payments gateway and now the achievement of this milestone at Orbit and Kapua.

Tom Jacob is Datacom’s General Manager of Data Centres.

Darryl Roots is Datacom’s Business Manager of Payment Services.

Creating a Fail-safe Data Centre Disaster Recovery Plan

Whether you host the bulk of your infrastructure in your own data centre, a third-party data centre or a mix of both, you need a data centre disaster recovery plan. These plans differ from a traditional disaster recovery plan in that they take into account the actual data centre: its location, infrastructure and environmental systems, amongst other things. A DR plan, whilst still necessary, deals more with the IT systems that could be disrupted during an outage or event and the process of recovering them. The right data centre services provider can help you create your plan, whether it’s for your own data centre or a third-party facility.

Check the operations

Depending on your arrangement, this exercise could involve input from internal data centre teams, external data centre providers and other resources. Anything related to the data centre infrastructure you use should be considered in your assessment. This will include building and floor plans, environmental features and network configuration documents. If you’re using a third-party facility, they might already have their own data centre disaster recovery plan. If you’re relying on your own facility, you’ll need to assess the biggest potential issues that could affect the data centre, such as security breaches and power outages, which types of disruptions have affected you in the past and what the current process is for addressing them.

Depending on the information you uncover, you might need to retest certain procedures and redefine the maximum outage time you can bear. You’ll also want to ensure you know which key staff will need to be available to respond to data centre incidents and whether they need any additional training or retraining. In addition, outline the response procedures of any third-party providers and if they ran smoothly last time they were used.

Know your gaps

Mining this information will give you a current-state picture of what is missing from your data centre disaster recovery strategy. It will also help you identify the most pressing risk scenarios, whether they relate to nature, security or human error. You’ll aim to list these potential situations in order of impact, seriousness and probability to help formulate the proper steps and procedures to respond to them. Then you’ll outline how to achieve your desired future state of data centre readiness and what this will require in terms of resourcing, staff training and budget.
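The impact-and-probability ranking described above can be sketched as a simple score-and-sort. The scenarios and 1-5 scales below are placeholders for illustration, not a formal risk methodology:

```python
# Each risk scenario scored 1-5 for probability and business impact
# (illustrative values only; substitute your own assessment).
risks = [
    {"scenario": "extended power outage", "probability": 3, "impact": 5},
    {"scenario": "physical security breach", "probability": 2, "impact": 4},
    {"scenario": "cooling failure", "probability": 2, "impact": 5},
    {"scenario": "operator error during maintenance", "probability": 4, "impact": 3},
]

# Rank by combined exposure (probability x impact), highest first.
for r in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    print(f'{r["probability"] * r["impact"]:>2}  {r["scenario"]}')
```

Even a rough ranking like this makes it obvious which scenarios deserve tested procedures first and which can wait for a later planning cycle.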

After the plans are reviewed and next steps are actioned, the data centre disaster recovery plan should be tested and then implemented once any needed tweaks are made. Going forward, you’ll want to schedule regular audits of the plan to ensure it still meets your business’s needs and reflects the current state of your data centre assets and arrangements.

Remember to enlist the help of your data centre services provider or hosting facility in creating your data centre disaster recovery plan. Their expertise will help ensure all your bases are covered so you have the most protection from potential outages and incidents.

The Importance of an Environmentally-Conscious Data Centre for your Business

Gartner has estimated that the IT industry produces 2 per cent of global CO2 emissions, a percentage almost equal to that of the aviation industry. Worldwide, data centres use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to the New York Times.

But it’s more than a green issue. Having your infrastructure housed in a data centre that has poor cooling, ventilation or power balance can increase the risk that your systems will suffer downtime, poor performance or damage. When looking for a data centre to host your infrastructure, check for the following environmental design elements and monitoring features to help ensure your hardware is protected and runs at optimal levels.

Environmental Management Plan

An Environmental Management Plan (EMP) focuses on sustainability targets and strategies to implement environmentally-conscious design and operational elements in the data centre. When looking for a data centre provider, ensure that their EMP is audited to the ISO 14001 standard annually. The certification is considered the only auditable international standard defining requirements for the creation, implementation and maintenance of EMPs, representing a managed, long-term commitment to carbon reduction and sustainability.

Heating and cooling

Better energy utilisation can increase power efficiency across the entire data centre, meaning your systems run better and faster. Over time, this efficiency can reduce operations and maintenance costs, savings that can be passed on to you as the customer. Features such as cold air economisers reduce energy consumption and help ensure an optimum Power Usage Effectiveness (PUE). Other things to look for in your data centre provider are a purpose-designed layout of cooling and ventilation using the proven hot/cold aisle approach from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), whose standards focus on energy efficiency, sustainability and building systems.

Building automation systems

A BAS offers visibility of any inefficiency or environmental issue. These systems maintain strict oversight of all site environmental controls, including temperature, humidity and access alerts, and provide clear mapping of the entire site to allow immediate identification of incidents. Such a system also monitors and alerts on building systems and any trends in performance. The best BAS will also have security alerts and breaches recorded on camera and escalated to the network operations centre (NOC) and the data centre manager.
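In spirit, the alerting side of a BAS reduces to comparing sensor readings against an environmental envelope. The sketch below is illustrative only and not any vendor’s API; the temperature bounds loosely follow the ASHRAE recommended range for IT equipment inlet air, and the humidity limits are a simplification (real envelopes are expressed in dew point):

```python
# Illustrative BAS-style threshold check.
TEMP_RANGE_C = (18.0, 27.0)        # roughly the ASHRAE recommended envelope
HUMIDITY_RANGE_PCT = (20.0, 80.0)  # simplified; real limits use dew point

def check_reading(sensor, temp_c, humidity_pct):
    """Return a list of alert strings for one sensor reading."""
    alerts = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        alerts.append(f"{sensor}: temperature {temp_c} degC out of range")
    if not HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1]:
        alerts.append(f"{sensor}: humidity {humidity_pct}% out of range")
    return alerts

print(check_reading("cold-aisle-3", 29.5, 45.0))
```

A production BAS layers escalation, trending and site mapping on top of checks like this, but the threshold comparison is the core of what gets a technician paged.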

A strict maintenance activity regime

Components designed to be energy-efficient still need to be serviced and maintained. Data centre providers with a dedicated maintenance team ensure all equipment is serviced regularly and promptly. This dedication will cut the risk of malfunctioning systems that can eat up energy use and lead to slower-performing systems. It also helps to choose a data centre where facilities are monitored on a 24/7 basis to provide immediate response to any environmental issues.

When looking for a data centre provider, or an advisor to discuss your own Environmental Management Plans, you should enquire about their environmental efforts and EMP. Asking these questions upfront will help you feel more comfortable about the conditions in which you are storing your systems, in addition to your reduced environmental footprint. The data centre teams at Datacom are already having this discussion with our clients about facilities, performance and efficiency in our data centre sites as well as theirs.

Datacom Commits to Environmental Sustainability with UN Green Leaders Summit Sponsorship

As part of Datacom’s commitment to sustainability and environmental leadership in its data centres, Datacom Australia is a sponsor of the United Nations Green Leaders Summit in Sydney from 9 to 12 September.

The annual Green Leaders event is produced by Green World City with the support of the United Nations World Urban Campaign, and brings together international delegates, green leaders and innovators from all sectors to discuss, share and collaborate on sustainable solutions. Topics covered will include natural resource depletion, global urbanisation, climate change, energy, pollution, food resources and the impact of population growth.

Participation in the Green Leaders Summit is part of Datacom’s work towards using best practice environmental standards in its data centres, such as:

  • A custom-designed cooling system to offer leading-edge cooling efficiency
  • Design and layout of cooling and ventilation using a proven hot/cold aisle ASHRAE (American Society of Heating, Refrigerating and Air-conditioning Engineers) approach
  • Building automation system (BAS) to offer visibility of any inefficiency
  • A strict maintenance activity regime to ensure all equipment is serviced promptly
  • Cold air economisers that use outside air to cool buildings to ensure optimum energy efficiency.

Datacom has worked for several years to become a green leader in data centres, sustaining and developing a comprehensive Environmental Management Plan for its data centres that is audited to ISO 14001 standards each year. The certification is considered the only auditable international standard defining requirements for the creation, implementation and maintenance of EMPs, representing a managed, long-term commitment to carbon reduction and sustainability. Datacom completed successful ISO 14001 audit certification in 2009 through BSI, the world’s first national standards body, and will complete another phase of auditing in Victoria in October 2013.

Why Disaster Recovery Should Be Standard Operating Procedure

Not that long ago, disaster recovery often simply entailed organisations making back-ups of critical files and a staff member taking tape back-ups home “just in case” anything happened.

Of course, the IT field has evolved considerably since those simpler times. Yet many organisations aren’t keeping pace with current disaster recovery technology and standards. Failing to update your approach to disaster recovery doesn’t just leave your organisation at risk during a catastrophe; it puts your organisation at risk every day. Taking a holistic approach to disaster recovery planning, with the help of a DR or data centre services provider, will help you cover each minute detail so your organisation is ready to withstand both the simplest and the worst of unplanned incidents.

The Full Scope of Disaster Recovery

Disaster recovery is not simply what you do when the disaster strikes, but what you do to mitigate risk and ensure business continuity for technology and the related processes. CSO Online defines disaster recovery as the “planning and processes that help organisations prepare for disruptive events — whether those events might include a hurricane or simply a power outage caused by a backhoe in the parking lot.” Preparing for such an event, whether it be hurricane or errant backhoe, means creating and maintaining a solution that covers:

  • Scalability that accounts for new processes and data beyond planned growth
  • Redundancy of critical servers and infrastructure — particularly for customer-facing processes
  • Failover systems that continue business operations if a disaster strikes
  • Secure back-ups that aren’t harmed in an emergency and can be retrieved as soon as possible
  • Vetting all SaaS programs to ensure vendors meet your disaster recovery standards
  • Written and known procedures for your staff to follow in a disaster, and for end-users if their workflow changes during the process of an event

Creating a Disaster Recovery Plan

Like most large-scale IT projects, the process of crafting a disaster recovery plan will demand two very important elements:

  • A significant share of your staff’s time and resources, likely meaning an adjustment in their day-to-day duties that could hamper operations
  • Experts who are familiar with the best disaster recovery technology and protocols, particularly if you’re in a highly regulated industry

Instead of continually postponing planning until your internal resources are available and the stars have aligned, you can rely on a disaster recovery partner to guide your business through the process. Besides freeing your staff, a team of experts will help you:

  • Assess how all business processes — inside IT and within your organisation — will be affected during a disaster
  • Audit your infrastructure, technology and technology vendors to determine gaps in disaster recovery, including redundancy and failovers
  • Draft plans for everyone in your organisation that explain any alternate workflows, work locations or different technology to use during a disaster
  • Manage the implementation of disaster recovery technology and plans

If your fellow executives question the cost of this project, a cost-benefit analysis will likely make them start humming a different tune: show how much business you would lose during days of downtime without a plan, versus how much you would lose if a catastrophe struck and you were properly prepared. Most importantly: start planning now. After all, you never know when that backhoe will strike.
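That cost-benefit argument is simple arithmetic. Every dollar figure below is invented for illustration; substitute your own revenue and recovery numbers:

```python
# Hypothetical figures for a downtime cost-benefit comparison.
revenue_per_hour = 20_000        # revenue lost per hour of downtime
unplanned_downtime_hours = 72    # a multi-day outage with no DR plan
dr_recovery_hours = 4            # downtime with a tested DR plan
dr_annual_cost = 150_000         # annual cost of the DR programme

loss_without_plan = revenue_per_hour * unplanned_downtime_hours
loss_with_plan = revenue_per_hour * dr_recovery_hours + dr_annual_cost

print(f"Without a plan: ${loss_without_plan:,}")  # $1,440,000
print(f"With a plan:    ${loss_with_plan:,}")     # $230,000
```

Even with generous assumptions about recovery cost, the gap between the two numbers is usually what ends the boardroom debate.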

The Importance of Data Sovereignty in the Data Centre

As more Australian firms have become concerned with the privacy and security of their company data, they’ve simultaneously been faced with the decision of where to host it. That decision is also becoming more complicated, as more and more data centre providers have crept into Australia.

Within the past year, large companies have laid out plans to make their data centre debuts in Australia. Most are making the moves to strategically globalise their brands. For Australian organisations, however, relying on foreign data centre providers brings data risks and limitations that can result in privacy breaches and data loss for businesses.

When organisations determine where to host their data, privacy, data sensitivity, marketing concerns and latency should all factor into the decision. Legal implications should also be top of mind. The issues at stake all contribute to the argument for truly local hosting in the data centre.

Sensitivity of data

In recent years, enterprises have turned to storing critical business intelligence in the data centre — data so sensitive that it would be detrimental to customers and the entire organisation if it were leaked. When PlayStation waited more than a week before notifying Australian customers that their personal and credit card information had been breached, it was clear how important data centre security is to hosting customers.

When putting data into the hands of an overseas provider, the location of your data becomes murky. For example, many providers with a local data centre presence still spread data among nodes around the globe. That raises questions about where your data is located and which country’s laws apply. It is important to do ample research on the legislation of a provider’s country before finalising a contract, because it is the client’s responsibility to understand all of the implications. Should there be a discrepancy between your country’s data laws and the provider’s, your company will bear the brunt of any consequences.

Because physical proximity is lost offshore, organisations have raised concerns about where their data is physically located, whether they can visit data centre sites, which personnel handle the data and what type of disaster recovery plan is in place beforehand.

It’s also harder to determine if the provider implements appropriate methods for executing key principles of security:

  • Availability: With a provider oceans away, likely in a different time zone, will they be able to access your data should there be a glitch during your workday? Likewise, will your service be interrupted if anything ever goes wrong at the data centre?
  • Integrity: From a distance, there’s no way of knowing who personally is watching over your data (or how many different people) and every little malfunction that may occur throughout the centre. With so many vague factors at play, the long-term consistency and accuracy of your data becomes just as opaque.
  • Confidentiality: The same unknowable factors that shroud a company’s data integrity when hosted offshore pose similar problems in regards to data confidentiality. It can be all too easy for access to fall into the wrong hands in the data centre, exposing critical information and potentially putting other parties at risk.

Marketing concerns

When using a local data centre, it is easier for providers and clients to establish a relationship based on familiarity and trust. Hosting data continents away puts ideas and innovations at greater risk, especially given the reality that some larger providers have the right to store and share data or use it for marketing purposes. This would be similar to, for example, an email hosting service re-selling personal information for marketing by using harvested inbox data and sharing it with advertisers. By opting for a local data centre host, organisations can get a better feel for the data centre provider’s practices, including their stance on sharing data with third parties.

Latency

During a technical crisis, damage control is easier to conduct when data is hosted locally. If a natural disaster, power outage or data centre breach occurs at one of your office locations, having data hosted offshore means a less immediate recovery process. Hosting data in different countries also subjects your information to separate foreign legal codes, which may make it more difficult for enterprises to access and analyse their data at critical times. By contrast, data centre providers that host locally afford clients more control over their information, which leads to quicker, less complicated solutions.

A two-millisecond delay in the transfer of data may not seem like enough to fuss about, but repeated hundreds of thousands of times in a year, it can mean the loss of significant man hours.
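To see how small delays compound, consider a back-of-the-envelope calculation. All the figures below are illustrative assumptions, not measurements:

```python
# Illustrative figures only: how a 2 ms per-round-trip delay compounds.
delay_s = 0.002             # extra latency per network round trip
round_trips = 50            # round trips per user interaction
interactions_per_day = 200  # interactions per staff member per working day
staff = 100
working_days = 250

total_hours = (delay_s * round_trips * interactions_per_day
               * staff * working_days) / 3600
print(round(total_hours, 1))  # 138.9 hours a year spent waiting
```

Under these assumptions the organisation loses well over a hundred staff-hours a year to a delay no individual user would ever consciously notice.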

Data centre providers that strictly adhere to local privacy laws and store data onshore can make customers feel more at ease about moving their data off-premises.

3 Things Every Data Centre Must Offer

Not all data centres are created equal. Some offer unacceptable security measures. Others might not provide adequate failover ability. Still others might store your data offshore, which can breach federal, state or industry privacy regulations.

Vetting a data centre is, simply put, one of the most critical IT decisions your department will make, one that affects nearly every business unit. Ideally, your data centre will provide a secure, central location so your organisation can access, store and use data that’s available anywhere. As you research data centre providers, whether for Infrastructure-as-a-Service cloud or disaster recovery, keep these things in mind as top priorities.

1. The data capacity to meet your organisation’s evolving needs: Consider how your business planning may affect data needs over the foreseeable future. For example, is your business planning to launch new products or services that will require operational changes? Can you think of a handful of departments yearning for a data solution to solve their problems? What is the data transmission capacity? Your provider should ensure it can properly forecast capacity to prevent issues with rapid scaling and make it easy to scale from both technical and financial perspectives.

2. Multiple hosting and storage options and room to grow: The rack space you need on Day 1 of your data centre service contract likely won’t be the same as on Day 403. Your data centre should allow room to expand and offer a range of hosting and service options to enable business growth and agility. This might include an ability to go from co-location to a fully-managed cloud infrastructure environment if you wish, or setting up a disaster recovery or production site.

3. Support 24 hours a day, every day: How would you feel if your data was essentially abandoned after hours? A critical test of your prospective data centre provider is whether they provide hands-on maintenance monitoring around the clock to protect your infrastructure and applications.

This extends beyond simple phone support to encompass a 24×7, purpose-built data centre facility with backup power generation, uninterruptable power supplies, and redundant systems for cooling and telecommunications links. And the operations team doing the monitoring support must be actual IT professionals, not security guards. A data centre provider with local, accredited IT technicians who can maintain and troubleshoot the data centre environment will help you sleep better at night.

This list may not cover the full gamut of requirements potential providers must meet. But if they can’t pass these initial tests, they’re in the wrong class.