Changing payroll systems – not just for the brave?

By Kevin Murphy

How scared should you be? High-profile payroll implementation failures in the education and health sectors make changing payroll systems seem like a high-risk project. It's therefore often not until a critical incident occurs, or significant pressure builds on the people, processes or technology involved, that the need for change outweighs the perceived risk.

Why change?

We would love to believe that people change to Datacom's payroll software because it is so much better than what they had. Our software is assuredly better; however, it seems that nobody changes their payroll software simply because they have found something better. They typically change for one or more of the following reasons:

  • Their existing software or software supplier has let them down in a major way, or they believe there is a real risk that it will.
  • They are being forced to undertake a major upgrade of their payroll software.
  • The person who has been running the payroll for seemingly forever has decided it's time to retire.
  • They are unable to get important business information from their existing system.
  • They recognise that they are doing an unreasonable amount of manual administration that can be automated or eliminated.

Moving to a modern payroll system that is cloud-based and date effective, that does all calculations in real time, that includes mobile and web applications for staff, that automates manual award interpretation, that can be integrated with other systems, and so on… only seems to happen when one of the above conditions exists.

It is unfortunate that many are missing out on the benefits that a modern payroll system can provide, and it is not until an organisation is forced to research alternatives that these benefits are uncovered.

What to look for

Your payroll should be one of those things that runs silently in the background. If you are thinking about payroll at all, this is likely not to be the case. Silent running should be one of your primary objectives.

The number one thing to look for in your new payroll software is a solution to your current dilemma.

If your existing software or supplier has let you down, look for a track record and references. Look for disaster recovery systems and regular DR testing. Nothing drops staff morale faster than failing to pay them on time and correctly, so having confidence that the payroll is going to be available when you need it is critical.

If you are being forced to undertake a major upgrade, look for a cloud service that will always be up to date when you connect to it. There really is no need anymore for dedicated infrastructure that you must maintain, update and renew before you can accept the latest version of your payroll software. Look for continuous development behind the scenes and a steady stream of new releases. Look for one with the capacity to manage your payroll at whatever size you grow to, without upgrades.

If you are unable to get the information that you need from your payroll system, look for a comprehensive set of standard reports. You’ll want a custom report writer that does not require specialist report writing skills, and the ability to get data out in .csv format and/or through an API for further manipulation.

If you find your payroll staff are dealing with paper timesheets, paper leave requests, or manual payroll calculations, seek time-saving alternatives in the form of employee mobile and web applications, back pay calculators, and an award interpreter.

But also look for something that is as “future proof” as possible. Look for a cloud application that is supported by a development team of some size who are continually maintaining compliance and adding new features.

How to run the project

Payroll projects can be risky. The newspapers frequently carry stories of disastrous payroll projects and we believe this is the main reason that people are so reluctant to upgrade their software until they really have to. The truth though is that payroll projects do not need to be risky. Datacom currently completes an average of seven significant payroll migration projects every month.

Dealing with an experienced and expert payroll company with a mature project methodology should be your first risk mitigation when planning a payroll project. Secondly, you could consider breaking the project into bite-sized chunks. While not always possible, consider making changes in your existing platform first, before migrating payroll platforms, so that change happens incrementally.

It is generally a good idea to simplify your payroll as much as possible before migrating (or even choosing) payroll systems: for example, renegotiating remuneration to simplify rate calculations, or cashing up allowances. Adopting common standards for such payments will give you a greater choice of systems and will require minimal customisation.

Every payroll project (even the annual upgrade required on legacy client-server systems) should include parallel runs. That is, running both the old and new system in parallel, providing the same data inputs into both and reconciling the outputs. This can be quite a lot of work, but your payroll software supplier should be able to help with this work, and have tools available to simplify the work and reconciliation.
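
To illustrate, here is a minimal sketch of how a parallel run reconciliation might be scripted, assuming both systems can export a per-employee net pay file; the file names and column names are hypothetical, and a real reconciliation would also compare tax, leave balances and other outputs.

    import csv

    def load_net_pay(path):
        # Read a payroll export and return {employee_id: net_pay}
        with open(path, newline="") as f:
            return {row["employee_id"]: float(row["net_pay"])
                    for row in csv.DictReader(f)}

    def reconcile(old_path, new_path, tolerance=0.01):
        # Compare net pay per employee between the old and new systems
        old, new = load_net_pay(old_path), load_net_pay(new_path)
        differences = []
        for emp_id in sorted(set(old) | set(new)):
            old_val, new_val = old.get(emp_id), new.get(emp_id)
            if old_val is None or new_val is None or abs(old_val - new_val) > tolerance:
                differences.append((emp_id, old_val, new_val))
        return differences

    # Flag any employee whose pay differs between the two runs
    for emp_id, old_val, new_val in reconcile("legacy_run.csv", "new_run.csv"):
        print(emp_id, old_val, new_val)

In practice your supplier's reconciliation tools will do most of this for you; the point is that every difference should be explained before go-live.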

While the payroll supplier will have primary responsibility for the work to be undertaken during the project, there are a number of parts of the project that cannot be done by a supplier. Your payroll supplier should be able to clearly explain what they expect of you as a part of the project. This is likely to include dealing with the legacy system supplier, perhaps providing information from the legacy system in specific formats, participation in various configuration workshops, approving configurations, reviewing findings from parallel run reconciliations, managing communications with staff, and so on.

In general, you should not need to provide a specialist project manager (the supplier should provide one), but you will need to make available the people who have the best understanding of your current payroll system to provide information to the project team. The project team will need to have a clear understanding of how things work today, and how you want them to work.

What next

Modern software these days is generally provided on a Software as a Service (SaaS) basis. That is, you simply connect to it via the Internet and use it, without having to own and manage a lot of IT infrastructure. A benefit of this is that your payroll need no longer be an island that only a privileged few have access to.

Obviously security and privacy controls need to be strictly maintained, but connectivity in the cloud world means that you can easily connect to your staff via web portals for timesheet input, leave requests, leave approvals, and for payroll data output, like payslips or other notices. The “new world” equivalent of this is mobile apps for smartphones. Smartphones are becoming ubiquitous and connecting employees to your payroll via their smartphone provides convenience that cannot be matched for many non-desk bound employees.

Connectivity in the cloud world also makes it easier to connect applications. For example you might connect your payroll system to specialist HR applications that particularly suit your business, rather than the old world where you had to purchase a single system that did many things but none of them well. APIs (Application Programming Interfaces) are included in most cloud applications and allow you to exchange or synchronise data between applications.
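
As a rough illustration only, the sketch below shows how an API-based synchronisation between an HR system and a payroll system might look; the endpoints, field names and token are hypothetical, since every product exposes its own API.

    import requests

    HR_API = "https://hr.example.com/api/employees"            # hypothetical HR endpoint
    PAYROLL_API = "https://payroll.example.com/api/employees"  # hypothetical payroll endpoint
    HEADERS = {"Authorization": "Bearer <token>"}              # real systems will differ

    def sync_new_starters():
        # Copy employees that exist in the HR system but not yet in payroll
        hr_staff = requests.get(HR_API, headers=HEADERS).json()
        payroll_ids = {p["employee_id"]
                       for p in requests.get(PAYROLL_API, headers=HEADERS).json()}
        for person in hr_staff:
            if person["employee_id"] not in payroll_ids:
                requests.post(PAYROLL_API, headers=HEADERS, json={
                    "employee_id": person["employee_id"],
                    "name": person["name"],
                    "start_date": person["start_date"],
                })

    sync_new_starters()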

Once you are using modern software, keep in touch with your supplier, be familiar with their roadmap, provide them with your feedback, and take advantage of new features as they become available.

So, how scared should you be?

As long as you are working with a partner who has a dedicated team of experts and a proven project methodology, you should not be scared at all. In fact, you should be excited, and looking forward to positive feedback from staff who are happily using smartphone apps to input and receive data from your new payroll system.

Kevin Murphy is Director of Datacom Payroll for New Zealand. 

Azure, hybrid cloud, and the importance of monitoring at a business service level

According to Datacom customer research presented in Before You Go Public, Read This, few organisations are currently planning to go ‘all in’ to public cloud and will, therefore, retain some of their workloads on-premise or in private cloud. Many services are, and will be for the foreseeable future, delivered via applications or workloads with a hybrid set-up. For example, an organisation may want to use Microsoft Azure for a front-end application that needs elasticity, and use a private cloud for the interconnected database.

Managing hybrid cloud complexity

Hybrid cloud architecture, however, requires careful management and planning to account for key factors such as latency, security and compliance, as well as potential added complexity and transition costs.

“Making hybrid cloud work well means focusing on integration and interconnectedness, especially in the planning stages of a project. For instance, if an organisation stretches certain components by running them in Azure, what is the impact on other, reliant components? These critical factors are sometimes overlooked,” says Brett Alexander, Solution Architect at Datacom.

On top of this, greater adoption of public cloud (with proper planning) usually brings a corresponding service orientation and an increasing focus on business services and the related outcomes enabled by cloud. These outcomes may include managing risk or reputation, reducing cost, and making key services available when needed and at a suitable quality. They are often delivered through the aggregation of multiple providers, services and solutions, and various SLAs.

Making these services and outcomes happen and managing the many moving parts involved is clearly an important function and a complex task, which organisations can take on themselves or outsource, at least in part, to a qualified partner like Datacom. Whatever the approach taken, there is a growing need for those in the organisation involved in Azure to understand the way different clouds and related services interact and how they can be integrated – with each other and with other environments and types of IT.

Monitor from business service level down

Business services, even something as simple as email, are built from and rely on a number of components, including applications, firewalls, switches, servers and storage. Mapping these services, the applications and infrastructure that enable them, and the interconnections and dependencies of the various components is an integral part of planning for Azure adoption and optimisation and of managing a hybrid cloud environment.

This is why Datacom recommends monitoring at an availability-of-business-service level. This means having dependency-based monitoring from the business service level down through applications and infrastructure, including Azure. We also recommend automated root-cause analysis to provide information and evidence for problem management processes and liaising with Azure support teams, if required. For this, organisations need to implement robust analysis and troubleshooting tools.

In short, if your Azure servers or services go down you need to know what will be affected, for how long, and what impact that will have on your organisation in order to determine and take necessary steps. Among other things, this means knowing what it takes to keep a high availability application operational if X or Y shuts down. And things do go down from time to time.
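
A dependency map does not need to be elaborate to be useful. The sketch below, with entirely hypothetical component names, shows the basic idea: record which components each business service relies on, so that when monitoring reports a failure you can immediately see which services are affected.

    # Hypothetical dependency map: business service -> components it relies on
    DEPENDENCIES = {
        "email":   ["mail-app", "firewall-01", "switch-02", "azure-vm-mail"],
        "payroll": ["payroll-app", "sql-db-01", "azure-vm-app"],
    }

    def impacted_services(failed_component, dependencies=DEPENDENCIES):
        # Return the business services affected when a single component fails
        return [service for service, parts in dependencies.items()
                if failed_component in parts]

    # e.g. monitoring reports an Azure VM as down
    print(impacted_services("azure-vm-mail"))   # -> ['email']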

“Azure has planned, routine outages for maintenance purposes, when servers go offline temporarily. Before this happens, organisations need to know how many more stand-in machines are required to maintain each service in the event of an outage, compared to traditional, on-premise IT,” says Roger Sinel, Operations Manager at Datacom.

Broadly speaking, the skills required for such mapping, monitoring and management include knowing how applications work, how infrastructure works and how they work together. Operations engineers need to co-operate with developers and application specialists to ensure applications run smoothly in Azure through the correct use of resiliency and performance techniques, and by testing and monitoring correctly. What is and isn't supported in Azure needs to be understood – especially in a hybrid environment. As reliance on Azure increases, along with complexity, automating parts of processes as much as possible using scripts becomes increasingly important.

The recommendations above are among many others made in our free white paper, How to make the most of Azure, which is available to download. It’s based on Datacom’s many years of experience working in partnership with Microsoft on cloud projects of all sizes for a wide range of organisations. These include what is still the world’s largest production SAP migration to Azure, for Zespri International, one of the most successful horticultural marketing companies globally.

For even more information and advice on how your organisation can take full advantage of Azure, please contact us on cloud@datacom.co.nz or cloud@datacom.com.au.

Tool up to make the most of Amazon Web Services (AWS)

Adopting and making the most of AWS or other public cloud platforms will almost certainly require investment in new tools. In general, Datacom recommends that organisations have a tooling strategy that focuses on tools with API-based integration capabilities. This avoids the lock-in that some proprietary tools cause, which constrains customisation, adds complexity and hampers agility. For optimal outcomes in a hybrid cloud environment, it is better to use API-based tools – large or small – that enable cross-cloud platform integration alongside native AWS tools.

In public cloud operations, a new method of engagement is necessary to match the tectonic shift in focus from hardware to software that the platform engenders. Engineers no longer have direct access to infrastructure so they view servers through a portal and use software to control things. This means some people may need to adopt a new mentality and update their skills substantially. They need to move away from traditional, manual, GUI-based methods of monitoring and control to using scripts and coding to enable process automation and managing by exception.

This means that using start-up and shutdown scripts should be a goal for operations teams. Alongside this, server health checks are required to ensure performance. AWS provides native tools to help with such tasks. For instance, AWS Lambda enables task scheduling that can be utilised in conjunction with scripts to wake up servers, get them to perform a job, and then shut them down – all automatically.
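
For example, a scheduled Lambda function along the following lines could stop development or test servers outside business hours. This is only a sketch using the boto3 SDK; the Schedule tag, the schedule itself (an EventBridge or CloudWatch Events rule) and the matching 'start' function are assumptions.

    # lambda_function.py -- a sketch of an after-hours shutdown job
    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        # Find running instances tagged Schedule=after-hours (hypothetical tag)
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:Schedule", "Values": ["after-hours"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        instance_ids = [i["InstanceId"]
                        for r in reservations for i in r["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}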

Other native tools worth noting include:

  • AWS Service Catalog – allows organisations to centrally build and manage commonly-used and compliant catalogues of IT services, comprising a range of components, from virtual machines and databases to complex application architectures. Once built, these IT services can be deployed automatically and repeatedly, in one click, saving time on development and management, and helping to avoid sprawl
  • AWS Trusted Advisor – another useful tool for making the most of AWS, it reports on cost optimisation, performance and compliance issues, and recommends ways to improve these things
  • AWS Inspector – provides an automated security assessment and rule-based compliance service at the application level

Monitoring tools have new challenges with public cloud: not all were built for this environment. For example, more machines are usually required in public cloud compared with on-premise (to account for machines switching off from time to time) to provide the same service. This means that, if monitoring agents are placed on all AWS machines, they may produce too many alerts to handle. And monitoring costs may go up. So monitoring in AWS needs a new approach, and to be tested and fine-tuned over time.

Organisations should also assess their approaches to data backup as they adopt AWS. In a hybrid cloud situation, this isn’t a simple task. For backup, as with monitoring, a mixture of traditional and native AWS tools may be the best option – at least in the short term. Although backing up cloud-ready applications may be relatively straightforward in AWS, replicating traditional enterprise backup methodologies in this environment without a dramatic increase in cost is challenging.

Looking at development in particular, AWS has a multitude of tools to support continuous integration and continuous delivery, including CodeDeploy, CodePipeline and CodeCommit, and supports an array of coding languages via APIs. Using the platform and its native tools in combination with a DevOps approach to developing cloud-ready applications for the platform can result in faster, cheaper and more efficient development processes compared with developing on-premise.
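
These services can also be driven from code. As a simple, hedged example using the boto3 SDK, the snippet below kicks off a CodePipeline release manually; the pipeline name is hypothetical, and in most set-ups the pipeline is triggered automatically by a commit to CodeCommit rather than by a script.

    import boto3

    codepipeline = boto3.client("codepipeline")

    # "web-app-pipeline" is a hypothetical pipeline name
    response = codepipeline.start_pipeline_execution(name="web-app-pipeline")
    print("Started release:", response["pipelineExecutionId"])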

The recommendations above are among many others made in our new free white paper, How to make the most of Amazon Web Services, which is available to download. It’s based on years of experience working in partnership with AWS on projects of all sizes for a wide range of organisations.

As an AWS Managed Service Provider, Datacom is at the front line of new innovations in AWS and evolving best practice, as well as changes to pricing, SLAs and other aspects of the platform. We have AWS operations specialists, with blended software and infrastructure skills, who manage, for customers, applications that we have architected to take advantage of the unique features of AWS.

We are therefore in an ideal position to help customers across a wide range of areas related to AWS, including development and operations, designing and building cloud architecture, and integrating and managing complex hybrid cloud and/or multi-cloud environments.

For even more information and advice on how your organisation can take full advantage of AWS, please contact us on cloud@datacom.co.nz or cloud@datacom.com.au.

Why change management is critical to cloud success

And other lessons on cloud adoption and management.

Managing services, applications and workloads in a cloud environment is different from managing them on-premise. Not better or worse, just different. These differences are more marked in public cloud, but apply to private cloud and hybrid cloud as well. So getting people ready for the cloud journey is as important as preparing the strategy and plan and working on the technology.

A move to public cloud especially will most likely change the nature of some people’s jobs – which may have been performed the same way for many years. Some functions will stay the same, some will transform, and some may disappear. (Some may of course be passed onto a service provider.)

For example, some tasks that system admins have been doing for twenty years will need to change with public cloud. Take server outages: traditionally these have been seen as a problem to be investigated and rectified in on-premise or even highly virtualised environments. Many monitoring toolsets raise alerts at outages and trigger processes aimed at rectification. But in a public cloud environment, where machines may be switched off at any time when they are not needed to provide a service, this set-up needs to be amended.

Even if a workload does cause an issue in this environment, it can be readily destroyed and a new one redeployed in its place. This action can be logged for review in the morning rather than cause a major alert. In this context, the traditional mindset of a server down always equating to a serious issue needs to be updated, along with the related processes.

So people will need to reskill and think differently to ensure successful cloud service delivery. This need may be reduced if much of the management is outsourced to a third party, but there will always be a learning curve of some kind required to ensure the business can make the most of its partner’s or partners’ services and support.

There is often understandable resistance to these changes. Change management is therefore a crucial aspect of any transition to cloud and a key consideration to build into cloud strategy and planning. As part of the transition stage of cloud adoption, we spend time with customers to explain what is to change operationally, from a people and process perspective.

The positive flipside of all this challenging change is the huge opportunity for individuals and organisations alike to be empowered and prosper in the cloud era.

With public cloud in particular comes much potential automation of traditionally manual processes. The same can be said of private cloud or even traditional environments, of course, but these kinds of systems by definition have limits that public cloud does not. Nevertheless, with any cloud environment, automation of processes is an important reason why it offers more benefit to an organisation than legacy infrastructure.

The server destroying and redeploying process described above can in fact be wholly automated, with no manual intervention necessary to maintain the service. This mentality of coding and automating is another mindset shift that people need to make to get the most out of cloud.
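
As one illustration of that mindset, the sketch below (assuming an AWS Auto Scaling group and the boto3 SDK purely as an example) marks a misbehaving instance as unhealthy so that the platform terminates it and launches a fresh replacement, with no manual intervention and no 2am alert.

    import boto3

    autoscaling = boto3.client("autoscaling")

    def replace_instance(instance_id):
        # Mark the instance unhealthy; the Auto Scaling group terminates it
        # and launches a replacement automatically
        autoscaling.set_instance_health(
            InstanceId=instance_id,
            HealthStatus="Unhealthy",
            ShouldRespectGracePeriod=False,
        )

    # Typically called from a monitoring hook rather than by hand
    replace_instance("i-0123456789abcdef0")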

More automation means that a single engineer may be able to manage 300-400 virtual machines rather than a fraction of that number. It also means that they can focus less on servers, as such, and more on what they deliver. That is, they can get more involved in higher value activities, such as capacity planning and service management and delivery. As automation progresses, these same engineers may become more strategic and powerful in terms of the scale and importance of what they oversee and control.

This and other essential lessons on cloud adoption and management, learned by Datacom over years working at the ‘cloud face,’ are contained in a new white paper available for download now. If you would like to talk to us about it, or cloud adoption and management in general, then please get in touch at cloud@datacom.co.nz.

The what, why, where and when of cloud strategy and planning

Finding the right approach to cloud is crucial to maximising the benefits of its adoption. For major endeavours in particular, it can make the difference between success and failure.

Indeed, the more you use cloud, and the more you shape your organisation and its people, processes and technology to exploit it, the more important having a considered cloud strategy and plan becomes.

One reason for this is that making the most of cloud – especially public cloud – is often as much an organisational issue as a technological one. Having the right people, with the right skills, and the right processes in place is key.

Most important of all though is focusing on the business first and technology second. The ultimate goal, after all, is to figure out the best way to use cloud to get desired business outcomes.

Determining this involves asking what, why, where and when questions, such as:

  • Why does cloud stack up for your organisation?
  • What current or new business services will cloud help to deliver better than the status quo?
  • When should certain workloads be moved to, or built for, cloud?
  • Where is the best place for the workloads involved to run?

And that’s true whether you eventually decide to embark on a full-scale cloud migration; leave legacy IT where it is for now and take a cloud-first approach to new applications; run a Proof of Concept in public cloud before committing further; or do something else entirely.

To provide more detailed advice on how to define the best cloud strategy and plan for your organisation, wherever it is on its cloud journey, we’ve produced a new paper, which is available for download now.

It outlines a five-step process based on Datacom’s technology-independent approach to cloud planning, which has been honed over the course of many different projects for a diverse range of organisations, including Zespri, Fairfax, Aussie Home Loans and Brisbane Festival.

Even if you already have a comprehensive cloud strategy, the stages presented in the paper can work as a checklist. If you are just building a playbook for cloud adoption at this stage, then by all means lift elements from it and incorporate them into your guiding principles.

Or, if you don’t know where to start to craft a strategic approach to cloud, then look no further.

We use the framework in this flexible manner, according to our customers' needs. If you'd like to talk to us about it or planning for cloud in general, then please get in touch on cloud@datacom.co.nz.

The key to cloud: Plan up front

By Arthur Shih

In my role, I have the opportunity to speak to customers of all kinds from all backgrounds, ranging from technical operations engineers through to C-level executives. The one overriding question that comes up is “How can we use the cloud to enable and accelerate my business?”

It’s a complicated question, but it really comes down to one thing – spend the time up front getting your planning and governance right before you try to do anything.

Cloud computing gives users the ability to scale IT resources extremely quickly and cost-effectively, but with speed and agility comes risk. We see customers whose developers love embracing cloud, but then something they do in their development environment in the cloud causes an important application to grind to a halt.

Unfortunately, this is an increasingly common occurrence, and the response of the business is often to lock down all access into the cloud and negate the great business benefits it could bring. The IT department needs to ensure that their governance frameworks encourage appropriate cloud use.

Risks can be mitigated if you do all your governance and risk planning up front. By spending time in the very beginning to understand all the factors, you can design a governance model around your business needs. This gives your organisation the freedom to use the cloud and innovate within an established framework.

Once the governance framework is established, you need to understand where you want to apply it. Organisations looking to adopt cloud should not look at it as "one-size-fits-all", but assess it based on which applications they are looking to put in the cloud. By understanding how your applications work and scale, and understanding the benefits and limitations of each different type of cloud, you can then determine which cloud can provide the most benefit for your business.

To ensure successful adoption of cloud, organisations need to understand how it can enable their applications to run more efficiently (reducing cost) and more effectively (available to users whenever required). The key to achieving this is to invest up front in your planning, which will then give your business the freedom to innovate and develop further.

Arthur Shih is Cloud Solutions Manager for Datacom Auckland.

The vCloud Air Network with Datacom Enterprise Cloud

Datacom is a vendor-independent organisation with a variety of cloud partners. We pride ourselves on recommending the right solution for the customer, whether it’s hosted in Datacom’s own cloud, Microsoft Azure, AWS, or vCloud.

Last week at VMworld, VMware CEO Pat Gelsinger used his keynote to officially launch the vCloud Air Network. The vCloud Air Network is the next iteration of VMware's vision of a global network of service providers deploying consistent, enterprise-ready cloud platforms as part of VMware's overall cloud strategy.

Datacom has been deploying cloud services with VMware since the inception of the VMware Service Provider Programme in 2007, and has been named their Service Provider Partner of the Year for Asia Pacific and Japan in both 2010 and 2013. We are extremely pleased to once again embark on the next exciting iteration of cloud services with VMware as a launch partner of the network, along with other global giants such as AT&T, SingTel, and China Telecom.

What is the vCloud Air Network?

The new vCloud Air Network combines VMware's global network of 3,800 service providers with its own vCloud Air Public Cloud offering (previously known as vCloud Hybrid Service) to provide customers with maximum choice when it comes to cloud services. By tapping into the expertise of the global service providers, customers can find a solution tailored to their needs.

Datacom is both a VMware “IaaS Powered” and “Hybrid Cloud Powered” service provider within the vCloud Air Network. This means customers are able to seamlessly and securely extend their current VMware environment into a public cloud offering, whilst retaining the ability to have control over all operational and governance requirements.

Also, by joining the network, Datacom is working even more closely with VMware to develop more products that will enable customers to deliver even more business value to their own target markets.

How is this different from what is already there?

In the past, VMware has released products which service providers can leverage to build their own hosted offerings. However, without a standard method of measuring and identifying what each service provider was offering, it was easy for customers to get confused when selecting a provider. To fix this, VMware has released a new badge system for service providers that helps customers differentiate between them.

The two badges to be released initially are:

  1. “IaaS Powered” – shows that a service provider has a vSphere-based, single-tenanted offering
  2. “Hybrid Cloud Powered” – shows that a service provider has a multi-tenanted, vCloud Director-based offering with hybrid cloud capability.

This should remove confusion for customers about which service offerings they will be getting from their service providers, and help them make a more informed decision.

Why am I excited about this offering?

I think this is a great move by VMware and really does provide some excellent opportunities for service providers and customers alike. I am most excited about the following three things:

  1. Access to VMware's own vCloud Air Public Cloud capabilities, hosted by service providers – Combining VMware's vCloud Air Public Cloud offering with service providers' own systems means customers get the same functionality VMware provides on its public cloud, but hosted locally by their own trusted service provider.
  2. Maximum choice and flexibility – Service providers can build on top of VMware's offerings and provide differentiated offerings that meet specific market needs. PCI compliance? HIPAA compliance? Government approved? SAP compatibility? There will be an offering in the network, built on VMware's reference architectures, that achieves these kinds of certifications.
  3. A strong focus on hybrid cloud – We all understand and believe that the future is not black and white: it's not about being in the public cloud or a private/hosted cloud, it will be about hybrid cloud. The vCloud Air Network allows us to provide a seamless and proven offering for customers to start consuming hybrid cloud immediately.

So what’s next?

While this is early days yet for the network and a lot of work is still to be done, we are extremely excited about the possibilities that it brings. Watch out for specific announcements from us around new products and services that we will be building on top of our capabilities in the coming months. We have an exciting roadmap that we can’t wait to share with our customers.

And as usual, don’t hesitate to get in contact if you have questions or thoughts, or if you are just interested in learning more.

Arthur Shih is Datacom Cloud Solutions Manager. He can be reached at arthurs@datacom.co.nz.

Datacom data centres classed among the top 5 in ANZ

By Tom Jacob

IT infrastructure operations and data centres were on the agenda at the Gartner Summit in Sydney this year. The two-day event was centred on maximising value and managing change in a cloud-driven world.

During the summit, Gartner provided its perspective on data centre and infrastructure utility providers, ranking Datacom by size and category amongst the top five providers across ANZ. The chart below, organised alphabetically, is based on Gartner's estimate of the providers' IT outsourcing (data centre and infrastructure outsourcing) revenues in 2013, in $US.

[Chart: Gartner estimates of ANZ data centre and infrastructure outsourcing revenues, 2013]

Competition is fierce, but the data centre market is fragmented with many organisations providing a variety of infrastructure services.

Rolf Jester, VP Distinguished Analyst at Gartner, explains.

“The Asia-Pacific data centre market is more complex and difficult to compete in due to a number of market pressures, ranging from inconsistent offerings and pricing terms to the increased hyper competition from cloud, Telco, hosting and Indian/Japanese providers.”

Here are some key questions asked and takeaways from the summit.

The theme for this year's summit was maximising value in a cloud-driven market. What are your views and key takeaways?

In the past, when I have attended data centre conferences, the content has primarily focused on the facility services side of the data centre: power, cooling, design concepts and management practices. This year's summit was very different: less than five per cent of the time was spent on the physical facility, and the remainder was heavily focused on cloud, networking and global data centre connectivity.

It was interesting to note that Gartner's definition of a data centre has evolved: it is now considered more as a network of places from which IT services are delivered, rather than a purpose-built facility providing power, cooling and facilities management to support customer IT workloads. Gartner regularly referred to a data centre as a place from which cloud-based services are delivered, by either known or unknown locations.

The relationship between the customer and the data centre will now, more than ever, be managed by contracts, rather than the customer having a say in how the facility is run and managed, something that has occurred in the past.

What do you see as the main considerations or constraints for organisations reviewing their data centre strategies?

In the past, the typical constraints were capacity for power, cooling and space, specialist data centre/server room management skills, and ongoing funding. We have observed a change in the last 18 months: these issues are diminishing, mainly due to consolidation with the aid of improved IT infrastructure and virtualisation technologies. Cloud services (IaaS and SaaS) are also maturing, and customers are seriously considering how these services will fit inside their organisations. Early adopters are already consuming services such as email, digital image storage, and test and development services. We only need to look at the success of our own Datacom Cloud Services (DCS) and Datacom Cloud Services Government (DCSG), along with the global success of AWS, Azure and Office 365.

We see customers’ own facility constraints becoming less and less of an issue and we are already observing customers repurposing their old server rooms back to productive office space. And when organisations relocate premises it’s clear that moving the IT equipment to a data centre makes more sense than reconstructing a server room.

How do you see the future of the data centre market evolving in ANZ market, considering the analyst view on market pressures, consolidation, competition and partnerships?

The future is uncertain and depends on where customers are comfortable having their services and data stored and delivered from. There are current customer concerns about data sovereignty, network access and the high availability and locality of these facilities. But we don’t expect to see many more new data centres being built and we’re certain we’ll see a number of the older data centres empty out and close down. We’re confident that if customers choose New Zealand-based cloud service providers then there’ll be a healthy local market and, in time, additional Tier 3 data centres may be commissioned. Datacom is well-placed for this growth as both of our Tier 3 Data Centres (Orbit in Auckland and Kapua in Hamilton) have plenty of capacity. Datacom also actively promotes the use of these data centres to competing cloud and service providers with the aim of giving customers plenty of choice and retaining them.

What criteria do you think place us in that class of providers, as mentioned by Gartner?

Firstly it’s because Datacom covers all the bases. It has high-quality, innovative data centres, and the right policies to encourage service providers and customers to host there. And Datacom has a wide range of cloud offerings that give customers convenient access to services.

The design choices Datacom made 6-7 years ago have proven to be winners. The use of outdoor air to cool the IT equipment has consistently improved the energy efficiency of the data centres, making a significant contribution to customers' sustainability goals. And the flexibility of Datacom's solutions means we can always find a way to make it work for a customer; it's not one size fits all.

Tom Jacob is Datacom’s General Manager of Data Centres.

Managed Services in 2014 – How is it Evolving?

Managed services as a topic in and of itself doesn't always get the attention that topics such as cloud and mobility do. That's largely because managed services covers such a broad range of technology services that it can often be absent from conversations about specific solutions. It's important to relate these single solutions back to managed services because it changes the way they are consumed, designed and supported. Take note of the following predicted managed services trends for 2014 and how you can use them to improve your business.

Enterprise Content Management

More than 60 per cent of midsize businesses are using Microsoft SharePoint to organise and share information, according to Forrester. TechNavio anticipates that the global enterprise content management (ECM) market will increase to $9.6 billion in 2014. ECM can help businesses improve records management, search and e-discovery, and document capture. A managed services provider can help integrate organisations' disparate data and management systems to improve content workflows and accessibility. And as Ovum expects mobility, social media and cloud computing to transform ECM in 2014, businesses can take advantage of a managed services provider to help incorporate these additional capabilities into a complete ECM solution that allows anytime, anywhere access to content of all types.

Managed security services

This year will be a particularly busy one for the managed security services (MSS) market, according to Gartner. The research firm predicts the MSS market will grow from $12 billion in 2013 to more than $22.5 billion by 2017. Increasing security threats brought about by BYOD, mobile apps and advanced persistent threats (APTs), coupled with a lack of internal resources to manage them, are driving MSS growth. Australia already suffers from a lack of skilled IT resources, and the IT security realm is no different: a major risk when threats are continuously becoming more numerous and complex. The result will be more organisations enlisting the help of a third-party security service or managed services provider that can address incident response and detect APTs. In some instances, these managed resources will work with in-house staff and, at the very least, will educate internal employees on how best to protect the business.

Cloud services managed for you

As we've written before, consuming cloud through a managed services provider can help organisations leverage best-practice, enterprise-level technology and delivery methods. Having your cloud services managed for you by expert IT providers lowers risk and frees up internal IT staff time; it also makes integration more seamless. With the recent rise in organisations using a multi-cloud approach, where businesses consume at least two different types of cloud service, businesses will increasingly need a provider to procure, design and manage these different cloud service providers and platforms. This includes overseeing all the SLAs, performance metrics and billing for you.

Why Your Cloud Architecture Design Should be a Top Concern

Figures from the global job website Indeed.com show that the number of job postings for cloud architecture design rose 15 per cent between January 2009 and January 2013. Cloud architects are what they sound like: IT professionals who can help businesses plan, design and deploy cloud services. The rise in these roles is directly related to the growing awareness that cloud architecture design is not a matter that should simply be left to internal IT staff. To properly plan a cloud deployment, you need professionals knowledgeable in a range of cloud platforms and providers, in addition to applications and workloads, all of which is ever more important as a multi-cloud approach becomes more appealing to organisations. Here's why it's important to take cloud architecture design seriously and enlist the right resources to plan it.

Application performance and availability

Depending on your organisation, you might have workloads with low or partial utilisation levels, such as batch processing, or workloads subject to dramatic spikes in traffic, such as public-facing apps. Your workload types will affect the type of cloud platforms or services you choose. Effectively matching the right workload to the right cloud platform takes a thorough understanding of each application's needs, required computing power, traffic patterns, and storage and compliance standards. Cloud architecture design assesses the requirements of all cloud-ready workloads in your organisation to discover the best options available.

The security of your environment

Every business has its own set of security needs; in fact, security concerns were once the top barrier to enterprise cloud computing. Security is built in through the cloud architecture design and development phases. When you understand or have been involved in the cloud architecture design, you will know exactly how the environment behaves and how it is secured. For instance, some cloud platforms allow customers to create user accounts to provide access to their systems and to remove users who have left the company or changed roles. Workloads can be delegated amongst the right internal resources, reducing the risk that the wrong individual or business unit will gain access to sensitive information.
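
As a hedged illustration of that kind of control, the sketch below uses the boto3 SDK and AWS IAM as one example platform; the group name is hypothetical, and a production off-boarding script would also need to remove the user's access keys and policies before deletion.

    import boto3

    iam = boto3.client("iam")

    def onboard(user_name):
        # Create an account and grant access via a (hypothetical) operators group
        iam.create_user(UserName=user_name)
        iam.add_user_to_group(GroupName="cloud-operators", UserName=user_name)

    def offboard(user_name):
        # Remove access when someone leaves the company or changes roles
        iam.remove_user_from_group(GroupName="cloud-operators", UserName=user_name)
        iam.delete_user(UserName=user_name)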

The steps to doing cloud architecture design well

By enlisting the expertise of cloud architects, your organisation will go through a detailed process that ensures needs are fully understood before a cloud platform, or multiple platforms, is designed or chosen for you. These steps include:

  • Gathering of requirements: Cloud architecture design, business, functional and non-functional requirements and creation of a cloud readiness assessment report
  • Design: Technical cloud architecture designs, implementation plans and migration preparation using identified requirements as inputs so you get a tailored cloud solution
  • Testing and proof of concept: This ensures your cloud architecture design is “proven” and will work the way it is intended to when launched
  • Implementation and build: The use of all the previous steps to construct and deploy your cloud service
  • Migration services: Formulation of the migration strategy that suits your business needs and implementation with proven methods
  • Post-migration support: Decommissioning of your old infrastructure, documentation of your cloud architecture design and handover to your IT team.

If you are looking for help planning cloud architecture design, Datacom’s Professional Services team can create, implement and manage your strategy.