Azure, hybrid cloud, and the importance of monitoring at a business service level


According to Datacom customer research presented in Before You Go Public, Read This, few organisations are currently planning to go ‘all in’ on public cloud and will, therefore, retain some of their workloads on-premise or in private cloud. Many services are, and will be for the foreseeable future, delivered via applications or workloads with a hybrid setup. For example, an organisation may want to use Microsoft Azure for a front-end application that needs elasticity, and a private cloud for the interconnected database.

Managing hybrid cloud complexity

Hybrid cloud architecture, however, requires careful management and planning to account for key factors such as latency, security and compliance, as well as potential added complexity and transition costs.

“Making hybrid cloud work well means focusing on integration and interconnectedness, especially in the planning stages of a project. For instance, if an organisation stretches certain components by running them in Azure, what is the impact on other, reliant components? These critical factors are sometimes overlooked,” says Brett Alexander, Solution Architect at Datacom.

On top of this, greater adoption of public cloud (with proper planning) usually brings a corresponding service orientation and an increasing focus on business services and the related outcomes enabled by cloud. These outcomes may include risk or reputation management, reducing cost and making key services available when needed and at a suitable quality. They are often delivered through the aggregation of multiple providers, services and solutions – and various SLAs.

Making these services and outcomes happen and managing the many moving parts involved is clearly an important function and a complex task, which organisations can take on themselves or outsource, at least in part, to a qualified partner like Datacom. Whatever the approach taken, there is a growing need for those in the organisation involved in Azure to understand the way different clouds and related services interact and how they can be integrated – with each other and with other environments and types of IT.

Monitor from business service level down

Business services, even something as simple as email, are built from and rely on a number of components, including applications, firewalls, switches, servers and storage. Mapping these services, the applications and infrastructure that enable them, and the interconnections and dependencies of the various components is an integral part of planning for Azure adoption and optimisation and of managing a hybrid cloud environment.

This is why Datacom recommends monitoring at an availability-of-business-service level. This means having dependency-based monitoring from the business service level down through applications and infrastructure, including Azure. We also recommend automated root-cause analysis to provide information and evidence for problem management processes and liaising with Azure support teams, if required. For this, organisations need to implement robust analysis and troubleshooting tools.

In short, if your Azure servers or services go down you need to know what will be affected, for how long, and what impact that will have on your organisation in order to determine and take the necessary steps. Among other things, this means knowing what it takes to keep a high-availability application operational if X or Y shuts down. And things do go down from time to time.
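
As a minimal illustration of dependency-based impact analysis, the Python sketch below models a business service as a graph of components and works out which services are affected when a given component fails. The service and component names are hypothetical; in practice the service map would be discovered and maintained by monitoring tooling rather than hand-written:

```python
# A minimal sketch of dependency-based impact analysis.
# The services and components below are hypothetical examples.

# Each component lists the components it depends on.
DEPENDENCIES = {
    "email-service": ["mail-app", "firewall-1"],
    "mail-app":      ["vm-azure-01", "db-onprem-01"],
    "vm-azure-01":   [],          # front end running in Azure
    "db-onprem-01":  ["san-01"],  # database in the private cloud
    "san-01":        [],
    "firewall-1":    [],
}

def affected_services(failed: str) -> set[str]:
    """Return every component that directly or transitively
    depends on the failed component."""
    impacted = set()
    changed = True
    while changed:
        changed = False
        for component, deps in DEPENDENCIES.items():
            if component in impacted:
                continue
            if failed in deps or impacted & set(deps):
                impacted.add(component)
                changed = True
    return impacted

# If the on-premise SAN goes down, the database and the whole
# email service above it are flagged as affected.
print(sorted(affected_services("san-01")))
# -> ['db-onprem-01', 'email-service', 'mail-app']
```

The same graph can drive automated root-cause analysis in reverse: walk down the dependency chain from an alerting service to find the lowest failing component.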

“Azure has planned, routine outages for maintenance purposes, during which servers go offline temporarily. Before this happens, organisations need to know how many more stand-in machines are required to maintain each service in the event of an outage, compared to traditional, on-premise IT,” says Roger Sinel, Operations Manager at Datacom.
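
To make that capacity arithmetic concrete: Azure spreads the VMs in an availability set across update domains, and planned maintenance takes one update domain offline at a time. A rough sizing sketch follows; the figures are illustrative only and the rule of thumb is an assumption, not an Azure formula:

```python
# A rough sizing rule for planned maintenance: if maintenance can
# take one update domain offline at a time, the remaining domains
# must still hold 'needed' instances between them. Illustrative only.
import math

def instances_to_provision(needed: int, update_domains: int) -> int:
    return math.ceil(needed * update_domains / (update_domains - 1))

# A service that needs 10 instances to meet demand, spread over 5
# update domains, should provision 13: losing one domain (at most
# 3 instances) still leaves 10 running.
print(instances_to_provision(10, 5))  # -> 13
```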

Broadly speaking, the skills required for such mapping, monitoring and management include knowing how applications work, how infrastructure works and how they work together. Operations engineers need to co-operate with developers and application specialists to ensure applications run smoothly in Azure through the correct use of resiliency and performance techniques, and by testing and monitoring correctly. What is and isn’t supported in Azure needs to be understood – especially in a hybrid environment. As reliance on Azure increases, along with complexity, automating as much of each process as possible using scripts becomes increasingly important.
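
As one small example of the kind of scripted health checking this implies, the sketch below uses the Azure SDK for Python to list the VMs in a resource group and report their power state. The subscription ID and resource group name are placeholders:

```python
# A minimal sketch of scripted VM health checking with the Azure SDK
# for Python (azure-identity and azure-mgmt-compute packages).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "my-resource-group"                      # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Report the power state of every VM in the resource group,
# flagging anything that is not running for follow-up.
for vm in client.virtual_machines.list(RESOURCE_GROUP):
    view = client.virtual_machines.instance_view(RESOURCE_GROUP, vm.name)
    states = [s.code for s in view.statuses if s.code.startswith("PowerState/")]
    state = states[0] if states else "unknown"
    flag = "" if state == "PowerState/running" else "  <-- check"
    print(f"{vm.name}: {state}{flag}")
```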

The recommendations above are among many others made in our free white paper, How to make the most of Azure, which is available to download. It’s based on Datacom’s many years of experience working in partnership with Microsoft on cloud projects of all sizes for a wide range of organisations. These include what is still the world’s largest production SAP migration to Azure, for Zespri International, one of the most successful horticultural marketing companies globally.

For even more information and advice on how your organisation can take full advantage of Azure, please contact us on cloud@datacom.co.nz or cloud@datacom.com.au.

Tool up to make the most of Amazon Web Services (AWS)


Adopting and making the most of AWS or other public cloud platforms will almost certainly require investment in new tools. In general, Datacom recommends that organisations have a tooling strategy that focuses on tools with API-based integration capabilities. This avoids the lock-in that some proprietary tools cause, which constrains customisation, adds complexity and hampers agility. For optimal outcomes in a hybrid cloud environment, it is better to use API-based tools – large or small – that enable cross-cloud platform integration alongside native AWS tools.
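
As an illustration of what API-based integration looks like in practice, the sketch below reads a metric from AWS CloudWatch with boto3 and forwards it to a third-party monitoring tool over REST. The endpoint URL is hypothetical and the instance ID is a placeholder; the point is that both sides are driven through open APIs rather than a proprietary agent:

```python
# A sketch of API-based, cross-platform tool integration: pull a
# CloudWatch metric via boto3 and push it to a (hypothetical)
# third-party monitoring endpoint over its REST API.
from datetime import datetime, timedelta

import boto3
import requests

cloudwatch = boto3.client("cloudwatch")

# Average CPU over the last hour for one instance (ID is a placeholder).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

# Forward each datapoint to the cross-cloud monitoring tool.
for point in stats["Datapoints"]:
    requests.post(
        "https://monitoring.example.com/api/metrics",  # hypothetical endpoint
        json={
            "source": "aws",
            "metric": "cpu_utilization",
            "value": point["Average"],
            "timestamp": point["Timestamp"].isoformat(),
        },
        timeout=10,
    )
```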

In public cloud operations, a new method of engagement is necessary to match the tectonic shift in focus from hardware to software that the platform engenders. Engineers no longer have direct access to infrastructure: they view servers through a portal and use software to control them. This means some people may need to adopt a new mentality and update their skills substantially, moving away from traditional, manual, GUI-based methods of monitoring and control towards scripts and coding that enable process automation and management by exception.

In practice, start-up and shutdown scripts should be a goal for operations teams. Alongside this, server health checks are required to ensure performance. AWS provides native tools to help with such tasks. For instance, AWS Lambda enables task scheduling that can be used in conjunction with scripts to wake up servers, get them to perform a job, and then shut them down – all automatically.
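
A minimal sketch of that start-do-work-stop pattern, written as a Python Lambda handler using boto3, is below. The instance IDs are placeholders, and in practice the function would be invoked on a schedule (for example by an Amazon EventBridge rule) and the IDs looked up by tag rather than hard-coded:

```python
# A minimal sketch of a scheduled start/stop Lambda in Python.
# Instance IDs are placeholders; a real version would look them
# up by tag and be triggered by a scheduled EventBridge rule.
import boto3

ec2 = boto3.client("ec2")
INSTANCE_IDS = ["i-0123456789abcdef0"]  # placeholder

def handler(event, context):
    action = event.get("action", "start")
    if action == "start":
        # Wake the servers so the scheduled job can run.
        ec2.start_instances(InstanceIds=INSTANCE_IDS)
    elif action == "stop":
        # Job done: shut the servers down again to save cost.
        ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    return {"action": action, "instances": INSTANCE_IDS}
```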

Other native tools worth noting include:

  • AWS Service Catalog – allows organisations to centrally build and manage catalogues of commonly used, compliant IT services, composed of a range of components, from virtual machines and databases to complex application architectures. Once built, these IT services can be deployed automatically and repeatedly, in one click, saving time on development and management, and helping to avoid sprawl
  • AWS Trusted Advisor – another useful tool for making the most of AWS, it reports on cost optimisation, performance and compliance issues, and recommends ways to improve them (a sketch of querying it programmatically follows this list)
  • AWS Inspector – provides an automated security assessment and rule-based compliance service at the application level
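
As a hedged example of putting one of these tools to work programmatically, Trusted Advisor results can be pulled through the AWS Support API with boto3. Note that this API is only available on Business or Enterprise support plans, and only in the us-east-1 region:

```python
# A sketch of reading AWS Trusted Advisor results via the AWS
# Support API with boto3 (Business/Enterprise support plans only).
import boto3

support = boto3.client("support", region_name="us-east-1")

# List all Trusted Advisor checks, then print each check's status,
# e.g. cost optimisation or performance recommendations.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    print(f"{check['category']:25} {check['name']}: {result['status']}")
```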

Monitoring tools face new challenges in public cloud: not all were built for this environment. For example, more machines are usually required in public cloud than on-premise to provide the same service (to account for machines switching off from time to time). This means that, if monitoring agents are placed on all AWS machines, they may produce too many alerts to handle – and monitoring costs may go up. Monitoring in AWS therefore needs a new approach, one that is tested and fine-tuned over time.
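
One common adjustment is to suppress alerts that are expected in an elastic environment, for example lost heartbeats from instances that were deliberately stopped or scaled in. The sketch below shows the idea; the alert dictionary format is an assumption made for illustration:

```python
# A simplified sketch of managing by exception in alert handling:
# suppress alerts for instances that were stopped or terminated on
# purpose (normal in an elastic fleet) and escalate only the rest.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

def should_escalate(alert: dict) -> bool:
    """'alert' is assumed to look like {'instance_id': 'i-...'}."""
    try:
        resp = ec2.describe_instances(InstanceIds=[alert["instance_id"]])
    except ClientError:
        # The instance no longer exists at all (terminated and aged
        # out): in an elastic fleet this is expected, not a fault.
        return False
    instance = resp["Reservations"][0]["Instances"][0]
    state = instance["State"]["Name"]
    # Deliberate scale-in or scheduled shutdown shows up as one of
    # these states; only alert on instances that should be running.
    return state not in ("shutting-down", "terminated", "stopping", "stopped")
```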

Organisations should also assess their approaches to data backup as they adopt AWS. In a hybrid cloud situation, this isn’t a simple task. For backup, as with monitoring, a mixture of traditional and native AWS tools may be the best option – at least in the short term. Although backing up cloud-ready applications may be relatively straightforward in AWS, replicating traditional enterprise backup methodologies in this environment without a dramatic increase in cost is challenging.
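
As an example of the native side of that mixture, EBS volumes can be snapshotted with a few lines of boto3. The sketch below selects volumes by a "Backup: daily" tag, which is an assumed convention; the job would typically run on a schedule, for example from Lambda:

```python
# A minimal sketch of a native AWS backup step: snapshot EBS volumes
# that opt in via a 'Backup: daily' tag (an assumed convention).
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")

# Find volumes opted in to daily backup via the tag.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["daily"]}]
)["Volumes"]

for volume in volumes:
    snapshot = ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description=f"Daily backup {datetime.now(timezone.utc):%Y-%m-%d}",
    )
    print(f"Snapshot {snapshot['SnapshotId']} started for {volume['VolumeId']}")
```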

Looking at development in particular, AWS has a multitude of tools to support continuous integration and continuous delivery, including CodeDeploy, CodePipeline and CodeCommit, and supports an array of coding languages via APIs. Using the platform and its native tools in combination with a DevOps approach to developing cloud-ready applications for the platform can result in faster, cheaper and more efficient development processes compared with developing on-premise.
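
As a small illustration of driving those services through their APIs, the snippet below kicks off a CodePipeline release with boto3 and reports the state of each stage. The pipeline name is a placeholder for one already defined in the account:

```python
# A small sketch of driving AWS CodePipeline through its API with
# boto3: start a release, then report per-stage status. The pipeline
# name is a placeholder.
import boto3

codepipeline = boto3.client("codepipeline")

response = codepipeline.start_pipeline_execution(name="my-app-pipeline")
print("Started execution:", response["pipelineExecutionId"])

# The pipeline's state (source, build, deploy stages) can then be polled.
state = codepipeline.get_pipeline_state(name="my-app-pipeline")
for stage in state["stageStates"]:
    status = stage.get("latestExecution", {}).get("status", "NOT_RUN")
    print(f"{stage['stageName']}: {status}")
```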

The recommendations above are among many others made in our new free white paper, How to make the most of Amazon Web Services, which is available to download. It’s based on years of experience working in partnership with AWS on projects of all sizes for a wide range of organisations.

As an AWS Managed Service Provider, Datacom is at the front line of new innovations in AWS and evolving best practice, as well as changes to pricing, SLAs and other aspects of the platform. We have AWS operations specialists, with blended software and infrastructure skills, who manage, for customers, applications that we have architected to take advantage of the unique features of AWS.

We are therefore in an ideal position to help customers across a wide range of areas related to AWS, including development and operations, designing and building cloud architecture, and integrating and managing complex hybrid cloud and/or multi-cloud environments.

For even more information and advice on how your organisation can take full advantage of AWS, please contact us on cloud@datacom.co.nz or cloud@datacom.com.au.