Datacom Saves Cape Australia Money and Time with Unified Communications

What happens when four businesses become one?

Four disparate IT environments were the outcome for Cape Australia, a supplier of maintenance and repair services to the mining and industrial sectors, when it acquired four businesses in 2007.

With operations spread across Australia and a presence in 27 other countries, Cape Australia needed a way to integrate this tangle of incompatible systems and modes of operation.

Consistent collaboration

Cape Australia decided to move forward with a strategy for bringing together its multiple IT environments, aptly named “One Way”. Started in 2009, the One Way project saw Datacom work from design through to implementation to deliver a single suite of applications to all users in all locations.

Key to the project was a Microsoft stack of unified communication and collaboration technologies to fuel better communication among Cape Australia’s employees across the country and around the world. Datacom implemented Microsoft Exchange 2010 for the email messaging service and voicemail consolidation. Microsoft Office Communications Server (OCS) 2007 enabled video conferencing, instant messaging and desktop sharing. For better information sharing and collaboration, Datacom brought in SharePoint Server 2010. To round out the unified communications piece of the project, Datacom replaced more than 30 branch networks with an integrated IP telephony solution built on a reliable voice platform.

In addition to the UCC piece, Datacom implemented a new server and storage infrastructure, security solution and management tools.

An outstanding outcome

The UCC implementation quickly helped Cape Australia save time and money in several ways. Each staff member has saved between one and two hours a day thanks to the productivity gains from the new unified communication and collaboration tools. The video conferencing and online communication tools helped cut executive travel expenses by up to 50 per cent, while IT administration and operational costs have also been cut by half. National telecommunication and call charges, meanwhile, have been eliminated entirely.

Cape Australia appreciated the best practice knowledge, trusted partnership and professionalism Datacom brought to the project, said Jason Cowie, former CIO and Executive Manager of Business Services for Cape.

“Datacom provided us with best practice knowledge for the design and implementation of the infrastructure and Microsoft technologies that we were seeking to deploy. As our trusted partner, Datacom provided continual suggestions throughout the project on possible improvements to the initial design to enhance integration and user adoption.”

Datacom NSW Saves One Council More Than $100,000 on Data Centre and Disaster Recovery Costs

In 2010, Woollahra Council in Sydney sought a refreshed data centre environment and a disaster recovery solution. The council’s goals included improved server management and monitoring and a better production environment, with an eye toward greener IT in the form of reduced energy consumption. The council also needed to build a new DR infrastructure.

Early on, Datacom was able to set itself apart from the competitors, according to Saleh Nabil, Manager Information Systems at the Council. During the proposal process, Datacom “focused on the production environment solution and disaster recovery environment at a competitive price,” he said. “This was an immediate differentiator as Datacom aligned to our direction and thoughts.”

The challenges

Woollahra wanted to streamline its IT production and disaster recovery environments while reducing future infrastructure costs. The 24 physical servers at the council added a maintenance and monitoring burden to network administration; it was also taking the IT staff about a month to provision a single new physical server. In the production environment, too much downtime was occurring during system replication. In addition to tackling these areas, the council wanted room to expand its data centre infrastructure in the future.

The solution and benefits

Datacom was able to decrease the number of Woollahra’s physical servers from 24 to four. As each of these servers costs $5,000 to $10,000 to refresh every three years, Woollahra will be able to save between $100,000 and $200,000 on this investment. In addition, the new virtualised environment allows council IT staff to provision new servers in four hours instead of four weeks. The council’s IT department can now better monitor the systems in use, tracking areas such as hardware performance, and undertake preventative maintenance when systems are underperforming.
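Those savings follow directly from the consolidation arithmetic. As a quick sanity check (a minimal sketch; the per-server cost range is the one quoted above):

    # Back-of-the-envelope check of the consolidation savings quoted above.
    servers_before = 24
    servers_after = 4
    cost_low, cost_high = 5_000, 10_000  # refresh cost per server, per 3-year cycle

    decommissioned = servers_before - servers_after  # 20 servers no longer refreshed
    print(f"Savings per cycle: ${decommissioned * cost_low:,} "
          f"to ${decommissioned * cost_high:,}")
    # -> Savings per cycle: $100,000 to $200,000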

The Datacom solution enabled Woollahra to remove two data centre racks, which leaves space for any possible expansion efforts in the data centre. Datacom was also able to help the council move towards its green initiatives by enabling power and cooling efficiencies while reducing overall management overheads. To ensure optimised performance, Datacom provided ongoing support services and continual checks on the data centre environment, such as configuration enhancements and recommendations.

The specifics

When Datacom won the Woollahra contract, it set to work designing and implementing a virtualised data centre environment and an end-to-end disaster recovery solution. As part of this transformation, Datacom installed new production server hardware and core production network equipment to address the issue of excess downtime, and used the replaced hardware to set up the disaster recovery infrastructure. A backup and recovery plan was established to support the critical servers in case of an outage. The solution relied on HP blade technology, HP Systems Insight Manager, VMware vCenter and an HP P4000 multi-node SAN.

“Datacom facilitated a discussion with HP regarding commercial viability, and because HP was in the process of marketing this particular product, we were able to achieve a very good deal to buy it,” Nabil says.

In the end, Woollahra was happy with the solution from start to finish. “Datacom were very patient with us during the pre-sales and pre-planning process in terms of coming up with the right solution for the council,” Nabil says.

3 Ways to Maximise Your Server Virtualisation Investment

When it comes to new IT investments, the benefits – including cost savings – often depend on how well you use and take care of the technology or infrastructure. As with a new car, an organisation can easily wind up spending more money if it quickly runs the purchase into the ground. If your organisation is wondering why it isn’t seeing as great an ROI on server virtualisation as expected, look at how well the IT staff is managing the physical servers and virtual machines.

1.  Use more of your physical servers

Technology and business research firm Forrester reports that one of the chief reasons organisations don’t maximise their virtual infrastructure investment is that they fail to put enough virtual machines on their physical servers. It is tough to strike a balance between too few VMs and too many, as too many can lead to poor performance. But setting a strict utilisation percentage is not the answer either, as the organisation might get stuck in a cycle of buying new servers to host VMs earlier than it should need to.

In Forrester’s research, many companies reported stopping at three to five VMs per server when some of these servers could actually host up to 15 VMs. While the number of VMs IT can run depends on factors such as resource-heavy applications, organisations can typically run three to five virtual machines for each core on a new Intel or AMD processor, according to a CIO report. One Sydney council was able to reduce the number of physical servers in its data centre from 24 to four and save more than $100,000 with a server virtualisation project implemented by Datacom.
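As a rough planning aid, here is a minimal sketch of that per-core rule of thumb. The three-to-five ratio is a heuristic from the report cited above, not a guarantee; in practice, memory and storage I/O often cap density well before CPU does.

    # Estimate a host's VM capacity using the 3-5 VMs-per-core rule of thumb.
    # Real-world density depends on workload, RAM and storage I/O.
    def estimate_vm_capacity(total_cores: int, vms_per_core: int = 3) -> int:
        return total_cores * vms_per_core

    # Example: a two-socket server with four cores per socket (8 cores total)
    print(estimate_vm_capacity(8))                  # conservative: 24 VMs
    print(estimate_vm_capacity(8, vms_per_core=5))  # optimistic: 40 VMs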

2.  Avoid virtual sprawl

Of course, the danger of creating more VMs on your servers is that you’ll spawn virtual sprawl. It’s now so easy to create VMs that everyone wants one the moment they want it. This may not seem as dangerous as physical server sprawl, but it is. Too many VMs lead to an over-allocation of resources, which drives up costs; there’s also the risk that the organisation will have to purchase another physical server when it shouldn’t need to. VM sprawl also drains the IT department’s capacity to manage the environment.

VM sprawl is a sneaky beast – it happens quietly and slowly, so the best defence against it is regularly monitoring how resources are being used and how many VMs are in the data centre one month compared to the next. VMs should have a lifecycle, and careful reporting will help IT departments determine when VMs are no longer needed. Going forward, IT should demand justification for VMs when they are requested to avoid creating them just because it’s easy. IT could also turn off an unused VM every time someone asks for a new one.
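A simple way to start that reporting is a month-over-month diff of your VM inventory. The sketch below assumes you can export a list of VM names from your management tool each month; the file names are hypothetical.

    # Compare two monthly VM inventory exports (one VM name per line).
    def load_inventory(path: str) -> set:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    previous = load_inventory("vm_inventory_may.txt")   # hypothetical exports
    current = load_inventory("vm_inventory_june.txt")

    print(f"VM count: {len(previous)} -> {len(current)}")
    print("Created this month:", sorted(current - previous))
    print("Retired this month:", sorted(previous - current))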

3.  Replace your physical servers on time

Yes, it can be a drag to potentially fork over thousands or tens of thousands of dollars to replace your physical servers. But IDC figures have shown that organisations that upgrade their servers within three and a half years make back their investment within 12 months and receive an ROI of more than 150 per cent over three years. Sticking to this refresh cycle can unearth some of the virtualisation benefits your organisation initially sought, such as improved maintenance and better energy efficiency.
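To make the IDC claim concrete, here is a worked example; the dollar figures are purely illustrative assumptions, chosen only to show how payback and ROI are calculated.

    # Illustrative payback/ROI arithmetic (all dollar figures are assumptions).
    upgrade_cost = 50_000    # hypothetical cost of replacement servers
    annual_savings = 55_000  # hypothetical maintenance + energy savings per year

    payback_months = upgrade_cost / annual_savings * 12
    roi_3yr = (annual_savings * 3 - upgrade_cost) / upgrade_cost * 100

    print(f"Payback: {payback_months:.1f} months")  # ~10.9 months, under a year
    print(f"3-year ROI: {roi_3yr:.0f}%")            # 230%, above the 150% mark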

How does your organisation plan to fully realise the cost-saving benefits of server virtualisation?

3 Considerations for Finding the Right Disaster Recovery Solution

Now that disaster recovery (DR) is as big a concern among CIOs as it is for potential customers, a piecemeal solution combining in-house servers with several off-site backups will no longer suffice.

Ensuring business continuity for internal and external customers in the midst of a disaster can separate your business — and your leadership — from the pack. And before long it will be a minimum requirement even for the SME crowd.

According to a study by KPMG, 20 per cent of all organisations will undergo some type of disaster during a five-year period. And 40 per cent of businesses that survive a disaster will go out of business within a two-year timeframe. As businesses become more aware of their vulnerability, the demand for comprehensive disaster recovery will only escalate.

As your organisation considers a disaster recovery plan and begins vetting solution providers, question their ability to meet these DR requirements.

1.  Do they cover the DR gamut? Before diving into the details of each component, make sure you’re dealing with a full-suite DR solution. Verify they offer:

  • Server infrastructure services that cover assessment and reporting to demonstrate the state of your data, optimised server architecture to create the appropriate solution, and hosting and management of systems infrastructure to ensure your data survives even if your primary location is compromised.
  • Enterprise storage infrastructure – primarily data security and management – that guarantees the data supporting your business processes remains intact and available.
  • Strategies and solutions for DR as well as business continuity that go beyond hardware and data to prescribe the actions everyone from senior management to the janitor should take in the event of a disaster.


2.  What recovery time does the provider offer? The answer will vary from organisation to organisation and situation to situation. But once you’ve given the provider a picture of your services and size, expect fairly concrete answers during the proposal process. Focus on the two key measures of recovery time:

  • Recovery Time Objectives (RTO) – the maximum time allowed for your business systems to return to functionality after a disaster.
  • Recovery Point Objectives (RPO) – the maximum age of the data you will recover; in effect, how much recent data your business can afford to lose.

Of course, the time necessary for recovery goes hand-in-hand with the DR system’s availability. As you negotiate the service-level agreement, look for an extremely high guaranteed server and network availability number, usually 99.9 per cent. During the 2011 floods and cyclone that hit Queensland, Datacom was able to help one metals producer maintain 100 per cent uptime throughout the disaster and offer 24/7 remote support.
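It helps to translate an availability percentage into the downtime it actually permits. A minimal sketch:

    # Convert a guaranteed availability percentage into allowed downtime.
    def downtime_hours_per_year(availability_pct: float) -> float:
        return 365 * 24 * (1 - availability_pct / 100)

    for sla in (99.0, 99.9, 99.99):
        print(f"{sla}% uptime allows "
              f"{downtime_hours_per_year(sla):.1f} hours of downtime a year")
    # 99.0% -> 87.6 h, 99.9% -> 8.8 h, 99.99% -> 0.9 h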

3.  What stability and scalability does the company offer? The concept of stability goes far beyond hardware, processes and data recovery for a quality DR provider. You’ll need a service provider that can grow with your organisation and remain in it for the long haul. Be sure to discuss scalability and flexibility. Can the provider adapt to your evolving organisation? Is it at the bleeding edge of technology, procedures and thought leadership? Is it prepared for your pending expansion?

If the DR provider you’re evaluating meets these criteria, you’ll be well on your way to launching a comprehensive DR solution that ensures business continuity during the unexpected.

Cloud or Server Virtualisation: Which is Right for You?

In 2011, Australia hit only 63.3 per cent of its cloud growth expectations, according to a Computerworld story. This means many Aussie organisations aren’t competing at their full potential — especially in the international market, where cloud computing continues to grow at a much higher rate. But it also means many national organisations have found solutions that provide all the agility they need without opting for the cloud, usually thanks to server virtualisation.

With server virtualisation – the process of running multiple independent computing environments on a single physical server – in place, many organisations wonder whether opting for a cloud solution is worth it. After all, do they need a cloud solution’s large data centre when, through server virtualisation, they have more horsepower under the hood than they’ll likely ever need?

In the end, the best solution is a tailored solution. The key isn’t asking if it’s one or the other — it’s asking which functions are best suited to the cloud and which flourish in virtualised server environments. For example, you might ask:

  • Will this project be customer-facing and focused on aggressive growth, or is it dedicated to a static internal audience?
  • If for a current function, is it meeting the needs of your organisation and/or customers? If not, what needs are not currently addressed?
  • What are the applications or data that you need to support?
  • Do you have the core competency, resources and technical expertise to manage scaling large amounts of data in-house?
  • What are your demands for server workload?
  • What are your requirements for disaster recovery?

You can probably guess which answers will lead you to a virtualised server and which to a cloud solution. Before you buy, it’s worthwhile tapping into IT consultants who can conduct an in-depth discovery to learn which option — or which combination — suits your organisation. As a one-stop shop, a professional services team should not only determine the right solution but also implement it, test it and support it.

As you begin researching your options, remember to think about the future. Organisations must anticipate their storage and computing power needs two to three years down the line to ensure their investment makes sense.

The Importance of Reducing IT Complexity

By Lauren Fritsky

The bulk of IT costs goes toward maintaining creaky infrastructure and complicated, often archaic systems. Three-quarters of the average IT budget is spent on legacy systems alone, according to Microsoft. IT departments pay a steeper price still: staff bogged down keeping legacy systems a few steps from death’s door and tuning up old infrastructure have no time to innovate or be strategic. This creates a vicious cycle in which the department maintains the status quo without ever having a chance to prove its technical and business value.

Organisations that can simplify their IT environments – standardise hardware, better manage applications and keep the data centre uncluttered – can reduce costs, better pool resources and help IT leverage technology to drive business results.

One issue: Years ago, it was server sprawl causing headaches in the data centre. Now, it’s virtual machine sprawl. The latter might sound innocuous, but taking server virtualisation too far – basically, creating VM after VM just because you can – can leave you with a mess of too many VMs that overtax your infrastructure and cause licence costs to soar.

One solution: Virtualisation management technology and services can help IT better manage and contain the virtual environment. Through our managed services team, Datacom can oversee server management and monitoring so you keep virtual sprawl in check. The process begins through automation, which helps streamline the virtual environment and more quickly provision virtual machines. The end result is a self-service method that allows IT to routinely provision, secure and manage VMs so they don’t spiral out of control.
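To illustrate the justification step in such a self-service workflow, here is a minimal sketch of what a provisioning gate might look like; it models the policy only and does not represent Datacom’s tooling or any vendor’s API.

    # A provisioning request is only accepted with an owner, a business
    # justification and a decommission date, so every VM gets a lifecycle.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class VMRequest:
        name: str
        owner: str
        justification: str
        decommission_date: date

    def approve(req: VMRequest) -> bool:
        return bool(req.owner and req.justification.strip()
                    and req.decommission_date > date.today())

    req = VMRequest("uat-web-01", "j.smith", "UAT for intranet upgrade",
                    date.today() + timedelta(days=90))
    print("Approved" if approve(req) else "Rejected")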

One issue: Legacy system maintenance can keep IT from developing innovative new applications that could boost system efficiency and end-user satisfaction. There’s also research demonstrating that the more apps a business runs, the less efficient it is. The best-performing organisations run an average of 20 applications, according to IT research by The Hackett Group; lesser-performing companies run 39 apps on average.

One solution: Many organisations keep legacy systems going because they fear the cost and challenge of updating them. Leveraging the help of an IT outsourcer to plot and execute your legacy system modernisation or transformation can take the headache out of revamping your business-critical applications so you can get better use out of them. The first part of the process involves conducting an audit of all the apps running on these systems. Identifying which ones are critical to the business will set the wheels in motion to clean up the clutter.
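As a first pass at that audit, even something as simple as the following sketch can surface the candidates for retirement; the inventory and the criticality tiers here are purely illustrative.

    # Tally an application inventory and flag non-critical apps for review.
    inventory = {  # illustrative data only
        "payroll": "critical",
        "crm": "critical",
        "legacy-fax-gateway": "non-critical",
        "intranet-wiki": "non-critical",
    }

    critical = [a for a, tier in inventory.items() if tier == "critical"]
    review = [a for a, tier in inventory.items() if tier != "critical"]

    print(f"{len(inventory)} apps total; {len(critical)} business-critical")
    print("Retirement candidates:", ", ".join(review))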

How have you planned to reduce IT complexity in your organisation?

Why You Should Stick to Your Server Refresh Cycle

By Lauren Fritsky


It’s understandable that your organisation wants to hold on to its tried and trusted infrastructure, especially at a time when budgets are down and upgrading to the latest and greatest isn’t always feasible. Waiting too long to replace your ageing hardware, however, can actually end up costing more over the long term. Servers still in use past their optimal three-year lifespan drive up yearly maintenance costs by 24 to 44 per cent, according to IDC figures.

If you want to maximise your ROI and server performance, it’s best to upgrade your servers every three to three and a half years. Transitioning to newer server systems designed for maximum energy efficiency, performance and manageability will enable your organisation to make the most of this upgrade opportunity.

Greater savings

Of course, upgrading your infrastructure comes at a cost. But you can potentially cover that price tag with the savings you get from your new investment. The IDC figures show organisations that upgrade their servers in that three- to three-and-a-half-year sweet spot recoup their outlay in just under a year. By simply upgrading their servers when the time comes, organisations can nab an ROI of more than 150 per cent over the course of three years and slash opex spend by 33 per cent. This cost reduction is usually attributable to less need for IT support and maintenance, in addition to the greater energy efficiency of updated server infrastructure. In fact, some newer offerings have built-in technology to identify which parts of the server need cooling, which cuts down on fan usage and allows servers to run with less power and in less space.

Better performance

Failure rates start to increase when servers are pushed into a fourth year, IDC found, because servers used past their due date incur more downtime and higher IT support costs. Failure rates tend to rise from 7 to 18 per cent for older servers; new software can also encounter compatibility problems and more patching requirements when run on old hardware. Newer server offerings, such as the HP ProLiant Gen8 blade server, require less than 40 per cent of the aggregate power to run the same workloads, pooling resources to prevent workloads from interfering with each other. They can also tap into greater processing power while using less rack space.

Less support needed

When servers are left ageing in data centres, IT staff spend more time monitoring heating, cooling and rack setup and addressing downtime. The increased number of service incidents that tends to occur with ageing infrastructure means the data centre team is reduced to constantly checking server systems to prevent disruptions. The more efficient power usage and reduced cabling of some newer servers lets IT administrators shift their focus away from infrastructure management and toward higher-priority tasks. What’s more, if you engage a solution architect team like Datacom’s to evaluate your current IT environment, you can take the responsibilities of server procurement, design and implementation off your IT team’s plate.

Learn how Datacom solution architects can help your organisation optimise its server architecture with current HP blade server offerings for maximum performance and cost savings.