Datacom NSW Saves One Council More Than $100,000 on Data Centre and Disaster Recovery Costs

In 2010, Woollahra Council in Sydney sought a refreshed data centre environment and disaster recovery solution. Some of the council’s goals included improved server management and monitoring and a better production environment with an eye toward greener IT in the form of reduced energy consumption. The council also needed to build a new DR infrastructure.

Early on, Datacom was able to set itself apart from the competitors, according to Saleh Nabil, Manager Information Systems at the Council. During the proposal process, Datacom “focused on the production environment solution and disaster recovery environment at a competitive price,” he said. “This was an immediate differentiator as Datacom aligned to our direction and thoughts.”

The challenges

Woollahra wanted to streamline its IT production and disaster recovery environments while reducing future infrastructure costs. The 24 physical servers at the council added a maintenance and monitoring burden to network administration; it was also taking the IT staff about a month to provision a single new physical server. In the production environment, too much downtime was occurring during system replication. In addition to tackling these areas, the council wanted room to expand its data centre infrastructure in the future.

The solution and benefits

Datacom was able to decrease the number of Woollahra’s physical servers from 24 to four. As these servers cost $5,000 to $10,000 to refresh every three years, Woollahra will be able to save between $100,000 and $200,000 on this investment. In addition, the new virtualised environment now allows council IT staff to provision new servers in four hours instead of four weeks. The council’s IT department can now better monitor the systems being used in areas such as hardware performance and undertake preventative maintenance when systems are underperforming.
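The savings claim above follows from simple arithmetic on the figures in the case study (24 servers consolidated to four, at $5,000 to $10,000 per server to refresh every three years):

```python
# Rough savings estimate from the server consolidation described above.
# All figures come from the case study itself.
servers_before = 24
servers_after = 4
refresh_cost_low = 5_000    # per-server refresh cost, lower bound (AUD)
refresh_cost_high = 10_000  # per-server refresh cost, upper bound (AUD)

avoided = servers_before - servers_after  # 20 servers no longer refreshed
savings_low = avoided * refresh_cost_low
savings_high = avoided * refresh_cost_high

print(f"Savings per refresh cycle: ${savings_low:,} - ${savings_high:,}")
```

Twenty avoided refreshes at $5,000 to $10,000 each gives the $100,000 to $200,000 range quoted above.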

The Datacom solution enabled Woollahra to remove two data centre racks, which leaves space for any possible expansion efforts in the data centre. Datacom was also able to help the council move towards its green initiatives by enabling power and cooling efficiencies while reducing overall management overheads. To ensure optimised performance, Datacom provided ongoing support services and continual checks on the data centre environment, such as configuration enhancements and recommendations.

The specifics

When Datacom won the Woollahra contract, it set to work designing and implementing a virtualised data centre environment and end-to-end disaster recovery solution. As part of this transformation, Datacom installed new production server hardware and core production network equipment to address the issues of excess downtime, and used the replaced hardware to set up disaster recovery infrastructure. A backup and recovery plan was established to support the critical servers in case of an outage. The solution relied upon HP Blade Technology, HP Systems Insight Manager/VMware vCenter and HP Multi Node P4000 SAN.
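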

“Datacom facilitated a discussion with HP regarding commercial viability, and because HP was in the process of marketing this particular product, we were able to achieve a very good deal to buy it,” Nabil says.

In the end, Woollahra was happy with the solution from start to finish. “Datacom were very patient with us during the pre-sales and pre-planning process in terms of coming up with the right solution for the council,” Nabil says.

3 Ways to Maximise Your Server Virtualisation Investment

When it comes to new IT investments, the benefits – including cost savings – often depend on how well you use and maintain the technology or infrastructure. As with a new car, organisations can easily wind up spending more money if they quickly run the purchase into the ground. If your organisation is wondering why it isn’t seeing as great an ROI on server virtualisation as expected, look at how well the IT staff is managing the physical servers and virtual machines.

1.  Use more of your physical servers

Technology and business research firm Forrester reports that one of the chief reasons organisations don’t maximise their virtual infrastructure investment is that they fail to put enough virtual machines on their physical servers. It is tough to strike a balance between too few VMs and too many, as too many can lead to poor performance. But setting a strict utilisation percentage is not the answer either, as the organisation might get stuck in a cycle of buying more servers to host the VMs earlier than necessary.

In Forrester’s research, many companies reported stopping at three to five VMs per server when some of these servers could actually host up to 15 VMs. While the number of VMs IT can run depends on factors such as resource-heavy applications, organisations can typically run three to five virtual machines for each core on a new Intel or AMD processor, according to a CIO report. One Sydney council was able to reduce the number of physical servers in its data centre from 24 to four and save more than $100,000 with a server virtualisation project implemented by Datacom.
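The three-to-five-VMs-per-core rule of thumb cited above makes for a quick capacity sanity check. A minimal sketch, assuming a modest four-core server (the core count is an illustrative assumption, not a figure from the article):

```python
# Capacity estimate using the 3-5 VMs-per-core rule of thumb cited above.
# The four-core server is a hypothetical example for illustration only.
cores_per_server = 4
vms_per_core_low, vms_per_core_high = 3, 5

capacity_low = cores_per_server * vms_per_core_low
capacity_high = cores_per_server * vms_per_core_high

print(f"Estimated capacity: {capacity_low}-{capacity_high} VMs per server")
```

Even this small example suggests a range of 12 to 20 VMs per server, which brackets the "up to 15 VMs" figure from Forrester's research and sits well above the three to five VMs many companies stop at.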

2.  Avoid virtual sprawl

Of course the danger of creating more VMs on your servers is that you’ll spawn virtual sprawl. It’s now so easy to create VMs that everyone wants one on demand. This may not seem as dangerous as physical server sprawl, but it is. Too many VMs lead to an over-allocation of resources, which drives up costs; there’s also the risk the organisation might have to purchase another physical server when it shouldn’t need to. VM sprawl also strains the IT department’s capacity to manage the environment.

VM sprawl is a sneaky beast – it happens quietly and slowly, so the best defence against it is regularly monitoring how resources are being used and how many VMs are in the data centre one month compared to the next. VMs should have a lifecycle, and careful reporting will help IT departments determine when VMs are no longer needed. Going forward, IT should demand justification for VMs when they are requested to avoid creating them just because it’s easy. IT could also turn off an unused VM every time someone asks for a new one.
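The month-over-month tracking suggested above can start as simply as comparing VM inventory snapshots. A minimal sketch, assuming inventories have already been exported as lists of VM names (the names and data are hypothetical):

```python
# Compare two monthly VM inventory snapshots to surface sprawl:
# which VMs appeared, which disappeared, and the net growth.
# The inventory contents below are illustrative examples only.
last_month = {"web-01", "web-02", "db-01", "test-legacy"}
this_month = {"web-01", "web-02", "db-01", "test-legacy",
              "test-adhoc-1", "test-adhoc-2"}

created = this_month - last_month   # VMs that appeared this month
removed = last_month - this_month   # VMs decommissioned since last month
growth = len(this_month) - len(last_month)

print(f"New VMs this month: {sorted(created)}")
print(f"Removed VMs: {sorted(removed)}")
print(f"Net growth: {growth:+d} VMs")
```

A report like this, run monthly, makes quiet growth visible: two new ad hoc test VMs with nothing decommissioned is exactly the pattern that, left unchecked, becomes sprawl.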

3.  Replace your physical servers on time

Yes, it can be a drag when you have to fork over thousands or tens of thousands of dollars to replace your physical servers. But IDC figures have shown that organisations that upgrade their servers within three and a half years make back the amount of their investment within 12 months and receive an ROI of more than 150 per cent over three years. Sticking to this refresh cycle can unlock some of the virtualisation benefits your organisation initially sought, such as improved maintenance and better energy efficiency.

How does your organisation plan to fully realise the cost-saving benefits of server virtualisation?

How Managed Cloud Benefits Extend Beyond Cost Savings

By Lauren Fritsky

You know about the cost-saving benefits of the cloud – organisations are saving as much as 30 per cent in IT costs in the first three years of their cloud investment, according to O’Reilly Media. The value of cloud extends beyond reduced spending, however. A cloud investment can bring more benefits to the business by transforming the innovation and project delivery taking place in the IT department. With managed cloud, IT no longer needs to oversee infrastructure and carry out related maintenance and troubleshooting. The scalable nature of cloud also allows for faster application delivery and lets IT try out new ideas without needing extra time and resources.

The cloud enables more innovation

Leveraging a cloud service allows IT departments to innovate with lower risk and in less time, according to a recent study of more than 1,000 organisations by the London School of Economics and Political Science. Before the cloud, businesses looking to experiment with a new application or system often had to go through a lengthy project management and delivery process in which they were forced to acquire new hardware and processing power that was only intended to be used for a short period of time. Extra IT resources were sometimes needed to manage these experiments as well.

Provisioning in the cloud allows for scalable computing resources that are immediately accessible. By transferring your infrastructure to managed cloud, you allow experienced IT providers to manage your servers and disk space while the IT department, its time now freed up, develops new apps, tools, delivery methods and services that can drive revenue and competitive advantage in the organisation. Organisations can take advantage of consumption-based processing power when running projects in the cloud, which saves both costs and time compared to conducting these experiments on in-house infrastructure. The cloud’s flexible nature allows IT to scale up those innovations showing a possibility of success while quickly terminating the experiments that do not show promise.

The London study points out that this type of innovation gain is most realised when businesses opt for a managed cloud service that allows them to customise the solution to their business needs. Through customisable cloud, organisations can match their service levels to their expected business process/innovation needs.

The cloud makes delivering business apps easier

The ability of cloud to dynamically provision means end users can get their critical business applications delivered at a faster clip. The London School of Economics and Political Science study says 60 per cent of business executives feel their cloud service allows business apps to be provisioned faster. Close to 55 per cent of executives also say cloud cuts the time and cost to configure apps. These time savings not only allow IT to focus its expertise on other projects; they also serve to boost end-user satisfaction. Remember, end users don’t care how an application or service is delivered as long as they can access it when they need it and have as seamless a user experience as possible. Cloud enables this to happen with very little maintenance or management needed on the part of IT.

Have you realised similar benefits through adopting cloud at your organisation? Share your story in the comments.

Why You Should Stick to Your Server Refresh Cycle

By Lauren Fritsky
It’s understandable that your organisation wants to hold on to its tried and trusted infrastructure, especially at a time when budgets are down and upgrading to the latest and greatest isn’t always feasible. Waiting too long to replace your ageing hardware, however, can actually end up costing more over the long term. Servers still in use past their optimal three-year lifespan drive up yearly maintenance costs by 24 to 44 per cent, according to IDC figures.

If you want to maximise your ROI and server performance, it’s best to upgrade your servers every three to three-and-a-half years. Transitioning to newer server systems designed for maximum energy effectiveness, performance and manageability will enable your organisation to make the most of this upgrade opportunity.

Greater savings

Of course upgrading your infrastructure comes at a cost. But you can potentially cover that price tag with the savings you get from your new investment. The IDC figures show organisations that upgrade their servers somewhere between that three- and three-and-a-half-year sweet spot cover the payback period in just under a year. By simply upgrading their servers when the time comes, organisations can nab an ROI of more than 150 per cent over the course of three years and slash opex spend by 33 per cent. This cost reduction is usually attributable to less need for IT support and maintenance in addition to greater energy efficiency realised by updated server infrastructure. In fact, some newer offerings have built-in technology to identify which parts of the server need cooling, which cuts down on fan usage and allows servers to run with less power and in less space.

Better performance

Failure rates start to increase when servers are pushed into a fourth year, IDC found. The reason: servers used past their due date incur more downtime and IT support costs. Failure rates tend to climb from 7 to 18 per cent for older servers; new software can also encounter compatibility problems and more patching requirements when run on old hardware. Newer server offerings, such as the HP ProLiant Gen8 blade server, require less than 40 per cent of the aggregate power to run workloads by pooling resources to prevent workloads from interfering with each other. They can also tap into greater processing power while using less rack space.

Less support needed

When servers are left ageing in data centres, IT staff spend more time monitoring heating, cooling and rack setup and addressing downtime. The increased number of service incidents that tends to occur with ageing infrastructure means the data centre team is reduced to constantly checking server systems to prevent disruptions. The more efficient power usage and reduced cabling seen with some newer servers allow IT administrators to shift their focus away from infrastructure management and toward higher-priority tasks. What’s more, if you engage in an evaluation of your current IT environment with a solution architect team like Datacom’s, you can take the responsibilities of server procurement, design and implementation off your IT team.

Learn how Datacom solution architects can help your organisation optimise its server architecture with current HP blade server offers for maximum performance and cost savings.