IBM’s Auckland datacentre issues resolved
Problems at IBM’s Highbrook datacentre in Auckland appear to have been resolved by 10am today.
The Virtual Server Services environment had been down since 3am on Monday.
An IBM spokesperson in Australia says in a bare-bones statement: “The New Zealand Virtual Server Services environment has been restored. We are actively working with all impacted clients to ensure they are fully operational.
“All other service offerings delivered from our Highbrook data centre continue to be fully operational.”
There is no indication of what caused the outage at the $80 million data centre, which opened in May 2011 to provide cloud services to major IBM clients.
Eagle Technology has been badly affected by the outage. Computerworld spoke to CEO Gary Langford at 11am this morning.
“It’s caused us quite a bit of sleeplessness,” says Langford. “Twenty-five of our clients were at Highbrook.”
“The system only came up in the past hour, and we’re now checking everything and working through it.
“Once the dust has settled, we’re going to have a need to understand what went wrong.”
He says the outage was ironic, given that Eagle was last year named IBM’s Cloud Partner of the Year.
Posted by Anonymous at 14:35:44 on February 28, 2013
Seriously, people, how about we stop bashing IBM because they are big blue Yanks and look at the facts. Not that IBM should come out smelling of roses; there is some serious rot. But let's at least be rational about what the issues really are.
As previously noted in other comments, the data centre did not go down; a single virtual server hosting offering went down. The reporting on this across the NZ media has been disgraceful and fear-mongering.
The comment titled "IBM NZ lack quality engineers" is just an example of NZ's tall poppy syndrome, and the poster obviously lacks understanding of the complexity of, and the opportunities for failure in, such a system.
I don't see anyone claiming that Amazon Web Services 'don't have the engineers capable of running a cloud', despite the fact that in the last six months of 2012 they suffered three major outages, one of which continued for more than 70 hours and resulted in customer data loss. And it's not just the AWS and IBM NZ clouds that have suffered issues. Last year Tumblr, GoDaddy, Salesforce, Google Talk, Dropbox, Google Apps, Office 365 and Azure all suffered major customer-impacting outages.
People need to understand that architecting business IT on cloud platforms requires you to think differently from how it was done when we all owned our own hardware.
Secondly - IBM don't comment locally because they are not allowed to. That's a corporate policy and has nothing to do with the calibre of the people in country.
Thirdly - the availability advertised is three nines, not five nines, and many of the customers are on older contracts which were sold with an SLA of 98.5%. Furthermore, for cloud services, uptime is typically calculated on a month-by-month basis, not an annual basis (the rough sums at the end of this comment show the difference those figures make).
Finally - people who are running critical business services need to understand their DR posture and the resulting risk. If they choose to put all their services into a single site with no DR plan, they will almost certainly suffer an outage at some point. If your services are that critical, plan for disaster and don't put all your services into a single-site offering. It's not rocket surgery.
The real problem at IBM NZ is that management (from the very top in the US, all the way to exec level in AU/NZ) have run the delivery organization into the ground, driving down morale and cutting staff levels to the bone, forcing overworked, smart technical engineers to focus on mundane compliance tasks that could or should be automated but have not been, due to complex, cumbersome and frequently changing processes.
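To put those SLA figures in perspective, here is a rough back-of-the-envelope sketch in Python. It is a sketch only: the 30-day month is an assumption, and each contract defines its own measurement window and percentages.

# Rough downtime allowance per 30-day month for common SLA levels
hours_per_month = 30 * 24  # 720 hours

for label, availability in [("five nines (99.999%)", 0.99999),
                            ("three nines (99.9%)", 0.999),
                            ("98.5% (older contracts)", 0.985)]:
    allowed_minutes = hours_per_month * (1 - availability) * 60
    print(f"{label}: about {allowed_minutes:.1f} minutes of downtime allowed per month")

In other words, three nines measured monthly allows roughly three-quarters of an hour of downtime before the SLA is breached, while a 98.5% contract allows closer to eleven hours. That is why the month-by-month measurement basis matters.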
Posted by Anonymous at 9:44:06 on February 22, 2013
Posted by Anonymous at 10:54:44 on February 22, 2013
Their IBM XIV storage was due an upgrade; a new cache card that was installed was faulty and it took the entire array down. It then took three days to restore the array from backups. This has come from a customer affected by the outage.
So much for XIV being Tier 1 storage... In my opinion, there are only three suppliers that produce enterprise-grade storage in the IT market: HDS with the VSP, EMC with the VMAX and IBM with the DS8000 range. Everything else is midrange with single points of failure.
Posted by Anonymous at 22:06:27 on February 22, 2013
Posted by Anonymous at 15:37:27 on February 23, 2013
Your source, who is purportedly a customer and provided you with this hearsay, has obviously received it via Chinese whispers, as it is simply incorrect.
Fact #1. It wasn't XIV storage.
Fact #2. It was not related to planned work or changes - it was a fault.
Fact #3. No data was lost.
Fact #4. There was some corruption of a small subset of data, which was successfully restored from a recent backup.
Fact #5. There are many other storage appliances that are worthy of the 'enterprise' tag. I'm glad you prefixed your inaccurate opinion with the acknowledgement that it was just that: your ill-informed opinion.
Jeez - hearsay is simply not 'a fact'.
Posted by Anonymous at 12:50:35 on February 23, 2013