Opinion: Early lessons from Novopay
Several inquiries into Novopay, of varying independence, are already underway. However, there are lessons we can learn from the debacle even now, especially in the context of the proposed $1.5 billion IRD IT spend-up.
It’s important that we resist the urge to rush to judgment on the Novopay situation; it’s likely many more details are yet to emerge before the picture is completely clear. While the documents released recently by the government certainly paint a fairly damning picture, that’s only half the story.
However even at this early stage, some things are clear. For starters, it’s fair to say that the project launch has been an abject failure, certainly in the eyes of the users. Even if the system is fully salvaged and regardless of where blame eventually sits, the fact is that Novopay will be used as a case study in future IT courses for many years to come.
And secondly, there are lessons we can learn for future projects such as the upcoming major IRD revamp; lessons that could perhaps help avoid some seriously massive problems in that and other large-scale taxpayer-funded projects.
Lesson 1: Does the tender process lead to bad results?
Someone asked me the other day whether it was clear yet at what point the project first came unstuck. In our view, it was probably before it even started; likely before the tender had even been won, when various companies were jostling over the project.
Why? We’re talking about a complicated project. Not only are there more than 100,000 people relying on the system, there is also massive complexity in how these teachers and support staff work, with many schools doing things in different ways.
There are part-time teachers working across multiple schools, big variations in pay and contract rates and much more. You could say “yes, but it’s just a payroll system…” but it really ain’t that easy in this case.
So what do you do if you’re tendering on a hugely complex project like this one? Well, you basically have two options.
You either spend a considerable sum (probably in the hundreds of thousands of dollars or more) working through the requirements in absolute detail and properly speccing the whole project out, all the while knowing that there's still a very high chance you'll lose the tender and thus have to write off that initial investment.
And, of course, you have to build the ever-growing cost of this work on unsuccessful tenders into the price of successful ones.
Or option 2, you basically “wing it”. You still spend a bunch of time, but you only end up with a rough idea of how complex you think the project will be, cost that out, build some fat in for complexity you missed, put in a bid and cross your fingers.
I’m not saying this is what Talent2 did in this case at all, but it is clear the complexity was completely underestimated, and one does wonder if the incumbent lost the contract because they knew full well the complexity involved. Perhaps they based their quote on that knowledge, and were undercut in the process. But I digress.
While a simplification, those are the two options in front of most companies bidding for complex projects in government, and neither leads to a good result. From that perspective, it’s excellent that minister Steven Joyce has announced there’ll be a review into how future projects are tendered.
But what are the implications for the upcoming IRD system? For one, we have to accept that the larger and more complex a project is, the greater the chance it’ll fail, and if it does, the bigger the mess it will make.
So when we’re talking about billion dollar projects, the chance of failure is huge and the cost fallout could be felt for many years to come.
The logical answer is simply to chop it up into a series of far smaller projects or components, focusing on how each component interrelates.
While this approach might take a little more work up front, the risks are far lower than what appears to be the current “big bang” thinking of just farming the whole project out to some offshore company and hoping for the best.
Under the component approach, if some projects fail the fallout is contained and an appropriate plan B can be put in place. To put it another way, if one company is handling the lot, you can’t just kick them to the curb and get someone else to do it without massive cost. With smaller projects you can.
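To make the contained-fallout argument concrete, here is a back-of-the-envelope sketch in Python. Every number in it (the budget split, component count and both failure probabilities) is invented purely for illustration; these are not actual IRD figures.

```python
# Back-of-the-envelope comparison, with invented numbers:
# one $1b "big bang" project versus the same budget split
# into 20 independent components.

big_bang_budget = 1_000_000_000
p_fail_big = 0.4        # assumed failure probability for a huge monolith

n_components = 20
component_budget = big_bang_budget / n_components
p_fail_small = 0.1      # assumed failure probability for each small component

# A monolith failure puts the whole spend at risk; a component
# failure only requires a plan B for that one component.
expected_loss_big = p_fail_big * big_bang_budget
expected_loss_small = n_components * p_fail_small * component_budget

print(f"Big bang expected loss:  ${expected_loss_big:,.0f}")    # → $400,000,000
print(f"Component expected loss: ${expected_loss_small:,.0f}")  # → $100,000,000
```

The point is not the specific figures but the shape of the risk: even before you account for smaller projects being individually less likely to fail, a failed component costs a fraction of a failed monolith to replace.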
But more to the point, this approach doesn’t exclude capable and incredibly competent local providers from what will likely be the largest government ICT spend-up in a while.
Rather than sending it offshore, the government could back New Zealanders and use it to truly transform our industry and position it to take on the world, in a way that dramatically reduces risk. But will it?
Lesson 2: Launching to 100 percent capacity on day one
Here we go again. IT and project people should have already understood why a full scale launch on day one was a bad idea. Yet by all accounts, that’s exactly what happened with Novopay.
Rather than a staged release and picking, say, 100 schools for the first run, then ironing out the teething issues in a manageable way before taking on more, someone just flicked the switch and hoped for the best.
And predictably, boom! Some relatively minor teething issues occurred but because they were at maximum capacity on day one, guess what happened?
The issues were at a scale they couldn’t manage and the support service was totally overwhelmed, initially over missing payslips of all things. Manual processing was required, which takes time and creates audit issues. They never really caught up from there.
So what we’re really interested in is why and when the decision was made to push go at 100 percent capacity on day one, because that was just ‘asking for it’. Was it made as a result of political pressure to go live at all costs? Was it pressure from the incumbent? Or was it simply made out of incompetence by those who should have known better?
There was originally intended to be a staged release, or at least a pilot programme covering just the South Island. It was scrapped, presumably because the decision-makers had such complete faith in the system that they decided it wasn’t needed.
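The staged approach described above can be sketched in a few lines of Python. Everything here is hypothetical: the school count is only approximate, the cohort sizes and error threshold are assumptions, and run_payroll() is a stand-in for a real pay run and reconciliation process.

```python
# Sketch of a staged rollout: start small, verify each pay run,
# and only expand once the error rate is acceptable.
# All names and numbers here are hypothetical.

schools = [f"school-{i}" for i in range(2450)]  # roughly the number of NZ schools

def run_payroll(cohort):
    """Hypothetical stand-in for a real pay run; returns the error count."""
    return 0  # a real implementation would reconcile pays against expected values

MAX_ERROR_RATE = 0.005  # assumed acceptable defect rate before expanding further

onboarded = []
batch_size = 100        # e.g. 100 schools in the first run
while len(onboarded) < len(schools):
    cohort = schools[len(onboarded):len(onboarded) + batch_size]
    if run_payroll(cohort) / len(cohort) > MAX_ERROR_RATE:
        break  # hold the rollout while the support load is still manageable
    onboarded.extend(cohort)
    batch_size = min(batch_size * 2, 500)  # grow cohorts as confidence grows

print(f"Schools live: {len(onboarded)} of {len(schools)}")
```

The value of the loop is where it stops: teething issues surface while the affected cohort, and therefore the support load, is still small enough to handle.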
So there are certainly going to be many lessons learned from Novopay, and those two barely scratch the surface.
However, neither is new, and one does have to wonder why they have cropped up once more. Will we just continue to make the same mistakes in future?
Let’s hope that if our government does throw a billion dollars into revamping our tax system, Kiwi companies get a chance to do it right rather than being cut out of the project altogether.
Matthews is chief executive of the Institute of IT Professionals
1. Software cost estimation is inherently uncertain. If you ask 5 vendors to estimate from the same requirements, and they all estimate with equal care and thoroughness, you will still get 5 different answers.
2. Customers, particularly government ones, are price sensitive. If they treat price as an important selection criterion, they are likely to choose the lowest bidder - which typically equates to the bidder who made the biggest (downward) mistake in estimation.
3. So the project tends to go to the sucker who under-bid. From the vendor's viewpoint, we only get awarded the work in those cases when WE are the sucker! When we've made a mistake in the opposite direction, and over-estimated, we tend to not win the work.
Economists know this as the Winner's Curse. To avoid this problem, in the US, Federal agencies operate under a specific rule for (physical) engineering projects: at tender time they can ask vendors anything EXCEPT the price.
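The selection effect in points 1–3 is easy to demonstrate with a small simulation. The figures below (true cost, vendor count, estimate spread) are assumed purely for illustration; the only claim is the shape of the result.

```python
# Simulate the Winner's Curse: several equally careful vendors
# estimate the same project, and the lowest bid wins the tender.
import random

random.seed(42)

TRUE_COST = 10_000_000   # what the project really costs (hypothetical)
N_VENDORS = 5
ESTIMATE_NOISE = 0.3     # each estimate assumed off by up to +/-30%

def winning_bid():
    """Each vendor's bid scatters around the true cost; the lowest wins."""
    bids = [TRUE_COST * random.uniform(1 - ESTIMATE_NOISE, 1 + ESTIMATE_NOISE)
            for _ in range(N_VENDORS)]
    return min(bids)

trials = [winning_bid() for _ in range(10_000)]
avg_win = sum(trials) / len(trials)

print(f"True cost:           ${TRUE_COST:,.0f}")
print(f"Average winning bid: ${avg_win:,.0f}")
# Even though every vendor is unbiased, the winning bid is
# systematically below the true cost: the Winner's Curse.
```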
Posted by John Rusk at 20:29:20 on February 14, 2013
What I want to throw into the mix is that systems, especially in government, have to adhere to these 1000 processes and 3T+ documentation, review and audit levels...
It sounds like we're designing projects to last for 1000 years!
Why not go in exactly the opposite direction? Write throwaway projects. 100% focus on delivering functionality that works. We don't care what it looks like under the covers. We will throw away the project in 5y, when the next throwaway project finishes.
IT has such a high rate of change that planning for anything 2y+ is actually idiotic. And you'd want to incorporate new developments on a regular basis. So do throwaway projects at a fraction of the cost and start the throwaway project for the successor even before the former has gone live.
The idea being:
1) No money wasted on all that stuff nobody needs anyway
2) In case of failure you already have a second project on the way that can be substituted
3) You will keep abreast of developments in IT
4) You get several chances to "fix" the things you did wrong last time
5) Lends itself well to more modern SDLCs
6) Can harness more of the actual power and energy that developers have by giving them more slack/focus on what they really do well.
7) With lower cost you could actually have several projects "competing" against each other.
Posted by Oliver at 13:45:27 on February 14, 2013
Time and again companies believe they can get everything into Release 1. The major problem with this is fundamental: requirements change outstrips development speed on any sufficiently large project, making it structurally impossible to deliver an effective system. Agility comes with requirements pruning; bloat delivers only fragility.
Posted by Marco at 17:16:13 on February 13, 2013
I have written a payroll and would not want to write another, as they are fairly unpleasant applications to write. You would have to be brave or stupid to roll out to everyone rather than staggering the release.
Posted by Greg Nixon at 8:22:02 on February 13, 2013
*Complexity* in itself is never the problem. Software developers and designers are used to dealing with *complexity* and computers certainly have no problem handling it.
The problem occurs because there is *complication* in addition to *complexity*. This can happen when a data model that was developed for a different organisation is used as the starting point.
For example if Talent2 began with a data model for a payroll system that was developed for one of its other clients rather than developing it from scratch they are dealing with *complication* in addition to *complexity*.
To avoid *complication* and facilitate a successful project the data model must be developed from scratch to fit the needs of the organisation by correctly modelling the *complexity* required.
Posted by Matthew Jenkinson at 19:57:08 on February 12, 2013
Now, they are great people and way more talented than me, but when I was approached by 20-odd schools to provide support for this app, I asked them "Why on earth did you choose this one?"
"The ministry approved it!" was the reply. Needless to say the complete lack of any real testing, and the hopeless lack of staffing led to disaster. The minedu, on the other hand, when pressed on their approval, took the line that "nobody made schools buy any software", which was a lie: the Minedu had threatened schools with NGA audit failures if they didn't buy in. This was a program worth millions that was developed by an NZ outfit, and it failed badly to perform.
Truthfully though, most of the blame goes onto the Ministry for writing an absurd spec that made no reference to company structure or staffing levels or helpdesk, but which was long on ethnicity bias and manipulation.
Posted by GJ Philip at 15:29:23 on February 12, 2013