Victoria’s ambitious education network was a spectacular failure. As Justin Warren explains, it was badly managed and riddled with corruption.

In January 2017, the Independent Broad-based Anti-corruption Commission (IBAC) published its Operation Dunham special report into Victoria’s Ultranet project. The report is well-written, clear, and thorough in its treatment of the issues uncovered by IBAC’s investigation, and it is mercifully free of the usual weasel-words and buzzwords that clutter too much of today’s writing.

Ultranet was to have been an online teaching and learning system for all Victorian government schools. It was announced in 2006 by then Premier Steve Bracks, and the programme — to be run by the Department of Education and Training — was given a budget of $60.5 million.

The Victorian Auditor-General’s Office (VAGO) audited the project. Its report, published in December 2012, found that “Use of the Ultranet is well below expectations, with only 10 percent of students and 27 percent of teachers logging into the system.”

In 2013, the project was shut down, having blown anywhere between $127 million and $240 million.

The report has clear lessons for the governance of technology projects generally, though few of them are new.

Systemic failure

IBAC’s investigation found that the tender processes for the Ultranet were corrupted by improper relationships between senior officers of the Department and vendors: primarily Oracle Corporation Australia Pty Ltd (Oracle), and later CSG Services Pty Ltd (CSG).

While there were multiple, serious incidents of corruption uncovered by IBAC during its investigation — including senior officers potentially breaching insider trading laws — I choose to focus here on the systemic nature of the issues uncovered, rather than individual instances.

As IBAC stated in its report, “it was the collective failure of the Department’s three ‘lines of defence’ that ultimately allowed the conduct under investigation to continue unabated.”

The first line of defence was the reliance upon “the ability of managers and leaders to follow correct procedures and to act with integrity.” Multiple senior officers in the Department failed to act with integrity. In particular, Darrell Fraser, a former Deputy Secretary, went to extraordinary lengths to ensure the CSG/Oracle offering was successful, frequently ignoring correct procedure.

Yet it is the failure of the other lines of defence that makes clear that the Department suffered a systemic failure. Individual components of a system can, and will, fail. Organisational resilience requires other systems to provide checks and balances, reinforcing each other to protect the organisation as a whole.

The second line of defence in the Department was “the various systems and processes that are in place to safeguard and regulate activities performed by individuals in the first line. This includes financial approval processes, procurement processes, and governance committees and frameworks.”

The regulation of the actions of individuals failed. In some cases multiple individuals were colluding to circumvent controls, with one approving the actions of another. In other cases, processes were circumvented by bad actors providing false information that was not independently verified by those responsible for approvals.

These issues should have been caught by a third line of defence, “the assurance provided by the Department’s audit functions that the systems and processes specified in the second line are working appropriately.”

The Ultranet project avoided audit scrutiny because whenever it was to be placed on the Department’s audit plan, Rosewarne and Fraser would push back. Gateway reviews, “conducted by an independent review team at critical points of a project’s lifecycle”, flagged “numerous urgent and critical issues”, yet the project was permitted to proceed anyway.

If any one of these three “lines of defence” had worked correctly, the Ultranet project may well not have been the unmitigated disaster it turned out to be. That they all failed together is typical of a systemic failure.

The causes of this systemic failure were not isolated to individual bad actors; they were a result of the culture within the Department. During Operation Ord, IBAC exposed “serious and systemic corruption within the Department of Education and Training”.

“The failure to address Mr Fraser’s behaviour time and time again can only be described as a serious failure on the part of some of the most senior leadership within the Department. Operation Ord also exposed this concerning culture among a group of influential senior executive officers at the Department. This same culture made Mr Fraser’s behaviour permissible.”

The failures in the Ultranet project were a consequence of these same cultural problems.

Systemic failures are indicative of widespread issues that require a holistic approach; targeted interventions will not be sufficient. Changes will be required to the organisation as a whole, and will therefore need the involvement of the Board and senior executives. It is far too easy to point the finger of blame at individual bad actors. The nature of systemic failures is that it does not matter who the bad actors are, merely that there are some, and there always will be, hopefully few in number.

Any system that requires all of its components to operate flawlessly, in all conditions, forever, is fatally brittle and doomed to failure sooner or later.

Regulatory capture

The Ultranet case illustrates the risk of becoming too cosy with any one vendor.

Oracle was given preferential treatment in the Ultranet tender process. There were obvious conflicts of interest for influential members of the Ultranet Board. Fraser’s inappropriate relationship with Oracle was well known within the Department.

There is an argument for going with a known and trusted partner rather than an unknown one, but single-sourcing is a double-edged sword.

A strategic alliance can reduce complexity, which reduces costs and time-to-market, but it adds substantial switching costs. Relying on a single vendor reduces negotiating leverage on future deals. It also adds a hurdle if a better alternative arises from a different vendor, reducing your ability to adapt to change.

It can also lead to a de-skilling of the organisation, in favour of relying on the vendor.

As we saw with the Australian Bureau of Statistics’ Census 2016 failure, this loss of skill can mean the organisation is no longer able to accurately assess the abilities of the vendor. Unable to tell whether the vendor is performing good work, or providing value for money, the organisation becomes dependent on the vendor and cedes control over its own destiny.

The cost of changing your mind needs to be factored into decision making. The purpose of the much-talked-about “agile” methods is to reduce the cost of change: mistakes become less costly and easier to correct.

A major problem with the Ultranet was the substantial financial and emotional investment of the principal actors, compounded by the sunk-cost fallacy: the human tendency to double down on poor choices in order to “save” the money that has already been spent.

Multi-sourcing, particularly using open standards, provides flexibility to the organisation if it needs to change course, and provides a ready Plan B should the initial direction prove incorrect.

When there is uncertainty about the correct course of action (which is true of most IT systems, as history should make clear), the degree of flexibility needs to match the degree of uncertainty.
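
One way to preserve that flexibility is to code against a vendor-neutral interface rather than any single supplier’s API. The Python sketch below is illustrative only (the vendor names and methods are hypothetical, not drawn from the Ultranet): each supplier sits behind an adapter, so the ready Plan B becomes a configuration change rather than a rewrite.

```python
# A vendor-neutral interface with swappable adapters. VendorA and VendorB
# are hypothetical; a real adapter would call the supplier's API instead
# of printing.
from abc import ABC, abstractmethod


class LearningPlatform(ABC):
    """The capabilities the organisation actually needs, vendor-agnostic."""

    @abstractmethod
    def publish_course(self, course_id: str, content: str) -> None: ...

    @abstractmethod
    def enrol_student(self, course_id: str, student_id: str) -> None: ...


class VendorA(LearningPlatform):
    def publish_course(self, course_id: str, content: str) -> None:
        print(f"[VendorA] publishing {course_id}")

    def enrol_student(self, course_id: str, student_id: str) -> None:
        print(f"[VendorA] enrolling {student_id} in {course_id}")


class VendorB(LearningPlatform):
    def publish_course(self, course_id: str, content: str) -> None:
        print(f"[VendorB] publishing {course_id}")

    def enrol_student(self, course_id: str, student_id: str) -> None:
        print(f"[VendorB] enrolling {student_id} in {course_id}")


# Plan B is a one-line change: swap the adapter, not the application.
platform: LearningPlatform = VendorA()
platform.publish_course("maths-101", "<course content>")
platform.enrol_student("maths-101", "student-42")
```

The application never mentions a vendor by name outside the adapters, which is what keeps switching costs, and therefore negotiating leverage, under the organisation’s control.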

What could possibly go wrong?

The novelty and excitement of new things is particularly prevalent in the technology industry. This unbridled enthusiasm to embrace the new can cause people to overlook problems that are obvious in retrospect.

Why wait until a major project has failed to determine why it failed? A useful technique is to conduct a pre-mortem.

The key to a pre-mortem is to work out all the things that could go wrong ahead of time, and determine what, if anything, can be done about them. By spending time working out the potential flaws in the project, the team can alter the design to avoid the problems identified, clarify their objectives, or ensure appropriate safeguards are put in place to manage issues as they arise.

Pre-mortems differ from critical reviews in important ways: a critical review is performed after the fact by an external party, while a pre-mortem is performed by the team itself as part of the design process.

Criticism by an external party tends to create an adversarial relationship, where criticism of the project is taken personally by a team that has invested so much of itself in the work. A pre-mortem can help turn this antagonistic relationship into one of colleagues solving an interesting problem together. Making a system resilient to failure becomes an explicit design objective, rather than a process of avoiding scrutiny.

Assessing potential risks and guarding against them ahead of time is smart preparation. If a team is planning to scale Mount Everest, and someone points out that ordinary shoes are a poor choice, that is helpful advice, not pointless negativity. Senior leadership should set the cultural norm that detecting flaws, and helping to fix them before they become disasters, is every bit as important as creating new things.

The pre-mortem should also include the risks identified to the project. “What if the risk assessment turns out to be wrong?” is an important question to ask.
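
To make the technique concrete, here is a minimal sketch of the kind of risk register a pre-mortem might produce. It assumes a simple format of my own devising (the fields, threshold, and example risks are illustrative, not a prescribed method).

```python
# An illustrative pre-mortem risk register. Each entry answers: "imagine
# the project has failed; why?", and pairs that failure mode with warning
# signs and, ideally, a mitigation.
from dataclasses import dataclass


@dataclass
class Risk:
    failure_mode: str          # the imagined cause of failure
    likelihood: float          # the team's rough estimate, 0.0 to 1.0
    warning_signs: list[str]   # what to watch for while the project runs
    mitigation: str = ""       # empty means no plan yet


def unaddressed(risks: list[Risk], threshold: float = 0.3) -> list[Risk]:
    """Return the likely failure modes that still have no mitigation."""
    return [r for r in risks if r.likelihood >= threshold and not r.mitigation]


register = [
    Risk("Teachers never log in", 0.5,
         ["low pilot uptake", "no training budget"],
         mitigation="fund training; track weekly active users from day one"),
    Risk("Sole vendor misses delivery dates", 0.4,
         ["slipping milestones", "mounting change requests"]),
]

for risk in unaddressed(register):
    print(f"No plan yet for: {risk.failure_mode}")
```

The value is not in the code but in the discipline: every imagined failure mode is paired with warning signs to watch for and, ideally, a mitigation before the project begins.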

Who watches the watchers?

“Quis custodiet ipsos custodes?”
— Juvenal, “Satires”

Those designing systems have a remarkable ability to believe in the inherent goodness of humankind, given the abundance of evidence to the contrary.

It is important to strike a balance between trusting people, and checking to see if they make mistakes. Most errors are innocent, but some are not. In general, the best system design is one that makes it easy to do the right thing, and hard to do the wrong thing.
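
As a toy illustration of that principle, consider the following Python sketch (the function, names, and scenario are assumptions for illustration, not the Department’s actual controls). The only path to releasing a payment enforces segregation of duties and leaves an audit record as a side effect.

```python
# Making the right thing the easy path: the sole entry point for payments
# enforces segregation of duties and writes an audit record automatically.
audit_log: list[str] = []


class ApprovalError(Exception):
    """Raised when a payment fails the segregation-of-duties check."""


def release_payment(amount: int, requested_by: str, approved_by: str) -> None:
    """Release a payment; callers cannot skip the checks."""
    if approved_by == requested_by:
        # Fraud now requires collusion between two people, not one bad actor.
        raise ApprovalError("requester cannot approve their own payment")
    # Every release leaves a record for the audit (third) line of defence.
    audit_log.append(f"{requested_by} released {amount}, approved by {approved_by}")
    # ... hand off to the actual payment system here ...


release_payment(50_000, requested_by="officer_a", approved_by="officer_b")  # fine
try:
    release_payment(50_000, requested_by="officer_a", approved_by="officer_a")
except ApprovalError as err:
    print(err)  # the wrong thing fails loudly, and visibly
```

One person cannot both request and approve; circumventing the control requires collusion, which is precisely the kind of failure the second and third lines of defence exist to catch.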

With Ultranet, bad actors were able to bypass checks and balances due to systemic cultural issues within the Department. Leadership from the most senior levels of the organisation is required to show that not only must the rules be followed, they must also be seen to be followed.

When designing a system, too often people believe it best to err on the side of caution, and concentrate on making it difficult to do the wrong thing, with the side-effect of making it difficult to do the right thing as well.

Such a system has the opposite of the intended effect. People will bypass the system so they can get their work done, undermining the whole approach. The response is generally to erect ever-higher barriers, like stern threats of dismissal if people violate company policy.

Have you read all of your company’s policy documents? Have you followed all of them to the letter on all occasions?

It’s easy to justify disobeying a system that people consider ridiculous or unnecessarily burdensome. The hierarchical nature of organisations tends to allow more senior people greater leeway over which systems they disobey. We saw this with Ultranet, and yet senior people have far greater scope than junior line staff to damage the organisation.

With great power comes great responsibility.

Finally, an important question to ask is “What if someone simply doesn’t follow these rules?” Relying on individual integrity, as we saw with Ultranet, is not enough. It is important to ask not “Who will let them?” but “Who will stop them?”

Slow down to speed up

The technology industry is currently infected with the idea that everything must move at breakneck speed, and anything that gets in the way is of little value. Security and compliance are seen as impediments.

In some cases this is true, in the same way that your immune system is an impediment to a lethal infection. A disruption to your heart’s ability to pump blood and oxygen to your brain is not a good thing.

The potential advantages of IT systems are great, but only if those systems make it into production on time and on budget. Projects that fail waste time and money that could have been spent on other, useful systems that will now never get the chance.

How many successful initiatives will we never see in the constantly resource-constrained education sector because Ultranet deprived them of $240 million of sorely needed capital? That’s a lot of books and pencils.

IT projects are like any other investment, and should be managed as such.

Your Next Steps

  • Encourage teams to perform a “pre-mortem” as part of setting up any project. What resources will be needed to overcome issues if they arise? What warning signs can you look for to detect that one of the risks is about to materialise?
  • Do a pre-mortem of your own governance systems. If a bad actor were to attempt to defraud the company, how would they do it? Would you be able to notice before major damage is done?
  • Evaluate your vendor relationships. Are you too close to particular vendors? Why have you aligned your business with certain partners, and are those reasons still applicable? What will it cost you to change your mind?
  • Think critically about your own behaviour. Do you provide a good role model for the correct behaviour in your organisation? Ask those who work closely with you the same question and see if they agree. You may be surprised by what you hear.