As the latest wave of artificial intelligence hype washes over us, we must be mindful of the ethical considerations of how we deploy technology. The risk is that we’ll go too far down a dark path without realising what we’ve done. Justin Warren explains.

There’s a lot to like about automation. Indeed, the history of humankind is a history marked by the regular invention of technologies that improve our lot in life. While this progress is far from evenly distributed, we humans have managed to improve our circumstances rather a lot since we first discovered that fire was useful.

But just like fire, technology can be used for both good and evil. This isn’t exactly news, but as Aldous Huxley is purported to have said:

“That men do not learn very much from the lessons of history is the most important of all the lessons that history has to teach.”

Right now we’re attempting to stuff as much autonomous computing power as we can into every passing watch, car, sex-toy, refrigerator, or light-bulb with nary a thought for what will happen when they’re let loose upon the world. We’re also discovering what happens when these tools go awry.

Strategic Incompetence?

Automation leverages the bad as well as the good. Computers can make mistakes at a scale and speed that dwarf anything we mere humans can manage.

The Commonwealth Bank (CBA) added fancy new automation technology to its (already automated) Automated Teller Machines, but there was a flaw. CBA says that single flaw was responsible for the bank’s alleged failure to notify AUSTRAC of 53,506 transactions that were over the statutory reporting threshold.

One flaw, but a huge number of errors.

Similarly, misconfigured software led to the flash crash of 6 May 2010 that wiped billions off the US stock market in minutes.

In 1995, Barings Bank collapsed in large part because it lacked the internal controls to manage risk. That excuse was unacceptable then, and it’s hard to see shareholders accepting a similar excuse when automation magnifies a small operational loss into a billion-dollar failure.

Boards need to consider the risk automation poses to the underlying business before it’s deployed. After all, the board is expected to ensure risk is adequately managed.

Responsibility Failure

Poor risk management is often excused because “software systems are extremely complex”. But many other extremely complex systems are expected to operate safely, especially when there’s a risk to human life. Airplanes rarely fall out of the sky of their own accord. Pre-packaged food tends not to kill us immediately.

Software will probably follow in the path of other human products and services: regulation of the risks will only happen after clear and obvious harm befalls a politically charged number of humans. In short, many people will need to die before society cries “Enough!”

Organisations that manage these risks poorly may not technically be breaking the law — yet — but that’s a fairly low bar to clear. Besides, there are other advantages if you get it right.

Customers tend to prefer products that don’t maim or kill them. If your organisation’s use of automation has fewer negative side-effects, that’s a big competitive advantage, particularly if you draw attention to your superior ethical position.

Addressing these issues now will also give you a head start on those who wait until they’re forced to be more ethical.

Getting complex automated systems to adhere to new legal frameworks will make compliance with Sarbanes-Oxley or GDPR look like filling in a quarterly BAS. The sooner your organisation gets on top of having its systems behave within well-defined parameters, the better off it will be.

Is Not Automating Unethical?

There are situations where a failure to automate may also be seen as unethical. The key is to understand who the automation serves.

Joseph Bironas, a site reliability engineer at Google, argued:

“If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system. If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators.”

Reducing the amount of tedious, meaningless labour humans are forced to endure would seem to be an unalloyed good thing. Refusing to automate these kinds of tasks could be seen as unethical, because it forces humans to serve machines created by others. That means they’re serving those human masters in a kind of machine-driven slavery.

Automation can even lead to an efficiency dead-end. Toyota has placed humans back into its automated factories so that its workers properly understand what the machines are trying to achieve, and can find new and better ways of reaching the desired outcome.

“We cannot simply depend on the machines that only repeat the same task over and over again,” project lead Mitsuru Kawai told Bloomberg. “To be the master of the machine, you have to have the knowledge and the skills to teach the machine.”

Providing meaningful work for humans has long challenged organisations, but until recently that concern has mostly been confined to blue-collar jobs. Now the professions and middle-classes are finding that their knowledge work is also being automated. They’re left to wonder “What will I do now?”

Happy employees are important if for no other reason than companies need paying customers. Replacing the entire workforce with robots is of no use if robots don’t buy what you sell.

Next Steps

  • Make sure that governance of the use of automation in decision-making is on the board’s agenda. The board should define clear parameters for the level of understanding required before decision-making is even partially automated, lest its authority be undermined by poorly understood automated systems.
  • The board should ensure that all automated systems have a custodian who is responsible, and accountable, for overseeing their operation. That person should be able to clearly and readily answer questions about the decision parameters used by the automated systems.
  • The board should also ensure that automated systems have a clearly defined scope. If they begin to make decisions outside their remit, they can add unforeseen risk, much as a staff member might by overstepping their purchase authority. The board should ensure the organisation has appropriate controls in place to detect and correct such behaviour, just as it does to guard against internal fraud.