Schrödinger’s Automation in AI and the Automation Bias

Automation and AI

Automation is one way in which we can have human-centered Artificial Intelligence. It can relieve us of repetitive and mundane tasks so that we can make the most of our human skills and talents. That said, we may also need those skills and talents as a check on Artificial Intelligence and its outcomes, or to solve the problems that appear when we don’t open the box. We can be said to have an Automation Bias: the assumption that the technology just works.

We have all heard of Schrödinger’s cat: until you open the box, you don’t know whether the cat is alright, so you can tell yourself that it might be. In organizations, we see a similar approach: if we don’t look at the problems of AI and automation, then we can’t be held accountable or responsible, because we didn’t know about them. It’s akin to leaving the box shut and hoping that the cat will be alright, as long as we don’t look.

In business, AI is poised to have a transformational impact on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies worldwide, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

Automation and the history of the 'blame game'

Automation in Industry

Technology is often touted as the way to automate away errors in human behaviour and so reduce risk. There are highly visible examples where human tragedy has been due, at least in part, to failures in safety leadership ascribed to faulty manual processes. One example is the Piper Alpha disaster of 1988. Piper Alpha was an oil platform in the North Sea, approximately 120 miles northeast of Aberdeen, Scotland. To date, Piper Alpha remains the world’s worst offshore disaster in terms of lives lost: 165 workers died on board, plus two rescue workers who were trapped in debris from the disintegrating rig. The subsequent investigation found that the tragedy was due, in part, to a paper-based filing system in which paperwork was misfiled, leading to crucial information being missed by operatives (Cullen, 1993).

A further example of human tragedy due, in part, to faulty manual processes is the Hatfield rail crash of October 2000, at Hatfield, Hertfordshire. The crash killed four people and injured more than seventy others. A Railway Safety investigation found that it was partly due to incorrect and inadequate checks of the track, checks made of the wrong track, and poor maintenance (Rail Safety and Standards Board, 2018). The Hatfield tragedy led to changes in corporate manslaughter law, ensuring that organisations and individuals could be held accountable (Bain and Barker, 2010; Whittingham, 2004).

How can Automation, Accountability, and Responsibility be handled in terms of the law?

Automation in the Law

In the UK, the Companies Act 2006 brought in changes to Governance and Stewardship in the corporate setting, partly in response to preventable tragedies such as the Hatfield rail crash, for which five executives were charged with manslaughter (Sheikh, 2013). The Act made Directors responsible for promoting the success of the company for the benefit of its members as a whole. In response to a UK Government green paper, the then UK Prime Minister Theresa May noted that ‘some directors seem to have lost sight of their broader legal and ethical responsibilities’ (Department of Business, Energy and Industrial Strategy, 2017).

Under May’s leadership, the UK Government introduced UK Corporate Governance reforms to encourage businesses to take investor and employee concerns more seriously (Department of Business, Energy and Industrial Strategy, 2017). In response to these reforms and the changes in the Companies Act, private and public organizations increasingly turned to technology to prevent such issues, as evidenced in business writing at the time (Manyika, Roberts, and Sprague, 2008).

In other words, making people legally accountable and responsible pushed businesses towards technology as a way to spread that accountability and responsibility. If ‘the computer is to blame’, then companies may be less willing to look at how automation makes crucial decisions; after all, once they looked, they would know.

Automation Bias: an example

Automation can be incredibly helpful, but we can’t take the view that ‘if we don’t look, it might be alright’. Organizations choose what to look at, and they can hold a bias that the technology simply works. Technology on its own can create unintended consequences for businesses and individuals. In the UK, one recent example is the Post Office Horizon system, which was poorly implemented and badly tested. The Horizon IT system resulted in fraud allegations against individual Postmasters and Postmistresses around the UK, while the computer systems themselves were assumed to be dependable and accurate (Christie, 2020). Described as the most widespread miscarriage of justice in UK history, the scandal had tragic human consequences: Post Office staff were sent to prison for crimes they did not commit, faced financial ruin, and were shunned by their families (Lloyd, 2022).

What can we do about Automation Bias?

What can we do to ensure that our businesses are not prey to Automation Bias? Here are a few steps:

Testing should be viewed as necessary, not optional

Organizations can get very hung up on which model to select, but that doesn’t mean they have thought about the testing part of the process. Consider auto-correct as an everyday example: it is often right, but sometimes it produces an excruciating error that is visible to other parties.
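As a minimal sketch of what treating testing as necessary can look like, the snippet below (Python, using scikit-learn) accepts a model only if it clears an agreed accuracy threshold on a held-out test set. The dataset, model choice, and 95% threshold are illustrative assumptions, not a prescription:

```python
# A minimal sketch of an acceptance test for a trained model.
# The dataset, model, and 95% threshold are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative data; in practice this is your own business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Gate acceptance on held-out data, never on training performance.
THRESHOLD = 0.95  # assumption: agree this figure with the business up front
accuracy = accuracy_score(y_test, model.predict(X_test))
assert accuracy >= THRESHOLD, (
    f"Model accuracy {accuracy:.2%} is below the agreed {THRESHOLD:.0%} threshold"
)
print(f"Holdout accuracy: {accuracy:.2%} - accepted")
```

The design point is that the acceptance criterion is written down before deployment, so ‘the model works’ becomes a testable claim rather than an assumption.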

Be sure of the business question so that you get the answer you were looking for

In my experience, organizations set out to do one thing with AI and then change their minds very quickly because the data has led them to get sidetracked. In an agile approach, new requirements can be added to the backlog rather than being allowed to derail the original question.

What is the worst that can happen?

Plan for failures; don’t simply consider the happy path. Ask what happens when the input data is missing or malformed, when the model errors, or when its answer is a low-confidence guess, and design the response in advance, as in the sketch below.
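As one hedged illustration of planning beyond the happy path, the sketch below wraps a prediction in explicit failure handling. The `predict_with_confidence` method, the `amount` field, and the 0.8 threshold are hypothetical placeholders for your own model and risk appetite, not a real API:

```python
# A minimal sketch of handling the unhappy paths around a model decision:
# validate input, catch inference errors, and escalate low-confidence
# answers to a human review queue rather than trusting them blindly.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumption: tune to your own risk appetite


@dataclass
class Decision:
    outcome: str  # e.g. a model label, or "needs_human_review"
    reason: str   # an explicit, auditable reason for the outcome


def decide(record: dict, model) -> Decision:
    """Route a record through the model, with explicit unhappy paths."""
    # Failure path 1: malformed input never reaches the model.
    if not record or "amount" not in record:  # "amount" is an illustrative field
        return Decision("needs_human_review", "missing required fields")

    # Failure path 2: the model itself may error at inference time.
    try:
        # predict_with_confidence is a hypothetical method on your model.
        label, confidence = model.predict_with_confidence(record)
    except Exception as exc:
        return Decision("needs_human_review", f"model error: {exc}")

    # Failure path 3: low-confidence answers are escalated, not trusted.
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision("needs_human_review", f"low confidence ({confidence:.2f})")

    return Decision(label, f"model confidence {confidence:.2f}")
```

The point of the design is that every unhappy branch returns an explicit, auditable reason; nothing fails silently, and nothing defaults to assuming the system is right.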

 

Next Steps

If you’d like to know more about how I can help, please book some time here or contact me on LinkedIn.  If you need any references for the above, please get in touch and I can provide them separately.
