THE AUTOMATION PARADOX

Automation exists to make life easier. Thermostats regulate the temperature in our homes so that we don’t have to think about it. Dishwashers do our washing up. Direct debits pay our bills. And as automation becomes more powerful, so we become freer. Robots are doing more manufacturing and warehouse work than ever. Autopilots are flying our planes and will soon be driving our cars. Life has never been so comfortable.


Until things go wrong. There are examples of systems that have been automated so well that ongoing, meaningful human input ceases to be necessary. And, in these cases, instances of system failure, though rare, are bound to be severe – because there is unlikely to be anyone around who can step in to help. One could almost say that good automation, by definition, carries within itself the seeds of its own catastrophe.


This is what they call the automation paradox.


Awareness of the idea has been around for almost as long as industrial automation itself. The British cognitive psychologist Lisanne Bainbridge, in her influential 1983 paper ‘Ironies of Automation’, was thinking of machine control in the process industries when she observed ‘the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.’

She was also thinking of the flight deck – over twenty years before the Air France Flight 447 disaster provided a deadly illustration of her point. In 2009, on its way from Rio de Janeiro to Paris, the passenger plane’s autopilot disengaged after its airspeed sensors were blocked by ice crystals. Suddenly forced into action in dark and turbulent conditions, the captain and two co-pilots struggled to interpret the situation, pitched upwards instead of down, panicked and fatally lost all control.

It is difficult not to speculate that the pilots might have managed better had they been more practised flyers; that, in appearing to make their jobs easier, the autopilot had in fact rendered them peculiarly vulnerable to misadventure.

The impact of automation on humans

The theory is simple: the more automation relieves human beings of exposure to challenging work, the less accomplished at that work human beings will become. And it is apparently not just theory. Recent research from the United States suggests that a growing reliance on robotic surgery in hospitals is – for all its great successes – nevertheless directly connected to a training deficit in the early careers of junior doctors.

The problem could even extend beyond the sphere of specialist expertise into the realms of general cognitive competence. A growing reliance on AI for support in a range of basic life skills – from map reading to language translation to arithmetic itself (pocket calculators provide the simplest example) – means that, with this help, human beings have never been more capable; take it away, and they have never been less.

If the paradox has anything to teach us it would seem to be that, while automation frees us from certain kinds of work, it simultaneously obliges us to know more than ever. Contrary to popular myth, airline pilots today do not have it easy. Not only do they still need to know how to fly planes (and to keep that knowledge fresh despite limited opportunities for practice) but they additionally need to know how autopilots work – not to mention why and how they might stop working.


Not only does automation actually increase the knowledge burden, then, but it is hard to see, at least in the medium term, how that burden can fail to grow. As automated systems become steadily more complex (because more powerful, more digitalised and thus more interconnected), troubleshooting and maintenance become correspondingly more difficult. System failures may be less commonplace than before, but they are less predictable and require both more time and higher levels of expertise to remedy.

Clearly, not every system user can be a system expert. But the greater the overall rift between technology and technological literacy, the greater the risk of industrial and other serious accidents. The automation paradox thrives only in that rift.


Education, in this regard, has its work cut out for it. For just as the digital and other technologies behind automation grow more sophisticated, so too in many areas they are becoming more sequestered. The commercial viability of cloud-based AI and other services, for example, has much to do with their user-friendliness: no technical knowledge required. (But note that when that once-in-a-blue-moon server outage strikes, countless system users are left powerless.)

Hence, finally, the vital importance of strong and yet accessible system design.


In these terms the automation paradox can be fought by making systems as robust, or as resilient, as possible. Various strategies for creating so-called fault tolerance in computer engineering, for example, have been developed over the years: built-in self-test mechanisms, back-up strategies and fail-safe protocols are all attempts to protect systems (and people) from disaster. Chaos engineering subjects systems to planned turbulence so that engineers can find and fix weaknesses before the real thing strikes.

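To make the fail-safe idea concrete, here is a minimal sketch in Python. Every name and value in it (the sensor, the default temperature, the simulated fault) is hypothetical, chosen only to illustrate the pattern: on failure, degrade to a conservative default and flag the fault loudly rather than failing silently.

    # Minimal fail-safe sketch; all names and values are hypothetical.
    SAFE_TEMPERATURE_C = 20.0  # conservative default to report on failure

    def read_temperature_sensor():
        """Stand-in for a real sensor read; raises to simulate a rare fault."""
        raise IOError("sensor offline")

    def get_temperature():
        """Fail-safe read: never crash, and never fail silently."""
        try:
            return read_temperature_sensor()
        except IOError as fault:
            # Fail safe: return a known-safe value and surface the fault,
            # so that a human can step in before any damage is done.
            print(f"WARNING: {fault} - using safe default")
            return SAFE_TEMPERATURE_C

    if __name__ == "__main__":
        print(f"Current reading: {get_temperature():.1f} C")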

Solving the automation paradox?

Such measures, however, do not ultimately solve the automation paradox; if anything, by maximising system self-reliance, they threaten to reinforce it. In design terms a more credible attempt to undo the paradox, or to prevent it from becoming problematic, lies in the concept of human-centred automation.


The idea covers a variety of ways in which the ease of human interaction with a control system is made a primary design consideration. This might be something as simple as the use of audible alarms triggered by malfunctions. Or it might involve making sure that processes do not run so fast (a natural pressure in the marketplace) that human intervention, when called for, always comes too late.

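As a rough illustration of those two measures, the sketch below pairs an audible alarm on malfunction with a deliberate pause that paces the process to human reaction time. The process step, the simulated fault and the five-second operator window are all hypothetical.

    import time

    OPERATOR_WINDOW_SECONDS = 5  # hypothetical pacing for a human response

    def process_step(step):
        """Stand-in for one step of an automated process."""
        if step == 3:
            raise RuntimeError("malfunction in step 3")  # simulated fault

    def sound_alarm(message):
        """Audible alarm placeholder; the BEL character rings the terminal bell."""
        print(f"\aALARM: {message}")

    for step in range(1, 6):
        try:
            process_step(step)
            print(f"step {step} completed")
        except RuntimeError as fault:
            sound_alarm(fault)
            # Hold the process rather than racing on, so that human
            # intervention does not come too late.
            print(f"holding {OPERATOR_WINDOW_SECONDS}s for the operator...")
            time.sleep(OPERATOR_WINDOW_SECONDS)
            break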

More recently, digital developers have begun to look at ways in which artificial intelligence can help systems learn the properties of human interaction and adapt their modes of interface accordingly.


As automated systems grow ever more powerful it is crucial that they remain as open as possible to ordinary human understanding and control – so that their paradoxical potential for damage remains closed.