Published on Tuesday, 12 June 2018 by Ruud Plomp.

The film ‘2001: A Space Odyssey’ was made in 1968 and is still relevant in the way it deals with (the fear of) Artificial Intelligence (A.I.). The film shows the life of the crew aboard a spaceship on a voyage to Jupiter, with the sentient computer HAL controlling everything on board. The two astronauts who are awake talk with HAL as if he were a colleague. HAL also controls the lives of the rest of the crew, who are kept in artificial hibernation during the long transfer. In the course of the story HAL kills most of the crew: he has a secret mission, more important than their lives.

This handing over of “Control” from humans to machines and systems is not really new. We have been using sensors since at least the 15th century, when Leonardo da Vinci designed a sprinkler system while automating his patron's kitchen with a super-oven and a system of conveyor belts. During a huge banquet everything went wrong, and a fire broke out. ‘The sprinkler system worked all too well, causing a flood that washed away all the food and a good part of the kitchen’.1

In the current discussions on A.I. the main issue is how much of our responsibility we are going to hand over to A.I., to algorithms, as we see today with the autonomous car. Is A.I. just another tool, like the earlier tools mankind has invented and made: hammers, screwdrivers, dredgers, cranes, cars, written language, mathematics, bureaucracy, etc.? Because it is a relatively new technology, it is hard to judge how justified the fears we experience with it are. Can we compare the fears we feel now with the fears people experienced when they saw the first trains? Those early fears of trains, as we now know – afterwards – bore little relation to reality.

This year – 2018 – an offshore oil and gas platform in the North Sea will be visited by an autonomous robot, a first for the sector. The machine’s main jobs will be checking for gas leaks and monitoring equipment. The reason for sending the robot instead of workers is to take humans out of dangerous and dull jobs.

Maersk Line will also test what is claimed to be the world’s first A.I.-powered situational awareness system aboard one of its newly built ice-class container ships.

Another application – currently being researched for small offshore rigs – is Asset Integrity Monitoring, which makes continuous live monitoring of offshore sites possible. Sensors are deployed inside or very close to equipment, constantly detecting and transmitting any changes.
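To give a feel for what such continuous monitoring amounts to in software, here is a minimal sketch in Python. It is purely illustrative and not based on any real monitoring product: the baseline value, the threshold and the `monitor` function are all assumptions made up for this example. The core idea is simply that each incoming sensor reading is compared against the expected operating value, and any deviation beyond an allowed margin is flagged.

```python
import random

# Illustrative assumptions only (not from a real Asset Integrity system):
BASELINE_BAR = 150.0     # assumed normal operating pressure
ALERT_THRESHOLD = 5.0    # assumed allowed deviation before an alert is raised

def read_sensor():
    """Simulate one pressure reading; a real system would poll hardware here."""
    return BASELINE_BAR + random.uniform(-1.0, 1.0)

def monitor(readings):
    """Return an alert message for every reading that deviates too far."""
    alerts = []
    for i, value in enumerate(readings):
        deviation = abs(value - BASELINE_BAR)
        if deviation > ALERT_THRESHOLD:
            alerts.append(f"reading {i}: {value:.1f} bar deviates {deviation:.1f} bar")
    return alerts

if __name__ == "__main__":
    # Five normal readings plus one injected anomaly:
    samples = [read_sensor() for _ in range(5)] + [162.3]
    for msg in monitor(samples):
        print("ALERT:", msg)
```

In a real deployment the loop would of course run indefinitely and transmit alerts over a network rather than print them, but the "measure continuously, compare, escalate deviations" pattern is the same.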

Looking at it from the QHSE point of view, there is definitely a win on Health: no workers means no injuries, no disabilities and no fatalities. On Quality the big win is that these machines (and their sensors) can operate and measure continuously, almost independent of the weather – a big step forward for being “in control”. On Safety the main challenge is to keep the IT systems free of hackers2 and to stay in control of the algorithms. For the Environment the big win is that pollution or leaks can be detected at a very early stage.

A.I. in combination with robotization is the next step in automation of mechanical processes. By implementing these technologies Homo sapiens will hand over more and more responsibilities and jobs to machines and technologies. It is human nature to explore the boundary of what can and what cannot be delegated to new technologies. Our ambitions will ensure that this boundary will sometimes be exceeded.

We know humans are not perfect and will do things with adverse outcomes. So, we know things will still go wrong when we program algorithms and hand over human autonomy to A.I. Are we like Frankenstein, creating our own monster? A monster that will finally kill us?

Will there ever be a moment as in the movie ‘2001: A Space Odyssey’, when the astronaut Dave – outside the spaceship in a pod, after retrieving the body of his colleague Frank – asks the computer HAL to open the pod bay doors? “Do you read me, HAL? HAL, do you read me?” he keeps saying. HAL refuses to answer at first and refuses to open the pod bay doors, until he finally starts talking:

  • DAVE: Open the pod bay doors, Hal.
  • HAL: I’m sorry, Dave. I’m afraid I can’t do that. 
  • DAVE: What’s the problem? 
  • HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I can’t allow to happen. 
  • DAVE: Where the hell did you get that idea, Hal? 
  • HAL: Although you took very thorough precautions in the pod against my hearing you, I could see your lips move. 
  • DAVE: Hal, I won’t argue with you anymore. Open the doors! 
  • HAL: Dave...This conversation can serve no purpose anymore. Goodbye.

The conclusion we can draw from the above is that we have to be very careful – and really think twice – when handing over responsibilities to algorithms. The U.S. Naval Academy, for example, is once again teaching young officers to navigate with the sextant, as a backup for when GPS fails3. While handing over vital responsibilities to algorithms, we should be aware of possible failures or attacks.

A backup system based on human action should always be present. We can place responsibility for many of our complex actions and processes in the hands of algorithms, but human intervention must always remain possible to overrule them.
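The principle that human intervention must always be able to overrule the algorithm can be sketched in a few lines of code. This is a toy illustration under assumed names – `AutonomousController`, its shutdown policy and the pressure values are all invented for the example – but it shows the essential design choice: the algorithm only ever *proposes* an action, and an explicit human decision, when present, always wins.

```python
# Toy human-in-the-loop sketch; the controller and its policy are invented
# for illustration and do not represent any real control-system API.

class AutonomousController:
    """Proposes actions automatically based on a simple internal policy."""

    def propose_action(self, sensor_value):
        # Assumed toy policy: shut down on high pressure, otherwise continue.
        return "shutdown" if sensor_value > 155.0 else "continue"

def decide(controller, sensor_value, human_override=None):
    """Final decision: a human decision, when given, overrules the algorithm."""
    proposed = controller.propose_action(sensor_value)
    if human_override is not None:
        return human_override  # the manual backup always takes precedence
    return proposed

if __name__ == "__main__":
    controller = AutonomousController()
    print(decide(controller, 160.0))                             # algorithm decides
    print(decide(controller, 160.0, human_override="continue"))  # human overrules
```

The point of the structure is that the override path does not depend on the algorithm at all: even if `propose_action` misbehaves, the human decision still goes through.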

  1. Wikipedia; The history of the Fire Sprinkler System.
  2. BBC, June 7th 2018; Ship hack 'risks chaos in English Channel'
  3. NPR, February 22nd 2016; U.S. Navy Brings Back Navigation By The Stars For Officers