We Need to Talk About Robocop
Wednesday 12 March 2014

I recently saw the 2014 version of Robocop. As an action movie it is somewhat disappointing, but as a piece of social commentary it is extremely thought-provoking.
Set in the not-too-distant future, the film opens with a scene in a Middle Eastern town where US troops are ‘peacekeeping’ with the assistance of militarized robots. These are depicted as two-legged vehicles with built-in AI brains, an array of scanners, and a variety of deadly weapons. The public relations arm of the defense company that supplies the robots is broadcasting the patrol live to prove how effective robotic peacekeepers are. Their aim is to have the law in the United States changed to permit the use of robots for law enforcement there, a market worth many billions of dollars. The propaganda exercise backfires when a group of rebels attacks the patrol and one of the robots guns down a young child that it identifies as a ‘threat.’
While the movie is pitched as science fiction, this particular scene has more than a touch of present-day reality to it. The only two differences are that today’s military drones are aircraft (not walkers), and that they are piloted remotely by humans on the ground (rather than by onboard AI). They are used with impunity across the Middle East for the targeted killing of (suspected) rebels, with substantial collateral damage. The justification for their use – to ‘save the lives of American servicemen and women’ – is the same as in the film. While the President finds it unproblematic to deploy them abroad, the idea of using them on home turf is far more controversial. In the film, harassment by robots breeds discontent and rebellion amongst the occupied population. Similarly, in the real world, drones have been described as “Al Qaeda's best recruitment tool ever”.
Technological trends point towards computers being in the driving seat in the future. The movie puts the spotlight on two central moral dilemmas of using computer algorithms to make life-or-death decisions. Firstly, the great strength of algorithms – that they have no emotion, and so are prone neither to bias nor to fear – is also a key weakness. The absence of empathy means that a computer cannot replicate the nuances of human judgment. Secondly, algorithms are touted as removing the scope for ‘human error’. But if a computer algorithm takes a human life it was not meant to, who is responsible? Presumably, if artificial intelligence becomes sufficiently sophisticated, the artificial entity itself would bear moral culpability. But short of full AI, could we hold an individual programmer responsible? Or the company that manufactures the robot? This is, as yet, far from clear, and these questions are just as relevant when we consider civilian applications of robotics – in factories, say, or in self-driving cars – which are very much present-day concerns.
We also need to debate the legitimate use of technology by the State. We have seen from Edward Snowden’s leaks that the State (at least, the US and UK governments) is willing to use every technological tool available to enhance its own power and entrench its position. Civil servants and security contractors adopt a pragmatic mindset and swat away any concerns they might have about morality. The very phrase ‘National Security’ seems to obliterate the civil rights of the individual. This is something we must bear in mind if the idea of robotic policemen or ‘search and destroy’ drones deployed on home turf seems implausible.
Finally, there is a very real chance that robots may soon constitute a new class of Weapon of Mass Destruction. The pattern of the 20th century was for scientists to create lethal weapons (nuclear, chemical, biological), for national governments to use them on some enemy, and then, upon realizing their potential to devastate entire populations, to declare them illegal. The problem with outlawing a weapon only after using it is that this might not work next time – the weapon might be sufficiently destructive that it annihilates its creator. This scenario has long been explored in science fiction – from John Wyndham’s Day of the Triffids to Michael Crichton’s Prey, and from James Cameron’s Terminator to Danny Boyle’s 28 Days Later. Now it is something we must take seriously as a future scientific reality.
Labels: Al Qaeda, artificial intelligence, Drone wars, Drones, Edward Snowden, Ethics, John Wyndham, Robocop, robotics, Terminator, Triffids, UAV, War on Terror, WMD