By Para Mullan, Senior Project & Business Relationship Manager, CIPD.
I watched Mo Farah beat fellow contestants in the 10,000 metres race at the Olympics last week. Along with the rest of the population I witnessed him fall, pick himself up and continue running with a sense of purpose — to win the race and a record third gold medal. The Olympic motto is about giving one’s best and striving for personal excellence. It is a motto that ordinary mortals, not just Olympic athletes, can subscribe to in our everyday lives: to do the very best we can.
This motto, though, can’t apply to robots. Robots can never feel a sense of achievement as they carry out the work we program them to do. They can do what they are told to do, flawlessly, but can’t feel pride in personal excellence. Robots have no capacity to do what Mo Farah did in his race — fall, be driven by thoughts of his family to pick himself up immediately, look around to see where his fellow contestants were, and start running again with purpose and determination. This is what being human means. We are able to see the bigger picture, make judgments, and act with consciousness to make a difference in the world.
Hang on a minute you say — this was how robots used to be.
Now we are starting to build smarter robots, capable of acting with feeling. Well, the first robotic lawyer in the US is limited to making appeals against parking fines. This is sophisticated form filling, not evidence of feelings. Robots are also used in the American justice system to make risk assessments on the probability of convicted people offending again. Judges use the robotic assessments to weigh up low risk versus high risk of re-offending and pass sentences accordingly.
It might seem positive: if robots can fully assess past data and predict wrongdoing, judges are better able to hand down appropriate sentences, helping to cut crime and reduce the country’s growing prison population. But there are some valid concerns.
Accountability of robots
First, if a judge makes a mistake and the low-risk offender re-offends, the judge can blame the robot for the decision and in the process absolve themselves of any failure of judgment. This means less accountability. We would be losing an important feature of humanity — humans being accountable for their actions — something else that is meaningless for machines.
Do robots remove bias?
My second concern is that the robotic predictions have consistently rated black offenders as high risk and white ones as low risk. In 2014, then US Attorney General Eric Holder was concerned enough to ask the US Sentencing Commission to examine the risk scores. He said:
“Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualised and equal justice.”
There is not even transparency about the data used to make the predictions. The company that makes this type of software says it wants to protect its intellectual property: hence its secrecy. But given this lack of transparency, it is difficult to accept that the findings are objective.
Robots are what we make of them
Robots can carry out narrow tasks like appealing against parking fines, analysing millions of pieces of data, and working within a structured environment, but they cannot do the reasoning, the thinking, or the judgment calls on situations that judges should be making. Humans make mistakes, but we are accountable for our actions, correct our mistakes and learn from them. Robots are what we make of them — they are simply tools that we humans can use to make our working lives easier. That’s the way we should use them, not think of them as substitutes for humans, or allow them to justify us giving up on our own human traits.