After a number of so-called ‘revolutions’ in the way society reacts to changing technologies, we find ourselves on the precipice of, if not already tumbling down, the hole that is AI.

The industrial revolution, culminating around the turn of the 20th century, brought electricity, food, and huge benefits to society, but it also killed hundreds of thousands of people and consigned others to lives of destitution. With all big changes come BIG questions, and for the last few years the question has been: what about bias in AI?

AI is a power for good, but can it replace our decision making wholesale? What humans bring to decision making is a guaranteed feedback loop, and the ability to change course quickly when the picture we thought we were painting looks more like a Jackson Pollock than a Monet. That human feedback loop, of course, is still full of bias. So can AI break it and bring us to a fairer world, or will the privileged be the only ones to get human intervention on decisions that are otherwise invisible and unchallengeable?

Seeing the picture AI is already painting is difficult, but if we continue to use the mathematical models of today, or even of yesterday, we will succeed only in replicating our past with greater speed and efficiency: the poor will remain poor, and black and minority groups will continue to be hounded by the police, thrown in prison, and handed longer sentences than their white counterparts.

The questions we choose to answer, and the data we collect, are full of bias, and the models we feed that data into are our opinions embedded in mathematics. Businesses and the free market are interested in the bottom line, not fairness. And what about AI from China? Other cultures have other norms.

So, should AI engineers sign some sort of Hippocratic oath?

Following the market crash of 2008, brought on by what Cathy O’Neil would later call ‘weapons of math destruction’ (WMDs) in her book of the same name, two financial engineers, Emanuel Derman and Paul Wilmott, wrote their own version of such an oath. It reads:

  • I will remember that I didn’t make the world, and it doesn’t satisfy my equations.
  • Though I will use models to boldly estimate value, I will not be overly impressed by mathematics.
  • I will never sacrifice reality for elegance without explaining why I have done so.
  • Nor will I give the people who use my model false comfort about its accuracy. Instead I will make explicit its assumptions and oversights.

So before we fall in and leave complicated, large-scale decision making to deep learning and AI, let’s weigh the consequences: without humanity and fairness, a code of ethics, and a full understanding of what bias will do to the disenfranchised, and to society as a whole, we risk another revolution built on hunger, disease and poverty.