BusinessDay: Humans are criminally liable for AI’s mistakes

Adam blames Eve, who in turn blames the snake. Bill Clinton points his finger at the camera, saying “I did not have sexual relations with that woman”. Hansie Cronje distances himself with “the devil made me do it”. The blame game is an innate ability of Homo sapiens.

When I was in primary school, my friends and I were playing rugby one afternoon when a man from a house across the street walked up to the playing field and furiously demanded to know which of us had thrown a brick through his living-room window. With choir-like precision, “it wasn’t me” rang out across the field. One of my friends blamed the brick. It was a hilarious excuse and I remember it to this day.

The brick is a man-made object, incapable of making decisions or catapulting itself through the air into a window. Blaming it is like a murderer blaming the gun for his crime. Humans make things, and we use those things to harm others. What will happen when the things we make learn to decide for themselves? What if our creations can make mistakes or commit crimes? Does the responsibility lie with the thing or with its creator?

We have already created such things. Artificial intelligence (AI) and automated decisioning systems are ingrained in our everyday lives, whether we are aware of it or not. Organisations and governments mine our data from a variety of sources: our internet searches, purchasing habits, lifestyle and behavioural data are all collected. Mobile-phone usage, social-network interaction and video-surveillance systems place us under the ever-watchful eye of Orwell’s Big Brother.

The ability of AI systems to make automated decisions about people rests on profiling, the process of analysing a person’s behaviour, interests and routines, as well as their personality and characteristics. This information is used to categorise people into groups: individual profiles are created as the algorithms identify correlations between various behaviours and features.
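To make that concrete, here is a minimal, hypothetical sketch in Python of how such a grouping step might work. The behavioural features, the figures and the choice of k-means clustering are all illustrative assumptions, not a description of any particular system.

```python
# Hypothetical profiling sketch: group people into segments from
# behavioural features. Feature names and data are invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one person: [web searches/day, purchases/month,
# social posts/week, hours of video watched/day]
behaviour = np.array([
    [40, 12, 30, 4.0],
    [ 5,  2,  1, 0.5],
    [35, 10, 25, 3.5],
    [ 4,  1,  2, 1.0],
    [50, 15, 40, 5.0],
    [ 6,  3,  1, 0.2],
])

# Cluster the population into behavioural segments ("profiles").
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(behaviour)
print(segments)  # e.g. [0 1 0 1 0 1]: each person is assigned to a profile
```

The point is only that a “profile” is nothing more than a statistical segment inferred from behavioural data, to which decisions are then attached.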

Banks use it to determine the risk and creditworthiness of clients, medical institutions to prioritise treatment for patients, universities to assess the eligibility of academic applicants, and HR teams to screen job applicants and to determine staff eligibility for promotions and salary increases.

The EU’s General Data Protection Regulation (GDPR) defines profiling as “any form of automated processing of personal data … to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”

The algorithms are created by humans and therefore inherit human bias. A person’s race, gender, sexual orientation or even geography may affect how the system reaches its conclusions and acts through automated decisioning.
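A hypothetical sketch shows how this can happen even when no protected attribute appears in the data. The scoring rule, weights and figures below are invented; the mechanism, geography acting as a proxy for a protected attribute, is the one described above.

```python
# Hypothetical illustration: a scoring rule that never looks at race
# can still discriminate when geography acts as a proxy for it.
# All names, weights and data here are made up.

applicants = [
    {"name": "A", "income": 30_000, "postcode_risk": 0.9},  # historically red-lined area
    {"name": "B", "income": 30_000, "postcode_risk": 0.1},
]

def credit_score(applicant):
    # Identical incomes, but the postcode term drags one score down.
    return 0.7 * (applicant["income"] / 50_000) - 0.5 * applicant["postcode_risk"]

for a in applicants:
    decision = "approve" if credit_score(a) > 0.3 else "decline"
    print(a["name"], round(credit_score(a), 2), decision)
# A: 0.42 - 0.45 = -0.03 -> decline
# B: 0.42 - 0.05 =  0.37 -> approve
```

Two applicants with identical incomes receive opposite decisions, and no human ever typed the word “race” into the model.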

A topic of debate is whether AI systems should be held accountable for their mistakes. There is currently no justification for granting legal-person status to machines. Accordingly, someone else should be held responsible if the AI discriminates or causes harm. Herein lies the conundrum: who is responsible? Is it the C-suite, the AI engineers or the legal department?

Unfortunately, AI regulation is still far from enforcing responsibility on “the who” that created “the what”. Blaming the devil, or the brick, for the cracks in the window should never be acceptable as we use the most powerful technology ever created.

• Prof Steyn is on the faculty at Woxsen University, a research fellow with Stellenbosch University and the founder of AIforBusiness.net
