My introduction to the ethical issues and lack of diversity behind Artificial Intelligence

“The problem with Artificial Intelligence is … it’s just a talking point. It’s not really ‘that’ relevant and even if it were to be relevant to me, there’s nothing I can do about it.”

That is the sentiment of most people I speak with outside of the tech industry. The average consumer may just be coming to terms with the meaning of the word ‘algorithm’ and may associate it with anything from the algebra learnt at school (or not, as the case may be) to predicting the winner of the next Grand National, the Champions League or the next Prime Minister. In the main, however, I would argue that Artificial Intelligence remains a mystery to most people, myself included.

And that is why I found it imperative to attend the Breakfast Briefing organised by Perfect and hosted at Ricoh UK this week. My primary aim was to bring myself up to speed on the development of Artificial Intelligence so that I might be better able to engage with our clients at makepositive, who are increasingly adopting Machine and Artificial Intelligence. My secondary aim was to gain an understanding of the ethics underpinning the development of AI and MI. I was also keen to meet and network with others who are already streets ahead of me in terms of industry knowledge and the applied science of Artificial Intelligence. Despite this chasm in knowledge, I usually find some commonality to justify my place at these events, no matter how tenuous.

And so to the event.

Despite having read the invitation several times, and gained a good understanding of the synopsis, I was entirely unprepared for what was to follow.

I managed to read the discussion document for a second time whilst travelling to the event. In the preliminary session, my attention was fully drawn to the superbly written paper, which sets out, in technical yet simple terms, the issues around Artificial Intelligence and in particular the absence of any recognised standards and controls.

And so I entered the room fully versed in the global challenges facing the New Frontier … the world of Artificial Intelligence.

Having co-authored a very simple algorithm several years ago, designed to measure social media performance, I soon found comfort in the fact that I was amongst others who had a limited interest in the tech. What made us one, regardless of the technical knowledge or industry sector we represented, was the shocking realisation that none of us was aware of any standards governing the creation and implementation of Artificial Intelligence. Yes, we are all bound by our various codes of conduct and the usual norms and controls that one would expect of senior representatives from some of the world’s largest companies … but we could find no evidence of certified, approved or moderated interrogation of the algorithmic processes driving Artificial and Mechanical Intelligence.

Does this matter?

A simple example was described by one of the delegates: imagine a recruitment company that identifies a need to streamline its recruitment process and filter out inappropriate applicants via an online survey. Having interrogated the results, the firm may conclude, with some statistical logic, that candidates with, say, red hair routinely fail the interview process. Other characteristics, such as level of qualifications, may well be factored into an algorithm designed to eliminate those with a high statistical probability of failing the interview. The result is that redheads no longer get put forward for the job. Statistically, this may seem sensible. Simplistic, but possible.

The problem lies in the source of the logic. If the decision to penalise redheads was based on personal opinion or discrimination, whether conscious or otherwise, then the trend may be set for evermore. There is currently no arbiter of such processes. What is perhaps even more concerning is the distinct possibility that, if left unchecked, the resulting data may be reused further down the process and influence ongoing, new formulae.
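The feedback loop described above can be sketched in a few lines of code. This is a hypothetical illustration, not any real firm's system: it assumes a store of past interview outcomes that are already biased, and a naive filter trained on them. The filter reproduces the bias, and because rejected candidates never generate new outcomes, each round of screening can only reinforce it.

```python
# Hypothetical sketch of the redhead scenario: a naive screening filter
# trained on biased historical interview outcomes reproduces that bias.

historical = [
    # (hair_colour, qualified, passed_interview) - outcomes already biased
    ("red", True, False),
    ("red", True, False),
    ("brown", True, True),
    ("brown", False, False),
    ("blonde", True, True),
]

def pass_rate(records, hair):
    """Historical interview pass rate for one hair colour."""
    outcomes = [passed for colour, _, passed in records if colour == hair]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def build_filter(records, threshold=0.5):
    """Reject any hair colour whose historical pass rate falls below threshold."""
    colours = {colour for colour, _, _ in records}
    rejected = {c for c in colours if pass_rate(records, c) < threshold}
    return lambda candidate: candidate[0] not in rejected

screen = build_filter(historical)

# A fully qualified redhead is now filtered out before anyone meets them.
print(screen(("red", True, None)))    # False - never put forward
print(screen(("brown", True, None)))  # True

# And because screened-out candidates produce no new pass data, the next
# model trained on the accumulated records can only confirm the old bias.
```

The point of the sketch is that no single line is malicious: the discrimination enters through the training data, and no step in the pipeline is obliged to question it.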

And so we can see the seriousness of the problem facing the ratification and management of Artificial and Mechanical Intelligence in the rush to the New Frontier.

I can say from personal experience that the design and implementation of any algorithm must be checked for rigour and bias. In the case of my algorithm, global reports and surveys were subsequently commissioned and published on the back of the data we produced, in good faith, but without proper scrutiny. This should not happen.

What happens next?

To do nothing to protect the integrity of the future of Artificial Intelligence is clearly not an option. Even since writing this blog, conversations around the issues of diversity and inclusion in AI have been progressing in Whitehall, across the corridors of Leeds University Business School and amongst my colleagues at makepositive and Salesforce. Impetus is growing and actions are currently being planned to address this universally challenging yet crucial topic. If you would like to be part of the debate, or better still, the solution, then please get in touch via nadio.granata@makepositive.com.

My thanks go to Vicky Sleight, Founder of Perfect, for creating this initiative and inviting me to this career-changing event. And special thanks go to Vicki MacLeod for producing such an interesting and thorough review of what will undoubtedly become a major concern to global business as the world starts to take seriously not just the amazing benefits but also the ethical risks associated with Artificial and Mechanical Intelligence.

Nadio Granata MCIM, FHEA, AIRP is a Teaching Fellow (Marketing), company director and consultant to the tech and recruitment industries.