Data Use vs Data Need: What should your business be prioritising?
July 21st, 2020
In this blog series, we tackle an emotive topic: ethics in Artificial Intelligence. Can we apply an ethical framework to AI – and if so, how? Who determines what is ethical in the context of your data, and how can we keep those metrics bias-free and positioned for the best outcome?
The use of AI without ethics is the stuff of nightmares – not to mention the subject of countless movies. While in real life, AI is unlikely to chuck you out an airlock like HAL 9000 in 2001: A Space Odyssey, there are still significant issues around how we generate and use data: chief among them, predictability and trust. An AI programme lacking either of those attributes risks producing bias and discrimination, invasion of privacy, other social ills, or simply poor-quality and unreliable outcomes.
To counter this, a robust regulatory and legal framework must first play an essential role in ensuring the safety and ethics of AI and machine learning as they evolve to provide an ever-widening array of essential services. Developers and data scientists then need a blueprint from which to consider the social and ethical implications of how their projects will be delivered and used.
Important work is being done to help us define the issues and develop ethical solutions. In June, the UK government released a guide to using AI in the public sector, citing the potentially substantial impact on society as motivation for making AI ethics and safety a high priority. The guidelines acknowledge that intentional misuses of AI are rare – but building blocks must be established to avoid unintended and potentially harmful consequences. And in a comprehensive new report this year, “Understanding Artificial Intelligence Ethics and Safety”, Dr David Leslie of the Alan Turing Institute’s Public Policy Programme describes this project delivery environment as one that requires a collaborative approach in “maintaining a deeply ingrained culture of responsibility and in executing a governance architecture that adopts ethically sound practices at every point in the innovation and implementation lifecycle”.
There are certainly great challenges in this field, but also tremendous opportunities to create a real and lasting impact for both individuals and society at large. Along the way, we must keep transparency and human values to the fore, with rigorous and thoughtful decision making driving our projects.
We hope you enjoy this series and as ever, we welcome your comments and feedback. Take a look below to explore the topic:
Tariq Rashid – Founder, Digital Dynamics
The last two decades have seen dramatic advances in automation, from affordable smartphones that can understand your voice commands, to self-driving cars with safety records comparable to human drivers, and computers that can diagnose disease as well as experienced doctors.
Read more here – www.kdrrecruitment.com/safe-and-trusted-ai
Chance Hooper – Director, Red Lion Consultancy Ltd.
To talk of ethics in AI is a curious task: are we to discuss the ethics of using AI for certain applications, or the ethics of the AI itself? The former is more a reflection on our intent, and probably says more about the wielder than the tool being used; the latter opens up a raft of further questions, all of which require an answer before we can decide which route through that maze we wish to navigate.
Artificial Intelligence (AI) is still in its infancy; while the image Hollywood gives us of robots roaming our cities and becoming smarter than humans seems a long while off (if it ever happens), AI is already here, mostly in the form of machine learning. One question that arises most often around AI is: will it take our jobs?
Read more here – www.kdrrecruitment.com/will-ai-take-our-job/