Ethics in AI
July 11th, 2019
Ever since robots and machines became part of our lives, we have lived with the question: will the robots rise up and take over? We see it in films and on TV, but could it become a possibility one day?
The field of Artificial Intelligence (AI) is growing rapidly. AI and machine learning are becoming part of our lives, and while at the moment there doesn’t seem to be much of a threat, I want to take a look at whether AI can go too far and whether there are already warning signs of this happening.
It was only at the beginning of the month that it was reported that Facebook had shut down its AI bots after they started communicating with each other in their own language. An AI bot creating a language that appears to be similar to broken English may not set off alarm bells, but it should: the AI was departing from the behaviour it had been programmed with, using machine learning to better understand as it ‘worked’.
This is a huge advancement for AI and machine learning and shows the potential it could have for the future, but it has caused concern, especially in the tech world. Professor Stephen Hawking recently warned that AI could one day surpass the intelligence of humans. He also warned that the “real risk of AI” is not malice (as seen in the films) but competence.
“The real risk with AI isn’t malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble” – Stephen Hawking
And it’s not just Professor Stephen Hawking who is concerned: over 100 tech leaders, including Tesla CEO Elon Musk, have written a letter to the United Nations (UN) warning of “killer robots” and arguing that AI should not be used to manage weaponry. Their concern is that once an AI system has complete control of a weapons system, it could select targets and engage in warfare without human intervention. Part of the letter reads: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”.
This seems pretty scary to me and answers my question about whether AI can go too far! No one is entirely sure how fast AI is developing, and while AI creating its own language and learning all the time may not seem like a massive development (compared to warfare), it certainly no longer seems like a far-off future confined to the movie screens.
Do you think AI can go too far? Do you think “killer robots” could be in the near future?
This blog was originally published on LinkedIn.