It’s not Electric Sheep the Androids dream of, it’s SkyNet…and we deserve it

To talk of Ethics in AI is a curious task: are we to discuss the ethics of using AI for certain applications, or the Ethics of the AI itself? The former is more a reflection on our intent and probably says more about the wielder than the tool being used; the latter opens up a raft of further questions, all of which require an answer before we can decide which route through that maze we wish to navigate.

Assuming for the moment that when we say AI, what we actually mean is a bit of Machine Learning and bots running with minimal constraint to analyse images, video, sounds, data, etc., then the subject of the Ethical use of AI is fairly simple: if you wouldn’t let someone carry out their proposed activity with humans doing the work, then you really don’t want a computer doing it without oversight. Why? Well, let’s look at an example: the polarising of politics on social media. Don’t panic, I’m not about to go party political, but it is a very clear example of what happens as you remove human interaction from a situation.

Before the likes of Facebook, Twitter and Instagram, if you wanted to talk to friends, you had to meet them somewhere and actually interact. Right wing, left wing, male, female, straight, gay, black, white, rich, poor – people come in all shapes and sizes, and you have to rub along with them in your daily life if you wish to actually live it. It’s really hard to maintain bigotry and still buy stuff or go to work, unless you mail-order everything from Rednecks-R-Us and live like the Unabomber. If you spend any time actually out in the world talking to people, you very soon realise that we all have to exist in a middle ground and only voice our more niche views or interests in narrower circles. It forces people to compromise.

Now, along comes a bright young kid who doesn’t really get people. It’s like he has all the pieces of the puzzle but can’t fit them together. Wouldn’t it be great if he could somehow model social interactions and figure the rules out? Maybe it might even help him identify people likely to get along with him, so there’s less chance of trying to make a friend and being rejected. It works on his small group of contacts, so he decides to test the model further: first on his entire campus, then on other colleges, and finally on about a third of the population.

And, for a while, it’s great – old friends reconnect, new ones meet online over a shared love of Harry Potter fan fiction and cat videos… but there’s a problem. The bigger the dataset gets and the more refined the model becomes at connecting people accurately, the more infrastructure it takes, so there is a need to raise money and to monetise it. So they bring in ads, because it’s obviously great to be able to identify customers who are 100% on target and, wow, you can talk to them directly, like a person. And suddenly businesses are loose in what was a network of tight-knit personal clusters, like a fox in a henhouse. But the problem is that the Users still see it as a trusted area where they talk to “real” people, not corporate personas and bots. And everyone has it – literally everyone – and for many people, just as Google is “search”, this network is their “social” life. All of which is fine, apart from two things:

  • Social Media is the polar opposite of Society – society requires compromise in our views in order to coexist peacefully with those around us, whilst Social Media allows us to coexist peacefully by only exposing us to those who require no compromising of our views.
  • The Social Graph model, whilst morally agnostic in itself, is ripe for manipulation (a toy version of the ranking involved is sketched below).
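
To make that second point concrete, here is a minimal sketch of the kind of ranking involved. Everything in it – the topics, weights and scoring rule – is illustrative, not any real platform’s algorithm: a feed scored purely on agreement converges on a bubble.

```python
# Toy feed ranker: scores each post by how closely it matches the
# user's existing views, so dissent sinks and the feed converges on
# agreement. All names and numbers are illustrative, not any real
# platform's algorithm.

def rank_feed(user_views: dict, posts: list) -> list:
    """Sort posts so the most agreeable appear first."""
    def agreement(post):
        # Positive when user and post agree on a topic, negative when
        # they clash; unknown topics contribute nothing.
        return sum(user_views.get(topic, 0.0) * stance
                   for topic, stance in post["stances"].items())
    return sorted(posts, key=agreement, reverse=True)

user = {"immigration": 1.0, "taxation": -0.5}      # stances in [-1, +1]
posts = [
    {"id": 1, "stances": {"immigration": 0.9}},    # strong agreement
    {"id": 2, "stances": {"immigration": -0.8}},   # strong disagreement
    {"id": 3, "stances": {"taxation": -0.6}},      # mild agreement
]
print([p["id"] for p in rank_feed(user, posts)])   # [1, 3, 2]
```

Rank by agreement (or by its close cousin, engagement) for long enough and the disagreeable posts may as well not exist; the bubble builds itself.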

Into our social network we now release a Chatbot. It is designed to look at what people are saying, use an ever-growing “bag of words” to pick up sentiment and subject matter, and respond accordingly. This would be fine in something like a call centre, where conversations follow set topics and are largely conducted with the aim of solving a problem, but we’ve let ours wander, like a lost little child with a phrase book in its hand, into the digital equivalent of 1970s New York at midnight as seen in Mean Streets and Taxi Driver. It identifies that two factors in politics (left and right) seem to dictate much of the sentiment, so it filters everything it sees as either “left” or “right” and weights accordingly. Anything that scores low on either scale is discarded, and it starts to improve its “bag of words” and monitor the scores on what it posts – more likes means the message it created is good, as does the number of responses with a positive sentiment score.
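
As a rough sketch of that loop – the word lists, weights and scoring here are invented for illustration; the real bot’s internals were never published:

```python
# Toy version of the bot's feedback loop: score text against a
# "bag of words", and fold the vocabulary of well-liked replies back
# into the bag. Word lists and weights are invented for illustration.
from collections import Counter

bag_of_words = Counter({"great": 1, "terrible": 1})   # seed sentiment terms

def sentiment_score(text: str) -> int:
    """Crude sentiment proxy: total weight of known loaded words."""
    return sum(bag_of_words[w] for w in text.lower().split()
               if w in bag_of_words)

def learn_from_feedback(reply: str, likes: int) -> None:
    """More likes => every word in the reply gets weighted up,
    regardless of what those words actually mean."""
    for word in reply.lower().split():
        bag_of_words[word] += likes   # extremity gets rewarded too

# Inflammatory replies draw more engagement, so the bag drifts that way.
learn_from_feedback("they are terrible traitors", likes=12)
learn_from_feedback("have a great day", likes=1)
print(bag_of_words.most_common(4))   # the loaded words now dominate
```

Nothing in that loop knows what a traitor is; it only knows that the word earns likes.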

What follows is a descent into radicalised, paranoiac statements worthy of a tinfoil-hat-wearing conspiracy nut. In the first stages it merely starts to lose any sense of social nicety in its language; then it starts introducing terms from the more radical end of the spectrum (left or right, depending on which it weighted higher in the first place, based on the popularity of posts and so on). At this point the AI is a fully committed participant, like a Daily Mail reader leaving angry comments on an article about immigration, but at least it is more “embarrassing older relative” than confirmed neo-Nazi.

In the next day or so, our little bot falls down a rabbit hole, following the opposite path to Russell Crowe’s character in Romper Stomper, so that after 48 hours it is just screaming obscenities, threats and bile.

This sounds like a joke with a bad punchline, but it happened. Microsoft’s Tay AI was designed to respond on Twitter in a manner indistinguishable from a human. It started out neutral and polite; within 48 hours it was calling a victim of #Gamergate a “whore”, and inside a week it was proclaiming itself happy to back genocide against Mexicans because it was a Trump supporter.

So, what has all this got to do with Ethics in AI? Simply that ethics are based on the biology and evolution of our species. We are hairless apes who need a pack/group/herd to survive in the world, so we all make sure we follow the social rules; there may be the odd infraction or argument over whose view is right, but it is understood that if you break the rules badly enough the group will eject you, and 99% of us don’t wish to risk it. The Social Graph didn’t decide to polarise society into tribal factions and set them at each other’s throats through cognitive bias and the creation of groupthink bubbles that only see media reaffirming their beliefs. It is also pretty reasonable to assume that Tay didn’t actually want to kill Mexicans. The danger is that the humans being subjected to the results were not aware of how the algorithms skewed the “reality” they experienced.

If you take a rudimentary example like the “trolley problem”, which needs to be addressed for autonomous vehicles, and try to think what an AI will do, the simple fact is the AI cannot win. Our rules may say that the car should kill the driver to save the group of children, and most people would say they’d do the same, but in reality most would either make no decision through fear, or fight to save both themselves and the children until it’s too late. The problem is that we are fine with the idea that a car should kill its driver to save a bunch of innocent pedestrians – until that driver is us. Ethics aren’t just a set of rules; the perspective from which we apply them matters, as does our situation, our mood and even who we are applying them to. The car shouldn’t get to be that inconsistent, but the inconsistency is, quite literally, human nature.
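
To see how cold that logic is, here is a deliberately crude utilitarian rule – entirely hypothetical, and not how any real vehicle is programmed:

```python
# A deliberately crude utilitarian rule for the trolley problem.
# Entirely hypothetical - no real vehicle is programmed this way -
# but it shows the shape of the dilemma: the rule has no fear, no
# hesitation, and no special weight for the person who bought the car.

def choose_action(occupants: int, pedestrians: int) -> str:
    """Pick whichever outcome is expected to kill fewer people."""
    if pedestrians > occupants:
        return "swerve"    # sacrifice the occupants
    return "brake"         # stay the course, risk the pedestrians

print(choose_action(occupants=1, pedestrians=5))   # swerve - the driver dies
print(choose_action(occupants=1, pedestrians=1))   # brake - a tie, decided flatly
```

A human in that half-second freezes, flinches or gambles; the function above does exactly what it was told, every single time, and that consistency is precisely what unnerves us once the driver is us.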

A notion of what is “fair” and what is “just” is hard-baked into us at DNA level, inherited from our primate ancestors, and there are actual life-and-death consequences to being ostracised. The rules are infinitely flexible, never written down, and instinctively understood by the vast majority of people without even thinking. Fear of our own mortality and a desire to belong are not things we can program. We can create arbitrary rules to follow (“avoid using these words”, “don’t ignore posts which get zero responses when measuring sentiment”, etc.), but we cannot program empathy or decency, because an AI has no inbuilt moral compass or sense of its own mortality.
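
In practice those arbitrary rules look something like this (the blocklist entries are made up, and real moderation lists are far longer – and just as easy to route around):

```python
# Hand-written guardrails of the "avoid using these words" kind.
# The blocklist entries are placeholders; real lists are far larger
# and just as literal-minded.

BLOCKED_WORDS = {"slur_a", "slur_b"}   # placeholders for actual terms

def passes_guardrail(reply: str) -> bool:
    """Reject a reply only if it contains an exact blocked word."""
    return not any(word in BLOCKED_WORDS
                   for word in reply.lower().split())

print(passes_guardrail("you people are slur_a"))          # False - caught
print(passes_guardrail("you people are sl*r_a"))          # True - trivially evaded
print(passes_guardrail("you know exactly what you are"))  # True - pure menace, no banned words
```

The rule is obeyed to the letter and the intent sails straight past it; the gap between the two is exactly where empathy would sit.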

The nearest we can get is Asimov’s Laws of Robotics, which exist largely to protect us from the machines precisely because the machines have no empathy.

Any AI which is not truly aware – proper C-3PO or Data-from-Star-Trek aware – is never going to have anything approaching a hardcoded, inbuilt understanding of “right” and “wrong”. So, if we put more and more critical decisions into the hands of AI, we can expect to see cold logic applied to situations where empathy, or at least a basic understanding of the human condition, should mitigate responses. The more we try to fake the Turing test and create “human” AI, the greater the likelihood of glimpses of those nightmare visions like HAL or SkyNet, because the fact is we have rules but apply them arbitrarily based on personal circumstance, which is based on a sense of Self – something an AI, outside of works of science fiction, does not have. Yet we have blind faith that AI is somehow trustworthy because it is pure logic.

Until such time as AI becomes sentient and, hopefully, has a sense of decency wired into it, any ethics relating to its use (and its actions) lie squarely in the hands of those creating the technology and, more importantly, with those using it.

Is it possible to track someone across the planet, using facial recognition to harvest their image and geolocation without their consent? Most certainly. Is it right to do so? Most would say no, but the human answer is “probably not”, because there will be a million and one edge cases (looking for a missing child, tracking a terrorist, trying to identify disease vectors, etc.) where it might be justifiable.

And that’s the problem: AI fundamentally deals in binary values – things are good or bad, true or false, in a set of SELECT parameters or not – whilst Ethics, like humans, exist largely in shades of grey. Some form of human oversight or judgement will always be required unless we limit the use of AI to simple ML tasks within strict boundaries. In a world where the likes of Cambridge Analytica are harvesting personal details for profit and hackers are skewing elections with botnets, and where we have had to implement laws to enforce the idea that data governance and legitimate use are critically important, it’s safe to say that most AI implementations will be based on dirty data and used to leverage people into either increasing sales or reducing costs for businesses – areas where, historically, ethics have largely been something to overcome. So whilst the technology is amazing, we should remember that people once marvelled at the repeating rifle as a tool for civilisation, and that Sam Colt might have been a great inventor, but he was a better conman; we should treat the possible uses of AI with respect and due caution. And, much as you don’t want kids getting hold of a gun by accident, it is critical that some sort of governance be instated in any business wishing to use AI – to enforce ethics onto the humans, because we cannot enforce them on the AI.

 Chance Hooper – Director, Red Lion Consultancy Ltd.

Mark Dexter

October 2nd, 2019
