EmpA(I)thy
Empathy has always appeared to be the last barrier between man and machine. But what will Big Data change?
Relationships are impossible without empathy. Understanding one another is key to humanity. This very human quality lets us assert our superiority over machines. Yes, machines compute, synthesise and calculate at a level beyond our understanding. But if they can't "put themselves in our shoes", a whole realm of knowledge is walled off to them. IQ is no substitute for EQ.
So, apparently, man will never fully be replaced by machines. A robotic surgeon may perform the operation - but we still want a doctor to break the news of a diagnosis. RegTech (legal AI) may output what a statute says - but it takes a judge to decide whether the man on the stand is lying when he says he won't reoffend. Robot therapists? Inconceivable.
But what happens as Big Data develops? Already, given 300 likes, Facebook can predict how you will vote better than your spouse can. CCTV watches us as we walk; apps track our spending; Spotify selections are snapshots of our psyche. As the quantity of data we emit grows, the psychometric profiles generated from it will become ever more complex, nuanced and accurate.
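To see how little machinery that takes, here is a minimal sketch of predicting a personality trait from likes alone. Everything in it is an assumption for illustration - synthetic users, synthetic pages, a made-up latent trait - not anything a real platform actually does.

```python
# A toy sketch of like-based psychometric prediction. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 2000, 300                          # each user likes some of 300 pages
likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = liked that page

# Assume a latent trait (say, extraversion, binarised) is weakly encoded in
# which pages a user likes - this is the signal the model tries to recover.
true_weights = rng.normal(0, 1, n_pages)
signal = (likes - 0.5) @ true_weights + rng.normal(0, 5, n_users)
trait = signal > 0

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy from likes alone: {model.score(X_test, y_test):.2f}")
```

Even a crude setup like this typically recovers the trait well above chance - and real platforms have far richer data and far better models.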
Today, we are increasingly sensitised to the use of personalised data. Take the Facebook ads beside your newsfeed, showing content from the sites you last visited. Five years ago they would have seemed Orwellian. Now, they are the fabric of everyday life. We are growing accustomed to machines gauging and reading our interests. We tick hundreds of privacy boxes, consenting to our data being used to tailor services to our tastes, emotions and more. Interacting with things that "understand" us, via our data, will become the norm. A hyper-personalised service will simply be expected.
When this happens, expect upheaval in the fabric of our relationships. Think of the friends to whom you are an open book. They know how to cheer you up, how to push your buttons - when to give you space and when you need a hug. Now imagine that, but with no errors: interacting with something that identifies, perfectly, how to deliver the optimal outcome. Eventually, that is what the AI algorithms we spend hours with will become.
That shift will bring a host of problems.
One is a reduction in our tolerance for others. What will it be like when the conversational missteps, awkward pauses and misjudged intentions of talking to real people feel like unwelcome smears against the manicured, flawless conversations we have with Alexa and Siri? The current zeitgeist for cancelling shows how acclimatised we already are to a personalised world of social media. We are not used to being disagreed with. Being misunderstood, and dealing with people who think in radically different ways to us, is at odds with the responsive devices we spend so much of our time on. Our tolerance has suffered, and will continue to do so.
The second problem stems from the difficulty of setting a goal for AI. As AI develops, it offers us the opportunity to specify a machine's goals and tasks ever more generally. From basic calculations (2 + 2 = 4) we have moved to abstract goals like "maximise the amount of user interaction" or "win a game of Go". So what abstract goal do we ask these machines, with their vast amounts of data, to deliver on?
The importance of answering this cannot be overstated. Already, we can see how things go wrong when goals aren't specified scrupulously. It's why Facebook throws out clickbait (or the food porn videos I can't seem to stop watching). It aims to maximise time on the site - not the quality of, or personal growth from, the content delivered. The parameters of the model led to the fake news epidemic, the hours wasted on ads, and the fact that the person you love to hate-stalk always comes up on your newsfeed.
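The mechanism is easy to see in miniature. Here is a toy sketch - every post, number and field name below is invented - of a feed that ranks purely on predicted engagement:

```python
# Goal mis-specification in miniature: rank a feed by "time on site" alone.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_minutes: float   # what the platform's model is told to optimise
    quality: float             # what nobody told the model to care about

feed = [
    Post("Long-read on local politics", predicted_minutes=3.0, quality=0.9),
    Post("You won't BELIEVE what happened next", predicted_minutes=9.0, quality=0.1),
    Post("A friend's holiday photos", predicted_minutes=2.0, quality=0.7),
    Post("Endless food porn video loop", predicted_minutes=12.0, quality=0.2),
]

# The goal is "maximise time on site", so that is exactly what we sort by.
ranked = sorted(feed, key=lambda p: p.predicted_minutes, reverse=True)
for post in ranked:
    print(f"{post.predicted_minutes:5.1f} min   {post.title}")

# quality never enters the objective, so clickbait floats to the top
```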
None of these consequences was designed; they are byproducts of a poorly specified goal. But how do we specify the 'right' goal? Even something prima facie uncontroversial, like 'make people happy', has unintended effects. Would people be shown the news? Charity advertising? Allegations against leading figures? Some of these make only a sadist happy. Machines and AI will be able to deliver an ever more abstract set of goals. But ideas of what is good, of what is right - tech nerd though I am, I struggle to imagine how Facebook could find answers to questions this fundamental to being human. Until the right goal can be specified, these unforeseen consequences will keep coming. And given how little progress philosophy has made toward answering questions about the Right and the Good in the past 4,000 years, I doubt an answer will be found soon. Without one, we will have to watch the next mutated output of good intentions.
The third impact of this hyper-personalisation is that it will remove our ability to be challenged. We learn to be precise in how we speak, and in what we say, because people misunderstand us and call out our lack of clarity. We learn to express ourselves clearly because there is room for error. But if we are surrounded by things that understand us better than we know ourselves, why bother aiming for precise language? Being forced to communicate - to think about how we come across to others - has so many advantages. It lets us clarify our own ideas, refine them through the act of expressing them, and bring them to the forefront of our consciousness. Lose that, and we lose some of our capacity to understand one another.
On the one hand, machines that become ever more understanding. On the other, a species so used to having its mind read, it becomes unable to read those of others.
Could AI, eventually, have more empathy than mankind?