Beyond GDPR – about the fundamental right to understand why

Artificial Intelligence (AI) is one of the buzzwords of 2019. AI reduces effort. AI increases effectiveness. AI will change the way we live together and conduct business. AI might even increase the H2H respect between buyer and seller. Great! But beware: there is also a downside to AI, one that needs to be thoroughly investigated before we gloss over the consequences and put more AI into all kinds of apps.

The downside of AI is that AI can be wrong. Even more, it can be so wrong that it damages careers or even lives. AI could help judges issue a better sentence when imposing a fine, sending someone to prison, or pronouncing a death sentence. The process is supposed to become more objective, less error-prone, more uniform, and fairer among peers. Unless AI is wrong. If there is no way to find out how AI reached a decision, it will be hard to get a second opinion, or to understand why you have to pay a fine – or find yourself on death row. There is a reason why the conclusions of judges and juries must be written out in detail – by law. This is a fundamental human right.

Now, shouldn’t we also impose this right on AI?

AI can be very annoying – even in basic everyday applications

Okay, one might say, let’s wait and not use AI in these capital-crime cases. In the meantime, AI can be very helpful. True. But AI can also be very annoying, unpleasant, and even irritating. Let me explain this with a very small experience of ‘AI super light’ that went seriously wrong – so wrong it irritated me at first, and then got me puzzled over the ‘why’ of it. There were no life-threatening consequences, yet it was very annoying, and I almost cancelled my Apple Music account because I could not figure out how to get out of a self-fulfilling seventies-and-eighties trap.

What happened?

I bought myself a new Apple toy last year, added Apple Music to it, browsed through some proposed charts, and downloaded some seventies and eighties tracks. I was born in 1965, so loving the seventies and eighties does make sense, doesn’t it? And indeed, it was no surprise that in the weeks after my purchase I could listen to great songs by Dire Straits, Simple Minds, U2, Wham!, Tears for Fears, … Apple Music moreover lets you click a “love this music” heart icon whenever you appreciate a certain song. In the meantime, Aretha Franklin had died, and Queen was all over the movies. So, after some weeks, it was no surprise that these artists were also brought to my attention. And yes, I loved them. Only, after some time, I was hearing more of the same, and a lot less of other music.

Now here’s how this works. Behind the scenes, there is a little setting that says “suggest music that I heard before”, and I presume the Apple Music back-office application also detects certain download preferences to understand which music to present to the lazy customer I am. Moreover, the algorithm behind the love-this-music heart button appears to take some sort of zero-sum decisions, reducing the open slots for new music each time you click that button. After a while, almost no new music can get in. It wasn’t until I also clicked on a hit by the synthesizer guru Jean-Michel Jarre that I realized I was listening to a disproportionate amount of synthesizer music compared to what I would really like to hear.
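To make that zero-sum mechanism concrete, here is a minimal Python sketch of how such a playlist could be filled. To be clear: this is my guess at the logic, not Apple’s actual code – the playlist size, function names, and sampling strategy are all invented for illustration.

```python
# Hypothetical sketch of the "zero-sum" playlist mechanism described above.
# None of this is Apple's actual code; names and numbers are invented.

import random

PLAYLIST_SIZE = 25  # assumed fixed number of slots per suggested playlist


def build_playlist(loved_tracks, catalogue):
    """Fill a fixed-size playlist: every loved track claims a slot,
    and only the remaining slots stay open to new music."""
    # Loved tracks are pinned first (capped at the playlist size).
    pinned = loved_tracks[:PLAYLIST_SIZE]
    open_slots = PLAYLIST_SIZE - len(pinned)
    # Whatever is left is sampled from music the listener hasn't loved yet.
    fresh_pool = [t for t in catalogue if t not in loved_tracks]
    discoveries = random.sample(fresh_pool, min(open_slots, len(fresh_pool)))
    return pinned + discoveries


loved = [f"seventies_hit_{i}" for i in range(23)]  # after weeks of hearting
catalogue = loved + [f"new_release_{i}" for i in range(100)]
playlist = build_playlist(loved, catalogue)
print(f"{PLAYLIST_SIZE - len(loved)} slots left for new music")  # -> 2
```

Every click on the heart adds a track to the pinned set, so the pool of discoveries shrinks click by click – exactly the self-fulfilling trap I had walked into.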

For the sake of the exercise, I gradually un-hearted my previous choices and switched off the setting that kept re-suggesting what I had listened to, so that I would start hearing new artists and songs again. Yes, I had beaten the very light AI of Apple Music! And I now know why – or at least how Apple Music determines my preferences, and hence its proposals. But just suppose this was not about music preferences, but about approving a bank loan, selecting me for pre-qualification interviews for a job, or sending me to prison. I would at least want to know why.

Beyond GDPR … documenting and understanding AI?

The “right to know why” is probably one of the basic rights that will be necessary to allow and cope with AI on a broad scale in the future. “Knowing why” should be part of the future of GDPR-like regulations that protect individuals against the undesired effects of digital platforms storing individuals’ data in databases. To let us know why, deep learning and AI will have to document their decision making – and might even have to document the learning processes behind it – so that humans (and other machines!) can understand the proposed decisions, and can oppose and overrule them.
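What could such documentation look like in practice? Here is a deliberately naive Python sketch: the loan rule, field names, and threshold are all invented, and a real system would attach far richer provenance, but the principle is the same – the decision and its “why” travel together, so a human (or another machine) can audit or overrule it.

```python
# Illustrative sketch of "documenting the decision": every automated
# decision ships with a readable record of its inputs and reasons.
# The rule, fields, and threshold below are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    subject: str
    outcome: str
    model_version: str
    inputs: dict
    reasons: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def decide_loan(applicant):
    """Toy rule standing in for a real model; what matters is the record."""
    score = applicant["income"] / max(applicant["debt"], 1)
    approved = score >= 3.0  # hypothetical approval threshold
    return DecisionRecord(
        subject=applicant["name"],
        outcome="approved" if approved else "rejected",
        model_version="loan-scorer-0.1",
        inputs=dict(applicant),
        reasons=[f"income/debt ratio {score:.1f} vs. required 3.0"],
    )


record = decide_loan({"name": "J. Doe", "income": 2400, "debt": 1100})
print(record.outcome, "--", record.reasons[0])
# -> rejected -- income/debt ratio 2.2 vs. required 3.0
```

With a record like this, the rejected applicant at least knows *why*, and has something concrete to contest – which is exactly what my Apple Music playlist never gave me.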

This debate is not entirely new, though. It has been treated by many authors on ethics and automation, starting with Asimov’s “I, Robot”. It is also featured in one of the top scenes of Stanley Kubrick’s 2001: A Space Odyssey. Here is the trailer to sample … and when you ask yourself why the computer system HAL decided not to open the pod bay doors, go see the whole movie 😉
