When AI Goes Wrong We Won’t Be Able to Ask It Why

by Jordan Pearson, Motherboard

Software governs much of our daily lives from behind the scenes, from which sorts of information we consume, to who we date. For some, secretive algorithms decide whether they are at risk of committing a future crime. It’s only natural to want to understand how these black boxes accomplish all this, especially when it impacts us so directly.

Artificial intelligence is getting better all the time. Google, for example, recently used a technique known as deep learning to kick a human’s ass at Go, an incredibly complex board game invented thousands of years ago. Researchers also believe deep learning could be used to find more effective drugs by processing huge amounts of data more quickly. More relatably, Apple has injected the technique into Siri to make her smarter.

Futurists believe that computer programs may one day make decisions about everything from who gets insurance coverage, to what punishment fits a capital crime. Soon, AI could even be telling soldiers who to shoot on the battlefield. In essence, computers may take on a greater role as our insurance salespeople, our judges, and our executioners.

These concerns are the backdrop for new legislation in the European Union, slated to take effect in 2018, that will ban decisions “based solely on automated processing” if they have an “adverse legal effect,” or a similar negative effect, on the person concerned. The law states that this might include “refusal of an online credit application or e-recruiting practices.”

In the event that a machine screws up somebody’s life, experts believe that the new law also opens up the possibility to demand answers. Although a “right to explanation” for algorithmic decisions is not explicit in the law, some academics believe that it would still create one for people who suffer because of something a computer did.

This proposed “right,” although noble, would be impossible to enforce. It illustrates a paradox about where we’re at with the most powerful form of AI around—deep learning.

We’ll get into this in more detail later, but in broad strokes, deep learning systems are built from “layers” of digital neurons, each running its own computations on input data and adjusting its connections as it learns. In effect, they “teach” themselves what to pay attention to in a massive stream of information.
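To make that idea concrete, here is a minimal, illustrative sketch in Python with NumPy. The two-layer structure, the toy XOR data, and the learning rate are all assumptions chosen for brevity; real deep learning systems work on the same principle but with millions of neurons, which is part of why their decisions are so hard to explain.

```python
# A toy two-layer neural network trained on the XOR problem with plain
# gradient descent. Each "layer" multiplies its input by a weight matrix,
# applies a nonlinearity, and passes the result on; learning consists of
# nudging every weight to reduce the error. The final weights are just
# numbers with no human-readable meaning, which is the "black box" problem.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 4))   # first layer: 2 inputs -> 4 hidden neurons
W2 = rng.normal(size=(4, 1))   # second layer: 4 hidden -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # forward pass: each layer computes on the previous layer's output
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # backward pass: estimate how much each weight contributed to the error
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # the network "rearranges" itself: nudge each weight against its gradient
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

Even in this tiny example, nothing in the trained weight matrices says *why* the network answers the way it does; you can only observe that it does.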

Yet even though these programs are woven into many facets of our daily lives, Google, Apple, and all the rest don’t understand exactly how these algorithms arrive at their decisions in the first place.

If we can’t explain deep learning, then we have to ask whether and how we can control these algorithms, and, more importantly, how much we can trust them. Because no legislation, no matter how well-intentioned, can pry these black boxes open.

“We are heading into a black future, full of black boxes.” Read the full article and see what you think.

