Digital Genies

by Jacob Brogan, Slate.com

As artificial intelligence grows increasingly sophisticated, it also grows increasingly alien. Deep learning algorithms and other A.I. technologies are creating systems capable of solving problems in ways that humans might never consider. But it’s important that such systems understand humans as well, lest they inadvertently harm their creators. Accordingly, some researchers have argued that we need to help A.I. grasp human values—and, perhaps, the value of humans—from the start, making our needs a central part of their own development.

In an interview, University of California, Berkeley professor Stuart Russell emphasizes the need to ensure artificial intelligence (AI) understands fundamental human values, a task he says is fraught with uncertainty.

“What we want is that the machine learns the values it’s supposed to be optimizing as it goes along, and explicitly acknowledges its own uncertainty about what those values are,” says Russell, recipient in 2005 of the ACM Karl V. Karlstrom Outstanding Educator Award.

He notes that the addition of uncertainty actually makes the AI safer, because a machine that is unsure of its objective will allow itself to be corrected instead of being single-minded in the pursuit of its goals. “We’ve tended to assume that when we’re dealing with objectives, the human just knows and they put it into the machine and that’s it,” Russell says. “But I think the important point here is that just isn’t true. What the human says is usually related to the true objectives, but is often wrong.”

Russell says the AI should act only when it is quite sure it has understood human values well enough to take the right action. “It needs to have enough evidence that it knows that one action is clearly better than some other action,” he says. “Before then, its main activity should just be to find out.”

Read the interview.
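Concretely, the decision rule Russell describes can be pictured with a minimal toy sketch in Python. To be clear, this is not from the interview or from Russell's published work; the coffee-robot scenario, the two candidate value functions, and the CONFIDENCE_GAP threshold are all invented here for illustration. The machine keeps a probability distribution over hypotheses about what the human values, acts only when one action is clearly better under that belief, and otherwise defers to the human to find out more.

    # A toy illustration (all names and numbers invented for this sketch):
    # the machine keeps a probability distribution over candidate value
    # functions, acts only when one action is clearly better under that
    # belief, and otherwise defers to the human to gather more evidence.

    # Candidate value functions the machine considers plausible.
    # Each maps an action to the reward the human would assign it.
    hypotheses = [
        {"make_coffee": 1.0, "shut_down": 0.0},   # human mainly wants coffee
        {"make_coffee": -1.0, "shut_down": 0.5},  # human wants the machine off
    ]

    ACTIONS = ["make_coffee", "shut_down"]
    CONFIDENCE_GAP = 0.5  # act only if the best action wins by this margin


    def expected_rewards(posterior):
        """Expected reward of each action under the current belief over values."""
        return {a: sum(p * h[a] for p, h in zip(posterior, hypotheses))
                for a in ACTIONS}


    def choose(posterior):
        """Act only when one action is clearly better; otherwise ask the human."""
        ranked = sorted(expected_rewards(posterior).items(),
                        key=lambda kv: kv[1], reverse=True)
        (best, best_val), (_, runner_up_val) = ranked[0], ranked[1]
        if best_val - runner_up_val >= CONFIDENCE_GAP:
            return best
        return "ask_human"  # "its main activity should just be to find out"


    def update(posterior, favored):
        """Crude Bayesian update: human feedback favors one hypothesis."""
        likelihoods = [0.9 if i == favored else 0.1 for i in range(len(posterior))]
        unnorm = [p * l for p, l in zip(posterior, likelihoods)]
        return [u / sum(unnorm) for u in unnorm]


    belief = [0.5, 0.5]                 # uniform prior over the two hypotheses
    print(choose(belief))               # "ask_human": too uncertain to act
    belief = update(belief, favored=0)  # feedback supports the first hypothesis
    print(choose(belief))               # "make_coffee" once the gap is clear

Note that the uncertain agent keeps shut_down on the table as a live option; an agent with a fixed, hard-coded objective would not, which is exactly the safety benefit of uncertainty that Russell describes.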

DCL: This interview is worth a read by anyone interested in the current issues facing AI research. And since AI is fast becoming an enabling technology in most of our society’s support systems (our cars, home appliances, Internet search systems, and medical systems, for example), that should include all of us. What I got out of this interview was an overwhelming feeling that it will be a very long time indeed before a robot behaves according to values such as respect, honesty, loyalty, compassion, and fairness. Perhaps it is worth remarking that many members of the human race do not behave according to basic human values, and as a result, a lot of them are in prison as we speak. Why should robots be any different? Anyway, soldier on, Professor Russell.
