Artificial Intelligence Isn’t a Threat–Yet

by Gary Marcus, The Wall Street Journal

Does artificial intelligence threaten our species, as the cosmologist Stephen Hawking recently suggested? Is the development of AI like “summoning the demon,” as tech pioneer Elon Musk told an audience at MIT in October? Will smart machines supersede or even annihilate humankind?

New York University professor Gary Marcus, CEO of Geometric Intelligence, says Hawking and Musk have a point, but the existential threat they fear is still many decades off and people face somewhat different threats from AI in the nearer term. Marcus says “superintelligent” machines are unlikely to arrive soon, but we are already in the process of placing a great deal of power and control in the hands of automated systems and need to be certain those systems can handle it.

Marcus points to stock markets and autonomous driving technology as two examples of automated systems that could do tremendous damage if not properly and rigorously controlled. Although Marcus acknowledges that such technologies have tremendous potential for good, he says steps must be taken to ensure they do not go haywire. Those steps could include funding advances in program verification and establishing laws governing the use of automated systems in specific, risky applications.

Article in WSJ

DCL: I have worked in AI since 1958 (at MIT and at Stanford). I have yet to see anything in it to be frightened of – except trusting it to work! Predictions from its proponents have usually come up short. When you read Alan Turing's 1950 paper "Computing Machinery and Intelligence," you will find that he is very careful about describing areas of activity in which machines might be developed so as to exhibit "thinking" or "intelligent behavior."
