


Linda Geddes, BBC News, 5 Dec 2018 via ACM TechNews, 10 Dec 2018

Computers can be tricked into misidentifying objects and sounds, raising concerns about the real-world use of artificial intelligence (AI); experts call such glitches "adversarial examples" or "weird events." Said the Massachusetts Institute of Technology (MIT)'s Anish Athalye, "We can think of them as inputs that we expect the network to process in one way, but the machine does something unexpected upon seeing that input." In one experiment, Athalye's team slightly modified the texture and coloring of certain physical objects to fool a machine-learning system into thinking they were something else. MIT's Aleksander Madry said the problem may be rooted partly in the tendency to engineer machine-learning frameworks to optimize their average-case performance. Neural networks might be fortified against such outliers by feeding them more challenging examples of whatever researchers are trying to teach them, a practice known as adversarial training.
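
The summary does not say how Athalye's team crafted their perturbations, and the defense Madry alludes to is commonly called adversarial training. As a rough illustration only, the following minimal PyTorch sketch shows the fast gradient sign method (FGSM), a standard recipe for generating adversarial examples, together with a training step that feeds them back to the network. The function names, the epsilon value, and the assumption of a [0, 1]-scaled image classifier are illustrative placeholders, not details from the article.

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial(model, image, label, epsilon=0.03):
        # FGSM: nudge each pixel a small step in the direction that
        # most increases the classifier's loss, so the image looks
        # unchanged to a human but is misread by the network.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

    def adversarial_training_step(model, optimizer, image, label, epsilon=0.03):
        # Adversarial training: generate a "more challenging example"
        # on the fly and train the network to classify it correctly.
        adv = fgsm_adversarial(model, image, label, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        optimizer.step()
        return loss.item()

The same idea extends beyond this sketch; physical-world attacks like the one described above typically optimize the perturbation over many viewpoints and lighting conditions rather than a single image.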

https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-1d7a4x219197x069560&

