

From: technews-editor@acm.org
Date: Mon, 17 Apr 2017 12:16:09 -0400 (EDT)

Princeton University (13 Apr 2017), via ACM TechNews (17 Apr 2017)

Researchers at Princeton University have demonstrated how machines can reflect the biases of their creators. They found that common machine-learning programs, when trained on ordinary human language available online, can acquire the cultural biases embedded in patterns of wording.
"We have a situation where these artificial intelligence [AI] systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from," warns
Princeton professor Arvind Narayanan. The team experimented with a machine-learning version of the Implicit Association Test, the GloVe program, which can represent the co-occurrence statistics of words in a specific text window. The test replicated the broad substantiations of bias found in select Implicit Association Test studies over the years that relied on human subjects. Coders might hope to prevent the perpetuation of cultural stereotypes via development of explicit, math-based instructions for machine-learning programs underpinning AI systems. https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-13472x2118efx072995&
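
As an illustration, below is a minimal sketch of such a word-embedding association test. It assumes pretrained GloVe vectors have been loaded into a Python dictionary named vec; the word lists, function names, and scoring details are illustrative and are not the study's exact materials. The idea is to score how much more strongly one set of target words associates with one attribute set than with another, using cosine similarity between word vectors.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    # Mean similarity of word w to attribute set A minus its mean
    # similarity to attribute set B.
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def association_effect_size(X, Y, A, B, vec):
    # Effect size: how much more strongly target set X associates with
    # attributes A (vs. B) than target set Y does, measured in units of
    # the pooled standard deviation across all target words.
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Hypothetical usage, assuming vec maps words to GloVe vectors:
# score = association_effect_size(
#     X=["flower", "rose"], Y=["insect", "spider"],
#     A=["pleasant", "love"], B=["unpleasant", "hate"], vec=vec)

A positive effect size indicates that target set X leans toward attribute set A relative to target set Y, in the same way that effect sizes are reported in Implicit Association Test studies with human subjects.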

