David Eppstein - 2016-07-30 15:30:28-0700 - Updated: 2016-07-30 15:30:28-0700

Neural networks are inadvertently learning our language’s hidden gender biases

Shared with: Public, Suresh Venkatasubramanian
+1'd by: Sridhar Hariharaputran, Ed S, Yusu Wang, Shishir Pandey, Vaidotas Zemlys-Balevičius, Alok Tiwari
Reshared by: Stefan Huber, Vaidotas Zemlys-Balevičius
Sariel Har-Peled - 2016-07-30 21:10:05-0700
Misleading title - as the article notes, the bias is in the society, not the language.
Stefan Huber - 2016-07-31 04:36:33-0700
Well, the bias is measured within language, and the conclusion is that the bias in the expression of thoughts reflects a bias in the thoughts themselves. (Which seems reasonable.) So to me the title is not misleading; if anything, the conclusion about society would be.
Sariel Har-Peled - 2016-07-31 19:20:05-0700
No. Not really. They showed bias in the corpora they used for training their algorithm.

There are languages that are inherently gender-biased, like Hebrew, but English seems to me to be much better balanced. The question, of course, is how you define the language: is it the abstract grammar, or some random texts you use to train an algorithm on?
Suresh Venkatasubramanian - 2016-08-01 07:27:50-0700
Claiming that the neural networks reveal hidden bias that "no one knew was there" is just silly.
David Eppstein - 2016-08-01 10:00:57-0700
+Suresh Venkatasubramanian I agree, but I think the parts about trying to measure and correct for the bias are interesting.
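[The measure-and-correct idea can be sketched with a toy version of the word-embedding approach: estimate a "gender direction" from a definitional pair like he/she, score other words by their projection onto it, and debias a vector by removing that component. A minimal sketch in plain Python; all vectors below are made-up stand-ins for real embeddings, chosen only to illustrate the mechanics.]

```python
import math

# Toy 4-dimensional "embeddings": made-up stand-ins for real word
# vectors, used only to illustrate the mechanics.
EMB = {
    "he":     [0.8, 0.1, 0.3, 0.1],
    "she":    [-0.8, 0.1, 0.3, 0.1],
    "doctor": [0.2, 0.9, 0.1, 0.0],
    "nurse":  [-0.3, 0.8, 0.2, 0.0],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

# Estimate a gender direction from a definitional pair.
GENDER = unit([a - b for a, b in zip(EMB["he"], EMB["she"])])

def gender_score(word):
    """Cosine similarity of the word's vector with the gender direction."""
    return dot(unit(EMB[word]), GENDER)

def debias(v):
    """Remove the component of v along the gender direction."""
    c = dot(v, GENDER)
    return [a - c * g for a, g in zip(v, GENDER)]

# With these toy vectors, "doctor" leans male and "nurse" female;
# after debiasing, both project to (near) zero on the gender axis.
print(gender_score("doctor"), gender_score("nurse"))
print(dot(debias(EMB["doctor"]), GENDER))
```

[The projection step mirrors the "hard debiasing" idea from the embedding-debiasing literature; a real system would also need a curated set of gender-neutral words to decide which vectors to debias and would average several definitional pairs rather than using he/she alone.]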
Suresh Venkatasubramanian - 2016-08-01 10:14:20-0700
Oh completely :). That's singing my song :). But I've become a little sensitive to the habit of CS folk to claim that we've discovered things that people already know about.
David Eppstein - 2016-08-01 10:23:42-0700
Even in that habit we're copying someone else (the physicists).
Shreevatsa R - 2016-08-01 22:42:45-0700
Not specific to this article, but a random thought: I wonder if machine learning / neural networks may actually be used for good, by giving us a way to measure/discover the most severe kinds of bias existing in society. E.g. might we be able to estimate in which contexts sexism or racism is greater in magnitude? Or possibly notice biases that simply happen not to be a source of concern for society currently (not a cause or a movement with many supporters), but are nevertheless severe in impact? (Imagine discovering that there are large biases against atheists, or against obese people, or some category no one's seriously thought about…)
Suresh Venkatasubramanian - 2016-08-02 00:44:34-0700
That's definitely a possibility.
Sariel Har-Peled - 2016-08-02 12:53:45-0700
Or you might discover things like this... http://tylervigen.com/page?page=1