
Last week, there was heated debate over whether affect recognition should be banned from future artificial intelligence (AI) research. Affect recognition, the capability to detect a person's emotions, has been regarded as a subset of facial recognition.
Training AI to analyze human expressions with the aim of uncovering the emotions behind them has always been tough. Now, experts from a New York University research center are adding to the challenge with a report reminding the public not to take AI at face value, as what it says about human emotions may have serious impacts on us and our society.
The AI Now 2019 Report
The AI Now Institute at New York University is an interdisciplinary research center focused on understanding the social significance of AI technologies. The institute recently published its AI Now 2019 Report, which examines expression analysis and how it may influence decision making.
In a notably cautionary tone, the report said affect recognition is “a particular focus of growing concern in 2019 – not only because it can encode biases, but because it lacks any solid scientific foundation to ensure accurate or even valid results. Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities. Until then, AI companies should stop deploying it”.
The authors did not make their claims out of the blue. According to MIT Technology Review, emotion recognition is a rapidly growing market, estimated to reach $20 billion, and it is presently used to assess job applicants and people suspected of involvement in illegal activities. Research studies have shown that affect recognition can lead to racial and gender disparities, yet there is little evidence that one can accurately tell how a person is feeling just by looking at his or her facial expression.
Recommendations
Indeed, emotions can be expressed in many different ways; as such, the AI Now Institute suggested that asking AI to interpret affect may be asking “a question that is fundamentally wrong”. It recommended that authorities stop using all forms of facial recognition in “sensitive social and political contexts until the risks are fully studied” and that the AI industry “make significant structural changes” to systematically address racial and diversity issues.
In addition, the authors feel that research on AI bias should encompass non-technical aspects, including how such systems will be used in practice and the harms they may cause. Tech workers, for their part, should have a thorough understanding of what they are building and be able to judge whether it is ethical. Last but not least, when AI is deployed, those who are affected should have the right to challenge solutions that are “exploitative and invasive” in nature.
Author Bio
Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family.