Microsoft's Calling It Quits on Creepy Emotion Recognition Tech



Photo: John MacDougall (Getty Images)

Microsoft is turning its back on its scientifically suspect and ethically dubious emotion recognition technology. At least for now.

Microsoft has announced plans to phase out its so-called "emotion recognition" detection systems from Azure Face, its facial recognition service. The company will also phase out capabilities that try to use AI to infer identity attributes such as gender and age.

Microsoft's decision to pull back on the controversial technology comes amid a larger overhaul of its AI ethics policies. Natasha Crampton, Microsoft's Chief Responsible AI Officer, said the change was a response to experts who cited a lack of consensus on the definition of "emotions" and concerns about overgeneralizing how AI systems might interpret them.

"We worked with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the trade-offs," Sarah Bird, a product manager for Microsoft's Azure AI group, said in a statement. "API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused, including exposing people to stereotyping, discrimination, or unfair denial of services," Bird added.

Bird said the company will move away from the general-purpose system in the Azure Face API that tries to measure these attributes in order to "reduce risk." As of Tuesday, new Azure customers can no longer access the detection system, while existing customers have until 2023 to discontinue their use of it. Notably, while Microsoft says the API will no longer be available for general-purpose use, Bird said the company may continue to explore the technology in certain limited cases, particularly as a tool to assist people with disabilities.
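For context on what is actually being retired: the attribute inference lived behind the Face detection call in Azure's client libraries. The sketch below is a minimal illustration, assuming the older azure-cognitiveservices-vision-face Python SDK that exposed these attributes; the endpoint, key, and image URL are placeholders, not real values.

```python
# Minimal sketch of the attribute inference Microsoft is phasing out,
# using the older azure-cognitiveservices-vision-face SDK.
# Endpoint, key, and image URL below are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),               # placeholder key
)

faces = face_client.face.detect_with_url(
    url="https://example.com/photo.jpg",                 # placeholder image
    return_face_attributes=["age", "gender", "emotion"],  # the attributes being retired
)

for face in faces:
    attrs = face.face_attributes
    # Emotion comes back as confidence scores per category
    # (anger, happiness, sadness, and so on).
    print(attrs.age, attrs.gender, attrs.emotion.as_dict())
```

New Azure customers making a request like this for emotion, age, or gender attributes will now be refused; existing customers lose access when the retirement window closes.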

"Microsoft recognizes these capabilities can be valuable when used in a set of controlled accessibility scenarios," Bird added.

The course correction is an attempt to bring Microsoft's policies in line with its new 27-page Responsible AI Standard, a document one year in the making. Among other guidelines, the standard calls on Microsoft to ensure its products are subject to appropriate data governance, support informed human oversight and control, and "provide valid solutions to the problems they are designed to address."

Emotion recognition technology is "primitive at best"

In an interview with Gizmodo, Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, called Microsoft's decision to turn its back on emotion recognition technology a "no-brainer."

"The truth is that the technology is primitive at best, capable of deciphering, at most, a small subset of users," said Fox Cahn. "But even if the technology were improved, it would still penalize anyone who is neurodivergent. Like most behavioral AI, it punishes diversity and treats those who think differently as a threat."

Jay Stanley, a senior policy analyst at the ACLU, welcomed Microsoft's decision, which he said reflects a "scientific reluctance" toward automated emotion recognition.

"I hope this helps cement a broader understanding that this technology is not something to be relied on or deployed outside of experimental contexts," Stanley said in a phone call with Gizmodo. "Microsoft is a well-known name and a big company, and I hope it has a broad influence in helping others understand the serious shortcomings of this technology."

Tuesday's announcement comes after years of pressure from activists and academics who have spoken out against the potential ethical and privacy pitfalls of readily available emotion recognition. One of those critics, USC Annenberg research professor Kate Crawford, dug into the limitations of emotion recognition (also known as "affect recognition") in her 2021 book Atlas of AI. Unlike facial recognition, which tries to identify a particular individual, emotion recognition seeks to "detect and classify emotions by analyzing any face," an approach Crawford argues is fundamentally flawed.

"The difficulty in automating the connection between facial movements and basic emotional categories raises the larger question of whether emotions can even be adequately grouped into a small number of discrete categories," Crawford writes. "There is the stubborn issue that our facial expressions may indicate little about our genuine inner states, as anyone who has smiled without really feeling happy can attest."

Crawford is not alone. A 2019 report by the New York University research center AI Now argued that emotion recognition technology, in the wrong hands, could allow institutions to make dystopian decisions about individuals' ability to participate in core aspects of society. The report's authors called on regulators to ban the technology. More recently, a group of 27 digital rights groups wrote an open letter to Zoom CEO and founder Eric S. Yuan urging him to abandon Zoom's efforts to integrate emotion recognition into video calls.

Microsoft's about-face on emotion recognition comes almost exactly two years after it joined Amazon and IBM in banning police use of its facial recognition technology. Since then, AI ethics teams at large tech companies like Google and Twitter have multiplied, though not without some heated tensions. While Microsoft's decision to pull back from emotion recognition may spare it some of the damaging public trust problems plaguing other tech companies, the company remains a major concern among privacy and civil liberties advocates due to its partnerships with law enforcement and its keen interest in military contracts.

Microsoft's decision was broadly welcomed by privacy groups, but Fox Cahn told Gizmodo he would like to see Microsoft take further action on its other, more profitable but similarly concerning, technologies.

"While this is an important step, Microsoft still has a long way to go in cleaning up its civil rights track record," said Fox Cahn. "The company continues to profit from the Domain Awareness System, [an] Orwellian surveillance software built in partnership with the NYPD. The Domain Awareness System, and the AI surveillance systems it enables, raise exactly the same concerns as emotion recognition, only the DAS is profitable."
