Microsoft Drops Emotion Recognition as Face Analysis Concerns Grow

Despite its potential, facial recognition technology raises ethical questions and concerns about bias.

To allay those concerns, Microsoft recently launched its Responsible AI Standard and made a number of changes, the most notable of which is the discontinuation of the company’s AI technology for emotion recognition.

Responsible AI

Microsoft’s new policy contains a number of important announcements.

  • New customers must request access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer, and existing customers have one year to sign up and be approved for continued access to the facial recognition services.
  • Microsoft’s Limited Access policy adds use-case and customer eligibility requirements for access to the services.
  • Face detection capabilities, including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box, remain generally available and require no application (a minimal request sketch follows this list).
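To make the remaining surface concrete, here is a minimal sketch of a detection-only request against the Azure Face REST API (the v1.0 detect operation). The endpoint, key, and image URL are placeholders, and the attribute list mirrors the capabilities named above; treat this as an illustration of the pattern under those assumptions, not an official Microsoft sample.

```python
# Minimal sketch: a detection-only call to the Azure Face REST API.
# ENDPOINT, KEY, and IMAGE_URL are placeholders. Attribute support varies by
# detection model; this assumes detection_01, which supports the classic set.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"                                       # placeholder
IMAGE_URL = "https://example.com/photo.jpg"                       # placeholder

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={
        # Attributes that remain generally available, per the list above:
        "returnFaceAttributes": "blur,exposure,glasses,headPose,noise,occlusion",
        "returnFaceLandmarks": "true",      # 27-point facial landmarks
        "detectionModel": "detection_01",
        # Note: emotion, gender, age, smile, facialHair, hair, and makeup are
        # the retired attributes and can no longer be requested here.
    },
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": IMAGE_URL},
)
response.raise_for_status()

for face in response.json():
    print(face["faceRectangle"])            # bounding box, returned by default
    print(face["faceAttributes"]["headPose"])
```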

The focal point of the announcement is that the software giant will “discontinue facial analytics capabilities that claim to infer emotional states and identity traits such as gender, age, smile, facial hair, hair and makeup.”

Microsoft noted that “the inability to generalize the association between facial expression and emotional state across use cases, regions, and demographics … opens up a wide range of ways in which they can be exploited, including exposing people to stereotyping, discrimination, or unfair denial of services.”

Also read: AI suffers from bias, but it doesn’t have to

Moving away from facial analysis

There are several reasons why major IT players have moved away from facial recognition technologies, including a desire to limit law enforcement access to the technology.

Justice Concern

Automated facial analysis and facial recognition software have long been controversial. Combined with the societal biases often inherent in AI systems, the potential to exacerbate problems of bias increases. Many commercial facial analysis systems today inadvertently exhibit bias across categories such as race, age, culture, ethnicity, and gender. Microsoft’s implementation of the Responsible AI Standard is designed to help the company anticipate such issues through the Fairness goals and requirements it outlines.

Appropriate usage controls

Although Azure AI’s Custom Neural Voice has enormous potential in entertainment, accessibility, and education, it can also be misused to mislead listeners by impersonating speakers. Microsoft’s Responsible AI program, together with the sensitive-use review process central to the Responsible AI Standard, has overhauled both facial recognition and custom neural voice technologies under a layered control framework. By restricting these technologies and implementing these controls, Microsoft hopes to protect both the technologies and their users from misuse and to ensure that their implementations deliver value.

Lack of consensus on emotions

Microsoft’s decision to end public access to its AI’s emotion recognition and facial feature identifiers stems from the lack of a clear consensus on the definition of emotions. Experts inside and outside the company have pointed out that this lack of consensus undermines emotion recognition products when they generalize inferences across demographics, regions, and usage scenarios. This limits the technology’s ability to solve the problems it targets and ultimately undermines its reliability.

The skepticism surrounding the technology stems from its disputed efficacy and the justifications offered for its use. Human rights groups argue that emotion AI is discriminatory and manipulative. One study found that, across two different facial recognition platforms, emotion AI consistently assigned more positive emotions to white subjects than to Black subjects.

Intensifying Privacy Concerns

There is a growing body of research on facial recognition technologies and their unethical use for public surveillance and mass identification without consent. Even when facial analytics collects generic, anonymized data, as with Azure Face’s inference of attributes such as gender, hair, and age, anonymization does not eliminate mounting privacy concerns. Beyond the question of consent, subjects are often concerned about how the data these technologies collect is stored, protected, and used.

Also read: What does explainable AI mean for your business?

Face Detection and Bias

Algorithmic bias occurs when machine learning models reflect the biases of their creators or of their training data. Because these models are woven into our technology-dependent lives, their use cases risk adopting and spreading bias at scale.

Face detection technologies struggle to produce accurate results for women, dark-skinned people, and older adults because they are often trained on facial image datasets dominated by white subjects. Bias in facial analysis and facial recognition technologies has real-life consequences, as the following examples show.

Inaccuracy

Despite the advances face detection technologies have made, bias still produces inaccurate results. Studies show that face detection technologies generally perform better on lighter skin tones: one study reported a maximum error rate of 0.8% for lighter-skinned men, compared with up to 34.7% for darker-skinned women.
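The disparity those numbers describe is exactly what a per-group error audit surfaces. Below is a hypothetical sketch, with invented records and group labels, of how such an audit tallies error rates separately for each demographic group; a real audit would use a labeled benchmark like the one behind the study cited above.

```python
# Hypothetical sketch of a per-group error audit for a face analysis system.
# The records below are invented for illustration only.
from collections import defaultdict

records = [
    # (demographic group, ground-truth label, model prediction)
    ("lighter-skinned male", "match", "match"),
    ("lighter-skinned male", "no-match", "no-match"),
    ("darker-skinned female", "match", "no-match"),   # an error
    ("darker-skinned female", "match", "match"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    if prediction != truth:
        errors[group] += 1

for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: {rate:.1%} error rate over {total} samples")

# Large gaps between groups (e.g., 0.8% vs. 34.7% in the study above)
# suggest the training data under-represents some groups.
```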

Failures in recognizing the faces of dark-skinned people have led to cases where the technology has been misused by law enforcement officers. In February 2019, a Black man was accused not only of shoplifting but also of attempting to hit a police officer with a car, even though he was 40 miles from the crime scene at the time. He spent 10 days in jail, and his defense cost him $5,000.

The case was dropped in November 2019 for lack of evidence, and the man is now suing the authorities involved for false arrest, imprisonment, and violation of his civil rights. In a similar case, another man was wrongfully arrested due to an inaccurate facial recognition match. Such inaccuracies raise concerns about how many wrongful arrests and convictions the technology may produce.

Several vendors of the technology, including IBM, Amazon, and Microsoft, are aware of such limitations in areas like law enforcement and of the technology’s implications for racial injustice, and they have taken steps to prevent potential misuse of their software. Microsoft’s policy forbids the use of its Azure Face service by or for state police in the United States.

Decision-making

It is not uncommon for facial analysis technology to be used to help evaluate video interviews with job applicants. These tools influence recruiters’ hiring decisions using data generated by analyzing facial expressions, movements, word choice, and tone of voice. Such use cases aim to cut hiring costs and increase efficiency by accelerating candidate screening.

However, failing to train such algorithms on datasets that are both large and diverse enough leads to bias. Such bias may deem certain kinds of people more suitable for a job than others. A false positive or false negative can mean hiring an unsuitable candidate or rejecting the most suitable one. As long as these systems contain bias, similar outcomes are likely wherever the technology is used to make decisions based on people’s faces.

What’s next for facial analysis?

None of this means Microsoft is abandoning its facial analysis and recognition technology entirely, as the company recognizes that these capabilities can deliver value in controlled contexts such as accessibility. Microsoft’s biometric systems, including facial recognition, will be limited to managed-service partners and customers. Facial analysis will remain available to existing users through the Limited Access scheme until June 30, 2023.

Limited Access applies only to customers who work directly with the Microsoft account team, and Microsoft has provided a list of approved Limited Access use cases. Until the deadline, users have time to submit approval requests to continue using the technology, and approved systems will be restricted to use cases deemed acceptable. In addition, a code of conduct and guardrails will be used to ensure that authorized users do not misuse the technology.

Computer Vision’s and Video Indexer’s celebrity recognition features are also subject to Limited Access, as is Video Indexer’s facial identification. Customers no longer have general access to facial recognition in these two services, or in the Azure Face API.

As a result of the review, Microsoft announced: “We are conducting responsible data collections to identify and mitigate differences in technology performance across demographics and assess ways to present this information in a way that would be insightful and useful to our customers.”

Read next: Best machine learning software
