Microsoft restricts police AI use

Microsoft has taken significant steps to restrict the use of its artificial intelligence (AI) technologies by police and law enforcement agencies, reflecting a broader commitment to responsible AI practices and ethical considerations. These measures are part of Microsoft's ongoing efforts to ensure that its AI technologies are used in ways that respect privacy, security, and human rights.
  1. Banning Police Use of Facial Recognition Technology: Microsoft announced that it would not sell its facial recognition technology to police departments in the United States until a federal law grounded in human rights governs the technology. This decision was influenced by concerns over privacy and the potential for misuse, particularly given studies showing that facial recognition systems misidentify people of color more often than white people.
  2. Ethical Overhaul and Limited Access to AI Tools: Microsoft has overhauled its AI ethics policies, introducing a "responsible AI standard" that includes limiting access to its Azure Face service. Companies wishing to use Azure Face's facial recognition capabilities must now apply for approval, ensuring adherence to Microsoft's AI ethics standards. Furthermore, Microsoft has decided to phase out facial analysis features that infer emotional states, gender, or age due to privacy concerns and the potential for misuse.
  3. Prohibition of Azure AI Face Service for Police: Microsoft's policy prohibits the use of the Azure AI Face service by or for state or local police in the United States. A separate provision, which applies globally, bars real-time facial recognition on mobile cameras used by any law enforcement agency. This is intended to prevent misuse in uncontrolled, "in the wild" settings, such as officers on patrol using body-worn cameras to attempt to identify individuals.
  4. Responsible AI Practices and Governance: Microsoft has established a comprehensive framework for responsible AI, which includes principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The company has implemented governance systems, such as the Aether Committee and the Office of Responsible AI, to ensure that AI products and services align with these principles. This includes reviewing sensitive use cases and developing policies and best practices for responsible AI development and deployment.
These actions by Microsoft underscore the company's commitment to developing and deploying AI technologies in a responsible manner that prioritizes human rights and ethical considerations. By restricting the use of its AI tools by police and implementing a rigorous ethical framework, Microsoft aims to mitigate the risks associated with AI and ensure that its technologies contribute positively to society.