Microsoft has reaffirmed that U.S. police departments can't use Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI's technology, for facial recognition powered by generative AI.
Language added to Azure OpenAI Service's terms of service on Wednesday more clearly prohibits integrations with the service from being used "by or for" U.S. police departments for facial recognition, including integrations with OpenAI's current, and possibly future, image-analyzing models.
A separate new bullet point bars "any law enforcement globally" from using "real-time facial recognition technology" on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments.
The policy changes come a week after Axon, a maker of technology and weapons for the military and law enforcement, announced a new product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, such as hallucinations (even the best generative AI models today invent facts) and racial biases introduced by the training data (which is especially worrying given that people of color are far more likely than their white peers to be stopped by police).
It's unclear whether Axon was using GPT-4 via Azure OpenAI Service, or whether the new policy was a response to Axon's product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We've reached out to Axon, Microsoft, and OpenAI, and will update this post if we hear back.
The New Terms Leave Microsoft Some Wiggle Room
The complete ban on Azure OpenAI Service usage applies only to police in the United States, not to law enforcement elsewhere. It also doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (though the terms prohibit any use of facial recognition by U.S. police).
That tracks with how Microsoft and its close partner OpenAI have recently approached contracts for AI-related work in law enforcement and defense.
Bloomberg reported in January that OpenAI is working with the Pentagon on a number of projects, including cybersecurity work, a departure from the company's earlier stance against providing its AI to militaries. The Intercept has reported that Microsoft pitched OpenAI's DALL-E image generation tool to help the Department of Defense (DoD) build software for military operations.
Azure OpenAI Service arrived in Microsoft's Azure Government product in February, adding compliance and management tools designed for government agencies, including law enforcement. Candice Ling, SVP of Microsoft's government business, promised in a blog post that Azure OpenAI Service would be "submitted for additional authorization" to the DoD for workloads supporting DoD missions.
Update: After this article was published, Microsoft said the change it made to its terms of service contained an error: the ban refers only to facial recognition in the U.S. and does not bar police departments from using the service altogether.