Big Tech still doesn’t see enough women as AI leaders.
Meta announced on Wednesday that it is creating an AI advisory council composed entirely of white men. What else should we expect? Women and people of color have said for decades that they are ignored and excluded from the world of AI, despite being qualified and having played a major role in its development.
Meta did not immediately respond to a request for comment on the advisory board's lack of diversity.
This new advisory board stands apart from Meta's actual board of directors and its Oversight Board, both of which are more diverse in gender and racial representation. Shareholders did not elect this AI board, and it owes them no fiduciary duty. Meta told Bloomberg that the board would offer advice on “technological advances, innovation, and strategic growth opportunities” and would meet periodically.
It is telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, with no ethicists or anyone from an academic or deep research background. Current and former executives from Stripe, Shopify, and Microsoft might seem like good candidates to oversee Meta's AI product roadmap, given how many products they have brought to market. But AI is not like other products, as has been demonstrated again and again. Risk-taking is part of the business, and when something goes wrong, the consequences can be far-reaching, particularly for groups that are already marginalized.
Sarah Myers West, managing director of the AI Now Institute, a nonprofit that studies the social implications of AI, told TechCrunch that it is important to “critically examine” the companies building AI to “make sure the public’s needs are served.”
“This technology makes mistakes a lot of the time, and we know from our own research that those mistakes hurt communities that have been discriminated against for a long time more than others,” she said. “We should set a very, very high bar.”
The harms of AI fall on women far more often than on men. In 2019, Sensity AI found that 96% of AI deepfake videos online were nonconsensual sexually explicit videos. Since then, generative AI has become far more widespread, and women remain its primary targets.
One of the biggest stories from January involved nonconsensual sexual deepfakes of Taylor Swift; one of the most widely shared posts drew hundreds of thousands of likes and 45 million views. X has historically done a poor job of protecting women in these situations, but because Taylor Swift is one of the most famous women in the world, it intervened, banning search terms such as “taylor swift ai” and “taylor swift deepfake.”
But if you are not a global pop star, you may have little recourse. Numerous reports describe middle and high school students making explicit deepfakes of their classmates. The technology has existed for a while, but it has never been easier to access: you don't need any technical skill to download apps that promise to “undress” photos of women or swap their faces onto pornography. In fact, NBC's Kat Tenbarge reported that ads for an app called Perky AI, which claimed it could generate explicit images, ran on Facebook and Instagram.
Meta reportedly did not act on two of the ads until Tenbarge alerted the company to the issue. The ads showed blurred photos of actors Sabrina Carpenter and Jenna Ortega and invited users to prompt the app to remove their clothing. One of the ads used a photo of Ortega taken when she was 16 years old.
The decision to let Perky AI advertise was not an isolated lapse. Meta's Oversight Board recently opened inquiries into why the company failed to act on reports of sexually explicit AI-generated content.
It is essential that the voices of women and people of color are heard in the development of AI products. For too long, marginalized groups have been shut out of the research and development of world-changing technologies, and the results have been disastrous.
A simple example: women were largely excluded from clinical trials until the 1970s, which meant entire fields of research developed without accounting for how treatments would affect women. Black people, in particular, bear the consequences of technology built without them in mind. A 2019 study from the Georgia Institute of Technology, for instance, found that self-driving cars are more likely to hit Black pedestrians because their sensors may have more difficulty detecting darker skin.
Algorithms trained on already-biased data simply reproduce the biases people taught them. Broadly, AI systems are already perpetuating racial discrimination in housing, employment, and the criminal justice system. As Axios noted, voice assistants struggle to understand diverse accents, and detection tools often flag the work of non-native English speakers as AI-generated because, as the saying goes, AI's first language is English. Facial recognition tools are more likely to flag Black people than white people as possible matches for criminal suspects.
AI development today is shaped by the same power structures of class, race, gender, and Eurocentrism found elsewhere, and too few leaders are doing anything about it. Instead, they are reinforcing it. In their rush to move fast and break things, investors, founders, and tech leaders seem unable to grasp that generative AI, the hottest AI technology of the moment, could make things worse rather than better. A McKinsey report suggests that AI could automate roughly half of all jobs that don't require a four-year degree and pay more than $42,000 a year, jobs in which minority workers are overrepresented.
One of the biggest tech companies in the world now has a team of all white men trying to save the world with AI. It is hard to see how a group representing such a narrow slice of the population could ever advise on products meant for everyone. Building technology that everyone, truly everyone, can use takes enormous work, from research to an intersectional understanding of how society functions, and it is fairly clear this advisory board will not help Meta get it right. At best, it gives Meta the makings of yet another business venture.
What do you think of this story? Visit Parhlo World for more.