Elon Musk’s Grok rolled out a new AI image-generation tool on Tuesday night, and much like the chatbot itself, it has few guardrails. That means you can create, say, fake images of Donald Trump smoking marijuana on the Joe Rogan show and upload them straight to the X platform. But it isn’t really Elon Musk’s AI company behind the mayhem: the feature in question is powered by a new startup called Black Forest Labs.
The partnership became public when xAI announced it is working with Black Forest Labs, an AI image and video startup that launched on August 1, to power Grok’s image generator with its FLUX.1 model. The startup appears to align with Musk’s vision of Grok as an “anti-woke chatbot” unburdened by the strict guardrails found in OpenAI’s Dall-E or Google’s Imagen. The social media site is already flooded with outrageous images from the new feature.
Grok’s AI image generator appears to have arrived. As everyone expected, it comes with few safety measures.
According to a press release, Black Forest Labs, which is based in Germany, recently emerged from stealth after raising $31 million in seed funding led by Andreessen Horowitz. Other notable backers include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The company’s co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, previously helped create Stability AI’s Stable Diffusion models.
According to Artificial Analysis, Black Forest Labs’ FLUX.1 models outperform Midjourney’s and OpenAI’s AI image generators in quality, at least as ranked by users in its image arena.
The company says it is “making our models available to a wide audience” by publishing open source AI image-generation models on GitHub and Hugging Face. It says it will soon release text-to-video models as well.
Black Forest Labs did not immediately respond to Parhlo World’s request for comment.
Oh my god. Grok’s image outputs have no filters at all. This is one of the most reckless and irresponsible AI deployments I’ve ever seen.
In its launch announcement, the company said it hopes to “enhance trust in the safety of these models,” though some might say the flood of its AI-generated images across X on Wednesday did the opposite. Many of the images users created with Grok and Black Forest Labs’ tools, such as Pikachu holding an assault rifle, could not be recreated with Google’s or OpenAI’s image generators. There is little doubt that copyrighted imagery was used to train the model.
That’s Kind Of The Point
This lack of safeguards is likely a big part of why Musk chose this partner. Musk has made clear that he believes guardrails actually make AI models less safe. “The risk of teaching AI to be woke, that is, to lie, is deadly,” Musk said in a 2022 tweet.
Anjney Midha, a board director at Black Forest Labs, shared on X a series of side-by-side comparisons of images generated on day one by Google Gemini and by the Grok–FLUX collaboration. The thread resurfaces Google Gemini’s well-documented problems with producing historically accurate images of people, particularly its habit of inappropriately injecting racial diversity into images.
“I’m glad @ibab and team took this seriously and made the right choice,” Midha said in a tweet, referring to FLUX.1’s apparent avoidance of this issue and tagging the account of xAI lead researcher Igor Babuschkin.
Google apologized for the mistake and, in February, paused Gemini’s ability to generate images of people. As of today, Gemini still will not generate images of people.
A Flood Of False Information
This lack of safeguards could cause trouble for Musk. The X platform drew criticism when sexually explicit, AI-generated deepfake images of Taylor Swift went viral on it. Beyond that incident, Grok’s hallucinated headlines surface to users on X almost weekly.
Just last week, five secretaries of state urged X to stop spreading misinformation about Kamala Harris on the platform. Earlier this month, Musk reshared a video that used AI to clone Harris’ voice, making it appear as though she had admitted to being a “diversity hire.”
Musk seems intent on letting this kind of misinformation pervade the platform. By allowing users to post Grok’s AI images, which appear to carry no watermarks, directly to X, he has effectively opened the floodgates for misinformation to flood everyone’s X timeline.
What do you think about this story? Visit Parhlo World for more.