OpenAI CEO Sam Altman said in a Reddit AMA that one big reason the company can’t ship products as often as it would like is a lack of computing capacity.
“These models have all become very complicated,” he wrote in answer to a question about why OpenAI was taking so long to release its next crop of AI models. “We also face a lot of constraints and have to make tough choices about how to allocate our compute across many great ideas.”
Multiple reports say OpenAI has struggled to secure enough computing power to run and train its generative models. Sources told Reuters this week that OpenAI and Broadcom have been working together for months on an AI chip for running models, which could be ready as early as 2026.
Altman explained that OpenAI’s Advanced Voice Mode, a ChatGPT feature that enables natural-sounding spoken conversations, won’t be getting the vision capabilities first teased in April anytime soon, partly because of limited compute. At its April press event, OpenAI demoed the ChatGPT app on a smartphone recognizing people by their clothing and other visual cues in view of the phone’s camera.
Fortune later reported that the demo was rushed to upstage Google’s I/O developer conference, which was taking place at the same time, and that many at OpenAI didn’t think GPT-4o was ready to be shown. Notably, even the voice-only version of Advanced Voice Mode wasn’t released for months afterward.
During the AMA, Altman said there is no set date for the release of the next major version of OpenAI’s image generator, DALL-E. “We don’t have a plan for release yet,” he said. Sora, OpenAI’s video generation tool, has been held back because “we need to perfect the model, get safety/impersonation/other things right, and scale compute,” wrote Kevin Weil, OpenAI’s chief product officer, who also took part in the AMA.
Reports suggest Sora has been hampered by technical problems that make it less competitive with video systems from Luma, Runway, and other companies. The Information reported that the original version, shown off in February, needed more than 10 minutes of processing to produce a one-minute video clip.
Tim Brooks, one of the co-leads on Sora, left OpenAI for Google in October.
Later in the AMA, Altman said OpenAI is still considering allowing “NSFW” content in ChatGPT “someday” (“we totally believe in treating adult users like adults,” he wrote), but that the company’s main focus is improving its o1 line of “reasoning” models and their successors. At its DevDay event in London this week, OpenAI showed off some of the new features coming to o1, including image understanding.
“We have some very good releases coming later this year,” he wrote, “but nothing we’re going to call GPT-5.”