OpenAI has finally rolled out the real-time video capabilities for ChatGPT that it demoed nearly seven months ago.
During a livestream on Thursday, the company announced that Advanced Voice Mode, ChatGPT's human-like conversational feature, is gaining vision. Subscribers to ChatGPT Plus, Team, or Pro can use the app to point their phones at objects and get responses in near real time.
Advanced Voice Mode with vision can also see what's on your screen via screen sharing. It can, for instance, explain various settings menus or help with a maths problem.
To access Advanced Voice Mode with vision, tap the voice icon next to the ChatGPT chat bar, then tap the video icon in the bottom left to start video. To share your screen, tap the three-dot menu and select “Share Screen.”
OpenAI says the rollout of Advanced Voice Mode with vision starts Thursday and will wrap up next week, but not everyone will get access right away. ChatGPT Enterprise and Edu subscribers won’t get the feature until January, and OpenAI has no timeline for users in the EU, Switzerland, Iceland, Norway, or Liechtenstein.
OpenAI President Greg Brockman recently demonstrated Advanced Voice Mode with vision on CBS’s “60 Minutes,” using it to quiz Anderson Cooper on anatomy. As Cooper drew body parts on a whiteboard, ChatGPT could “understand” what he was sketching.
“The location is perfect,” ChatGPT said. “The brain is right there in the head. It’s a good start with the shape, but the brain is more of an oval.”
In the same demo, however, Advanced Voice Mode with vision made a mistake on a geometry problem, suggesting it is prone to hallucinations.
The release of Advanced Voice Mode with vision has been delayed multiple times, reportedly because OpenAI announced the feature long before it was production-ready. In April, OpenAI promised that Advanced Voice Mode would reach all users “within a few weeks.” Months later, the company said it needed more time.
When Advanced Voice Mode finally arrived for some ChatGPT users in early autumn, it lacked the visual analysis component. In the lead-up to Thursday’s launch, OpenAI has focused on bringing the voice-only Advanced Voice Mode experience to more platforms and to users in the EU.
Competitors like Google and Meta are working on similar capabilities for their chatbots. This week, Google released Project Astra, its real-time, video-analyzing conversational AI, to a group of “trusted testers” on Android.
In addition to Advanced Voice Mode with vision, OpenAI launched “Santa Mode” on Thursday, which adds Santa’s voice as a preset voice in ChatGPT. To access it, tap or click the snowflake icon next to the chat bar in the ChatGPT app.
What do you think about this story? Visit Parhlo World for more.