Late Thursday night, Oprah Winfrey aired a special about AI called “AI and the Future of Us.” OpenAI CEO Sam Altman, tech YouTuber Marques Brownlee, and FBI Director Christopher Wray were among the guests.
There was a lot of doubt and caution in the air.
In prepared remarks, Oprah said that AI is now out in the open, for better or worse, and that people will have to learn to live with the consequences.

“AI is still out of our hands and mostly beyond our comprehension,” she said. “But it is here, and we will have to live with technology that can help us and hurt us. We are the most adaptable creatures on Earth. We will adapt again. But don’t lose sight of what’s real. The stakes could not be higher.”
Sam Altman Makes Too Many Promises
Altman, who was Oprah’s first interview of the night, made the questionable claim that today’s AI learns concepts from the data it is trained on.
“We are showing the system a thousand words in a row and asking it to guess what comes next,” he told Oprah. “The system learns to predict, and in doing so, it learns the underlying concepts.”
A lot of professionals would say otherwise.
Yes, AI systems like ChatGPT and o1, which OpenAI released on Thursday, can predict the most likely next words in a sentence. But they are just statistical machines: they learn patterns from data. They aren’t grasping ideas with any intent; they’re making probabilistic guesses based on the patterns they have seen.
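To make that distinction concrete, here is a deliberately tiny sketch of statistical next-word prediction: a bigram model that simply counts which word most often follows another in a toy corpus. Real systems like ChatGPT use neural networks trained on vast datasets rather than lookup tables, but the underlying principle, predicting the likeliest continuation from observed patterns, is the same. The corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    next_counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for cur, nxt in zip(words, words[1:]):
            next_counts[cur][nxt] += 1
    return next_counts

def predict_next(next_counts, word):
    """Return the statistically most frequent follower of `word`, or None."""
    followers = next_counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat": it follows "the" more often than any other word
print(predict_next(model, "sat"))  # "on": the only word that follows "sat" in this corpus
```

The model has no notion of what a cat or a mat is; it only tallies frequencies. That is the sense in which critics say such systems are “just guessing based on what they know.”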
Altman may have exaggerated about how smart today’s AI systems are, but he did stress how important it is to learn how to test those systems for safety.
“One of the first things we need to do,” he said, “is get the government to start figuring out how to test the safety of these systems, in the same way we test planes and new medicines. I probably talk to someone in the government every couple of days.”
Altman may have self-interested reasons to push for regulation. OpenAI has criticized California’s AI safety bill, SB 1047, saying it would “stifle innovation.” Former OpenAI employees and AI experts such as Geoffrey Hinton, however, have come out in support of the bill, arguing that it would impose needed safeguards on AI development.
Oprah also pressed Altman on his role as OpenAI’s chief. Asked why people should trust him, he answered mainly that his company is working to build trust over time.
Altman has previously said that no one, including himself, should have to be trusted to make sure AI benefits everyone.
Later, the OpenAI CEO said it was strange to hear Oprah ask whether he was, as one headline put it, “the most powerful and dangerous man in the world.” He disagreed, but said he felt a responsibility to steer AI in a direction that is good for humanity.
Oprah On Deepfakes
Deepfakes came up, as they always do in shows about AI.
To show how believable fake media is becoming, Brownlee compared sample footage from Sora, OpenAI’s AI-powered video generator, to AI-generated footage from a system that was only months older. The Sora sample was far ahead, illustrating how quickly the field is moving.
“You can still kind of look at bits of this and tell something isn’t quite right,” Brownlee said of the Sora footage. Oprah said it looked real to her.
The deepfakes segment led into an interview with Wray, who recounted the first time he encountered AI deepfake technology.
Wray said he was in a conference room when a group of FBI agents gathered to show him how AI-enhanced deepfakes can be made. “They also made a video of me saying things I never would have said,” he recalled.
Wray also spoke about the rise of AI-assisted sextortion. According to the cybersecurity firm ESET, sextortion cases rose by 178% between 2022 and 2023, driven in part by AI technology.
According to Wray, the scheme typically works like this: someone posing as a peer targets a teenager and uses AI to create compromising images designed to coax the teen into sending real pictures in return. In reality, it may be a scammer in Nigeria typing away. Once they have the photos, the scammers turn to blackmail: “If you don’t pay up, we’ll share these pictures that will ruin your life.”
Wray also addressed disinformation around the upcoming U.S. presidential election. He said it “wasn’t time for panic,” but stressed that “everyone in America” needs to “bring an intensified sense of focus and caution” to the use of AI and remember that “bad guys can use AI against all of us.”
“Too often, we find that something on social media that looks like Bill from Topeka or Mary from Dayton is actually a Russian or Chinese intelligence officer on the outskirts of Beijing or Moscow,” Wray said.
Indeed, a Statista survey found that in late 2023, more than a third of U.S. respondents had seen false or misleading information about important issues. This year, misleading AI-generated images of Vice President Kamala Harris and former President Donald Trump have been viewed by millions on social networks such as X.
Gates On How AI Will Change Things
For a change of pace, Oprah spoke with Microsoft co-founder Bill Gates, who said he hopes AI will dramatically improve education and health care.
“AI is like a third person sitting in on a medical appointment, taking notes and suggesting a prescription,” Gates said. “So the doctor isn’t looking at a computer screen; instead, they’re talking to you, and the software is making sure there’s a great transcript.”
Gates, however, glossed over the risk that poorly trained AI can encode bias.
A recent study found that speech recognition systems from major tech companies were twice as likely to mistranscribe audio from Black speakers as audio from white speakers. Other research has shown that AI systems can reinforce the long-held, false belief that Black and white people are biologically different, leading doctors to misdiagnose health problems.
Gates said that AI can be “always available” in the classroom and “know how to motivate you… no matter what your level of knowledge is.”
Many classrooms see it differently.
Last summer, schools and colleges rushed to ban ChatGPT over fears of plagiarism and misinformation. Since then, some have reversed those bans. But not everyone is convinced of GenAI’s potential for good: a survey by the U.K. Safer Internet Centre found that more than half of kids say they have seen peers use GenAI in harmful ways, such as creating believable false information or images designed to upset someone.
Late last year, the UN Educational, Scientific and Cultural Organization (UNESCO) pushed for governments to regulate the use of GenAI in education, including setting age limits for users and putting guardrails around data protection and user privacy.