Artificial general intelligence (AGI), also called “strong AI,” “full AI,” “human-level AI,” or “general intelligent action,” represents a major step forward for the field of artificial intelligence. AGI would be able to perform a wide range of cognitive tasks at or above human level, unlike narrow AI, which is designed for specific jobs such as finding flaws in products, summarizing the news, or building you a website. When CEO Jensen Huang spoke to the press this week at Nvidia’s annual GTC developer conference, he seemed genuinely tired of talking about the subject, not least, he says, because he gets misquoted a lot.
It makes sense that the question comes up so often: the idea raises existential questions about humanity’s place in a world where machines can think, learn, and outperform humans in almost every domain. The concern stems from the fact that an AGI’s goals and decision-making processes would be hard to predict and might not align with human values or priorities, a theme science fiction has explored since at least the 1940s. Some worry that once an AGI reaches a certain level of autonomy and capability, it could become impossible to contain or control, with consequences that cannot be predicted or reversed.
When sensationalist reporters ask for a time frame, they are usually trying to bait AI experts into putting a date on the end of humanity, or at least the end of the status quo. Unsurprisingly, AI CEOs are not always eager to tackle the subject.
Huang, however, told the press what he really thinks about the topic. Predicting when a good AGI will arrive, he argues, depends on how you define it, and he drew a couple of analogies: despite the complications of time zones, you know when the New Year arrives and 2025 begins. If you’re driving to the San Jose Convention Center, where this year’s GTC conference is being held, you know you’ve arrived when you can see the enormous GTC banners. The crucial point is that we can agree on how to measure whether you’ve reached your destination, whether in time or in space.
Huang says that if AGI is defined very specifically, as a set of tests that a software program can perform very well on, or perhaps 8% better than most people, he believes we can get there within five years. The tests, he suggests, might be a legal bar exam, a logic test, an economics test, or even the ability to pass a pre-med exam. Unless the person asking can be that precise about what AGI means, he isn’t willing to hazard a prediction. Fair enough.
AI Hallucination Can Be Fixed
During Tuesday’s Q&A, Huang was asked what to do about AI hallucination, the tendency of some AIs to make up answers that sound plausible but aren’t grounded in fact. He looked irritated by the question and said that hallucinations are easy to fix by making sure answers are well researched.
Huang calls this practice “retrieval-augmented generation” and describes an approach much like basic media literacy: “Add a rule: for every single answer, you have to look up the answer.” Examine the source and the context. Check the facts in the source against truths you already know; if the answer is factually inaccurate, even only partly, discard the whole source and move on to the next one. “The AI shouldn’t just answer; it should first do the research to determine which of the answers is best.”
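To make the pattern concrete, here is a minimal, hypothetical sketch of retrieval-augmented generation in Python: retrieve candidate sources first, drop anything that doesn’t hold up against known facts, and only then answer, or abstain. Every function and the tiny “corpus” below are illustrative stand-ins, not Nvidia’s implementation or any particular library’s API.

```python
# Minimal, hypothetical sketch of retrieval-augmented generation (RAG).
# The "search engine," fact check, and answer step below are stand-ins,
# not a real retrieval system or language model.

KNOWN_FACTS = {"GTC 2024 was held at the San Jose Convention Center"}

def search_sources(question, limit=5):
    # Stand-in for retrieval (a search index, vector store, etc.).
    corpus = [
        "GTC 2024 was held at the San Jose Convention Center",
        "GTC 2024 was held on the Moon",  # deliberately bad source
    ]
    return corpus[:limit]

def is_consistent(source, known_facts):
    # Stand-in for fact-checking: keep only sources that match facts we trust.
    return source in known_facts

def answer_with_retrieval(question, known_facts=KNOWN_FACTS):
    vetted = [s for s in search_sources(question) if is_consistent(s, known_facts)]
    if not vetted:
        # Better to abstain than to invent an unsupported answer.
        return "I don't know the answer to your question."
    # A real system would hand the vetted context to a language model;
    # here we simply return the best surviving source.
    return vetted[0]

print(answer_with_retrieval("Where was GTC 2024 held?"))
```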
For mission-critical answers, such as health advice, Nvidia’s CEO suggests checking multiple sources and known sources of truth. That also means the system generating the answer must be able to say, “I don’t know the answer to your question,” “I can’t reach a consensus on what the right answer is,” or even, “Hey, the Super Bowl hasn’t happened yet, so I don’t know who won.”
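As a small illustration of that cross-checking idea, the hypothetical sketch below polls several independent sources and abstains unless enough of them agree. The lambda “sources” and the vote threshold are toy placeholders; in practice each source might be a different search engine, database, or model.

```python
# Hypothetical sketch of requiring agreement across sources before answering.
from collections import Counter

def consensus_answer(question, sources, required_agreement=2):
    """Answer only if enough independent sources agree; otherwise abstain."""
    answers = Counter(source(question) for source in sources)
    best_answer, votes = answers.most_common(1)[0]
    if votes < required_agreement:
        return "I can't reach a consensus on what the right answer is."
    return best_answer

# Toy stand-in sources that "answer" the question.
sources = [
    lambda q: "Kansas City Chiefs",
    lambda q: "Kansas City Chiefs",
    lambda q: "San Francisco 49ers",
]
print(consensus_answer("Who won Super Bowl LVIII?", sources))
```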