
Big Game Numbers Are Being Made Up By Google And Microsoft’s Chatbots


Google’s Gemini chatbot, formerly known as Bard, thinks the 2024 Super Bowl has already happened. If you needed more evidence that GenAI makes things up, here it is, complete with (fabricated) statistics to back it up.

A Reddit thread shows that Gemini, which is powered by Google’s GenAI models of the same name, is answering questions about Super Bowl LVIII as if the game ended days or weeks ago. Apparently, many bettors expect the Chiefs to beat the 49ers (sorry, 49ers fans).

Gemini embellishes quite creatively. In one case, it gave a statistical breakdown suggesting that Kansas City Chiefs quarterback Patrick Mahomes ran for 286 yards with two touchdowns and an interception, while Brock Purdy managed only 253 rushing yards and one touchdown.

Gemini isn’t alone. Microsoft’s Copilot chatbot also insists the game is over and supplies bogus citations to back up the claim. Perhaps it’s biased against San Francisco, because it says the 49ers, not the Chiefs, won “with a final score of 24-21.”

It’s all a bit silly, and the problem may already be fixed, since this reporter couldn’t reproduce the Gemini replies in the Reddit thread. (Microsoft is likely working on a fix as well.) But it also illustrates the major limitations of today’s GenAI and the danger of placing too much trust in it.

GenAI models have no real intelligence. They learn how likely data (such as text) is to occur based on patterns and the context of surrounding data, and they do so by being fed an enormous number of examples, usually sourced from the public internet.

This probability-based approach works remarkably well at scale. But while the range of words and their probabilities is likely to produce text that makes sense, it’s far from guaranteed. LLMs can generate something grammatically correct but nonsensical, like the claim about the Golden Gate. Or they can repeat untruths, propagating inaccuracies in their training data.
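The mechanism described above can be sketched with a toy bigram model. This is an illustration only, not how Gemini or Copilot actually work: real LLMs use neural networks over tokens rather than word-pair counts, and the tiny corpus here (which deliberately contains contradictory claims) is hypothetical. The point is that a model trained purely on word-co-occurrence probabilities will fluently emit whichever continuation is statistically likely, with no notion of whether it is true.

```python
import random
from collections import defaultdict

# Hypothetical training corpus. Note that it contains two
# mutually contradictory "facts" -- the model has no way to
# know that, it only sees which words tend to follow which.
corpus = (
    "the chiefs won the super bowl . "
    "the 49ers won the super bowl . "  # contradicts the line above
    "the chiefs beat the 49ers ."
).split()

# Count bigram frequencies: how often word w2 followed word w1.
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*counts[word].items())
    return rng.choices(words, weights=weights)[0]

def generate(start, n, seed=0):
    """Generate n more words after `start` by repeated sampling."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1], rng))
    return " ".join(out)

# The output is always fluent, but whether it names the Chiefs or
# the 49ers as winners depends only on sampling chance, not truth.
print(generate("the", 6))
```

Because both "the chiefs won" and "the 49ers won" appear in the training data, either continuation is a perfectly probable (and grammatical) output. Truth never enters the calculation, which is exactly the failure mode the article describes.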

It’s not that LLMs mean any harm. They aren’t malicious, and the concepts of true and false are meaningless to them. They have simply learned to associate certain words and phrases with certain concepts, even when those associations aren’t accurate.

This is why Gemini and Copilot made false claims about the 2024 and 2023 Super Bowls.

Like most GenAI providers, Google and Microsoft acknowledge that their apps aren’t perfect and often make mistakes. But those acknowledgments tend to be in small print and easy to miss.


The Super Bowl misinformation certainly isn’t the most harmful example of GenAI going off the rails. Worse examples range from endorsing torture and reinforcing ethnic and racial stereotypes to writing convincingly about conspiracy theories. It is, however, a useful reminder to double-check what GenAI bots say. There’s a good chance it isn’t true.

What do you think of this story? Visit Parhlo World for more.
